[Binary data removed: POSIX tar archive containing gzip-compressed content, not human-readable text. Recoverable member paths from the tar headers:
  var/home/core/zuul-output/
  var/home/core/zuul-output/logs/
  var/home/core/zuul-output/logs/kubelet.log.gz (gzip-compressed kubelet log)]
3y78_jy.Kmyx 9s6Q3l=q6m}ةP ^b)Y(JO?$ityDQ}M,_^\ xV|2uIrFZD IIhjk /Ӻbw;^o+l6wp=`sP[rI$nv[ޯnpOlѫg۳&Eilt9oUBTBN!ɕRw='RW[~T SS,5P]c&e%HDI'VzkΞ{5Cf)c)dAb7zWqRQ as;!r(9Ѭt^|]uMy٘mE\%̥RFb ׂGٙH FҠh5;#X׸m[A,&Uӂ[A-xt hD_tIc?xvUKXY2P<1g5(N׭B}ﮂ\2JbA¦,}kݳ_ XnOwzimIڧoOy)e:`8?=Trz?<6urOM~n>Gն.qx}߂:7cCmj^ԎiQ֩AA*g'TdWcqUCP8SJ|*T G v -uj((kk 芿 ]mCZAZAq4E;e+VF:ڱtcz@]d5, ( "w,Xn*flMW?t Xgkk rC (bY eRhZURZ`# qw<5{ڏ~֭ohf=?x7Uh ߠމѠRE1@źȴEK[Y3*<"_Jiו2瘅V!&XLr(Nu2Z'n`Oaoޥ\ ,k^KaU(}\nAset?0%=996tt"tD/GF +Iz@)KnS0R`@1Gf$͌͌gWȊ#+L eB*}Tċ֡Ώ0Q(p U}pw!ER|=6ldOOOu5U_LFkTɨLՀ`'[NGo0/`ʽ(gQ0/ȹ$䱻6ֶدGbWrݘ#a},׸~^]%i\{lG%}.) Ɠ>Xo"57:}|sD>yN rH# be}0zPs0+CʘtcCUOg7+ܸ7Lm>^]:`Ouڥrk>ҧW3'Zܴ9ISNAY\ G:mL|W>odtk2`i~kV ^um˗ei\-0%ϺE\u / EƾcLְګ TE=I6m6D',!mۄK=ȋVʝ[ nnE0ِ/sa ^B>7V'b.aؠ2=F] :x%¦B6McngL)1E&ߖcoT7,x24k*)봈&)7ա6MYQE{p868i 6ٸAo. ^wfsI7\=SL6sqvp$BPH-Lzsm :c'+קm=jJ0ıbdoHF*eZ:%Q(1alG2c&Fԁrbu4TA%qhV''jr)Uխt\BG4.9׌yj&`3-1@r}^sҵO9B mŒF2@zԶe6o3~lF{*C$n齵} l%wCmָ4( bd\ dկYR`vS;ܙ]UtƇ@+Dt33lFuf~S+-?u^M^JbP.$E4#; ܽvbMQۿ?/&nC_w@ aAvD-6`M@ZX.*^.=[Bܧ>%9)m/CFć#SrDognw#OEF ^z1GjE^ XKVۦe v9x]iWwUenX6Qo?8߃sg~G{p{Y3_ n$~>03N_7_ u@\4[w!gs"df.xs^\3KHXQ+^c<r~KVm>|X'(vlJ;#nLbᙆ(35~f=_oUOwu_ ҫkvK?>#3.,31ڦCJe+΀ o :UÒ=ܦ2SH 6ӏ%"_@v ͨ٥{@r]i8)ze8%M:Iʢן1CpE'ns- o+ g3My4ݬP[08nq"Z5e?|)/,Q~,rU't]w9!cE(te9djYe\IjSI믺/PӮolKyfs+  v B2Ļv}ޛl_xzi1X>|x:YGV9 ,w@,Ã&Rɇ~m؇qge3wH!ߘӽcYϫ3L_.TzrW󥳺}:oM߇,x:=`k  FFOyj/;^s }r-u7rU{~nVy9R$ <ڐs}/(OS)NYIu[_`e&̞־{RڴRڳy_L-|/CU1ܑAQkB+hIJ(U~U;x'7J+fKH\H1q6MXNX'^ xQG/V/1ŋiG&㚸sA|0vw㯿L9;s[JL.0"~ ֤}I;wۨY/x>37e%0ZSNQJb 5&b%$#`[*qXOk4S`]K* Sz!rƄOO\lG43whz|ҽX r7!lcinH@D(^AWP } H-xomE ؝N4eڈC^aw- . Vǹ¾m# ̙Tq]%*yRg_0UXr:3Qm48{ "f{3/kfsY og:O_X:h< ,KvfOOzR)T7 Þ)@WG㝳LkUX S띱Vc&x-ˮ0Dk45zD4ճu3_u#|M{xἘ!IzClT <**RQCvYN](J1,,VJ\|V>ă<^nnhuÛvvҁWp/[7TH. 
)McXJFHqW>RL[죶ěU[g& ڷZt툮يҳMnStrM m/GPn/~+& (eU41Pk %I.P!`V0/2.DjB,9h7XV+cf*(Su4H J c}9l4 = k1 aL+ : $S, A9,a#ֱ&0&h"EZC"U#H3띯iXW+ENO>yĚ=㊵ U8့̨ͬ8Rf"4zv|P{ R r=A,h`@9 1?@m:Ga#EgOE)1#%(I]P92G*#NHMu(joFI*N/ 5-'{Ȏ`H HyI}38Xu$yf>edY7{Xlv}*C,ǔ⁦3DDRr(m0ύ:z܆ ->?yF_e)MtY}}VehshYNJe7WJ%r9Ha9q` Nklv$ZOXilW1Jze`i70<Ө$p.^xL/1"C&;Hby*$>(ߌK-d>9H,`IZ"YBZ!phա! H:]oi|cc=sLԹ$rݤi#ZQ(ȬR ߹)Y]AHQzyW6K7mn벨m?;ˢl;u\JՔL;y˚ *(;}Suh@`& U"`7 ;XKz-YIb R&^d1$a:g ˈ`JZFq:\}rn}87k* <{R|Vzgz>[G_h R7q*T5!6L5>kE/EGM*W3pkhĕ4s>sTHك3'CvJPy<ە Ii8M_'>~3/Qlg_|d43<1둩,i)vʱcGgbG -S2v8 _pg(z(BJ傴.EE? \g%l,p-",7-6LfQ>:B 'M&]t,m"w']qw bnܑʳJ_鐪J/.OTq !)y},?{O榬-ޅf4~bŋCs -}{c:KK&|\2j *CʒeÍЙ 12:zy$4 q&a|;.^O"-y$l%lDen:\$nV5;;>%ctm,OۄxX_D>hQ̋m/p:w.O˖w܀05˺C\ 'Ό,?WL[DPUCUoPRi?@jXmkqF/7CKnUnޭtp[>4H1-wi<JWG jS혘 I#a>VwhNػ{fɞ>9ޝ.`46f9ox!Y,Zmy'3j3 Ί8pO;.P ]ZvRe]Qtk>s o{ިs|(5vJ OˏQքgrﻜY]E|ѕָg IE)r2DqX,(JY017sl|Nzy0]]rJSA)]Bs@H$(hlN2.)@T<-b~Z.%ݢ,y%z]Stl͂{Vi҂Lt oE 됭P>+MR)ZY, vY&LQD@fQwT %t^vm] Gv.* -@^EҘ h 0 &ٔD(JtB+D_: [|~^3*SVaA;(_:d"#1Ph)؆g>?^?#ߖ+uY/ 露qnr*7ILnomRDwkݎFӇҳfK?J?[ (5iG٥ʉe۫熱whd~] xKY?W|5k5{Ř=H>zh^&/?PnmY6k{z_?̭#{skr84Nz)mޅҨ,@ 韯if_KY,@ SoԤ?Ddhq^u4z_Ka j ڛ8u!)I8Wn㒉1ZJ.ÕLҽ<]M6Q3ZaizcɻĬf0m.-*v9Dy~eHi+\Y>>H0r9VU-/VEzO" oÿܚWҢv}iH"9sޮ6m+0_7zEt۽YZ{7"-eGWE&exvM_^oO?iyY)̤z׽ҪC.vƶ{$n*Əh?9y#hR&Bb@?q&+[ ,j1́uVЂ\^rAx<+XnBtEBGn5L y$,=Fdd {t"`:t'~<~5~1A'O적T.bhX@2ϺB)t%H1 1o#Hi,o\oPC)1[9cN3cA+InLUHq1@M)z༷Zf1ZӞH.{G*)p}fk:+I^5NOd>AOˀig&%7pOۙM9׉Y2잮ZFӶϭ1ެk9Y2f ڟ'W4 pIKfֺbÒo9jE?~yny?E|s;q|M1o+:LEx| B(uSŰƩ5^TH.:WTu}j>]:g>Y9 RaJOB-H2~ *""->eKd@6jQ!VynbHReUm8 .]w"#[lYq}>}rQMvdGqq8Rjʑ;#1вao %hahtYkc: t$v_ ιsD8j~;{y E3}~m=Lbu,;ዎyc|1g+h|7RҼLqҦ85|7u#Q̗t#H%?!p%*OjPo%h1slN$ˬȱX}޵#_w],p;A"ips~,+zEm%CfWUXz0;ٔ"Lji

7N'㍓OzW& l:{]uE7]Sg*/Ψʡ(T윉xj6Jp"@s!dw2Q $rr5U1<(" X2!ze \ eUJqǤ䤎EvC.PNpǭT 8˶#08:Nh_)v4#N>{l?l˂qmR6M R YRI[E/$wO.]b9E?aV8@ &L ܢ`ʙ%2ELj;M>IP&IhNt1*-X Y1r;:T1[Ɩ0j|yY:F !$ĴTjc&2K\2$ H!p E~-)# ?H .)CLT$gQZ:jg ^@ jZ~><#^ch.{Y0D@@hP` hfQFLaq`WEY-9㉓QgɒK#c4"RkYK !P 1Kx!/pIE4 mis N\*|N9x#,tHS,Dc# Z9j4y $ţxXǓsbrtҡ b .sӎI4Y%׿=cbV˥;f e } HC*lL )dqzR (h)uԡ愪&"aP'(_R kh832J 026\Q)D8T(6FgNXQHQh\PrpQi)r,ҦbUBrǴ<.|VDH$d &1p+:+FXk ,IFQ%b|.f #RY\ik4p˓6\xwW()Q2%@~Pςϕdžq(S}wZζG I}E28O$i8`E/U.$3j֞_gw#ss](@;(&p -DFEB6ʹuRP4c^ )1?'q)''R)#/-֟YD4 ,:(a8*Q??V,nJK-TCorg0`j G9s6ɬnmmӻ|{V]>wM] 0PGM#d7T7OPwHYu90ypba ( %m?>973>}5sAc~Ʈewq/ݽ@~|:ϪY?6{Db0 mX*Zn}ji1iD]KmQV@_pUR+9D]˝!Tv-A0j'?mU2~V&ڰ}x[vIrdo{a7#bB.d Yw9qh5, FbvrQru.87Lj22Ũݯ2s6i_,{uY#Zu YyǵWMT %`Ane.79\}C|-Ne ~Y ̨o~dXy-Дݹ\=ovgfyWSuZmXufv!*{C,Ohqf!"Ań8a8~zd'Ax1EM E!'=x{n/h n/ ~<0 (P9bVj40o<"AD4:AH(EgW'|SB_|K,-CO2N[~X ema Gzg8D5O^]sp)(5gyyt4bO#q-wNq9 -K!qɣ{BwT5eTQb1)D; &80u fG2,b#̻ cblGWͧwM d9ӻUBmŶrv|pTQnCTMYA\3]'ϝ9x` "֞B'6H}o!e}ܺn[.~\띣I]c%*/{^kWW#qA u8G?ko'Pb/]KTQx_hcOzꇋcv!{[orRdEdV!K|87Țjjnu& ;׷G;kNrOyУ|GYy]O S-c8mz>I:i?{SVuJ@3cc Iǜ1LSҹom5F [3q9imfe\g8(<Θ&4RB@*QhN$>=noYpR/bPFG' &_T;A(8%IJ>7 QHc,ȸE"$%>iTx, Ut}b8+rv&}mv+zipBAg+MA=0-~onb)m1 BwAǛ8.Ǐ6@pQU={7oю|LMsw9;z2B3>Cvh2=z!HA0lwy0ߵ::*)x#|fmnnSqecdCd*ʑ~< xǴS ^5_j+#&_Odg?-X޹WDӻٓfRGT~~(j ?LDb7mMz~B#H'sI(˭ַ~q\gyjMyvZ Fy+"+G>wƏ|}HMT T($؊)7X(Exb҇UL$>ABz`T Km>FV9<h/S`ԉyeSHj /ϣ_)$P>iC37|K99S*OxY06Ц()qԔH%BYh2[DQdw/i6L8kɁ^`9Q˫դѬKW7?tm]~h.vQ~EXݍB5:oK!:kaZ|6UhQs*'jRڬ+~f@^%fndqt8a=^,-~-W[5a.>?dͨaZRw==uK$G;K6iRU6EV\ԶUNal',Ld3#TE"#>aXvy2*Ӹܴ煇봔& ) Ȧq g\'-cTa ԁtKFrn/HR[,xI&TI~qx3q4@_fJ]˭ɂ'@SH7W+ JgvH4ɆVy 0X؆wd J8MYf>K")8d Uyx*'D)g QKL}VB2%ұ[R KsV妬G[l\ȥ4S5sM\os mqN|w ZK)je"%]b6^kѦd[/k4Kc8W[1և*P3u-@RJk]be{]&4af`OX[ :TӨŨB _{ wx3S"%6.M\Gp#`%cM;F2U0 KÄc,cU6F 2k*lQvN!lx wI%z`9>ڿdV3Ul`u*,Jv>BnO鍔*ܽOϛ&he7-GpN#ϱU1Z #ɍK6J#Lk|*.6t&$K k,I{mj#rkHF 9yR E,/R>Yc `ѕJbP=w|Q]Zu;!tFzY.MVDȔKK(<46{iMNw` BF`^vylX "(e*9N/%~,"Z,X]8 Fl eeX. qai Y^@f4giF#/+ FW,)u6rEiYRTt6bl@ѦjmzfAydS뜱H)%X'-]Ye;S5!c4)Llp`_&t>? 
$鴤KP mvYDo{j@B]:q +Z#$d6 a."1K"fGeB쨘:?%Q TjM\I1Ej>dt jX? `ɦ= pܬ@{Sm*8(Q.( ]YR;KQв/uu$()ͳìtWlr«q݋.׽6,1 *EKRtR( BDA e͒fTYk|Hܷ2l=x_ѵ?VqCi@(M&jZ#T^SehX J捥=Z6%rYƍ,/< o 1tt¢Hpџh:F9mG7j%eDw<}Xw+i:g܈:Y.lG`Db=ѠexA!Axh&! 5y]%B*q`(#GQp}  _LCS{pcUۚBh @ؚb}PVv%m1E1(;VF/8CσvFD pd+. Ƣ΢I`eҪ*QZ|M< @!Gho<:"4p}&գ-19mp[ܢhHkѪTVm 5R,LсX \Vcҵ-9V/{*\+÷S3?xs9$jawXU|di#bU!-`)o"ae ̀z12=l[."WAۥ10r:f-Ф3V|IAy>uŲK,\PkBE;:w뉟Xx;jd|J[ P DŽ^|/qruPRL7D8L4j#.S 9m^9Ȇ{}`!g'B6uBF(*Ṵ| +I9R7wT'bm{x*OPu*1LmDI }ww!Q?>]\{g^ܕ qǁWid{.xBQ~sלaڿyߎ Lfxک|[TdrYoߧ߇Ty/gz5y>ZPh먣LFC{cAU_G߽6nSV!}܏%,8c1sFKʏ2!~To^@Fآ63b/' chVa V凜Kt3.-k;: XھyOO$~`b3ۻ y-XyP[I-W1 C(~PC\"\k%H"l8r-iF1QlU@zZ!^ȥ D{yqCLbLb;%O[iiiOiaO3m M:ϫP<؊]&>a4vs8cWan)Ǯ2[0v'),~6lfӱ24QjnFr1|/ĞU|K%}=#Hs29lNNP$w}NedK*aj3.޾7 9v?}u!2 Ԉ6UߘO7Kvjc+]'7!-1b6\ֺ̊Ecei b M"r"EQ(P!]ȧ>]MngU,G [UKb5^cLˊc nэڎn 8+ca5SV!R{ t' Ͻ#X{>?v6W[-.هshbr3H^AO/0H 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0Z&Ƈ T>PmK V&/^`kL  Q@&0 $L a H@&0 $L a H@&0 $L a H@&0 $L a H@&0 $L a H@&0 $L吘@08a N݋gJ+Q"ܠa-%~QDQ~FI8o.^z8lF,η BjBa/eTL1q h¼:Hヌ ~2ٷJ՗_֛ c/HTo^6yrs[ݛw/ߑodonGjrSkMZW&6%}%69wm?⍋O~7XT~~T{ fd9 ¦O^XDqzkݷqL/k_K^<^׎vf$ڍ|Q_4VZ8¹εps-k\ Z8¹εps-k\ Z8¹εps-k\ Z8¹εps-k\ Z8¹εps-k\ Z8¹εpsz9֫xPk sp-?u{kXIA8ׯQ}NF@&0 $L a H@&0 $L a H@&0 $L a H@&0 $L a H@&0 $L a H@&0 $LyZh}0?nGMw4vןNWݘG@Nq D!.\k}%XIVKQ,O'՞ H8Kv Ѩѿ_@8 tW.kU&`4 H 81(‘ߕpM:xe."sz9l#Iw[RchGh"% ˎ&rE@)*5i `T˥J~$R%:թPd$d _ S3?~Ai%mupЩ2tk6oSNZR(ϋO|:0ʛCBQIYwpTrUڹȘQ026qjO#7K8sP9bAhp. 
D KA$Ȁ`)Csswz:: ۳sijC3\X!ː-Zfy:Q \%'U 'o}P&1Pyeљ}yϟ)$/=;ObOSU}/kt__z˼t2IO%Are@0'rӗ5I-y̒@q̚(xN-*Ck (fxQy P`e-+/YL8LFBM 5STSII Rd,5C6s!r!PDGC4QTBʞHBnJ^#kꄲհ>X{6KRlSʥr\ ɒ{\%3}"sy9;?UVOMՒW"miT@Y9WgA%NBAQ%Rz3 LD%v;p8ffk{~VYVݗ;ŭV3G;3%kbJlǍoYh[жu=-@LQVXѼg;1߀x`"FO1 e:*b}Bu -b9sd=GwmQ;xZ 6%\,=43Qc40Q*Q!;נr=xhf qqFw^}a5kg] 'bg˩7>`b ϶=Svç&a mb^ΥU:2=uVw s6N_@p4d:zHAgŘݞLvSGIfzb'0P?y~Cx=L"L<]Eg%YzXcχM:-}w;HބSPK[_7Tіہ--˕Vj'U6Lz7y`K)Ѥ}T0&?P/&ZA8: ]^<t)8YHNd\ȚȭAhZJM)fMT cxEy'X0^5Ckl1Ɩ?)f m}N*Bz<ݼc:?rh1݋ryl= yj֘%ȚL-]<ۺ{NS͠S.tI˩$JSeMJCn Ksg>uu٫ꊲWUCgg LKөiVsPBmi[iH7&"XJEC 5Ycͦ5^tߤ7O }I^m쪏O] ">i+ Զ|LJlŎbCRWAe+\ڤ: )r¦̩\:ENYO]Fej\<1p((}ꁫҴW8;I6,+-&H+)#)4zZlxԧqtiQkH) 4OM;Vy\ՌA uɘe 8JPvP.`YE]+NG4b82l {߲'f,Э!mV˄UL$`g㔗`v66 ЫX^XXQyE#9sISD2fS/go{5׫qAfJ+S6-qVښT2 zbƫǧЀ)YI:X- {RkJbJJ3of,Lژs\zK& 30`X.GX/8)1$ dX*}z厕sPJp~֙Vxޙdwcl;9RɒtКsm.,dS,$uiARoԁhvTV"JYP–3A` aQxץoejnMo'+tQ^C1jҖ+GEkj7UڨV^[S!VZ[8j@OYzI3-PL%LTM6+ZqV2_ZAjV):)xlѶT> 딍,j//0ɦZ *٦k^G(րCSMvm}_;~:LFsPק|Bg Ô% 0}yHCGGa_W/f {4 p?t>!E<3Z P{w%ώePQ@G{$"_N%!^K)(S/VBeWmopI=LC=} N?.m&U|#1֙DTeFy,{LP~7wF DUO6t}GfEݑ GJ!x(@Dj,l^.*Xn?fG0L'#_QX8z뭢Q?̲;xү 0jh gp몁YdIUo>?}ʴժ18 jL!Tr~]zouhӟ.{)WgK4o DP.&M4+2%RREXOJZF0&j.ۡFx (cΡ}S L C2df9 9Ƞ1BgE<4z, /gIgvuQ]F#PW}܋a%8}}jjoob햳·nut!Yqą4 )q~SjJATpxC#\Yk۝^ڲHؙHZKaB(o1wmˍHW<%$:cffbc;&41-dW77AQ|2d6r 8@bIDR-kT@,Ae0 U١T ) ҺYVʗQf:C*ӹC%{aa2NV:!Y恅=zQпpޮyb[{'Do&~+n$O@1C1mI>[bNo':*}`&"@6ACIˉMԉ5$u9= m4qQoұQ f<(̻\Ϧߵ0{r\Mp-.h˯@}F>z%y 102,9\F:sap C8FC2Ң=^Zw}W~w9ys׋UH߮Jҵ٢E1vmGlز7nzji_2y fKʪWxXYGS{ %N{b{9vʗj$0sꅗ'B7|l.qQ74wߝM{tgWĞ3'ROsGmKuw=720vw,Q)e7e2<ݎ-^wRT;ױk.x ŢÊ[i'e5%Ҋ."…HM,)@r+y2s2y}Җ7ҖxzmGWMUNS(OCP3KFT'łX+O Aǘ96>'ېiGr̐'PC-GPJМ%R3 {f!lc &().Ouˌۣ1vJ^c2Y[Ƭ Kl <8m$ tY짙ņHxO**4\u++Mލ+[j_$d %MƸQ0Qt'#8k]RgMRPr!wG;G9@$IA0ȼ0qq-2|D1ilU94B m=%0AXd ,:rfLɠ>I欬j9gS=fYĐDآ,y%z]~R-9Lg,gZie -(3s'![|6VRdY,sA&LP4\ܣ:ǒsZըVX$Pn?ߋx=R,dQzidGK#hipT1Z'>0v ɀ:KZvNùb eBX*PX0F#hVoFVRh󾀯Z9|2ViGjo@Fj3OdZɺY*4g:,PBp)?74;Q%qK`d h+ktpnAfvȨ-Zg̨;92B< "HRU\ba#|u3Lfym+֯NU|vZaw$]^A{ݝ;gdeYPqO20-6FM_rbB\r䩮,W٣wV'+Z]Ȳ'qcPAKGʁ՞V~g}ϦE6>n5$D1]ŧh~Yɳ4TdIotŏ)-m4TO 
~xP$/׺_~^:_ytKDv'>:BeAtߖn?Rҹ;᮵ G.Wԏ7;dՇZ]l2_ߟ?ђ|\yQ&7cO#ꍣl]lgz֟J|\nuJqe:#4`jP0hfeS:A2 %b>,ԂGB( u>r  HLPEdY]fFf)EȮ91%Dx A+Q&]:f^=p;ހuH2M*Z}}Ve!j-# +E4=TPpQU }5mС&P֐JpKGVuGJA),'Nl82쩶{+ , $ >Y7҉&=|ҁfeB1ULĪǢb:$FB8f87 zt 4ٹlE4*Fnj/Uˣqvf~7 ˜n 2A<-1!"k\2R:ԩxFzY %K_'^ؿ*;&Ջ@ Ů eaqʪ^HQ,&h6LC:Y0X'{4|$NTaYiGŁ!e 2SL,h !pg4b2@ؓc>,rƴI6kC |!餹 C ".Hg8ؖiia'EEwtiwEE {h6|SHhE GI:RՍu17%DBb>.g;{ {={vÖG<.6ٖv`ZgbZ;'A>R &}O˘I|}CztVNa]u *(\or$l㕷M oNrvwy()HSFkd@gyF03WeDk%E#s\:Nu г~_Ђ-,vͶ0yyO˨5H$ڀʐcrDi3V "9cd:0t.gZ7" ^ha !@C֎,iD3&eQeJ`bwtcg^뢃f3Ir7LfvɇAL~ŴYeX^J'q`FȲ7nZei|n\MuF9an7'$hAT k(Y~py \w |\[YwU@qbPl1QSTj&NRzJmlB4/ !mۄK=.d}NʵIʭ;I ULNәl"ʣx`D209pݑI*uzwli&c:jxds5T;/:6~?V=dU&GHqhly\W,x14k*(DDIg}j})ϲL@jq8xr4}lwjl)0a 352؈Dj\)+4G)jbٮ ]#0{&lSq` PnljjuޝB z݅ڟ)^"Dy1P kt3a3}Tv-ttls9Ч[t*}.ZO@45g09pqIX1ED$2n-KӨ}p O0qhDBJJ0X6pDBJAY&D@H˙eFHwa31r25E5Gd+~i/8Nٝq[ahSj2G$_>$'crrsGI).:ګb9QʔRA k+zE3 q0Q!wK#Y!6e7 VcQ-cMQ +dҋTJ( JBTO"D,Ј `ѯRQ 6Pd4mg' \l¨)Xt D`Iʊ?"T!A6j# .6YH&!%g,\Ԅ(5:PA#|͎a410*DDcۙZZn6l3ݭk{/P\q͜tk<܃TA05{E `DX=:y2&;шh]cD Ekr3vyTYbzSo< Ϣ~0Q㡇p6HrL3LD"K>}j.1Nሺ G9ـP6D挦h_jS^ 0*p慵h&;ִֶ%\JlB Oc*, | \KXX/5N1TxDJNcRɜRafXTqX"Pjw :욨>/kv?) U lhʥ{_r<+(%ے%]MW,sfEZ+=ڇЗn"MaN1'mƟ ca@zk܇( |b\5 {J)R׵`SG_ܕkCQI4ṵ7so:y*G$%m^ݬ3IS +XMa]JA)y/W Uꬺ6 #:mqkrTa3 T\rX$'¨pb,nxQGL|O=1(ZUųax}VWk~ʫq}E"T%ˍJmi0K}Wx]X6D`fdw`~$4M93E:yy?|HO]23~zu~݁]!}bvHs.V)^dƱ7nZ`9/CD[|8C; <& Fg8b:ԫqӾCQ lM9X #j"Z"KP=XF4ZN=H͍8[O|%ه/>lSLنC]v2 l\.7`g0aQϟ{LJ9y h4La{?>)4`%g0wYzMBΣW-ek4d@W eB"$n"I6s13+9x L4:+k%O}J*G}=&b2' ~y '*}3fco6VHF䉔R5^J.(V_Lw"K Nx'*EAs@/.Wjq~; $Six`,f(z e2+g ˣaW;oQ~qQ[js3tsޱ+yЫ_fFYE "4XkCx!0YlXq? 2+eđzFM5*Czx Y@SJx->cmس9{T!^H#X0+7ܶT3JSW_ݳL]+2 O^>c?R׺O]rT?' NX)Gx(h ՈcBkf8L#ruPcPn&x9iѦc, ƸcREd[KƦ+5Fn H쉄nԃ'8ɋS{&G}.gkc"9l]cLE5YK[!zΤ\"j*uZ!5׈W_ӛgeS؂ {pM&[J^抰 #Hkt1a@. 
F%aJ0F)7?OٗѢ8܎|QWι%kDR9j@D5]ƌ6>f:̫"8rp/z>{è;\~HnO #D< gx'0KI^2Le rzKk/4 t4=mf} % tj: zwDI ½ຩ/чӨ$[ijlf gOf0{⡩{paw:UpڬSeX׊#Q!+I`se#sŏfi`0̭"Z(+▱٭ nx t385f!Ynu '1bePUY] f^l-`ĵ r B($bbZ1gz4‚SwV3s ̬z'~_5dٓm1a[5w&SO)Mm$l$rOzRU[0Z-D܂HK*#pʧz8Vf,@#X;3Xe^hfd7D<;szR҃;YzOp|{T]NTٳM!?<}tOPI#z!Uxd!R&R1тFQ4`pH~./|Isq%A.9 v{ wd6o}a?0H>څ4ǘ_}覼|ՀYen:ܴ7XH>+1E4^JJ8@ X,-,xGI4 GMR@7R`Y~sw՘j| Zb{7 լz=ּe{pY27 fsme!W3ʼnA\DOS K﫽P;I"E*iy`w Ȇm2.aj )Yrmrk.Nmm%z~5|s3IUF4?2MEZɁ LB>VPTiCQϧmsL6̪IrAfyeZfEȎΠU)$/DEI=͗-E/fMeh:鬯Cj:4nO٨s|UVT8 9N\j0WyiB%j0‰L 8B$c8SHj^ϙ/0!kUEx㠊OC^_uA4Z/ΛEOoϢRH^wsIP-V 5j\@yIE'ʞ ۲cIY4aV+vq==Xf"4I+Sk>5O9;k,֨D4_}`}V~S Ycp3)Vc춤mhR!);zx0/Q#G͞:~U]]33&Z@mq;[TV(mBUr ]I7 5ȝ= <=:5䘬7V,f2Z7k.9oAn/Vz4=̈.}9/ycJylئ,w[3hmq5.~%ZȺ~[ĭ\?Q5[$׫Q))Lօd;*-VM/I\{9;k ?Q ]ʕʛY y@2(,h/N̋╓JZ,9vAZ'dђ$. l ڐ:SZ2g` ֌g}Dض#AE"""*V֊tB"Х|Z/`c!iBq$ee }jCTzĢ2f\ *PF[$= j_S',~;\4*Ibb&X` ,#pT0+NQuβ]{ލ1ӌH=V7!cHHF!s&@F7&S/mEDInj2?_g,r𹎅-oU%@E92  hC41$^<kX39T:co1z55i8p"4աɷT|]15d'P )+^M?J.ϡR+9T*9|Fsi5%5ON|9)0F̘-k|AgP/}2ϗpT?!= O2R"D1VdCpk'/vȤq| H x HQ1(eMYrCrj];]g6=!݌ߝnx,wfWlt5u"B_,mPĠ vu!:0d&0^r2Ǧ/~&YzaxrsHY:gTڡt:IHKDf 1[e8mw? *-CS y(o{ }A (E YAz,*'k֞_ףssxBel#$)gIŐ laB _zAV٢|債aL'Zu4KbH@ebX`1laC!iޛfKױ" UNZ/sZ\i=XL4wVmiq3r6DžksɿP /I}_Yq6L_o{;hrX;kW3auoWzzF_`'/e=|^z =6ZMgtfV,St+ gtY ޶P(PckˋF] %LR7"uF}^%(!O.;gOt?tfIRxo` ˺D 9,pדJVk^~z"f-S}z?}b17W|xZ9ҷ>Ͳgg[x6V^cԂu~YrSu~ n.jGWb!"_2203q0yr԰q*8V_[tuiF[|ff;Zw6_Y밭ȃXbQm<&Z7,dc1\P0u>b(1"͢ dah(Eޞ Z;RG>Lh(c6YrBEm*izJE6wl>x+I>$StMzZ:O/Ӵ'huTMrq/?~k-2o,HASYo5`n@p n߰g +>6$HS t2W7'%"AAiɸS1``UE*_P`eN%cV5g,j1ZM1ݬtQy. yD~4!oӌFWf]̴w-`sPV!zTϣ[t_#h[`u(7kt9e tBB$( |W D L*E D&c {,G{ h|3>2R-B_>q_uHoǂS(0}Cw%N7%_][sqS ^1J֯F%טrLRıӨT"4cDZw_8s7nx k./Hxl)X(I)4:gx'bc%䋵Ҁc>~(hޙM9]!eDɠRR`%&YKQhNYe H|N*RqXMCBXd=zt Un9Y }DSfő):)N?;|E0uo7:71#~Y)~rVd! 
?u=]@N6W:m$j >FlAIϵIϽI϶I@XQAY NF Xlt0ڂW'yz\ iJoi/oZdvyˡ}Xo{B~=[fKA@^O>>u+*F[R YPzàT:䛎}_M j]BeNkVD:+b00Cd¦( YuyZ NsQ!e_Tbu- wZf*&z_u;#gcs9y}Lǡr^N?.ۛݝ[Bw?bj,lsC=]| QlQ`QD+!fTVHe/{)5(=~E[/0x%Ҟ5=!ddVyEņ;#gY_OYtrqޫ>p%ph-~EYgIDU 4\z-ifZg=͌tBif`Zޤ@؝lgv 279$}F3zLND4.'J1G袣K1Gb&"qpd $o(B,QoH[M bMv9)9vC>s=1>d-O;v;#g@\7 AlrV0P{gt^}˧~ֻ1ʤozjOx,AY1 ?rι줳/[IEnKV4mۧ^XKyG7<:>=/hc՘>f;!lxdݶ_|3x geo5Vyc73ޯ tċPMp !2c+\rO /kgUPQ֥byV%D)d SCRUIA/4IKvKKH(>h*.zkˬO-IZ"ZE*%XϲLݵ E`AZ9ȀU@,9 dNl8tP9J{d2 UYxW“UI݈-y Y-1V[ulr9-%g eH 9($7}cl2~^֕+q>F.l{nh)OPl78,e𨫟?FCܺVC=OV8~<} BOh0qb ^%'O׮Ug%st1mhkTi_[pw3P_8jR쎗fo,GoRs2}vr]/9P;j2wʗo5TO{2Hu硜}8GJbbݠU!ɤJ]^Ƙ"jzX4H}ٶJ %4R)AP|HRNlLI\D@ aKmK_F,^]n댜ݡu։qU=ǦU)㏣dlًʋjrxHN,+mH 1_bcľZ:=>ؓO>5ܡHI {)%)R c؀MQ1qF!)@ְ٧kkش1oߺrv \N9y$ay]|?' <dI$_  $dnMNwpp4oXI9w:Ǚ.Csfp4&ED &+Ře!qGy h&nYdJPa sE(6h+*G32%Ϳh> Gsy.9&K.5W:||0l7sɝLruOB4HӁK"ϪBQ {`IgWF+1,X'u@F@ekhXRQΨ@sE!6ʐD91YHb1(T IņGV-$e iY^xpݽ didញe&&I~g/T~&T0f;8+pJL -1&1jQD6يY`-xI:Q %$h@LG` x伌 AX Tl1ba%DiuSF[ 2\pDTQQ#x8[Mg΋p "EZc"Y#(Y'<\)KwGu )5ѢŔW?$8sx_IyU2VF!Re,@%`+ 攵ӆ'qKqEC|M !8jK$ߌSNkM<^ 9V[eH/$aWU4{2j#/51k4'&uzІ;Aw ֘;1vn!9^2B(8mҊ*&RL IY=V:B =@8b{ѳ(PjO\ȂD Bf JQ}3Έx@QKIznl $IT2IPDRBHSV$O914tkCWw8+C0[4IOU\A[+ #YGbJ4ݪ9ƿpL* j3ُ_ h&\Ϛ̽ϸBޠϨIM@}PD\2*oI$ znJoq'{㼣:~jgGg7 R괬02 G)hӟ/~ǕЄ,c@R~Ԯq.}UucEo3״pGDͪ0Z J_Н?mBynt S"[&_t~9P/ю<2Cyj&=&6u\v |ZhfWO%w?u|?Q]ve㸓lgפkޗrV|3LHWȍ4nmZ]mp|t]˵ue"VTT OL3Ub2(aNGF5Qn洽H^HS]j[[ͭ64]uͣJñjj_i{nގ^ޗd nO  BmNT:iKfjGF}F?l%yc̉Z:tS`egL&Gθn%9r(\'|~9DO59Nߍ֩&ܡ|vsE,bnSnZ)OIXZ:`@im['G2mzFw5{6wr70s |wq5+_~yP-MgֲvX\\(BT$HWV'UIX jo҄~|e8Z߯ë,/jl[ {/5wBgLe{q|} usPMB&B4DQ -H! 
X)oeuߜ|==yď{u\G{iҝ|o3v-׾\dO#s~q%^Yhe*@;uȄ˟')5jhۛi 1l>&D˧e\-0O˺E\uO O [P{|W0m }W EA=HV{nżd!i[eL7/oT-;'.o9:6qGRfI p{[PN % ~8|ĽeD8m:6h;ǛFbef⑇d#ܰ@[{GEcNMdF&0c~\7M57dU6d|UʪMgϫ:}Ӏܿzf9M&S}ΫY,眷է ws4I'"TNKZHsFh5-#ݲguߧf.{ ;OtM G#uJoپX`z>uN,?n|<2dƫ~==*iw6wՓtsFOjJ23t/# _RC0sUbUwh)* + }$Pefc% uX/xOKyskޟгW$tQ{ǖw~qpPT Q!U=dSL]mc  KOGnr'g]bݹh[r΢t)r =Kt߬~XMf,7#ᄚ8|EbA7G> ZϹԇ5 ,cr ȼMr) 0P귔Q:ޱ\ }ɕp|+)2tX6A3yMm8<* @o%Q0ҹɄW 4NEAg^}P63DΣq!0Y 4GÄӥÏ2ש gITgS.%)Q Gy=_L l?Zjy髹N:?~LaM/r}CY(._ކ\x ZivAEo5FF'^--)9[u VGψH4JnP R@\\E%\8$Yg{(cJ3 •,IĨA)9Y쉳Wͣj~,EJHt,qp*#S` %cLb%4V *7yZB ރO06LȥDOB<* x +uJZU3"aZ;y%R4kσ"p'JLh045XAfeeEYԐgAjրӨZZVzM$0*!(It1<E[9f=9#C O7x<Аh$kNK4łr^zo(2V۹(xNЛIM +U"",DjYnj2І&S"|Fjp{E\* xhݠQ}@#ST?s ixV+]7Ư^(ho|7w/?w> AY1vzkm,7]pw ,He,ݎՒGr_[|N]P>yŸ}Kooo*0ywCATAr&hIt%~F<꾼ĺח\ 8Sp+4u# D FeiWZpI [ aP"`'&x% 5G$P/DBޗZEb҆x (<`50 Fgօ=Bgb㐶|V$/0%6+Ͱ<;س~@A~]`V8=xB8ܕ05  -+ LŤ|Lj!04U @|"[>|!D eYbpZ!YɄXMςYAS(*YNsQbʡDRK[V-s$Nl B(7F-<x9#Uh~{;<awelӓu3GLqڴ^EC=]|ض *E M5Ƿ UF$%◃hZ(QK-C/LBM>dY ,b%᜔ Ty+٭kq 㚼l:\|vUՅD7 #[xE,Y Ejҹ)~)w NqE#w_b=_(i@}ue 1!AH48q;NTN4c*E8:muE`<(RI: #ښ`aSr9[S$p |)V: YxJuIE(aU&ښ81Mn&Ξm~ N[!wx]GEf<^W?ߣŖ!wnΪ;>+m;w=ܘ>WȭMFx7v*ou&?KϪ5kk6m{dG6oͮCjɨ%6?|xw3GK->1 mw1qO㙫޹}@rYa4ß7Rn [mFp5NCY\bn+\']y)o>e {.F,"nXT20+t[&H]T#uEj[}pVB zD+MTJ"*'BLcSz*.bJe}"s!yQȱ "P:R8)yEC j1F4ZsS㶉n.#Њ$eJ>$Rڒ$= u& TZ(땷K]bFtDjC&;Vo *~vf sMl`o^&9\.ҚuiX$|D5%' i8 ;τ )|@Ĵ k\SM>r< c0r}gCf 9|ˇL <9'YR3a fr.rcs|_x9.JםSesFLq6 E L 40S!3p@ MdJhy)h;j;m |^`8A= ȼ$ET#eVUv4oKW^ ۸@TQZW|IiU"*H1 o K)U{k/R ܤ.kh /Y}qu`\&{jVuy*%EXϒ|g݅ ԇp\Q@ XKGAN =1'\ xIiP>);}cmnCm],=t-aꂻ.U9{\E,b @$ؙsFTBl& î6X ʭ _8iͿKZ84HAj0?dë;jW/dSbZ6MrgM~n~ G_.TqsŜ-+}pw>'2}*kEH跴ۺ %#"#_wU\/%]VsOpW4Op Ѧzt4'8}/]Ho]. 
l?p')ܾz#8fg"[ is|4'XG]/ u.p%`%%\1O:eRE%О kTI=j~\q/f͋=bl; ;u f|~.n6mnz.ȍJ3W[3j{g$%(ޟ3grgk.(<}po?ߗo[J}jͼZ TVX +Iys5rր9WFp͹%I ^9҃3Z:Z@jS߾ b٤Ol̻&7vM/7SGO1kI)gMdG4ۼ܍}ǃS)DjN:g3YhaK%M~ EA/ZtAf0A)&b|9iĩMuu8&<,;w!ӂV57Οy\0?hNt Yڗ|h&[WDm]Z9/:<;-+- ׶jɊ =8*tNϦSd>mzz8fwt=ۅ]^{!6M]ԒQy 6?|xw3$GK-h=cgUbE=g8q4|XtQo8Xϛ{Aq}Ԇyԧ0L7_]Cv̭w?S߃9aۻ"4/y`h>4ə[&ӁGC-:GkGCGq٣G֞QQY҂Oz&JW aPH^"`ڒRe3Pʇx1ĘT.VkYU 4g7)}hΉat㐶Mhe` 0%6Cz;C`xo=yBf?.0m].8\iZW[IX]jU= jlqбS[gןűA 2Ax%5"LM3&gAGƬ)BZF PKMѹ(1PT"s-Bz+9'!#fyl1xK8÷s_ 㰻YIݎu3§6K.'gT|y"Rd."{&*V|J6LRjD /I0%B f+ BV:e1 sR*RuV8q=(r.>XzNs4_)^ͼg,#wAU`)_sr{)UZ=RJiH)Z}Kq|}>_ja 'T+/ڧhE6?{FwNyApvkl3EGX3_.o|VI%qV7B i퐬jeTJJG%XG'X%Xe! BhXq^ ^ʂeu^FwZKUfH+]4*!6I;(t̽;.6 Sf2gCk1/_$8?XٟO9NT-X2 $xvdkSkE|j( ]bo?von)ϫCvl|O2&3voc=t`低KLaW|/?_P?$`mu'^Y2awk @B=m UۚVlx ~r1~wHOo(|v~[Xq(Pf!xU,1ڤ-+Z QDP>'IT JV~vĭqn%Y"И'Nj7qvC]/ʸd -׏{mv?DOlO6'wR'+Ē Ñ2wƱUYl)ڽ GyCC$֮8=V9vH2הTD6mLyאDT+D_5ڨXHmnV[&[/[͗0]Ko*nz/׋nZumq̄ڨu.DhR.$jĨUvd+ch_ )kٴ3\ 쫸{B5 *`ͥu(zbs.bid'{ŻG9 0C Eg[.*QBL\+YT|]MpԯwSaEoE4E,bcH+m-fLPP6z S߰vԡ iFF@ѱQlYmDS"فɬuLCM& l{b[%4)Z%:.//.:ׇN7drrXүǜ*ut#:uSuW3jL_- f{xXc`<yh>ۧ_%ȾnzM>"Z5H4(7BqH*j3Qiag}:TJQztTR5* #~JN X 27{yr9pW͋{ٗ<Vݮ>~GvCw.;: ;X߿`gr.! ӟ܁qB/fS&p@!z4 3VD?Bk*T[כ؞ʾ$Kvx>c:U ;VhaHޠ1,5֨FQކicنXXUR[B*,WI"դ蕭Iug7; =<#on(ѿ-p>tx6AodK|L@ӶHGhRCD\ձ4i =5IiԐ15D{B?5v6ܻŒ23mL2BVSeYi`&`J$[4¬(ƪBynV}:<__ݬ۞rTթ|uR,xXj{31`s@Wk @R:o׶?_nH+E0m7nfoes`wަS(ok im3`5UceE >bk[3rtݣQ>{]2ctXRT(1[$=M7qv͆hjS]r^k&cX!hYϟ-b-S(#3E:$o}bmԺCJ[hnBn7n>rO8;U*'Qg%U7\yeK_E!j4UiRh5De}芳jXA K'^ &[ŜMu* 1bqMlS(j+,A L) `Ia QBF']HY =!pPNX5KCΉW--T BVdlNFň';FY.gmr,+&`<fQ0X_%eͮpt1|o|=3_. dF8k5Fmh26J{Je]*d7+aB[ BXME >|~#0y,@[.!rRVWȧTH9*JO;26˫ocAG7 y+:3:g ~5NjtNx>i.wۥSD%ȭܸƑ=dfRRJ@1r60P!@?ĪphQKqaJ@&߇ 8xJP6*Ju1ߵ XCb‱Ͱ ʉg7+6# 8u,E؂6Q 'Rb\%AUެ8/^~gMb?F醯Yt^(qHV/AfR9Yx: }Ib YjR,/̅jylւ<&M5ǧf9qyoW7˺(]9|=? 
q\hBكUqg||[TܕΖ&M  'g_[ (e.R%rj7)dh(UZ8?QwgL_/x1~e֪w[ܜ\~]aZXx[S;U}s寻"J6E{*:էw?{WGJ=b:%A2Bh "L{w/d~ѽ(jڒG[:vc'V__p!!o_gb}=hO#{{,*w (7r8xm 1bU¬˙aѦ @W joZ Z =#t FyG6z49_4y fʂ}Ҍ$19[fehlؼWxw1Ep0H< e|6w|9LEv@uhtΫZ4F K\R}9:&V:0wQ"Sk5C%\/J%VYLbQZ$8$jaFTNgYΤDM+"} ~зV.ΧӞ/7 eh)Ju5VQ$PHZE^09 DQ Fih/!))jtEz2:%o{U1+X T6݂Po)Yɸ(sI%Єm|C"b!&rѾ(W_8 YZmIp+cuzkY |nl!* d4%ti_kߠ+7݂6VۍA9́0WTz[LּmNm{6Jc7XzzX!4dG8uU _Ŵ%Y96XU:A2@tcS[4sգ{݈zJ{OYkh:h΢%$Et&`䷜WXlɐ+B.PĄ͚v\\ 6)|?$_MK$hJy;Mr^EN9p0\69'TKve~vq#CXecݡ|>WS AmCe]6YzV:j/ڸgT^᭟v.= :";T;hdMV&oLPkVxݓ"W = |e%y".U66*|t,Yl킘QyP.2|3lvه-g6Ku\yٷ97tC!4 ![(a3+գC[Y¢WFyW{E>\ͦwAԼ}=Q ݫ\C4[߼~h"WՆkpwW%=\TPfpsT.t,#N;X\yb a6-Hγȝ.DPGBBjt:˫ϞXx? ^I&7>G=;yy=vߠ[b]50&@TrL4k# {wbԱH6Ku_{sW 띫8ߞη'_[lij8ֳ$u+1Ԭ@zoL4XRmҶ(hشQ\UT J?H+evU4Sn&̐oM Omj 4N.]ʗm_}|tNT\: > bGmFB^B[}6פPo?R|!/fɸB\ /DbJĘ\("2E,ڱ 9K*&_C-EPM=1R1qZ!.^4A" wQSeV7璅-kiywܟO54ϋ|/Otzqz:_|[ \6*EQ-y8QmRjVeh_s\۔5MlZyr J)j@p`=p|V}ڃQ(G*!\lb8!~RiJA@vxdM4,PhrYSXe/*#f/[EbAJj}2WM0D>*ьqC+B j+;ږ6J`s%@1h&ejM#9b~szn{{/[~K7nx1^8zkduXDv1Gu*'^D#gU?L.S`^]:m6V&|W :eM^&PUgMDRTk AtG][ is-1ƪcՈ{FMvc5>^"ۛ0_a:φaS^O}R0:t fpԝd*C'XC +KU㚪oR`Qw$WkZEKr%=*/ Dcl(,<]vPmA Dl-#k}IqZ'?bݵk0 3y-Rk0}weۆvGBPi+Jo' 7גDFKjߓeF1 &VixL9WRX."ʦ-OCCHcǧZPN#8[2_׊XJʶm+!b0V4ے/$=9[EܚBgR9m216EDvu]k /zoD F"Oe00?=2e)- Vͪ` S,IY]ÐRLKD9J1{l^ՓB80w[4^ƚtwtM ۻU{#^p7ҝt=hs뀄16ebMY:-/9TQKʄ.Vfű <1e}/BcʎJyL\!G.)VAO)hFfU,Y8 )ZǾqd#K%TQPy`;4'kxz+te(q*ąɗp.r::@CI9r1FvQU &{xNby]^[۶t痛ClD5C@ PH2A8e޺At{9p/X~ Jg )/ϧbWΎTR*)kt"VP1uOJ$H/BO+͜ x˼qNg?L] _N|^$.{WxTis¿~stu:-K xr5kٱr‹> +_H_ޖb~Njh~3ϑ,bkc<뭔q}j~>$n~hI9"_L{t旖%o9//:sbX7YpQԽfv)m9P'[m>9駃BR:^Q3%R"X.do |<ͿH&'xB<aߓQE췊=m6IŋZlMс%4nBKVϾVɒ]3WmTJ;B%~%0{ EM[hc4SdyCCF- >,a{=y'RGwM,G\Op|U: Vq^\r k );;t>V\48)mBrg<hd֒}H3La;3G@-^y퍑:,u }brvI7X+f|uQkzEh}Un w"nmzrlщ#M"$Bb j:aҘD-""Vwdzf޼=Vآ׆< }RR3,tZ(2\ڴCGIzߞ{xIdВZ#%DK}g6f}XPJb'L jmm oWYNd0 UYY@xu) m:@W[D Ywц,*;ۡ.8 ox[iҞĺ<j,[/#6s'ݚLR??V06V"f]ԉu*3If8/r,ЎelabOT鮈eTta܍//ˆ֭w#Ͻ}b"KmM|iл E>]c-%(A %G2QBg.NP&йkmJ{Vk+E[k9K_ )ys77JfYNa^?:eNoRZ_eMn RnQY2)cipPXn %)48J9)!7E7y( %[j+M߫N`o'?J-DP 
y_Q9lx5\^Kyvչ8ݸ?៻X>pZ/ݟ}g/]ǘ}vL2n;5Xf VwQv1yr=;ׁyܠgOKOKO㋡O[5A RH5ghRFa2Ɛ(yJ8WY p77_dl >O8lfihT׎vj>]xKZ`wg=x=zkJ j&UdНVvV뤏ֹf' ]b/Fuhps#Z-O"8mD*I1L Kᓬ z4dt!*8[BPB;,[EB.0Iool$8 5qЂ' LKƨKN&R)G_f99r:Ѿu%,:}>#99nt$T90WY[9MxWZj[Nb6SůhzRHRD;Ssq@\2At"vhX@0M8B9M{dJ{dt{썂Mс HcYyN3c%M]7٤RҐ2mJѓccp[T"-Q+Mw|X,xmЕj_Ck2|s}u]s^09]> c &s`>hc/쎩Zxof pGt'p"6+u7mȠKg RFw}Ûٸ_\i-3 pymv~(- =xLO<xj nkam̗A7jI|wB/PB %Z)Q5O5~k=Z5~k ZMRM.[ fkoM(Z5~kZ㷦[5~kZf}o͊jZ5~kZ5~k^mfʚeٷ6@ Pk6@ Pk ^RW2WՠkA+Ԯ(\BCjP`GgKYun'{ݜ[O[0*;cϥW hJKCZ3lNZKl|*oz'*Jq5>Th1.F FDm|d8O2GuIdMrIJW.MsJ( 1OgRL08Gqd>qi1ilU9s,;i2HwQDV 2CP1̘A}YYYΪrW f$-ZJ4^@K@h.) Jd:[|f=Ӡ:R*h"PS>~-باO\ll06IY,\A&LP4AqdE9 }3Za6+oΑ͎% XLP*J4f1Iƣ%#:s j,RgYA0cLp˄ X*/!X0|mV/"B}M])Ơ=UcCT>}Z`7JFZ3O,K#c.z&yL&Kq}\d&ce: A:qȡҁ iQ77 c '}fƹLKi~4MM.'%s6)n7y]3itiI4#w]ľt~dnW~&q.}VI Iի٧|3Ƥw+v,u*7Iӽr}rX??HW3W8#ތ>LJmo\S_rM{6»2l@_/hg__Y,Z)yN2u>5=nHhyQr,܌₿(u[f-""GC2!޴̨8Wn4㒉vvi\37~l:#5,mxo5.0Kj&0qriS+rgZȫۂ~+Dzar"y׫~DXβ[x6^cԂhN{IBڇ[Gbyj o˴0{[/xL({ne ~@fF̏QR*sEPW%Wj(@G!b,& \օ;$ ,% ']b]7{Up+ѻob=}aZɄ=z0.'-WaQ1Oא{$Wzl *)L\$Qi6 W7 %;+G2U6D[2DL1"0֓vܤ-"Y-CVm7 $Ƞ10 }Q%HStFz٣7ƀ8z-5\}ř1wBv7izZO;-8k3_΍L2=Q`GF0"7JR2I%ͥ|TxIq9.w&]1yOEGޕE(%셵RhIU"oMqL3xx4m;׮v~|;`.W􁰷 I_ #˯qk}9ѳ`95CdyDmcOb/9^\aߣةRE G7allkM8 %$Zxc,ց^רmN"c@rN<ݐ ,d)9U1Sefg=tM1~cYD`gG[:\hmXv/Igu잮]][yP!w6g:BA?To^5fԺmfwwOs8B];f!dԒ͵nw==?5|쀞WZև ;4CtaDBO{9x]E'Lx6FK\ P.m=9'vac3Ac7VHwMI*${5I*U\%IJBR}ʯO~g4`bvǣLDRFo7ۯӎj473~I靜7_' ar2e+Q:{BLOؼÏ[>oT'k%\4CɁikM:M}ife~N|/(D-&6FT bH_hօd 3rרP[T&v7<Ur>OvͺN$xI9#r-"= Iֻ\>wDž~18Eil*:9oUBTBNҐJ)Q;^)3˶+cg^\c+^L8)SΨzFJIgh<ߨ\]yc%2bxȉD[/NDbSʣեf#Ȓ+^S$?-s!QBqrv&"@T"Q?.hs}K\UdMm-7tM/v7~epف6J?mQuuE=".? QoF&F";>\98oI󟇤wqO(Mt0hq&%K7Vd OY ,"Ju&qn3R|Kcrx7::|gŢ hPdċ&*kSJ,?oӖU}n*Zwntx7SjWׯG?骿х77߮U>_ٻcx?=io8S^:"Σu;]X<~Lc;mdܩB8Ode]c޹&ojp:j|)6E"͐dbAI y~NޕK1m)sƀ1B* W@TJ"(+|HǦ BQ&6ݠluR< ti/}!d |lM L$eq]amKc=0ywm9K,y$ċ_N+;>I#iveQJ'%I8T |֨JUk2a"~6s흔O~ܬMt{cp.5/v lo{|FЍҘU`WJOJגJ_S體OS;Ĭ>͔3a7! 
h*H O,Lm=pV=Ymawyz4+ݱv:Ooí9lӧݙ;=ۭ<q"[v{xd3rv oWDOw7~,ҐD"<+cw1J<YY Mcgu;݌.w-o){ED֢B$ t rkg͟]E ZB9=+i['榄Zߔ3ׅzB%ٍڱm$_ɖ=)|IQ>}VlT差ĢzNb/n).q`|emie5xEH`j}Uqs_UZ~J ?+B\Ut:We֭ϟnVKty6B2&OZ}?8㟿Z'Z; *>I'Gۯ5tC'A^ySha@X 5YeA+RȤ 6c8~XW  v1ymMx' .A4KJ shUI=$n_j;@ Ɩ5[*^g1uvRP"V " b:t>-S}_]2PgaVwGܐqŸq@Wo7GnBqA{gHU+j78b D*5JmFc-^cO[ 5=dUjԹl*[]^e[1Axi~-9}ᲂg'i[4\*)L\YnjOJ!s4Hxqv [G2!ʼn,d +bХq"< rDe K* ZrQCʾD)B:#=Qe]f)4>2Z⾿6u);LJݷ/ ]/ %&NogmV|Ϛ[\9!~*<̅6RdYQZ$IĠ / /I g+}OEGޕE(%셵RhIU"o},8gF?h4-ۊ0.]+;.=1,;o|Fe-WAd*Fςt }Bd(ȱ'н x;r Z3jݍfֻ9tf7܅OZ2j c8]EOw>Ϡ=畖-<?omz;<+tQW`O5_4'-%9'vac79+!6Rj ժ₺ZUZ{GZQ- !דYZf=L>a~@f< Ax1Y!:$by`x<@_U\uulu](F(Q O|.'Qڂ)mk!z6R)r>'^X S%",Hkwem$I~<"3#4FbF24%|̟!QI( RXq0kZFڪR }L3wp7Zw'!&|*?J}q<5 Z@j-knf<_s cQ5(4J33_`IeeJ{g^.f3rIi@-x$ E2I V!!cIeIBM1et̜cQ1#M£ 3b\yhsي*FnjuʽeTpvK.cI0JR  |)t-8b1O@lXd&2#upB׆ס  ÁG "cL3X A]{w㔗8IՓXA}ƭ}WUQ:f1=:FR.l!؊Ʌ%rL.lۮ*%sa\X#d`rG_KY$ @rRFs& OTV~XbY!k r)+t.s g:KȽ4|(X- dC`RfPX^6$J p湂wvVl5rI'ޔza߭5oHX yu7n}r^}XY^:u"S/׎GtjDAEڜ 9N(6*fS)(Z o9Av ݂͞:Oٲ+Jsbf}88J9݌e(gO19xHyJ"^Eo PFvM7m-tڿZ|,EA{,gb+BLH\>IWt&d腊ʦ z)ye")`F2cyjF&ǘgeF$iTO XAl=є" Τs.CFr]X3Յ.]ׅ҅~*51;6|dzBFϣklNnw?U2 `$u`s.׀Kn hWv|B66A9a1Ya+sH{"gW\vEkW^k$!ȐrIQZe1E!W(P3FgEHVhJGVC&dȁ !] \F,(:|iwfa2ܫjlׇS?5-+Ee(zkMO~LĢq)ș h9 r2'bNL-6@qR p \> $472aI 0LYC"\]uV}"T֋׋^_y॓X=##e}e"X0 %ORS/Rܚ}NYvoK7)Zy$ C !F xO# NPHiw1?g,fݸTqz*m˛7.~K/g/3JPn!TS-JBK*wi'W4ٝmmC3#Р9e|/+d%4Cn 0xm*>&a:7emTN!E% 1BthLGIz!&霹, Hchd}Cg6tZqӼ O<܎^_Xy󮴔-rr'ʁ@m:I!\K\)oDƺR)r Ucwޮ@o "gaXů!Z0j0al Ӹt#,҃NI~b5^ѧ!~b2F5!8+"ߜAZ4`ΐg )yVЦ 1htuy<Ӧ*j3S!i%K-:@)sLG=lYId`P&'ZP)OD!&-Uz{?NTyImjNuѩUxla݅ɠ󌙘c*[2ˈ @L\27sR<+z㞑f(߽yC>9 Nb8MAd:! D-QJNWeF%9#aKD0/}2c 4Ɣ4{v, ڿ"ng5NpMT&%3OJ@*4Qn&q ܇M 1mqٛ׷W/voF(VMou:Av+O7ktz+E8Ɂ!ɱzq|zP/=4{Ƙ~Ů%j^!rom|KԯD_.slrIɳxw}9(ɵ з?R I[!RvGD\ Ѝzm{6y⇒F@ˋJ<U&)I"q$wzK&ή٥Rrd>׳-gidgQ3Ұ;rX.f]f0=N.) 7~?ggiܦ=s.^W.˅]k*3eŷIΚ1l-=gҢvU5p$~K*Tݧw%Z\}#zYwM,[v+nO3w*8jTd"8{p˥ʔ& /K{f&;^w1#C}"튭[{Y VFuVz ޔDp,D2*5\d1́9iZEO>T`!E:zGi 0l $c摐HĄU͓t+xbjw]}q 9u O`I'),(USXTj ^SX5VhFo.osdsTȍu SOAd$}y?@+JOqn-3ĕ4s>sTH 1Rgeq8T֭*W2Zކ< 0FC'N wZ?Na4F/&~Pjokb햳ޙ;3g58/Ga? 
(zmHmFH gXhҜ.kmzC ]Kzk-a$c3AbIDR,2Ω#Yv,` EP ) ҺYVd3p!ks!00kϛFvtHy;%@ p&;{3; }[OࣇGA ~A!WS/(B%B_j ^1AݸYz Iᮂo ̱:2g(E%CƬ۾^uw}AՅQ3,BCV̾;X4ǒ.}c`R M*E>^!˨5H )K>&e$KBg.Dr(t`\fƨIϫh׌\0 7ySK|f^Na~?zv mftQ`tY/7tRn,uL,XQ m4F E$VNxnmf7P53kF'\vG.[K# f/Ϻ5ru~dⅅbc[X&Z^~hְ֫)tQO2ao'N,"+Y:/A4>ZKZn%kv(!Íʙ7M6'ZɁi*aa&ąL|,oqohfNN'[=ow3{fN<0Ie,efbtNMxgW,)`;-5#uÍxvO杺PiX"j}틾f+7yxyLO( EM oI e4V0hv!S 9I0F.b P%cuKxLn !7ߥ]xc]Sݞ߹kq*¦@+B) > TXT{B/>oy1|֡|_.?hÌ AK 5ooTJ3le,{,{ѹ t١ tى RcbN-0ITJ{! Y`HK(';+6$S섍1}*1YREg}*P;pڪ2-J\V#glV/w\CZzKoc٢#Po4tot/l h}` ǫG«IUX$?֚kweHqi&h.\P\Ynx)F ̞=$ D 4ȃ 961G2\{JDa^ A-?t]!T;Ag)7AF: 5< [խ8O`nE6N&r3XFiDXKJ)F¸<ⵖ.@7PLFV#9+WE>]X_nA9-x% FD#I\G8M\\w#QKm *c?{tcX\r&_NCv"^nD8$`P*m#&D.jM8Z60*Vu@U&虔Zr!j/:-o$:NX9@" 5‚j&'<|UZ[*J׷_HOS\f=?YMf&F^g-BW/7\aZb/x8Mp\qS5.zEnwA Oͯpi- ׳ax {ߘ[|>c0?U*sѶyI>RVLdH v<9jۙO_jx1 \[u]u6OqAd t4u}(=*sKkوV=>X2D!h1"tJTă@pF**^rY;lIoB&{Vxӟ~ٲ-C=4|zzpz V3\^2KTB=gQŠ_k~|͏5?Jn͏rVk~|͏5?_k~|͏5?Wi5?_k~|'U(ŃMY:9'_>T #{&Mha&{q*F 5} @@ @Л?e\L[ /%AE%LqD)rN3H(J aRkjcƅ5sULFE[Q"gOi\d/7*o(E=AQL 1Zql-d΍_ c|~N3d bKO@O"knEL. 
ea2!FǤ'as~?,n(6E{k_z?So6ɬojiǻhEgtqa?[ŴFQ%~zRف?0ncF:Qjt ?{/o .=MYQ7}|m|X;?VgDUCnٗ0gfuF$}ݻoiosAyY zmp^|p(SSˍ!kD \Bz+*m ,C~ydkeIgogԠn8;rq ;ǯnx1zޛ6{GDg#D\T`Anxąg{'p2U_#Pl/3GVU#_,{u'q=F|V/ݚ qXA~s_Sf\+z*arWK <۞%}wȲ}aFs#s 'YpKǏ:^.7?id~Y>13|/ :kncoinWl9\e+p{.7zE BD8惎  8 (m=.VIG PL,1"MT /6F [|F*üc*PP  h@i<  c\ܩxb|ZpΊ [6Q9Vtttĉ4J`-M%Laa:ߍ"k%OX3TnDa8lGvw[pJ#ʣ h&QL гI)]$v9QZQ+vh)o4д+$ hłC"2(S./&0b\J8s1$ *NcO,;Ǵ҄'Vؘ-Fncw!kvƩGYZbȎڔ-q3Ξ,2>~ >E|&X8}6Kl}FC~+6,$j"iD"fRݍ lnͣ{ ˃;3eqzJb?4u[ٺw7h}xlŋC3,>zc.O9mϧ7H|ZL0JxIĩq!jT Ekx&剱68֦uX[)Rn}[= @gw||ɻ~l>ɬC`J>,eUntRn+1@Cnp/  ,(J)i=;R9KVPY Ȫg6T% )#7%tfۧ°"|]`M m74U.#-F2`V<`XUB7  :5X&Z^~6ګɪtQҰv@d!m[b2="xor]V\u['X-1#NH[ҴBve:y|[f\xsbfc{ E3u<>;w|two3[3gOLs)yJދhᘰfQ))q;EVBsu<`;S*-)LD tRױkrsחtPu|(5'BO .<&yDE\,e.Qpʸ6$$RnS℔M>sfr L .)GyR8 /:W7دX 9eu ?O +o\??N;m(C4I*8iqZFFӞMjyǣYR:yR:y'R:sҢX3)#2")e#-\A:Գ,7Q,@x=h.PSA% 'FYa~ET#ghv_R"';-Ot?+E+n|Dz!W}kxocv8+tg@s :1 TxŒ2bI\KL]/\U%}bIB0b+&Y;m<7b07%'ii0cx#,q5$OT5$OboPV*j &K@#V d&DO4"/[Cd5L<_jknlC.p`XB[6E'< ײ$ÃSx &J=Q\!(4eI˜\_")GL8 *⾢, ;XH.R ڤ)Z5c+nQ'#gY@GIs޻Zva5*ղ9՗^6/T_zϥpv>z]~^^H~y% ݏ&ӡ>8l(m GDӧoIwt ݜ}"r?TJU>?٧d w> >i-zyL#5MW} ŤCŤ>BF VW+pɣ{BpZ wN3!{LZĴF[a}ُ1S;iM9gU0. 
_?o\=DI7}iy;B4i&jW|G$qbVYHѯs|`kFyq>-jЛn l!@}b\9.Z|/ޑ"<8R)"wx(j𫏷o'v6/ֿGJ?qJHt0_#ak<"7.xy>\D:Ar;ZaϮ_=iao@H]D+Ǚ$EE#B_/Jȧ." h1RAo!#(&ʄ!P?.0⋖x >P1@C! 5A)&rT@ dZ_K1rvxA6iv2C8AQ۱0 Z wySiwk#RyAgtԌH%B,4Oy U2[4_vgj>`l~dGnj^hÍ6Ι)he!E"]Y +TqIRQ k2Jxޣ%@q(\uD,FNlڡ\Vsp+\\6V!R|eMBiVS$#"s%&:&cSŸc_y( COA->>t3r7,oҸɁYe 4ߎyЗkq#bܐ٤QYg~* \)>EBZ K;/*Q-8J^[yYT{pB% *0AT$B+etӬArN3wr9DNN^lN=ۜϿWfH%p$DfhIB_ 5!^ E^ċ$,P 4Iol2 bԚq9  Z-l#C}4TݏShédW#p_u2fJn=4&457$,mn~E96ļx@%͍gR`2pyԈLJI)y FhK0˹("B31Rp׀;m<8v(Cd(4iΆ`H(*FN+ U)"LRJIVTFFZzz'>{w /oq(~P-D& B8,8V "}$Hvd=*:4SNq1ʹJ&bS1Pe;tZT1\0^8Ɛ-"#pZB=-c@iY)'(kjHJ Cz&Sn~-FƠ&m+G183E/\6ݙv viԗr8Ff?T߹9wQ}zφ(IC_6G-G(pZHدwmIUq8${m&#k)3H؁k_쪇C9E5E_UWWoMLkBY !Lm Y:i͟ŕ~t5Bwͻ8f.Ϗ7CFΪjjZwW?7?G7Ö0LMu>ox@l۟ƶN,Z@") 7(Kdp6QZFג`mJ;HUR!>Äza- `  6^H& ;y15'!jڥR(eU'Hƫ=smZYƓo xrX39U\$#څh 9P#AMh5D,=&l Q~C0O e[KA؂]pe`(hmĹkPG')̙M.ou(Ǜ53g{wmlm3a0p& z$y. D|(s as[#!6ݦ)窳fWFFZ^|=埫sw{ٮ߁p`0yf~ɶCZGBZ< iNG@YM|!ϳK|?x *aޑKC2M-0F 9Z@yIE !dA#GDL UҡE Ӯ} |/ZWb&EP' OZ0.$ ')z QGc!$ǽKј~drJ<o}Zdg +[+zK} sfY8iO $Q;F]"ՒJxj{"TFۋ0~ '*M+b,ٮG]T=6IDIF-UIr0*O-ɡ~@C%9̘9ܪ"!Hh0UR#!.f%33eEG?I6W/.L1`XG"Xr)vq'`2eI ;29/ H͖\64'vՖf^ukp-YM̧4i'ܺ K[ 6k;c8!Y~!9  N*PQ1rdHR&)U"Ԉ-55bZj܋0Q*/OLHЌ,4`NAJc}=g 8-myzzA7<]P 395eZ Z:$Φq)V1ECN1X)#pWZ ,>X͉R-!rEVyfwE.NlpMKq^ͯz H޿_T]˶ά;n,jnbXH"t8ցyB5'.u8OxLĎ1cY::O3ޅNN|mG8F w:2b*EpxL\b:%SXO% $;Sj0 ŝt BqYs<{Fk cUV" 10Ȼ\0LL{\)餾ԟ`ZXסE?J #Aew ?Bx4 8#>ξr58Zk'QX/@RRP@)E-{LVoKζ=%.5Bg(!Y%UN˘Kj' ID⅃ ׉SN6ۼ2zkW$LU9FIUhÄUwF*tl`{H|̪^8;߿mtMz2ZSַ[@S .R|Mťn!)#Z:+2umfW651??e폵~z4U.`6N{yc~'t~ø\EPtτyP㺓~vZM+%1 N{~1v\qW䧙~ɃhcX ?~Gewq{]TozH}PGÐN"FI$<껫I=2/+;-~zW U~ɗ7ol~偟O&tV-wMn]EG .GV h,Lo<]{L]0\ Cܤ Lz-|O9%3f# mNڍ-:::L{Lűl4äjp}ZJ 5v{_4\L\f9?.>ݖr1*I&NɃwMB-{8\G .C󽙤 ^0?݁Ih<;9Px,zC>I 0`-4<0IQO P~^3hvަǬs6Ӻ؏[n67x0Ư3%_(;#F,>JipP͵Ҕ& *DG1n хHlTzKXZ(@@~{؟};% 3I D@2k8hD`5](́>]f~+-k^XvuOX/u٫쟞 .2铡@9e">9r³(։E,+K9)PC\A-?sIo,LB ýKpDSGC<4 #R&jaqd8( E+{Β%Cі= ߏwZZVxES"(ṗ` cp1B\RM'm<"1(>Ëc}O>7) J'qbhlpĄV%8 bt^xotQN<` HG1W:taD'4OΛ-_vMPd˥'5'.H&HΖ&y0׹^}{hR//_Kr/^W ~ ~5?}:Nl(r1O8q2uXjGX 32w\Ek9g5rG/#6^,/i;@]dE[k=8IB%ms& 
/E'8iի^WE5לy=7Xj\kfWt|{pWWI?X{r1Lep~V7Vaf]:A/OϏxBͳ[38Y*^b=HGT!D1 J=^tr-I1Ľ5LXCO3#Y,L$027\b! dT(ElRXNQK9sxՎ1eTeHxi#9;[jSZRظykY^ hިc m$h#p\52Z][oG+?mp`4rZBj,sX0`TǫpK:IkC Qyz_-8׋IIo7i(i-ͱv~5[NW*~R]tF?PP4YPrٝ+<]5 ʥbbK57`LJbSRɏioZ0kwR5rMh=3͌lWiaIu9){{K4-&/r'_rxc8/6"?BkXG ou*_ڽqFB"8M)a,XA`QsX.2`Z۷vl;"t$ 32\ $hb摐&:^$xbBNU>ܙN|'OI9}Ӎ.EOw}?9yzE(s>* f5VXZEr-饕Π4R8* u}'{v%@[i(yt.AY] `u6I82/ -W={NeƛޙU)G3 5jZZPbl2Z!~YDHJ` O1G K_f˦ !M3 3q@gi,NC̙̄eI/XJu!Q8H.$֫R7wɳ%eBaa^h;' 9HG ZHĤ[*MGF2[Zӄb U_lYS^7 nP)ȡ%t6fhZHI+ lfL~Q#N'SIHLgjA]+k^,Ooo+ O>@J=̺')q>MJ-gࢢ;!l Z)%)pahHk_O].G*t,pϢ^Xc]u&H: @Sz[Z҃ad̾3ޓ{@w/=y%#x=>*soЍ#ɗeYP j8Y;>JK}?b^I;!ݓU8Gm80KxRT"3PjęY8?' "D%!(a*eDŽYFrJ̅3HX+E[!v&ΞP? >ڙVl~i7'[ӰRw]k]Թƴk޷z*|ۣ :1A^fh(K(k&R)/yDq=xt&_1!$g] Z!t0ɨr`]L-N|:]m>;Zzh򞘢 *+"K&\uY^W"G/(ڢ07^ zx29,$rx?5nοW.dȤ13Y >Gs|pvYv47"=nxpe+X+q4k"SH]ۻ㮬s\#rWE`s57`LJbSRɏicɪiwRh,zX3o7g$58:ŠrR6iZV/M94O佫vlIA-f#տͶ4lElj7Z 5;exjfտG#~TNRXn(kx-JR.ʌj½R=L1.) \,҅_ KZ.io0<,K5^,%6+gcJH*MlN67)F2Uqa6KE3֯vUv4zPE=F ^oN;@:t H%Wֹ~ӕMekvhphGt3 ʒ2`FGѨqynV1J9vOu{xZoPk} /3@KԄbt` t] [`vݓRO9гs>)$r<1Ν<т q--4-]^0[+M&𝂚=$+{s%&Cdw.QƠ! zytH#e9 XYӡ0M&Ұ{iڶ#wO2_.rg$-¢[jWѣc^ 9!9bܐˣi7;OEo,M^ }RR5Zj#yNƬ8{ FF xFYjD %BxSQGc U3DSt(BJ]moG+\rE@p|f, _m2y~BQ3驪~)uIY&AUI !>zj1!|{&<谘8!zꖇ>"xjA0ÿ-p^?5_wfQy| $Ĵ{B|Gqg&໗<3eDw?)WJZ[!G{=86އ1m)6&$ߓQ_zyCwx9=Qk gޥ0|ӿOQ͆×.>.l}|ZF[ZsX.h9qRzâL,T8H@XV$a8-)Kk_b42b0RLqr$ʚS) Βԅ#G?xs?|x*vw`dkV}&WU>0Gpǣ$*Sj&pfȐ+Q02 x-@Y' zz]y^瑞ςf.ʨP;4LmjR& Ie ,cMSdyw 8Ns᭞XcdF\rBȵMXȝh<+ DJ['AH:X}1?Gdކ=6/(2%Խ@Qnw CBx6ި/܍?<9`];:`M<`mϸxQ'ӋLn}^?߭d3Fx̆4~?YΜqtQgȢlso$S. T1X%VoeV.{}J#z+4iV+h}pvp7!,F5c߯諟(&?տj2n,]>N'կO+F7+@Ј4a&MMfd~9˂|N`TiY#u+ 5gtوUdZ"^>^,'(ѫ)mM?s1u-yJߦdN'u |^|kRؼ9\lݨ"_!~7"w"?I<zO>WM/i- oU^o89i7EjTZ)220swY6\M"QVϠoM;޴S#&ʏ'[Wv2E|Gg&gm51R-Bu֝++~ĊưbsM;r̉t4vFSE#J9-= c`TZ:Ke )ThWxHT+- TQ{. T2(3ӖZ.$$RZ'+:At ۇ! 
z}184kRͭzg=i tض`Ah&3n[-J4cD.8ȵu%gяt8hvtaQej,83pG#si gaxp\oc6Rc>x G* ԀUzetGᄌ>pJF * Cb쓈_c axpY;]=9 \bSw^}N ARz@msCCpUpVX\gfoi8¨Ad+hUGQ\:7iNF2v $F5oS gNT2L$cƧSZ߶֩BwMPU#rwPsO=_/*R=uގ~u~V:"aO,Ty`P1JP*> Xo) z{t^GTy $5Sp!'VDm)O6BAEfHZt)7Z A!D*ya>xKeL3 c9'ҥJ;>AJR$/L+PLL I^U^&pv1fQixY@*)&uܣ#*fUG+ğNC`D$J@(5Qx\GE#\s$9g4vgJDoC_` W!$ &mgO; 0.j3RP)"Yƫ)csl#>Yk(uJP!>6@Iz):=3?U!Y&}Iicr$2nr8]J#:gL!xh04FJT2qqG[L'EJ.{ލ:ZrNM$0*%{%#y"$h-=HY0xC(&VnE2T~_19taӭa<M@,+ZGpгf85c2(ШR b8l:]nyk__a-bnj=1EFn*"g,)zwRs;TTy3OyZK +㢩I0R!r:밓;qЈ=9ǘN4:i4&>4@H^RACsQ%̫BN L #18-(K$FTB*Ɲ3rl|K't܇[[7^n mNnW'uw65 ivG,ӋEgn&Vњ5Ye 񃙿B1OHMmmDpn]5 KGx;wxwjۆN7K0w=ܮ9ﯻ0.Hcå 踍]nVb~孷lg.U{ۦ~k$~x7Oz:T_܆bs_歗}m22L`~SGV?sI' @nk7e¡E6~F)s@=Ƙ044#}1qv{cZH;0Ix[+Lgsިl}͓=S?Vm= Q崽5a/0"*H>@?P(zʸO'QFP\BQĤdCH0Svt1xtT&2a8qrolg<>T&Q[kDEQe)qL! AK9K! 0W&Rل??UQ/V{#Sh>3&>|ECBչ>0ITrI.P3oεLKc(oɹdD#[ӀKʃVj~= ec+1T `|V0i@ "Q%%}pFX0T* h#\ fNJ^}ʼne1hX|@8xnɄ>T̵"Z.KڷQcx}X.I 0'27u:6.>GzrklF`~.6VW9^dOqNoU9n ag4X =h_[1 B2*Z驎7a>&[$EM4EE0ɩ fU=2N=LF lO# FFۿAv}ͷea24Dʲٴpx[6B;>ɡŬƾot-,Q:gyKTp|+hSSptTtf{4Hu룜nZ]> @)HRpٕ!T xnr)k3}M}PRf6ת/ԘriCxEoREΗF9u?TbW*glXTi[uz\4'VuRo(;򮬷AwUkkJ _ȒL?=+};7oaCϜ':-o>/,8^kX9~HƂQz7<:o509fr1پדRs!fS)):VlYb5x,ӄr<> !I$PeP8xD#J1AuP15XWl|P'ʥerMH_[J!WNcɒK$u08[ysS^~cj GʏܷJ^~dL)†K{IIʏ|ꊡLЖ6רO`V/JjH[5(9\VJKɈ: jROғVHk}-\0o96%q08[ f ƅIƾ\rf.|..c7,{|eWyUο`.8;ԯr E[mQ }@bc@\a+PcMJ ;mٱ>x 7팄#sp-fY qNe k&3k$)f A".lw&o6S+sanu>w[`2V/:S5ezHQ3KCf |to7";1}fDqfIeZTS|$%E )r̭LҪqiX 5γ2Ah:ϤpuՁ*9')z)?-N:Iy6K:}yEyqIDk]-8جU@Z %#&jBy9xq,|8tˇv`>(lR.J||0iuxJWg6kVZ BlbnmX)ܛNCV-*p[p6al O0|41Q;?hEcoL,F R KUn ,7f5BOQO(q?Aݟg*?g?0HEu7G̵YW{>f'>4E1oRN9= YUհ_׳/ KƔe{mz}eܶ2/h; )X r 2 mH(ؘa-b+)(e9~I&.#wqjh3hg@.e .㕔TşJUCT,(-8E_܊dd7S/HwwMD횋+B=AZK_pˉZZlgK0% T 2s.hY~.tf>y2U U<p ]uNWLWGHWր/L,o'](/W'Y&M *0c_Yze?ͥE{S纺.@N +W_ Bܽ tLn24Z M+ZG3R(4}4MkVӳ%/尟\.[񼯂vJIT\=66Ԟ)"#?]{dxRvohƼ16J3V:#k^/Į>΍zͯ'WN?|VTZ\ZO)Wۿ/x 9E~LYoy9' 0O^Q&=FΪ̪|vdkqHÄ<oLƣpi2MGkGtnhУf])`kd ]uNWLWCW ѕa:zeUG{OLWGHW0!UL:\ƩUG;SBOCWlz6/ n#\:졩Zsxٞnj6J;UkT誣=uJv.y+$ U,ӱn0a*tђ:J71ҕzVlDW ;3pdAE+hNW%Y+a:UGvDf:BwX`t*/ C3릀]M`qzCJ}Czo{V 
wwJjOު\MkN'ۑZwY5yTvvb9>R**J$_P~c{t=(!'5z}0k~Ι ~=C׆sZO?C ^E^^O,v㟯FU 5"KJJP0KT!lseOwu/WoZʇ]IWh/+_\6^w7V yEhҒoRx!% Xb CAõRbbJAb&2RǢ16 p󦸘}+;M Ii<xnr)k3Es *,IX%k#kF}_۠5֪b$ Zu00W@mF (O6ծ xWAyRd\,5]Y0PѨ 7_.%Ay;(T4|Sh|V*&bV,sQXnՊ U-:j[(xZ6o5X VȩVʪuɕޕ0QIT}6j8 &KuP6v-6T9[L%e2U XO*{FzYa)-%TZјUd4KE-(mEEyX^% ~HMIyK2TU\Q%X+V l8+2 l*.Pu+9+`D@=(Nz"4JjiILgPQXu2b-A.,%;FPZ{﹤Dў<˼,[MvTխs- VeT}\E[PB]Q[=ಬR uWh%Wk@ e º7 Da( VCH(PED&TDG#hFg-JC(ʨ[sV ,,xu0wD `L `iƗ)B`@YLu>J@Z--q3؈̤( pqT AP{ TtgB@Q"Ł.0͑EUP=kyGD)J6 e 85Wڮ+a/uYhQUDI)be+% Lcr^5 5$DjeDPZl,@E@H v/UTy V"5_Ƞ aPnir3RTil4ICQŘ@QE!i$% !pB6}fC|mgPfwUhYsH7t#P nU]-걬.0胄Cka&a-A7 /ፍPMJ)h+1$ B+X(ITxmBV5 e4/! OݗUd 2inuG-#2p UBN5~d}%TU;+Q%TB;뒄 Xut kR([|Pm!ڣjF,F,-;="hyp ( g@ @$B2Б5;5v tJ35)JL T "Q8TDYUQõ`QypTBHcF8&gوQ [%VmZy4ݞ+i=liVԀʬ$r(m@V3JK_U{9z 2俣 %0lӨNJOAK/lmRjh\csm.]C,ĎՃA6q,!"c2|jFRl6MBL'KhG-_-M6Mxo1\6_]]lBMytb•VR*!#V[}.Œ-w/^v9`EZo\ߣ'm7{Viv6:_M{?I#zhsNe~|W=9ֹ'J׍hbzO%+,5Ƹ>qFga\+qZ/!^*}tE(w2Еj]`ﻡ++Ō?u"9ҕqBĞ膮}_*q duuteq:~S7Gx|CN'YZ\/+nZx.*3vj |?.VV si i ܞvݣH[] p,Ӽwu9& fu~c⭟z\js7C_>ije}|")ZwZ}ĕZGxl/>Γ Rd`bSyBmaR!Uc}>[BvfЪ H gC`c+kc/tEh:u"12]!]yޢnF ]P*9URAtCW^Ɠ+43]!]Eo".BWN^NW@ `Еȡ;_~AOap~Y: yP:wZt+tܡ>:+z5"N:]JtutQtDWL?tEp酮 {tE(d:C*:+FwCWCOڙ~2!HS1vEp]7kW6PjeΐQS؄;BDafSPӷru] 媕\[5r{ ZS&sBwq:&E7TǁPZ1'鎘u\C fcbl߹o?%dة8d?!d1nc3hI38d0*Qٷi\~)]_K,=T/blݵr*oضh%uޚ@t'"=C7JzZ&`#)rS/B 9ANu'Ϸz6S+tA0]!]ꉮ7 ݨ+ %/%] hD7tE@tœ*wtE( oЕȡ7BУt^Kap Bx: M@WC/EPFuDW醮 (LWgHWJJ+MGtE) ]Z{P:s+-=+C7tEp셮Kݫ RX3+#j G]\{+BkԩΑڝ&NJq[@s@'Q Q#* 4'/* gQy,SCrXcl)#=E9b;z!0<>RZQ%]U\zV:9ԼN~v3".xktwSVۼAiiAK7gm9*v;^gq{*W__>3OG2O@f4CNY b&~~oxCy`2]z ͛g.~aj7?>Dr)L- {|p ?u?ؠ2.N?{VG ]r}ǷyokmS}^^<:Y eDJbE5cetk%I)j}~ &wѲĻej{{(7[s!URM5Pm*:Vhmq4ҍflE"DUʨrWyOӸl_?O;!sڞz0BrHWCdO]z7#???W?nP]޴ 7t˴^rolDdެv&.}}+}|^'ݗ !OCxn/ћVN?]M\"JyC.sA ~~|j|`I|1ǟ4ۈɢm}j,i6#e}Ww(|ÇhV. EtJWyt%Zn( 9K\$wϥX҅V޶tE(\AvqZv_k[DpRݵs҈,0(S%FT!K7\! ~v;~5Aĺ.n0.!:4WRb"Nf#3Ǫc{pd}H+D[d2Y'P5(*ɯU-?{׶FdES6. 
ѯ4oOikLjQ ^dJ/ATWGtM(TU95TUT j?&W]-y ՘&$p Ƴ6ћ&U#)Q[ؙ!v~ 9ATΉ b,~5o{q4MQУ>җ69數pϽzpA=]#VmQ:Mȋ`v cŤk_Wv11T-$AWȹ(k!㊷IJ=p{< lv-w_ٳ 󡍎NNK-[m>Y}#RdM|uog‹F?{݅Mж-b!*wQY^Ws }cA9f-% 9'>]܀pc eTQZrzb'RGrEUXSrc(>f.E)ty KEiu坡TwlPaFk,L7SI9G PaEeVxob!6E}w݆sr\b[Pӻo7m\L|=MM|ۿЎ=awK׷jj>ݢOyj[껃shYT膛Ԉ[pN!no⏴[azw8=>٭d&y\t(-i9nw=zx zhA|͋osW{+O }uq~[i3Mm_:RV&:dRTzu~y*x;?76BC[~F|>?4Ҽmn>^nM3EwP bRL$NF=?;R NF+|WnBfdrY؅oG}Jtk:M冧fFnja@ 4@T4d:hBjU̓E$;%IFK,$#7ǬU2m%]i3h( '-y\$ψQ# _P^۵ߏI0qE&A<{cARQ|fmIcL5B0 ){8j(\Lq,1ZoT)TȺ훴z.heU^NXn ŀ+Yߘ>LNvqi~>6nvEP=)[,f(sZ:|d(@B_.YS)*e]c`"YӉ$B b|hy1y`9B(23F-0)&YrQƅS%C fr„Y.[=UbI+5ST;m8nIr>j r("'੏Fgh ,%ۢc-dS.am u}bE3R sȩhj6Mh#+)48%̊Cu~0U=/ qFYՎWFxZU:c@AFo {(*ZHF_1*]Ah3ZZ~Ϲ2kDEmlbz֜59(xH(_9n{j?> އFW_>^}.&9v_K[SZ$ gum<׸_q/tosJQ"0XNm+$K ZvG5)cJqm'C ϥ<iX/,ٗ -=׫ɬ>]}XWwҮ͏_.F]]=<o^tu% >gH$X=ͬm,(F iZ}֏hֺO7` pZ*ȢOaλ"Q;m"9GE2fE2:$j&"ɛ#kL5FP2V$;YV+G.k3mR7 ?z:h\LſLf[쬼"[):_x ic1Q$c6)ǠrVKc5.ըXj?ޙ7@/z/SL]/ sضkE|02^-6h.C*!2Jqx_Qd!?`p(-pep]6$Wx qPnӐ=8@WfjF:\b$|g'vVL1>hǷ+YQC:AfO2g>F/rL`ɒ^y8'ܯrJA.H 4Jm-ʙ/)n~{$n3l8WфM/44ěOS DM=HyZaHAL}OYDRQODKaG?;$a%iKD*ul.٬LyD@~%Wz?&_>wCn˶Άbw7-#o.-k|VL e'Uu}V2)^ekc,tY9n>=[gD<.+̦SKѐZQઅtfT7B1Z#:"O z\з=Rl?^==>ߞs뷯׫EnvsS*M6m3bvo?4U9b? 8$;i|_ҪٖLT^_#aI^t\$R-Ƀ W4Ґl ܵ淫9Q~/Ygz8_!SH]}`f9yXF_˔(9q~V},RCsH|3]uUuU5WX 9lEc쯩;Hp]C kċYyہ!t.fj: c~xt%wW:6]x`Hq~ՄyxAU_u/ פ)o7bRLpQ]k\;N #1El؜Py) ,$lzg$JYVs-W9 >)K}1L!e ]Ö W?YI,( \% CVg4,0N&rٴV}w_A[<;ĽYI]](x{[h\M{w}'MGV̟ b}4#NŨ#J4Zg1j싧8@jу;b7vN =*(+cE^QeʘѢ2*h%I0ZZz@ik ;x^. Ռ &񊉙$  gk%DHqBʜM&ʜeβ}fή{ e 94[Y+ޛB1$ و\F!s&@Fh+MI/!7T'z1MDU>׳N"J :,EQNL>(.?hC41$I=S1ěCю$d_O1 k]%M{ɺ~ҭr!QV SnܑrqVI5a5ߝe_ (R`R715ZH"lS8*h)7|_,'ДTo!=)I&]Vd-cdi0GݷU [88>gR$8Rlx JlS/d!9ZtVi_py8ӚMbu'}syeZ>?SX :AB@ڱi  xQIgs7d^~EG:lCtt!h 6}w!fL>5pvVjHeJzvwZ!\*XB. 
-|d5*7 fʅbx=4Q~'iZKӳC l#:&IhS^'aM6T5RT 52)RO/XC)ٶ4uw F+]Cb T&6 ,-li"yvLv"{ m݊*.kG#:-:Wyї|?-%efwNJ@[?mF)ZN[CLv[+=Wnk6S>;PBXHjdoQGgհ'@ yv+.U T (í"Ny~ WƧ*Uo}TWMxRU֟qU'uk )Bcjd\~V*~˥Nj7vņ_cw|om۸;0͌76?D]Z]gO^k㞯8lTyِ> 3~o:ּɳ.ܾl*Q$cDZ3AV1)ZLVׄ0|zEa'8N> O2ηZBցht"()^Y%ÔY05jjmeaG#]SVFPQƺw hm2fe-AY^ؾdt,N9j<IazfK]Dш\l/h-{zC& B$=fEAE/TCXX@B&hXUCA۳gW:,d0j"JH,"gQ"+b肮y)Kg|IY,w9֓`O0GPafŃx!QD}6EjfYYmyMZ'lГ׏Wτmo ooM@Fq>C{T)0wY@cF4pKV7(`3M3ޞ k>~82]9.]Vn(7Hutew+;վS֡& ڱUEKftJ:Ž*`gGCW.ɱ%UEy,DW'CW)cqDtŀQhPc ªNsڍq+\5몢ntUQEdUfqK$//z`a'rqsW2VMкDmLo'Jh(Q4*X~E4JhO!J#/)J݋fn$$O3C#׳lJh<4tit) .㋈6JŲ]+ocZ먛;L{X^6mk XZk!gt7f ƴǀ<˳5Ƃi0JӶ)z4cc㉓3\rn,tA]]D`DtŀhMJ:xQNHUƓYj ]Uf`Ei'gۡ+|Ppnݱj7nZ:R.(O 'wAUg{)]1``4tpQX UNWn؀]]IE GCW hJjNWN!apDtŀǺbZUE{PJ[ʌv4tUh/ROtute!gK3[@[9ڈ) |fw}8U *4cb$昖:b7T÷*Jm'kJ]UhFCW fl*Z9=RDW'HWα<-z瀝UKz,thCȉNva]1` 2\n,tj0e"]dǴUx2WAcXvCDW ]3΄#ǧ㱋wwB#wC{n(vZ'@W4վS@UhE3bRUE b+k]1`r4tU*=h5*J3YWHW Q j'+leEW IX ]1Z#':AҤSur,thCNtuteƹ;O^(}*KvxnvF~tN6 xgF;瀇]&#I8-edD~0va` bkiK2I͎lnT$S3ݵ] ~Uw̌|^VFy=p#^#KV-B{d,i\[Sae;A'SDW 8,g| 7-nLNWօShÅ㧫.BPBW.Eedl R.2K+Ek^])c;eӕ3zg^~&՞pa~h@4{]]%CO&ѕvq1tpY ])Zʒ50Y ])]@e+] ]19%ѕvn1tp=/mNW2J(ѕi1t,F])Z{JQʪN_M8#{.&dUr^ } &KZ`GYS~Jh}8vV >MhyRZ`+VHoERXP`?nrKɟ@ă=Q6$g7x{{^ *hT 'V:A6%EȆmX ])N(Z'NWүJ%(Lv1fh9~`֕NW̡WS~~G {h@C鏬+ZCO/bJ-L+u2XYʊMfIJ/wAc+E)a+$,X#XtJ$,I]QW,69st%Y)ҕ#JKs :'6advsq)hٯ]`ב?dyȓgk_5FH_Zdgק,-ZGvh'8]SÙs99RFs=`ur=@=~`2f1tp--:<-+].toYJr,fGiJQu)U%Aѕua)thSi4ut< /I]3RBW^])Ja>s+9=>B|?>j/DPz{Е]CO>z$R/nK+ d僇tu:te +bJzR);]çHtu:tEѕNDi)thJQ ҕ` `Of1tp] ])Z9z3(jO[]i/[cCW@[G t竚NYN|ҹu_ ǬO'giYe4&f^s]\^HwEg#cHŰm}{wٛ͛7wA;Ok?\wv!}}?v*Q&R}C%/|yyiDDјw-w/77W 8ݶi:@\/n?M|}'GcGȷ_|+*o}Mk{_|go7 lך#BHm_c7+ī?>/,@ 213p@/}m9pc*'3 /3;4!GjY/@o__7oukpqt[vl!_>~t*z7|+e)Ơws_>A]u$W׽ͷ |ݗ \op%{;2Uc ߒ8Jn ضKQ6\FE[ ɖ\bgR .\Mn=u!lmlF2i>ח~9ƎTgn|_dI4Is]o! 
X6S(r[EC$ZIDzϽ#Bm4DJ7V1D:Y40G'Ѣk#IJo ׫w>1Xa8Cd4;\dr\\\'ݘ$BgSwa'D34p' jhNHTcl|h~'"̒;D4b(Fˈ&fK `?O@>R;qnc0j!Khw`}hxh( + <4IB~o?\&+-SY(/`TIw-$*!wys鬪L/Cm>s3fjm38E3ɍD%[C5+ 8 Y BJқC:_|ӮćE ڢg:±e60|-S`1uV6PŅ̆) nAhJ}/s # D` HJƬG6lmb$̡m+"]hٻޠ$*!yd[X/H10'؁~oktuƆ;"RIX;#ĦfC Б$xqP, |#AIi4@iZ|W T7q)n3$Xy^E=5(!ȮĎ0w|&͐jPo]Ґ?;1l a̷.a iN(!2] 4E\RA[[K'HI?C7a:?%A0UZ&88)ԙ(͇ :7 d&N< pjVz bMEwg %RI9n,\ B!Ȭ=+& 2 }֑~: q*4Xv[w Rq&NRcFTݳ. RSg=ord0+=dS "&Qb;' Y,'@H=al! l∽}Ai!}p0׺~Ew 4tlHBrbh>uU%`J;L'"!pBK.}G<}gz_otYs?\D ޫUzxԭ fL{T^%y:dH}136u 2j i<#y0 /,,tA\x/ ) >@(&9 2e%+#`4ӥi0Ft%+6CFꖫv<o 1t1puaV% 9ա5ڞwt]b="eDEKrm xC$*j]J; 7k ) 1 jki, mM!g8Z.\1y y@'^oBZڝ+*brjFF^<-Qb8#,.@GF9LEYIbe!S._Sd~ȃ 8»x8"6x,=YeX ⬈)flk'jj,A2Zu_@+,&5 [5լ›Jk2魷f:x$c,eYn$lHu> ƑI43zcȵޜ6vٗi?{zs?6~Ϲ^Ivzvc0 憳===k$c˿{B{gSCŨծ[sz5Dv s )kn b P/3zvz4\}Ӭ2#lSk8n0Ȁ=T9P.ϰ 37X;NJdtLv LT0bk*g'@)aLYal+4Os+IēN96H~ wtH.P j ᅣw(U[T1@.>JH2pY/xp`\3c1&dzu m3WbdZH5kփ*HLJF]@Y,d+Z=K4ݳ y~58ڰ1Y]>@z5@R0z!@Nk (BufàekV3 _ V M F:,`pDzO^'aPJ↑ g…0ilp1:?kKepU].I!b!c-ٱ>іZqIp.X,;)Z͘-, vuuE0"NZP~dAApK7B\]nRult˦C._}9}??x=Ďmd#)ָ9o8{5 x?IZ?^CmҮRwtWW=5Gg^~zu?}/m>ٿ_~9lxeDk}?«LFد/Ƨ[~r&?vv=^=S=1_^۶=aH}Mf{7V¹sW~AP*;8,jszG_(zIT@&JhZ+JhZ+JhZ+JhZ+JhZ+JhZ+JhZ+JhZ+JhZ+JhZ+Jh&@cw'v'$f:X|5-KJ elɖ[UA;*됼~Df/j7cca3`tNf(. p=IߜiLl1\ɸ28A&Z&n֥[L9KS}3AoCo4ʥE·)|π{T+'Ja#+RiB4ep/hN T4 i(MCiJP44 i(MCiJP44 i(MCiJP44 i(MCiJP44 i(MCiJP44 i(MCiJӎX&!B`]5]RiH$4@)Jӎ#UXQ J T@B%*P J T@B%*P J T@B%*P J T@B%*P J T@B%*P J T@:^%Tu#UC tG p_[[G%J @)*Q '*P J T@B%*P J T@B%*P J T@B%*P J T@B%*P J T@B%*P J T@ǣ5R_4^77߿1z`%vIp j% %'(\:ҝCK! 
w2\BWmR2#+ ; Xv2\źBWmH#+c tj_tp5 ]emFc+k-]RU\vtQro4˩Ὣ[&LWtZCW4]tw+tܦgDr%ʀ- ]\AtW*etQrtutʼnv2`);CWg=EtњUFiҕ p!ʀ ]et%tQRtut%)#t03t]sWaHWIWF5os6.կ ^Mgq߮͞z)*` ^zNx|M1NfIfg,)-+(?gScTVQzFmNz1vP ?6}o4 tu:` 3FS.蠐J{?֓EJ=T`?yQaL(R93J䅴AYP(-%xO;{fK9I7HMQF\ gN__(ǽsݛ,6s^YUZRv)x&~IZ-*-C.-2JI(0''#ZN,SL2RJ-+YYhѼdҮ}nUvX'vȴ`jFiΫ`$M:( ,:ap/Mɋms%3}wsneI w'ëuorQ|;v :ǔ!u,),l"|!]d~,.7bKW n+ HDMT$HN'UJWI}M+W+oz*IإI XL"4p;3ZI%n9IM]kt2Z&NW%n9J2ԪNm/ughE7dR"]!]YƙP زDt2o/(9nrJl0avVy` .;xk7\vC)ZFWbHWmzʔeCt+jUF /rm)ȪCth-o;]JN(T.Pp9 ]eUF1N{%;/տ rrv'W"QZ "ҜRcE'쨱=Jn&mmId~ ia^7F_x)@X 0UR\ %TPaV_95B!wӟhq,Zwt}yW;y|pѶ[$z$r됳 g-Õ+ZFt۝R+t֎YcR:DW0 Wt2Z)NWHWGHW\+4ƥ%;Dv2Zz*HWHWBW7нU|CwkXW 2BNW%eHWGHWr)Y{uTssNHf(ۡȵ#1 r`՟~I#7!4⧕vjefJNl5~39ađ̬g~Wwm{ (dgp1_wl9[\/Ukp=IV6rxѻ?߽kI8x>oL{%N_EWPVIW*p[kM9yӿAH}?^BlI ~Z={~@#MI 7\dg0kS Ï*ȣ>,nݎLo|nqyo> |7!knCƸ4ٝ⺽}\qٙRꤢ``JFHD΂dƻH,QM"U%%2ߧ&s+~CXXxu6J,0_1F<(%/I DL BХ`*QfsEgx&q.{Zi 'EnY9#Á[3S7'Njf>Q/ճ Cws,l(֫6N VI;4=Y|'7X ~}6 )l ׆6Э.}(tr1K3ʍϠ^滎곻{W=vn+!@H6]j{w~;Wj9q?փ䡂HTFk'Kl*p3$ ʅpɄDRQ{KeL>S^Q0Qp,;l"̤Θ;T78O/Iսy?-PN2SJZgy.JWĪfw7v}oc[yh>?K<ۨdLgqa0fvYW =8']X] h0wNnNbrޫ% 2LF5#e< 79#xs^Y>/>JK4l"#8)o6%L&~hU!?l:_>g\ TS١5 XάfNhE3JFpխm"l.7iW1DW*ʽweҸ`Jc=NtD%-mzVD׳ J N48ihHV{e{B$uJ`JJ:'!0kct1*!2*̫]UM~DBCİYY)]10yopq05oFަҲ 8Ensx zjz.oI,3ogYG&MPkt~okBCdtsL} 0O9$}0%nʟSԏg9t~z6 !e )˹nc{/U3+[伖q{sU[}|zUmlVgD-Li.H,x8[HSN9 |xSjOL5ءY\p*@SM*FYN&ԥCq)yx=h鄷dR8RXI'??gwdzWFP,?(SQ%!4L'|R^{ B&)+b/Zi'r))\:ϭ$ s&AUI !=#Vc|bU/J؞ҽy:(BmL"ӍEze 9›~/^-ݿ7joF78<i .agxz9[j{YGYFDOg0"Rjz\6O>ٺݫEW&fgt!**(0=0UA5^Z:O|HO:=״@I 1|؛꬈J`$\{;޾<4VY#v`[k3hli~%~#S2mJ`J[ϔITsHL`!UbqDX 4k_! 
^* ZS#e 1`䕱^T8Sk I)hЖ`I{+{}{J{z I"%l)MGSe*ް'8z O<Mx…rAYDF8Z Oqɘ 2 gDVypbXX>+*}}W:A)MV |d23T"J3x6'WKF7C5JOnwM.p^sfKq]`=(ީVתپ)һYH+1zp;FBMZ0 >TM>US[x:͵&f(ͳ7Kf9I{5kyЫXCq}?}|+g~6oze&oNc#ykU;n㵃6n*̷pcrgq 5 n[kKZsf z08R[ RȥbKuQZ^ڃ+84DRDˊ3W*JE+R/qf>6gHBؑ`$xX%x?d ӔS=m2?=Njz1Ioe6}bO*6S0W߇X"Y;ȨA'OdX3*aI"A]X/ .۷xާ{x_7zްtස}B h+ [PZkٕ +]R^.̝N3i wQ .|6L)R!R0u'bJ.z" i:Ez {ܒQcd㶅i4$Hg(QLkJ$<R'_Ѥ M5p($_L)o#_)2f(XQ aE>ii.ܸtע۳ɍ=q׺ރ䋵Og݆=ӮWyUIMQaϗ6 5ec>jSN*2q &iq"S-MѢ+)DcYۤbkTm% Kr9Yi5Ii Tkdl6G4*Ͱdl-.7`᮰F)yeܞYiv9w?ǧ'7ؒ*/(lu$6,T"xW-P,1F$Ez c^]D!(*ljrr'Wضb-1gj8#v8k&澠v3P{`AR DgFkoP;)@%unv[u6Bff(訳cIYD&#z17P)ifG{4x c[DTQ 8 }v۬Y$GP8( CDJ. RdaC[:`8NT8Rf+#[ &'!Fp!U}u6mqpq$0q+6*a2,A\S~PDV`M(.IǶxh!l;^27U# ! ew1LNy׵Mߡ %qhDCuYB |W . mdoכ;^C=e xQHY&ˬO2t)@ѕ& {t!Y%ȧ_%|]w7] _σaǿ|l3[1T^]UͩZϧuJز. Mr'uZN],O디SxvAJi4^8吒Xnr^ƷӜF>EH<:례"64䐊d`c-YeQ1Axd(6 ubS -1L80A"@eB%&a@"|c'j<2e(vg/(ES>}s$κ4_7';y-+!@1u7966X]LRU]6`T^Z_JDAdhwФu-k[QzEu \ x{uҵm0`o,8e]_vu #> [ Xމ tuKG]/7Ϸo=.o[D~'/I6N|?&COk??inף]yҟ4JߒȬ]V{c\d^E]ՔYM%+;ebΧ5Yzm@X P:LbP.e+ǚR5[4뮼|܂ΜMrĊ}[RfB4XdBAe,^eTkPp4m >r|7rqs ڽ1>qy! (^Kqw3T詮(ZS*]SIQAo WJCXcB7kWkd;0aZN0͙J<(3qoXxroJ%020c@(QS^l;;GgF(n9Q|{X6Vl'ShVANŐ DڐWKmɓcdCpʃA?,lڠ2IY*%t@!MLʌwt!9/K"g$lvJ3eCX,QPkePU3P)rA斆Fo0h !) 
T%SVk(cYIڳPEۇO׬qN&?fy\:y&Ut5˳"QrOя{(n5j_\j:Sv#|G?-q5*d=p9ޮ/s_s_[BhiXUoOn-3YdnԻz8G \43z,mǺSG__kYu)ЇSp}pnvhuQ:hF}t$?I5_w?^4·-$ճ7glUOH-88vVkfS_<}:c-?[F+a;JAR"ʵWqQ8wAů.AKQz_q"OPHn2|JՈz{vV>~E^)gURNڭ=*oCDɅdkW$:M{^}tM|@Ot"[W+s݇Z%j\H]U-eEfw~U^I^ 3zG_GuxGKOs6K}>vKn*W7SfwNxZLbr0!x*6?1db{`U$UaS€,@ޞ Zמ?fw^ұߑ&*e;m*21$ϯ1B<})nL&*nʺc[7]om]{?^\pk\O2R\Rp" ,d@t uA:-iU" ^Gd6YX4EPhR#Lg8o5V[[zI"h|t@YF!`<.xPG>^ +(8Mx"E~ Y~ZC& fmБ9Г(E”0!Fê l8_ Z~*7Ja5%$ k[Iy>f.@S",eJY9VC8 (~0o!)Hք`A䜏)P*\4uXYmyNS `m_M~ݪ@foowe3q r)YJj`|9iՎ yBŘ.d.X0&;-ՐVkp+q =;;Aq]E^H=O-,v"HP$68maޝ@-1|ki 減wgit=4K<>wi]7Sbw=y8bGeRq]gK (P 1lN::r"vV#'kPYfGu24mv'EحҞ&f<ΐ6Ģ*Ԛi^"2Z#GsFSDĤ6e6D1&Y@&ecQ5;]l8CFB=\JVҿ{n58hwʱX/c/D1lw4QE T*ddPP%U=]Mds=[-vxx?^i jĜT O`UY\I _@MMlD A);.7W1> wfCٺ[_Ō|:|cqi=q7?xmpx4u l(VR>D2eZ 9ap6s:Hly51A`PѩYKJ&%Ex,D-bkM*g_D TNq\V7FW#6B(p }qY]ms7+Syd*]M^.u|rj3I-#o_c8 ECk>Ȗ83Awu\ E)'$WX)BAI}H[aZl]CVzRTڲaG,/ ,] u3+q-Q|_Ҕ8 >T&0P,Q}&|dh4ωbt܎'Wwc52|z~ŎZMbTY2.^4 1ًn_]9c'͏ԥTw%:i[)aoE*UZ9pXz{+pʩfc`!ήɐ_UKzQ^WѵNz ϻM>F4j%V2MIs1@L'6,`9R"6b:rvEJ XEYd9|?Ͽi]ϫu2r[~:kGC_<ۧ6Ig~^"//xx[1驗R(*lV4<((y{iJ9M=MfqrM}stx7ݦhov \vcl6>"zA]?ii6HԵ/&[cʯ@4}xbɹo|ll6O p>]vlp5|bH~2hLVۧ76̽} ۄ݄cf1MioM걽zAK{`{Gϝ5?(<6zl[~xY4ͶZSE*Z:fX"h+CtT*q5 $zq7=__P[/xO+ TBA;e6sY\(LsĒHf_'q!<}z_GWROʷ^ ꅽ>6e( tB:*~B!_mdDs9Z䂓Z\,DKZHsh5':{~8.Q&ދL<9׌XAJhpG"H9u\0pf &YfڗH%NKƪd,quDI%U pAJ2;ށySa^m"iyT .xDJU>6[#hR41 DN\1!ʣAEQ*Q 4ȬLQ8z9[lN*v xe5RkXKJ)F4q%#<ⵖ.@  V#sX8M_WtaZoӂ$qC468is FXPKm A<ʉJ/Ql+PX:%:A/z;~ݝp{#lRMV{c]1TU$r2 a{8B=ظ hb2ܵ2=Clϱq%SCl 3(UGDW8p>h;]ejJ8ꔀQLWգjLt8#J tSPR# ORtpBW-#}^ ]10# WcM*TzHWPIU쾫X*ժtQ: +a$7d "`U|ǡ|HWrZz .fNn*mn껵6hcfDEgWNr_U5*U{s}G֔՜VDI8p3IA^GEvW7}S r$bSY cyvYIg#;o瀏~ n.rVYϼm^U92YN.TN_ q8fE-Ld."I2.R؟AIoȀxe2\Ǣ7 ZIzCF7@A昜:ˣ ͞WF+{Q+ J]!`'xKUˎFh9;]e t1yex-ܣq"gtQaE }v֦[Vvi['9w7IΧy}5n_e;i-ku)hAE,{|r{Zc>򭟘gaꛟ_].<Ivr i!҉+WdW%_$$JhI6BI+}TjݝQ~Cug^!XjsU',xgS:n`ևuC:A<ۭ^ UښGۯNswdk;*Ǘo=TizWK-zGmLiKzi3koǺ;M.s6!|v>N%2 \Xsj=2&*ݓK@v?c|^gX5>Iv\p>M97Z.'iߧ[EvȍPp 1PKwQ ygLP^*YncnMYJ6 ƃr}_"Uۣˇ8*UN=>j.gTBls|Voc'<߽b/dCҐ{RܚggNr'? 
=?grp|-JUWqẲ" Gd1P~ܳ.d9B#1*fw3;q:oV01J #HCeD?4@V<긦 F|dTĕAHc0CMxXfs-1xΒzV8Ye18ۓU"'"Y*3= MQqTVIyz9im9/?^s>~ JSDTڳD=<PhST 8#JːGF|o呩Oy佰LR84dJX! .l)nw:KʰDC*˖1Djb NP"q )C.ւ69k58+6]m\@=(gʪU"Oe7Z֊pI&ED FIT1 2偠er01вD ڠ\[5eK 2O}D3.9& j18;Tyq+9U'{^O}M泫6`C][÷ srn~l=S6rjo0Z^?^)+o|ឃc2!eG| D *^D:2BH@i 'zȑ!l :EU./MHRXNW)ƅfƾ\Hraw W/fغWj?7;Ѡx̿rƦhWvCQFœ2a(#ȥBg m,*SĐBQUB·Q܈F,6?eDV#>4.CU p@4N*D0Ι^!re5D$J 4A 2ϸDjԝpYlˋ0/^|hƃR` …MR(ъs%&@< @IH/>/͎}PC>N檙Ox཭I#!rFF=7D?>Sc ţ[9qVw7|zqb0H qY#HШ6A>] A׻ A"$9fd I$\"ɻ(h@ *".h\Be>^ċ*ž$ģʖe״qP(I!F"tdŠp18;l/u>&\`=g$#op-6?{WǑ ̌̌`,`5v_w IˢIMʖYnݤl(ʮ8Voь ^qb*NE9`6Ԙ X4*JhbGAY`*+ 妎yC)@ )^ޚ58(P{"ά86؍ކ LGRT-ZS@kp>EehFT">$Uۈ)7¡r`XMUVǒS(&M5$} |j*T,Y!T*fCdwMGC44-_ [PFS[V`*ڤ]kR 26!;-mO}ٲFN(&‚[As8ޣAq=hʃ͙5+Bs]n](v|E; y>uцk* uȶZ ƖuQekI RaVFb-}B4Y4d[WmFS%TEaZMY"};L@y@>M[Q}9wakɖ,>cen J&TZ$&NUJQs񋼊)]{bu鉵F]Kvj VTlS<}&*_z6x}a}2/2h]5> ɞCϞhnbsi n ZRKUs"BqֆePAQVlX%@qg .8@FwcG5%_7ԔJ/GΥ0Vn ́ ̡qlU[G&_E^j7:p Ÿ! CD,⍆y){WݣBѓsOf;PI +ŘvƫUAm*;[egf/LëV iB#TVw?suܘ[6nv4s]{/lw55mq #}!-ҜCyH|%w@9[Wi@9CEs:.\<ހ 'MGfA<0j*#/+ !Ȝ>dJD_c\HCɪ:Uj5V[Z:j&T!F/?+S{FP4pMCO>z z->zA9a y| &\ͩwtE j25ت͊ jn{mj`Qv?.F]jkz;ZdJJP8X mJlWHc&A57E(}z%F Ǔպ {4ﯺ~.5o[`o*6֡t$lԢ gxfY1̊CKO>@`P^brEH0iʷdtԳQqwk̆mҏ AE]nChbe 6P$g ^pgV<¨j Q U\u11u$p)UtQ9Os|zcUF3k/M(BPR6R1:d}-ZNJjVZœA?1UYh$ pFYR/9^+&r mg %S0QZa):=K}7`܄~b> Ggޞx?= ^ě/ݾC9ո&tq;]GI)Mjc*Xo n.{LI{p?\I-~|l0\2g~񈴤_xHvr}9Je2* K',=d[^)rЂ;?mx!6 ;?yO? XBQ9l~= Ԡ޿q6QuIҎG m?r8?m~|topLz2m J"66< Z_-9K:O&BᒯJR4]q)_MXRr|442Jl,Ha;em僉bVOeSj_|xMltt!}k[ީ_n=>p|)OI\ܦӰ?OU(Z']|ZV˿&if8?zi}oy0 w?y55w EKt^Թhq57]1Kmp\c!?ZgD{.&kÄwyŦlBͥ> h/o8^yox劃߈ј#6,r &nx.fzh1] *q|.Q;Aa\t(~t /&)dLNe3|aHQIG 5mi傆X3x֖DSΒ -֩(Ɓ c@b*Mƣ]/rzzSUoN^WM.EpfIsTHݿBz[G3BMJF^_f7ڒ|/Gs^QC/?~zl`,֋4r~i:-&+OUD.FʰewًbDV9k< |%c9Ь_(X¨]\z8n^{q׏Ut+vo zšߍ]a/E]LϜ-.[޿}1q5k^7ǹwnt|~&ӽ%\7<)Ek:}i&Dbbu?] 
LđN~y'}a 8[ro=Q.O[KZ17cJ1bElm'ӗzst9e ؗllF{eMhڽ)?B(~֠`2%c,q>LкE}(0Λu15M.)m"8,$wrmI,gJ x|IlM2E]TP&6>A߷{\P5{)b} mi XL #R)c1*,gyr f-ڪZ2SWlsD"RXT\AGZ6&_ / T\lyqNԾ-+BLk!?~ kqZݠV:;hoΠḀ#61`m6eT<db~tv LΊTl78'@C̴ \Z;hXũ3U佋%g Epgi)_BZTE%] .ޢ'+n7qɌ\-%-_,t}:Ic[wGa ;rX.12L[ɍ״cFw,|N],/|xZ8z暣D?~dٳx9^^SԔ=w#d|M"YLeqF>`1MKfMHWfԗ~ђ]gOnk5qO[_6867LiyYU:t|LUinVjw$卿l} ԱMʤX,?ڸ@-U[ " glP޾Rn/ ~ g Y(IQ/tV[d4A3i:K蕌D'&l|PxcLDc: O{<1}Wk븇/X [93"(ME/F"mS,GnS18- U4irޑf%ϲg-7rvZzZGT{c=;>$t X>LF~4)oNcy7h\%8v1S_-o,\ի=t jk}K܄@;!-:EFWICE:ei;)V7, 6O-!ǢP\%4Gͮ7z9iS)~(_nC[ꢍm#FEOn6z|EhEEC_(UP;gwJßI\|(]y~|eק &uu/l;:zm E4>_u7 unAސKhO;D>HZӼPW\1{'#rsu oj~ѵf{s5'txwS1F֬ꣻ^lokKp$~ix tP|J@.Zmʺh,"ݘ%kOeh`u-e ;/j_04Cm#t/W>zVѳgֿ}CE)3 L&(rVroI$Ui" ҺYVʗ.66K5 EĐEI^m}7덜ǝDwQZ=ßNwP{<+4jQx?uTɸ/p%vy3ge{P3XN]ZZ(/d5e.J8$dg rD,˲Ig IWc(L K^5D˨5H )K>&e$3V "9cddmsٞ5q]gCRbM'l>ymfMp} T /Uo`2z]nJ ,&e`Z*m zń.~{~,W٣wV'm.Ip,*8t6}FIsr!TґP9Kz "oˆXw~o+,K#jgYFs4, K;Ks~, ,`;Kȏ  \5U<\5T \)oiC%tKϫ6NuWeCl4v6L@ԙgLVa_~eHIs꾣jN1#zq˕% o?oGD0h`+ձtVCB7 aCIZm%f۫D?vw7!F_&4ttr5Ό澘i{S}n׬FreQ_{Ĉ]L(,NדmtOT|6Ϛh.O߿ Q<.K)6}* 6g~G]odl*`l4,AX[©d8MVȆMc@.uk^yK I]6]~YQ띕^ӢgVK)} ںku9`?q HESh,"<Pա[4JE-#*sDpE۶QU!S5WJ+zp%#BK<W\<Pk_LTڍX\YAU!h6qHWZWJ=՛+olmR2l͛>*fCwMuফ6b|إFSvB#{ּ5k*<$:Ϊ?ׯǟ:{Z]F!N5^ocXyCzon8;ecL%Ҟ>Y!T&m 6;aq1%6 PEc1Th U4*Cu':=AVq=Th U4*CPEc1Th U4*C۲7WFu84W9q/Z{{8q{zSmcT{BS#Q۱;q&pC nh74 M޲ Vz4k^<\کOq^hy\3gVWY[M. $CN:- ~_q dPJǺOv<@Iԟ(d4-SPTe <5$Z FfI&2EN$cc! 
Y9=e(gO19xHyIjJ"^Eo гF゚.MYi~M lt}ɭEVrJY)-y~_rTYU㡶&d腊 ̧EtHZ`*)<׹Wdhr{V*jD6.zR=3i[ѥ93)j\R}#co<*aao/~o—[˫OReE)__ƣGlNvV{9-s0*a`$u`9DE6Wk%qo< sQ0齐&h'G#l4l;6 Rf` }8b$nڽqǀڃ{x2䬜2ptAVdLhi Ԍq0>G<샕&eI3@VtĂ&K YEFj&N덜1꧟|> bo/"Q 8 i ʐͪ菘X4<2#W9S!6`!@N9YD̉i߯hL 8)fmD.8I'͍LYtC(CzFy\&jOXZ7.g\.ǣSJhV2h CnL%h$IBp%pP7Ux @X(Gq CEe?7kR&S؈Ft,ԂGPh棡#TAYPe :do}-mtY ӔBE׌`y"SхFCuI YQP,9 pjȍ~N3~ͻ̢2Z^VWpzR~ql5Ъ t2SK;~3Ƚs@jgDlb,}2 ୍a1@ȨU4 oUZ#vNik];im5ĮF^c>UiX}Ljb9\ÞTo"78G߽{ogZHBnces.,paZ&ez8HI()QؐbO=~U]Ux7O>okxnW7P} k#|rT6D<ٹ b*\q!\M\*P) -/7UYUYY}PI(-^(EX\j"8-Ac:gߍfYc+|fF:m&>|OY[j-I:"(L'ڌF#1:e$d.l-8 o۷ x}6|}=!^制g{ ِ0Κ0دMcu=/kMVXER̐RlL\Y,COY]>8B '܊*+[¼Ұd8;*i-TtӝW̶ŁYZu7]wf|.w'ڛrVS%aZІ@BhBN2>P]pv>r:3S#wn|]gsa Sn]Yw ^+18^yhq\y }'?EGI|QN//8ry(j-xqPQTWn7I.Z DYy `$EYk|6YvoNOpZJ'$=Ȓ]Hnn&n>va_Z0]ސjuCGoZ9XL~]xX>`Wg~10?qˋ/hI hnb N^_&8tY~zqIIk?J?Ɲ*DYR06p)4 Q5޲bp>vkmeY?W[e;P`c/Z$jo 1KP$ R] uL u2ƹȹSQ )XԙEHAݣBJtZsYqU6JHO;Y`GXc̵sIGydD_%V^j}Kw_nzGL19Rw4e*CIY26i"R0H&;n}==bZb5߇:o%m@JrĺF^#D#^+^1Gsyme0J$KICs.Ot_:!*j||O5!Y~ryJBoHm=?w$E҂q!KvbZ:/ڜڴGQJ&e,_좵HbAV%8-|m+»N4]郶Ƕ/}:껪ɸi<ќ+Ig _xD|u}Xy0,rU-mozo񸣂V< >~h&9ݓ_M}s6fmِ?쒸ܦ_B")k;2#o'\/_{`:s=5:tKӽӰ97.|:~v7$ q;u?{Ƥ݄قM`NQ\rǃ.|vNʚ_#6FlOyɒ$D4[oM6 K@GaH= ώ,SH~{B*luppD0&!0'<!sgaW@ȚkH_b,J+Y$q⍍79Ľ^t.y{C(|{[WąG. *WH?W4+V[kfe{G8Zyqѝ̳׊Q1v!D)8֧ V$cC2ح8QE&mԽVܣ SM{9;a+45?(ʼȕeqs#2ri3ic,{i1vD6cȪx.pLE֠d@`L@ɼ9;Y+W_1[L dipd[aYB0QpN3ys. 
F;oWJ:"o;E~f&f't(r֡Ԛ>K%s"J4 Kԟӏ̥A_49 CA8+Iϋߎ;%-{2Sі-8 :t# '1&s K<&{QEqU*Z5+tQ~&D9k{-1&֎ P3,h2B륭^DK26%^X+|Nv3*h XYV&t>&ylx0!}bx/cƚ89r,QC,=A/fŰF?:JFoNDCvȪWۥ[sOZ Q2N }$ځMY]% '"(TQLgfőADj=JϞ-Y~UNjD9+ c[R~\jL9>)4yQFfm-\pJpQh ,㥐,ڨ9;vҦbFdk٘_yPDPgQ2hE$NY:Y/`g_Ʊ2M;틇h"pNv$ oݮh# dHx_8AuN@ԟ~KzUm7>[rt%qfbױ7{͘j&a'mܫmV-#\ kgI~jp1EXGXvp1.r%r36b)x[iY4;<ѸRS]/9f_]+'I3xK\r{zjYkM}\=p3Jlyw 0{M{)=ͳ&_wF`վsf_泿|5Q_l؞n~GQP(KٕJ F~XeVED6)Rh}V]p3ik |>*PT `LQ h 0Q$hsLf@2c4* *PSg^:G5oo>ۥ˹ͳh:L 43f>j(sbۅ=, %tNg{ N *m,P#BM[[lؽ.iSV{Ay #8}k[MEYDTWSkN3 "5Oqs "Dg%訦 RWS2T[5~[C&z ]h-*!䚝 WYPQ=Z@q4>JDԘ/D)ÌV LEz5E ) ˢ$>Z4f}}RS@ " a*b gR&'lSY G]].n7[П};3S܁:x9;νEg|0uƼ8CK0xQcX|F{S9@䣗ln]^޹'l"U*d DcoRu9e+YDT ($bm"zU,NK_ UN:QDpR㞃y4GwNuCO|0S6=n gG>``qE]*)::Pt6-Wg™˲^IM!tlŮA^<|[^>cRV,?' >/[xJN%ET.S)bIþ{rqг2z|&<} UiH*[ߏǶ߻lL?-3y-еu5gÊv5}e0[gN/^pg2=ss^wN/}ŷ|yH^D'3Z qy1[_;^i@Xn1r\Zc WLj+5x4ગ[UbU WG+F-Whn+crn1}R#`Rτ5j7[3&Y΅qFwOn71}{sz\ٻ:\xۺ# *",{_-?( ߞ믇 :r d_tfz@(-_- /xsy>>&jA*_h9@_3ADtNX;晴ǚUyw[;#HLrh%n`ƌ=n`NMq V{DF]3bV+VG+R BN:B\9RbCb1/M+"JȱU]#QVpłbε+sq*L:B\ ["ڴ3rm3CT NjWH0|q5H5x`\ >HQ4L=N`ZhW`AH X2՚U:=q%RkJ Xc4vJYkhW,C3"\+bʌWRO:F\iQp- NE H-Umς+h5% t3siʟxFK[9ܘi-[ȧizayjE KY\gOHR~t/__4,e&j]6ؽʿWOK8"-L+AOY{Z/ *yCgt:'Yn,I^\[VEK^Eh; vR =ɲʻɲuʦMNh"Rq62&-+ 5#KZr,صӒ#ZVZrVXO~-9봒-lW$WC3XcS196+lu Z`b'\!NpłmW,4+VF+V̈́#zPRcSk\\ VkGw*2p';^{j`{vV4#zU W-xj`0\COV;XrW#6]IBc3|iWVUZpuRo)"c}f 5ubJO:B\itpEj X}h`O:\zcѝ'% D9t.nacQ'|A pdlI tHcć5Rjɦ쿾lpu+~t-3q}{s"QKˎκc7;,7uJj{HSs04+lzlp:VkF?V*puʆphg* Xctv płm;7YkW(q*a՗+xDH) ,}W*uX\ Sk4`J7P0W0jWӃ4ފpEѵ+}+icbJJ/nW,]3" l+bZWN:F\)} ; X.VpEjUJ3qHT XiW֩U"L:B\pbGwU'j~Jyҭ=dxb8M# ~f/4~gs uY`+<3G[ym 5QV5Ԓ3Zΰ %Gj5~XTcچpEAfprlW֍TJ&\!vJ]`X.6+Rg!J-'\!5+lzp9֍1*q1 䓐+Ф) Vq*pJ>~ԇ_a\Ltj1J;6+t;˵J\W[ FkfpEjqL Wǀ{#'-n7۫LKqD˱(iUIW7~zϟ~iy9+X;wgǒ{w@oû6*7t+]pYytu~ q/W)qp]\.Y7:Zj>i[Bct׿+\tzv#//֨{/~0x_a~sy嵳TR-S3 o,?,WxV]?GrDyD(T.$T~Ƽ_{-[8Ѓ;9p|5Gy't#ޯ!N)i6# \.7/.UFdy:HcKWRZz,#؈* De!YS`߼w}ެ$IYy=*Eɯ _od.E. 
PmF]|tk'"dv^Ym4Q ٠7Vnk_PBHJBi\1˔n̠نd;2TH5lYRE&H6eY&PE9hUFBzm֕J!^Kd%s"c h]@R22 f5TM.JU-e-gQ顎.o/(Krʺjp:j I"ER8EHe\K\KLf|'y3Ř$7*&YkiI'Q<Ceg'HG9F-r\[ eޮ+J! fpvK0)Ǜ "oC !J"*IG$8}L Mʨk&.hg)$A—th >0bPak: d d2RZGH^>FEvXV *%Vme)(T-B/3O\o$'mA(|ATƩZ_R(&EjZS1ըJq$@-EVQ|C0H}T ((*XP{JIy- 塖 [ (PCU.cA%7EČ.:K^$l[ ]( JB$Վ=&xuP#TS4H՗RDVF[D}.R"(5KPDb sVPES̡l*vV7RPnn)Aǒ EyF9+e CiQQAȵlUЦM,>d+ 8⢬T ZuvZLsL^[˪DU9NV8ٻ6,WcmZ@0HfXL0Id iCZr﹤Dѱ2;(V,uu k9 zPVPB ׎]SPP&V+¤T)]b `; n!\"U0VjVa   ʄWs42, |Der HM$P **ӉPu5%M%$`AW+dWVb@,AnTzCZq2Fͷ6Qc @ mn;clQȌ<B*nYy$^Xwa!6K17)_JJ ufM0*Qkє2yW nUA׮z|ou/ Li_2=DV&xs:p~~ttV2Mt%C RbtLECNZ,J茸pΠ/AXźۆjbPb$Sa[~u'K =Z`Ye>: ]{ #BK|u)}uԆt@_C݅Zc`8Bk ) 3E`Jɣf !nK)5Y$iNJ@K^Bk֐h U3ڄ`1ctGJ߳yp (Lg@ @$B'e#kvqAصF!tJ35)ٕ P?AjDPqkUQeXYvnjpM"β] [%VmZb nt#liVԀʬ$޲[6RSҥ*x/GT^`!-A;j QH6 W|ZR0]UK mry?Xy}-ns貼`y ˴żm9Qe,=ij0fU:9 6IҢGJ${Pv%[Y:zm5EZSRP'ՓFhׄ bL#@F~6l}ˣʌؓU: <%2`Crh[SP.O(7"Vh8(>D+Y,w]zʠUbF =>A( =`-f=*-+P!>c7';"($'Mm2X4kno!gE?I bZ0j`R+JQYTPcD&7cQGQ1aaY;p1-0'GJFdU@MtVkA֫1xdmQi1  jNàe+J ֚'zʤ躐X_?Д n F{k|T(Z{*(=uGm(6 qLU׈tJC` l) ݬOHˡ옅jIzpPdpN %>vRՒZ Y\ yۨO]gF358F'~`AA(Ջ6B^ܼX6[{bt A#2RQ-goQ۝C AX=l ` tz+~)0֫'/VSz{}\Z]nOf:_'7.ب'6.׺5.6ȧ1.+qZ%}qIqS/U0Ҏ؊0"ZNW2jgHW#I]tq4tEp ]'OW@]ip#+ƣS+B%3+cc* ].BW@~tE(%HWVA0}?G׿nCy ˆ䋝+! ^.W:;&~gj֊ќarn?c)~*uZh?~kw8;*>Yy͋kjAXf^ B$LO oV20_,Wey~w\ļ6X?ON 3f'+H_h5Vԙx@[nZo煨l ^[7nW ҷ/EʒUSѢ[3T-"[bF1rwhLl6cCIk=F>b޶k{3d4}0E!ʹ)Ĩ[YƢY}9|Yڒc:&E7T@Ieb2&83JvM%Gh=JJTrT`CW׌u'P> %cRW؍g'}˓I(w2`Ѧ)F !X+:u"w!KQGy|FD S-b^{hȌyRKuϓYwF<4yA{SAkw'fYvIa N[]U,W/E>2,1\//ߴL\07yli|yny.KKMPC.?zUHfal#R&׭N'ٌO-_,?gw_B((۳- }]&0hbZg?rh/$7ϥXRCr7=2lʻ&PM3%zv+߶ r"z|5YϤ܎CAo*95O>ީ|INY`{wwO~t'!iйKs%%AA_]/rl:">B|z:OS؇Sa9BEF)u2UCGTIiZՒm&p/Dr QԕPazci]ːkK]WХQKJFt5+׿ܱTݧ*Kn4`3H-)i I% ({jm bzft7@P6F9{0!uqIO5+ltm7pl6Iԅow_ck[C=Sܴz;h @St BZdF6$1AAB)#1_dnJ["Ā._qAM`bS%f 0'-9HFU-8-x+ x6ޕvbaDI:$$å֟vos2YCn_:qB&wpT|'f֯m=i,}u}s܀9FN)t{6l3ɋ֧99Jq/9Z9$Y3ǘc|Gh<9 Ki3z2Sb҄%wCβ "H̙"#H)ĈҘ,UD K2:GfxͿSQ7xn? 
k t8YHT*a 2s,#R)mFsVPkZ,g`8,DyFGL-Qxg6,+7c&] tbۨzOʻnтz6,+7">;RG7Pw FcDp1M%V( $e)r$dvVNY9V+z-*qSwt9HN"GJpMz%~xaWaU> Sb/XOl 0Cp soH1\4$ւY5q썩jcTOga:~~阕J@wvgSۓ{+=ZFcaz=1*BљR}/3]R]֮ ca^ѲBծ'c #hcXw'gtIe^F}1vqqN9HNy{X5gl3sÕ [_K|[Z?^ mwXZ 5rp=.,.*z0AXk׀LT!.g:B鲹zÎZ -FA!y}%ƫ`bӽXލk"P6(Ftt 1ɩ^r7Snmn{u֣JZg5+٩r8V.UAT>nzT94vNg|zJ@N|?h3=pHGSdVT;)섴cy[W0Kt*dBT$T:OqDIϏyjc\ysc!i R @À4Qs% !eܘt7jr'H/6qfM2L:[rdɠZ3^2TYT 7$M>kk!YS_%"=0)ff02ѹ"fRD4Bba 0ҙ^YH9!#KDj"C͹%~ΨPS t_5 ]*Ѫ*5W#o޽k,MokZ'\{" @]+؇`?#|XڅKQVIe]Ph/'-г=|~|L~j5z̋ $B $C1-@U(lx,6CRRJK u9 bF/908C)\ ԥ5xYX @z(pohXژ!C#4 d46*$|M=ɮ"*TB+Bh<1?*nD5hTA3o='0&Je7"tKlff[]r]^lf*|}-<'nctX*F۲Q}9,s}#/1URRgm䒼o\hF[һ[ǷWBwZ.dq|\|/?4P`eG(Bz/&& v2*:cc[p< r>z. * =u_%=ZaC!lYf ԞvZ2 ! ^%&5Dh)_?r}z7nR gs6!nJd CH`*r*n"Z1ïs Se4M%H@%JQxcn!m~"\aA#.oQ08d{1)ZV~Mao^m{|XLï6q㝵ŽY+9?\S 輸Eaa r0~os`8hBڋ6 R\JAQjKSjYI`G@ڝs@׫httktf+~w>w~w>wWL^yB̀ʨ̐1@cSMAVX͵"\L3ej^9>oo&y^ms{nVҏ6'{Zljol*UL4LH4̇m6P LdgN$@,Vj`):7j)jSì<SK @DZVb)AG:vn.}5yr6jTIls/΄J af BC-2PL%*,3H(gFfBiN$#qfnXɨWti^3pϖESfJ d)<ˌa 7MMDR`+\e pUeیpƤS:2nu%9Y?KB,RP H{0TA4\_Q@$^̓^yOӝ/n7V?^qd'OVܶg7߬||IYͧ%Ч0ջ}.^'wCǯnP|(UKL+7v&rS|Bĉ&TSe-@̳m u]X;E8J_/-wIMHV KSi 3nկH)@@Lʄ@fWbd=#&nj;'#_Ozoc`Nv1ww{ .owIjre;qLKӿ#GB?62N\ O9{,S8S~l~jk\zNoFVwQ1H"PFeU͑Jg)['Ji0u(WF)ɀIE%`+5NV*3~gG`JBMNڬ֦:_ה rrpQFSa+P"4uM2'op­drQƤ/d*EħSɥ}Jd]%e& E.c[ IǦ \>.|+ɏތ bSWBVٕz\mYO2ׂk)gm {AaisYWW]PԘ1 W zK{hsԨ '*ʏ#M(5j&EJoQ\ޭgHџш)Nm_ζ~Bղvnjyٴ.[RhjK~։ vὣV-Z3in=|JfgiO}Ԩ1@O'^n@׽3/irԀ\*IvܹZu᜶2bSTN;抵]ZK.=\uit}W!iUzxwM^b|rd3,_è-Ȃ`/ޠu% kiy6%/  Jxuw1&|Hwj/7sGXܹsZۧ{-rB1N'G§/?=K0LpflEb4dSr7o2xۗޡrY-=Xl61z<5%Q|B[x;U](k{jѴfՉNpez=ݩrDw~k#]j a 1ҝubuCkTUXFȝ*QNAnhCjKnO{v#p:嚪 C Wwb8_j?:TrSh.q9I/ѽDeN~I䈦PSS,(̂Leq`D %F[81Ie)D*7$^ 9J ,'@C]uOn-珓dG31j>~p3.7Y73㦩]՟Lc<= {џmĶUaV .igx_};0OJ4T\?l'Wܺ[ɖ,+76|kz埚w#Zdx 2TˤX%7TQ-@鵤 XQjo޽k7s;m3ܯ 38XYe Hͩ3[.'G{uO a&o-P /ְW⍂#$E(D6!iO2X$$޵q$B ';T/N lveL_$K@RNER&ΥbzvtW}]UU- Ƣ7R[(.$"S ?#C9ۃ30M*5]KP|EF+ӥ5OOIW~"?Wi0|HO5Va*:0K\ g,maduuzcL- U(x eo)5H`tBKj-I\g򂠮&TQȲAkMUKT?zfz,l>-C̚+߸ZYprATJu U'AJ!tH{z=ǚ>{Sfp̕fTu|VeSahۭQ-F}E(H#{$-*OWO=ZhZ`H*G"f4t"QhC=HnǤ6_kY] 
@1"Bth$xj2. *VSsީ(;U>O Fq4i+ܬ nkÓrv{*NͰ:bӜNJ\7jT]9 U=HJW[@y`\JթrݢYAE47<5@qWG/:9m+pøkی#N(F'1c!YBk[-%8 7" l:lGvhWΚ!< ,4'HFܪPQۓ!Q{&wwˏ D( `E6BO`҆d +uِYRÂD"6lqjHg.Lj=^ۮo*nz:@x6p"-`ETL"*cgK금3Tl$vMaƹ'Pbܐw&A<&˄(p ѰeZm}1LQ\lH=+3PзX'MzsyZE CJ~UjGM͗(Uo:o[}XmgW |??>lp3xC޼~|,"{+`tهɝY_DDHb#_^dXٻ[H.SeG]d߹˟~gfo)еXdBˆ.H Z'۟Ѻv"!Y"E.Z6 s_Rcln7Gנ* (XJ<"tGY."z[=ܲ>gvWhXZbi`8齵koUql]"\`*ؕeI [FK*CRv>V$`TYâכ%R2oȺ @eGx4%Q-`y=8VŜ %[9R{d8M_`i J<`qD-7&V8J8~2RoILR_5-Uu܆[Ge_5AUIyLR_+EXuBmB=j# $x_LM子YQuU&JJő;ɗweK0?":8:0c;r1b0ryc,70rM 9^X[2Y1n]%SnNl|5b.iХ~y`^^2ieK{vD!q`>keo6BNqI9J@dk5pk =|tgN΀w ߺZ˭KQ޼"JyOiWB#*uHU /a;0`BE_XWOn[F"`m`o)l.EI汢:=_,[8 l:l8K67dǙY[L>=kc!u~(͇Ne],JgB #$!4ZW%Y%.'"1As%:pc3 z:}ŭO"?lϞ%3sp%Ko-{o?$ TkAfd0j")A2ǁb![hiҜn9xD>Նi;65'Z g<]7YBo!J=t~׳8 7|ד3&j%,qHgn]<<GM#;)]|>^\緓|=yʓB%?c@csWb_"f261U/X~'+mQs(&+\@lII/n'=/8QsGɭ;Ah0\Uj0\U`X'H>p9= [pwy,U{&SѿRjg\.FoYnv;}w0Oeee6hyۉNlV)e5. қE,"=֙Z3NyU{BLcSPB,"%iӴ$_8Qzbk8 L$1 Zd"eQMeۣ c8Ղ}TPFj,*qLl*=`W~p=>p: H00҃-ظN8մ"rUV0iJE<{q FkQfr6`R3'`TPTi'KUs=md)ȰjF1v!]jv#f=Sϳ VqzCQ "(7L▛/ۺsihvq~.5M/Q;L)5ظT 'R/?]gV$$/=HpAԀ3L_H&UzDeHHij: 58O7)M[,^-QP|w'8V" beSu Dž`Sfq Y5ؿ.Fvߎ~D#2Vt9^Iԙ?'?,-hAh;rVst.H=̵mrԪ2Uv9T{\2Ti UqB(_IM/ӅX+ }a"g((ۼA' UĴ GNaj8k(V>pfsk8Gb$1?9ݓ^1SƘY(7􉝊=+T< ?xzt8?,>,|5{gVXl5O n+Ru:@F2Ƿd=TZ LOӈ޵m$"ݑp޻ m#^l\뵢4^ǘ~MJ^#RGX@2H]uUuUuwYGʪ{fgoxZ5yCk#DmNgqb09omj+x\zEв*fuG5$GR<˪j;VWUoK2B's]{eËn%{[-/Q 7.[~VrlUJ965c.L#JdRyp(7~ՠ7v?l0h T@<"/WϝEܧK NˊItA*zAhk1>G]TZ-m@5M2>ႰR8t-=li.ړ֞*c^h  rx˜"; DbR"cHhb0U|ke-Xik?Duiȅ @.a[N:?C훧5s$ W#N# .0<X+vxO<^ENH]MYM0jioMhS(:T7;v4I]ޮEuTȑ a [MPF@ -Jh2jq`C4'xIfdk4%euaa8Jj50[ӉSo1bya<6,| #2`ocA/}ua ueW.&ThoثU47ut_u,bp2.Ĭ!m].=ڠb=|"Hx{ۺҙgnM1":MۨcGQPi/u֭1U)$+F2E$/f"|TmԱn3}R`֭1U)$+F2UڌrmEXTmԱngGU[cFZSHW. d #.>ǹe0aaݚbPEtQǺ1o }Z*YѶukhUv EL)us!֍iq[C * ۨ廝o~c^Z@պ֔ʾ $+ e8C뚘ͯkO&NDw,›gO:Cשv6PkFOfs\)ڬM%.j&sIlfgW=4NGaC] [֥$Ȇ m< S_5g0H<*k{Rt*XsEt~{1M^G*ىTg5 AdG04=Zwl憓Yl/lˤpvN`˅A7 =! 
0Ex׎&ۅW\J&z:i>nwSgmD'P:QT@`%bَ|ClnDiiEz)6X4n2 Off?K$WuB@^6H[~F) C'?t܍2BYc&0NPF|t9j2ƆW_"s:T#;Р >axjPSdbB/ (2gn3ەƍ?ڀ*4 D].ri_ wL2ȁs@YrC{ov~A#߻9{=4' hZ=yv.<(;4_~-.y9n] u_djH07!ռ뫗ۿ7?u>nM Wj#n&|vRuOs@$+ \<!`O;`}P["4L*"T(͸Jb, G Ԑs'bɄ/bm.e^@~Ro $+x8?Q;P`s @e8DD֊"YIlLq(I^ܗc`Ofp}xw^Qy 7 ׇ~P?$/=% {/}:GqpIh{* iӎj  M SW_}reE0e1j<و0<I gv@JILӶ'7[f/nR4Գ?KxjR?߻xX=SߤCϼz͡G~&6(JaQ˟LT/)^SybUwΗOh$VSgOoɏpTrl:wᅠPӥwnz^bwJ[dl~)Ww^=@ï{k{[ZN݂nzv [h/~E2ZT+Zђ]Ѫ>,[%GY^F735n"bHRܗwOŨakXiFQ I(82EcgvQ89, J$ J;ޯMTiLK潦yikҼ5S":HT$9X\ӽEfj`rܓVySϦ#r yw8$Ąr P^א:K65}~o)Ph"PT|Zw*9,7z U }˧>v0xrE/~h#2h5uZtI7ǟ5 K;*ٓNm6ai!(P #9Kʼnズf-!={|VFUYb*JZOw6wj_<ؤlR]B!i08UXբ_r?1D뺐tfZ* TқgCypE% 5 ]qD#&;%O7z`Buͼ1b)eJt)[#𐃅P= 5rHEPeXIôEE9GjnVj$b+LS EFbym-MjiQq7/=J$Q^DA X̬`2M51 vQ T$&J1 iM!1AQkQa5O M.ܩ>©Ŕ(W  ) xsB?QQFD;35J""U8&⎓$4 Q, I4uYhZoFuX , l2a#Ah+9BLEkbID4fG _[N#Ib ű9p %p, ;w|8Zmq-$DZS,DPI4H(DΟ*-|'rO"?nDQIG(("CM6$9@1Erwq |a%ϞuirXy* Oc_r:]WdJnLb J|2V~9r!\PO|VB?'wbϑD ~gc~s=*)Uh V&{c<(.o 䀈:iMU՚u.rg 1Նn̬ExYʐW:QW8z3 U[{9Adqs½]E;jQ_ʌ}E*qvXla7~UmIwj>ʷQ`S?8FdfٯvaG^a<[vP?f}R te/+KӧM%R +{8uap7vOvҽe=E¢\.%6aBPLd4RGʨR_M%&"IQ։)q{fr~dߺPe(_q-M|4 B3pj 8gu_gM]Nu~U@a.bCX穛0MG0gr`jV>-\/_X鴤ﵚn:z{:ff/O뚦hY  F |CуC]e;+hS(8V)E3D/KCSEj*҃+z}g֦WmzhmJGA}~DGQ:{~dt5F?I|PFfC_NxdbDb{]ʨme/' ;niG3o&q8I!p=?n E7^;wy|h-UQËy|{kQ 3d1>\.TtM'N|E>Ařx1q܀iM=fr-vEf 'wp29!~E*>Y<4Ve3? QKߖ$o?ZN۵Il .fH~LrAjǹ.Ec%No8\mC {Riip<~V: 淁GI c7N8W5kE4|e-{ ]cKBXW(TS3O52OG 8\y唘/Vfdϵfu:+Iۑ+Ou![[]SPl F;dM2u6|Tr0mg)a+J.[Nƽ);JN=niH>M=w:wU58aDy6|ظ2Ԅb%~@g~Lh%>w(cg Ɵi@)H6 ӏn΋͍p8Kȳr8z?8N4E87Ɋ:M T~aAo}/_%Έ@Bd\'n275}5QCL<'qHzJ<,< yyt^IDId)p`8Euuu]]UMgWq柷EJ}%J.@w[bd(hb2O9LYre:ZF ~l:cȧ N<С(F|hK@J-íCf5U 1;s`:+tEcا hp;Ob+&;@n>E][9AsKx!9˾؜ptayRq.lk5<`vL(}c/_d7șKm(!fc ]CLS:%/*B{ :{T[Ie#)DxwRV`AcܤP|3r;Og@,k| ID)D1\:WTU-yyv>zV ,PWsEpB9v2s?e!f|/9R"Ye΢&;O6<@k%םUD(gh>8mJ39B'wWDF_!3I7H+VWxҶQBBQ^ <ªP*Jl4!l\+8, [oh6|ۘɵ"l"|9Ҽ6^ ~<;\V`jH򞻹L6_ )PQFHij9` 9V[>cҽTڷq@SLD}AZvb4&By/Dv%' BD3%6441QgG1ݣr$ov$Zp,je.he_i: 'Hs*M0Eҧ*O8aФΒG#%;ѫX !y7$N漅f GCn07(ftOL! 
JAJ a?iCyR%xpTṣdW64N꠮Ip%}VU4ZDjj}ݑ/ ‗GPC7=>(=f_9+5,*mC|]ꙏoGSW""%9K>du@s).c7&b**N_1h$J`y8{rڰsWa1pkY5r~i1grW'z'k;FX3äe [[; cI=z*^(} zg %z&0 :ꕗBc)QK4JxiԤb:7ݻt,^AZiYY@xqRR.{B9#羪}{R|<{Z)OTj9AqqU?|lŊR|nB'HebM݉UJ kt10- ׂziꢅeaO@B%݉Ø-س1l5 D|z}uGKTH0Diߥ,s\!a4KB%h/.mDJy:sm&rI< v:~sk6eσQqZZ᣼2 kcˉ؄^!(=Kf{%C䞢~Bh&WZu?Œ$#f'ĂDX0(%ǒ%tǧr6Q)o2'iE aϕTL#rqRaw$"Yȗ۔ufm|u u7ճ шT#nX[f"ڇr$ݷDsk-'%<:L{|.&2Q=o5!L3O+iNDZK2 olnwSI&'f*jp`>ZtGvfˆ^}UNS{ڑP-!x?;֪֠S{PQtueHxx7{doi~a 0Fq¿E܅;ˊī)=N5'`8T{!^8 A0:VZQd󱮀3wPt<pNQm0{(^/ m<1T+kF (%Utyo~y(a5<^?^"NtOmߋ>jW{X1B;Xyss _T:Zr̓$ ,y`߯)8ƀo|췫plZ/A 艁~‘fts=\kH0ah!Td`sEMNjЅ_ou@1 [XLꜘs<-E lpʜUpbˈ݋Cv,/?uu&ǩ9TқY2';eo V̞൷XZ>EU~K78 4fORp:R8M6V3#J Xii0BuJӢ~#?`chpzEd!3}n<v~:oNrg} y\_5yϏ65& ^77 w+ wXd7!"D Έ _ h\N7 ·8 z)f.WMG;$q: ~H9o=\p228(O*:V<$ޗ i)A*S;e\?!8n )V4S?pϯ6V&1?NBť?7'Xt=pݻ15"eiL,)M^h1AFEFVJ22$D*ɂw|KC{=+;0Qcbu).qI8bKj6ʝ=q)f]4s=_?_ \^7cUbO7X>X `ҥx} TV2!A.c5 Gt < _[ .M y0o~̲b^~%X@q GXAw'w-]Wwsl0dq#A8 %)Ŀ)7&M(6?׼1 |߶c[mL' =O 6@*<\ê\70uOq: A赛6=^̻1J s]雼Nb}ئ`,y>l>lw;.$\xdY*$|9HF ;b<̰72lˎ \zf՘ ѿ;= 7Oj#%~5ڼxyٲlMYzԂv75i:wW7z ~%GӅ[!zVJswj0qtCn9* -vZ^;8ÊT'/[niJu6n?1hH}TcCjO8`"Pz_XՕF3 J򋽇esN!ydz}j}jw}jHUo$_~yh\ZjLrgHnw+ɵ?) cs2 cnw>q!!+RRDBdzezxzJkqɒM(LWDq.Q}%)c {i9Լ6(vW yIleTH% Tv D fD^世̗(͌^6gmK5 kg+Q/QmHr\ p0\S0INMhyMj zބ;Ѝ89įWL^RUIf~}ծ:cu%&Y9;ܲ;R-Q\[K}G7k>Z\81uC*-fƙMf9FEUjkc<.15ժi5벣wj0/;yhܴ5G{T8Gb)*kwy8G={OXswspR8LiȏŅwo~~ٺG`^Ԣ{_iz5KkBWp *KC0wsӪsc:]=ԲrM YnFF7*R?2T:%!OHiDI[LkQ&xJ0A*KRXT 2AT%{:&Ϝ4lAo>fC7#TxO]tf~c?<<Ê8zW jW^o&=3QJ0r6e! 9q-)KH=(yuz^Y'9u*ᜣ_>֩35>u*F4:5!!'.%2EjE}ֵoP GtJGu{Okonňj&$EH8LWJ5eZC)ZA' GKG'w)J(ʼn: 9q-)^ʺv+*+Z!.zڹ0r=,=ˡ}ZwʛtڪUpEOPJ)o#uԞmGs֭FK}[ݙC;FJ?UG*q(߅U"hҸ_rQe-AIt`,[%i('#3)V¦P q112JZ<6rV, i1 G|TRr@ ?%Ժt,AN"t}rfh`\e\er"zL)jjI.נ&714=nz QrbDs[r"Z$SI}8׵ [! 
c{nEZ}RD}$EH䁾[q)lQg\?h.%ϸ@G983`Q5hN0v< QQ %pўM(?^/%H6\@@)X+83,MCVhV^+섇 5SʹCF*5E`YJ=zTҙI<=b_y#G% HȉhLi$z i7)v+AvлAw1ڭ 9q-)P=up/f4=n]F1t4VhvkBBN\Ddq;yG$!?2ԄÝ ثUwjxʎogn$EH_sDKz!YO} 5zwT/)F45zr"Z"S{M^-AVI{@uEZcuRDkуeJ neZE-e|?ܙ/wm|ExlF^ZFձCAV;7,ᴖ'jǯKY lm^1k[K+%-7^ :G"j֊A)RT4SI&͍£RNF/Tnݗa<$ג6G:_7#տ& ,9I܏h:0v{}5!Lt2/uV d@aV ߀~ۇU߲ RO_վ0xMtsۯo"bpA6"i4ӻ3Ǔ||=O3@M;#3u>.w|? 6`Z!o +?l𛙺k>qmM;\њ6Mj3 Œ܂G5/tëBF733Y!ŤR+"w`RZ uI̿8HA>Ojo.ߌ/F7bE<0.{ /1 #׌9sVhyֳKK,%EgrNlŢ5D7rmKQ F)fv}'UE&kJ):$;oFaCzC[h/9 xrr~/虽5M|rݑ_X/S<3 ^2%H냚O\`jLlnj_"/]skYKC>*VYߘʃ m$v9|s0V SEw?\O/!ٷ3Vaf,RAtbk^4>?Uj_@T8BCb/IY4)oM Ab%9lY2ݯfZĬ%[js1yKǼ/nkes-L%n2hKGҮ]!{Ī'OoI-"Y %Gq}Zkx_u/َ>`uO2r2ْYv;9Y:n nzcF: b9UnKW?~|<2Yw_Q)qҕ4+T:]L:+>{6\݆Y|z>~O&7_nƝ!eK^u FÙZ+b¸)GFaKH,ɫk%%GFaKj]?v/b G]p6Vѕj%3S\Dް4)͘x]k[wW0f3VXxgcs(eiLۖLmCWJ>X=Rr\3th3ޜ2tnIG16]cvEL슸K-tZ2߉ 03ZKUpjk/B Swb*{ ƹ0 V__T4 #*0@+j0BpuۨXئ=4u,6m=06ܰu= n_3ϑVtKFϮ 3z4WMhGwrLmЫݟRS8~wQ^Q篛k_A\MsU(jcݎjZsh/6O~ض yj,t,8iLhaX;dTmd4gXp.KG Hmdeè.Ix7競,m3PIoQc۝8E8𑲖 6+`pi%zm:ImDFjLn0?f/8$Hv\$BgMN d#,0E6 d#ae\N^:Pћ|9CleJu1<n|+!2EYT}k/*V>Śp:6:$YlfQhFO1V`q[-5ڋ:t4D.~fMu/Jc 5/99!` [GXK95F1UsAWs*Y Vϣg9*Ydu[g93$'j |G{+ -FS# xSJ_(Dr 7*{SҬ9CkoKJsĕ}&2D$# SL0 .3` 1h\3{F k{خ,/%4dH"2r\K!Mi䄠qG~?̫Ȝ! 
YE%Pc*gcMpRy+:Fx]Yo#G+^>}q;ig<[ٔLR=$)x:Dq6ܦT"28,TsO[!TG)}JxFlFO $5i Q  Pge pLRОYX`ԙ0b\}pƁ*lj*bFb&:AA1 hpMz8F0*idR!<IY."QZo3:LziK`m!GR Y+"T9 ˉ^hhN&(Gi^G F_Q Cc4g#Xa?[PJe*p8W)!?P ahƂ)>%C1L2:6L٧X)!bP $wJJ qHQaj kkDBY`A8QH8c("`XvBr -0AT:r@xiOcRԃ|=m[cu0THR);V - (݁T]Էj@#|[*!͐J6O In Yf:[ $:j`{vb%R7z'(E80^%= wQvE` EA(9GEIX'L5`DuݽS'Uf̟+aþL4ζxP ;8\I.37p#&XOIu _dL77pfF3w!醴6b;,6 H>[\ ~h~K*j~rx~[KH5%‚{^0^4=nfvѶnQ;̟Yb *N^LnRy''9vi_N0?\OÿgZ}Zu Z)LҐ#W)'\Z1 0lf8^*M&µ&O)R=  @̩2kM>\kWvB@3M)MumL2bX3sSPh2SR"LqBJa/7i8/*vS h]ZKykO>-jCH-1{iş||~wuyBR:^.W?2 ֟zn*p?Ka1X[ۇ& Rni ky1J C(дЂZOu!oK;߼d4ȴ xc^wjLc\M܄קO3^J.(o\'3rc!ҩ(Eʇ0ZT䦛\9x1A-fF%=MEx b7!u%aLxyWhpcC+NoyUqԓX)&cָ9LZ&O^-4ƙJ&51NzvTlBЈ f+w&%-Xi^F,t7F3So`'HF:/gF>³^gpt~t#;+~zv"=UD={MIUOhTgw1q$]k$|nDRMvCh9Ҷ+vڧbZ讔;= ZQqӈDEz*TX_ X(t,{,3!Ja-&לzyd;a`1$nFA '(B:D#Dz5"xOCi넷mW'W'H"vFǩ8f ixK{ L S! OPWLCUVN*@]>] 3Ԩ Rz U{ oqwwvVR (8RNÔZV^¸|x&3ebĠA@V\:K KQ&y5A`*m,܏|OD-iˉ]yO Hx͍Cj'qRОkV _iYEh&-^xHYͳ襽F?3y04{9,_];+Z :hv_H#D0PVYfE/^eKh;XBL1ay_,?ggoŧOrb^в(+Q$ywD#KK-Rd}[?푍R䖢oλ5rvsVLІ!I׍ϒ5ri$:%ppEh򭎦ug_'ξN}8ֵ^ȐJ1uAÑcf)|OK=GT)pTIn]tFk8qjӂ%n9HhdTQʅeT .),W k9jRV@,@u i MU%#[!Mw$[Wіejs}GyoќbwODEWl1ť΀mC|QB/}{Lj//uzݳ~}[aYH+3-%yq8dsᇃ/w`?lB9Er|%֟lÿFmӌjtwD6TB%v?;wO5r>*EHNJ S="՟6LFChـŏ4?%){o#ѫ^ :ZS=i݁7S"Ʈ}I&/f0/ar68]kő̃eg !QkL)9*J6-A]iT@ Ę\ ?wH,naaVqJ0տķ[ O G .Dƣ4-*R%4h' ,׆y<(Z`jAM%ÂA;ӢC`V9ԓ|ښf//[Eɚj+ fFQb|L]sf/m"R DYd~b͈Q#x1| `t Y>}A-/Q7z ")'m5-O7xK&Mf0-X54 tO ьi EmM=ODw.¦2l-x- _Ly-ҡF[kDπ@\x `1>"D00Ͻp90cROBH7nՏ 3֑| B(RB|u5@oqTJ'>ujdsw4ReƔ7'-X3zVqQ>(&E,`Z#$ ACҁߡVcTU|Pn1fh$=)>k;Kӷl2XcC>,C?X *Ȳ/*IM2ʢ[{F=JrqKjHdY]5QW@*®ij4vhhAXkꐄ+BJA*kjy Jwql-aC16i,~dd>2xqq4Y4qpQɼ26& rTBc8C`Y.Ic ^S+\B~/rRȅ珅 y@yo d_':.7P ^0IVi㸶:ŬZK) 1@k7AH)݂Wv7t5XƢHiOSVXBŘ9F7pHyg5XZJZ:x{~󮓝wd]5SmB8);gE&1*8ʄ dypBk(,pf,pY+1kxȹ8 wl*`VaeElf9F-0@)"=WzU" y.a?K),@Y!IU|GV $3|,^lTH*04i"H(|j,|4:@6xEd ϸ +o$T!q:kmH`R/pq'ه@؄%RKRvAV)jD {nZꯪ3 9(F1AHL:sg#IX*AY\0D[B͐h *;%5PIBR-y ;bՏÄ" Sy{s/BXn]D@끿]lTHs|ixs#GZÚQ׀g/^v}s~BCDŰ_^7͗m!obOH\ t22˄'xXMZs*FE`Hh.=!8Etwnj Zt#YxGtO[ 1FafLc+04|| 
|Q)e*ǴXQ\~YY&gX&(_d<Ѧ=XlV?%WRPUX˾RNKFI$6pXq*+q+&QUڍmJZGKSz PӺT3?c5c]^ߪ4VXÏ|*G&JgX h2΢ yfx=G&VWR#|(&=3Jg2<5F{%l`EƜRX1zݳ2YoQ,`ݒ[t+ K^`1,Lj4 6Aև31yg#D;i  u0Ž&cnV4F eu2wtoAZ`&93F+˘B =%zd@RB& "D:^$QYbF_|:r-SjrÈ;A%˨0[Ē, u/%>|SijnK18IsY=2 x;mFUawD<tByi|lx3DA1 `# F@Y<2Qڃ!ZI#bT wz|=h]s m $y!.$  ^`h85m3k2 h! f1]81YT f?,:(ߺMU.e 9{gK,ĞMg(y}iADS"m?Im*BEW{@@QgZL ]$ -gUWr;_HeֿRcK4$;tR9= ?%dV_-Ic32`U+, F>ntIk(i밥{Q痋 3ե%SJ9>Tƫ!9ťu/8\(g^8}^⪕j4XF筣,)p6 5*,(P qZ⳪X@ IDLQ$ K& 5*QSˏѨI񭄉2~ [K0E $|٧M[4'5Lߟ:L]~o뾞ǨDgܧYL;d qvW=ay\0ƺv}ޮQٽJFgP6X 4 ӌ(EF^hED& fq(Eg""}޴sS\sg!YIʍ}2rb}: Qq*wXjvgA BB $(ܕDc` B&gRCƛIA8N8ŋ[I2*PFMg n3!83 8Bij A0{9Ϟ$ ىPq/) q*)Z >%{7ߓ/Yl# {W 0I̓g^[ꃁtLge{T,H*g{_َw|0߱u|%|LW#;tfo3洫LNqVFU/lϿ F3:k?uv!ֈ4a-L#[]^$ܔx^CQ75B4 v tK5Ȫz7h+A P>%ɗ旂q-0] Mp )vr).RoNߏC0OWFşmEߍoLR#q~7:,XlK`WY|n]l^y>ꆟzHL? *_.w"9~MG7h=k6ugv+Q7عp7k`%XpOϺ1wBA x{K[WuH`gcZ}кV=w#*JU6|U!Blͮc;mˆMqC&XcRyU aO-^PGt R[ )Lzǔ[a <;0^ݚѪ>"]B[¶uQpq<kƷk-(bL7.|~}}v(s=37q $0|9k,p3Akh㰣qSGܯg;cg;)omb4NXywN7OW_uɩ7xmoa:ʾ^%{|nk݊#^Jf!4fH$o=*2M憩t1"a4Rؒt2=Еf %BY#KL6dT8oن_%˃%il,\K:He_Sh45~aj콓EDU!.'F[<^I@WR8HI(IQOݧ=DypҢɑE{ORi*] 6 ڕrQ*5 M׎j5 0HEWaWW OoczE[iٍҝβyFzXFQ逴 !\4l(ЭO5 !KצV j ҹ|C˚Qd[?$8[.ɑS=Rk$:U:ѕU;OC ⸣]zExC&ʉ@qqA/NuA`UF|tq2?r lގG`>Y `@dս16c!ʃп5YAG 8x>"痯7q;NgWѮ9S [1މ222(e@!13dXp68ᥴN!L5$<,OppXAyS+v` \L l4܌`E R;z5x77ـm݂, 佁R[" 5&rr\) n~o^>oK-ޓPcȼ^ y,'P)9A:!Ua$ˬ ys'hV9J@^XF *qױa2YXZ^4poN^'q ׯ{3~Nޘ'#W ~Ww{S`AT_qr nt ~ k*O>,r=׷WF 3ե%SЂ> YP [Ps9o/B' R`|C|c],q *3:SygV/"0Qt-\#X`O2-D1%s8`1,Lj4 k2Ch#D;i$Hb0p*p0I8ց!YŬ64 YLyΌe hL`cPJ0vl7Ju`4X E\0s '2ދ>4ޥIf+aL)i z%Q~,~Ō8 Ɍ RE2_@A)RY1^'oP@k?k-dQjm氒(C~ FbqZISL;O`m`6稓rQK,ޢ>JW\p`oۀs35uvtj]NGM[EߺE2+/pt_$7KA)Y yϱU N$2BS Y@F(6jh $Plf0J{@%CHaS%u# 2hQ+,*[,q0*q{9CfTeVY`HxG &BBQ=GRMn`΂siUGRRJ,U$9/J*A-ɧg ^wqV wOL/4nZExUݸ:ȴ)N96n{D3кڭ YOQ8Misڭ)N96n{.gкڭ YO18* AnMqPuαvc%Tn߭1u[h#zlE+?5eϮILc;e/U vUJ~ɥ=V<;I`yI\5\UuGmH;۫VLu*]J:sݚuU EU(ˀwv%APvtOh貥]UIv;;M ΐ]fˬ hep`eֺZ _fM(CZYkUc3&)]fˬ) 2y|5ɥ2k]fUI_fMJeֺZ A_fM.eڐ#eˬu6%AE/]fˬ) ˬB2k]f!I5yJ]WwˎR&Tfh3\p^;D6&~E_cS˿rfUt/\b4c =/Vb<`OiQWMF3]!_ ~>/ejH(A 
~]jy;dh?==7gםr@a~MF%\$1$6`᷊,@٭C現{xTy?}ylgP@%սG9tRD|锿E/q6J_MLKX2z$hn2"lⳕ _N$/a4;L. H( ppkxl4<!(}!*˵E^T)e9jwuƱe^q1qKXahlccG{HNz+W}Mzj$?[XtB|8m'ڥsO팊Hki"CER,wH /Y[;hMaZ:+y8&:|H6տסʖ왝t2U_29(?Y n̿3+ŮcϰBv'ٍ RYYYUaX g)J 5)5[KO,Ei3ҙ5,/G#Z޿34oJB=#}[rwH\]7YoGq\> şV㋏f(4<R~(uȗsߖo\8Vɝs y@NiPuDJ L r DC6j4i 6~=AT ÷D/FkMq&f>aLJ5[5H9%lEyBD'-K $QnV1JF sJ1MHVsnG7-j"[ .*Y6Z="^4&F`#(Ϳ#кV}1ɣxysBrV!yWi2>HFI&)\b3`P@+lq~tMLT*~+xfC -=Ҧ~79vwV Tr5>!\15dkVrB]ZCp6ً;m` bg ߜG$ɊR+4K'_;_|Moh/6 ~ }%">ݟ9 ^Ʒ(zv) LVn 9 &^wNr \"Kʢڙ[4rm/b .>[Δl ^m"S9[v56 HX~7i뛴aң;Lζ^tuOo4_zc~cǣ1tq7 /5ɕQ/KWa:Lݴ@yAF*/ܗp3Gr+ Ű'\?{clYK^pk.kxZWE䴾,~&";3T']ȯ_K+o#Pw/M῍Vk \;pMCad4vʤacNFi;xFnPہ5[Ql:]` ?w{XO>@Hi=xnύ+)$;l8N}N-שph[? @Ԛ~6\"x6Ӎ5{J&M)ʸ: ɚ6M$KQcaKǃAaZ|B Zoܝ.r&܍/,ĂD7$]3e"$SL{q]z:5[ЋYm[7_-o j\Pւ8{n V<of`(0yuCneA0Trjv/IvR'6deNSWxo{"N&5/}H>l+Tی.`;ښIEi-3B( QBbxe̩`rKӦDgu߆D6J$jۦ5tqKCo-mOE~soΒgI )岀jr:|Wvdwb֥3xO\aT]ʰK^3ߚu*k TPv)(2ok=sh Ab+|1KB T|Ap!PPlU"7W]AST&n:,t!وfFlfݎVI/h;RHUIN%l95ivh*1>81fwt|)}y"̞8 I A'\P8a0+]=%O>TlpX8Q{lp~NjQ9$' h3\#;;&kB'N 4OUZ+o2(d R^HJxF[4Ct62TX_W7K7|;X& m7;/W9 UdGQdV+έL'"+SKM4rTRζ:.*upٔ;} KcN:FFR]J|Pdu*jYl726{ZEL6QA}Ĉw+{=X !J)?Pִ524gejOz:)bK14e>J.mHED/5²1|lF=YH)űJ2*k )V6)i[צS m\zdF&Ώ(#) zf_2D .$ )FE/[B,E #@ƸPMd ZR-=2TǠ`0q.UA'"!A %H5 D'-rVh)g@3@iЍ#id ճ-@hVRE8G$alDC \} %)Ns3Mi#2Tߊ4ᨇ[=" iڔZP y" yL-b655M`q^cY}\`R C 䫀 F (%ZQ$d<8 M$h2%Ų{Nj,?]؇>,+g?~r- ]zˀF{*g#33̂&, 2-tBcܒؕd&+u7']؞ͅcw$>ݸ>0\s]1Hs*Q|D(*<ݳr A:>7pspbrD?gc{cD \sz_3ed#>2ݥu9̰utnt#Zᬁbŕ$9e J$pe|)]^x9f#Dm߾LSqh")/7Y,"LPq!r(srF<ȏ߯_pIڐ@Quli =#0.Q6s!8"ksoq $ 9THHq6/ [kW?$dO܄Sgcu4E=8 B ^h+M'4 u[ca4`kM]DJӉ(Blre[Z#c 6٣iNs Aֽ/C],HԘt, j-NY} !X[dͩ5Mi5};vdh7>قI1Vͤ.A?}vs}J5>"˘&ZdڍAjn6㬱ӷ!g|oWéiMzX@Dr4.˳׮q:{(՜Akth2L.O; 64CኩTH˕{:ŗxr,v"gjrRv[%7IUٷY$fo匇΀\[;:GnoP)vFn>&D5l~+ (/W ա-5 S6qmd?E/["w +㞋x*iَ͇^7-osп{8*9??ϞuAyM#;_u:M57_61=ww@*i{y:u+-.;t8.gEȄ`>VaWrJ>yvM(9+aj *=1d>́?''Vx@ђCϫ~tD௼ 1oGRɥaR@M0Y E>0ȹ$A1022CD0e&( ib6”Jdy]b UVPA,+ېRpBTJ> (e'8f432J4+O;s*jO%+>IlL4Dns0Xƙ%'YXNɂbGI6XM!c4QSzކ-2NRwExi0SӛyeiWq6CWBpf*W3n.\[<#\xiAe}ƅ^If}ˇYSK{X&kgSPK\HMWpntp-!U:}EA;4FXDBgOB({X4]W 
8E|"[DuBj#eK_GI9+fp 8KF-=jïwPB/ooRz~ҳI xG_.9{q-s3.x mO@Jc9}60%tC.fd.4笠H@atit/~bEDyq*w e]&zS_d9e:)g3Ѩњ , #D /$)О!ܕ@}\R}xYGioVJcgj?W>MW<U#K%EeiurW g`EgQǔ\"]&q@Q;J\Ik2nH?;lģzH3 p>HjUu=)pIP\!H,1=˝ `oZ4e6YWK^DZsyt@o4(3C;Cz[Qq ?$7=c%;o=ߎ(] o@^ vMA 406=RHՓ #pFFh'g%rCeS"';r(sHew obq> ZbA0ǹ7I͚ԬiL͚Ԭi55k݀_eNM8:3ks<#k TR2N"rGdڨ.=]:j(^ "\@~ޜ cUtzy5;?׊iYbLzP,⹏e)0s{z` uh'ڈϭ6/4ֿj˪R:ǚ}{9VVi'LΌh&wA:kO帶`Ě3sd^/0*kҹPF-<^|AIR9T@ӀBtB}1vdnq5. R(G8K8_pRX(y2 %z]TxwGj2~h>ɷ;?-%4^4s˹!)}LtblhSױ6Œ?~K:0d(7?cn8r%sFƯ~jkio/5E+JUљDz774މ?WDO1&nbs! 8*^v  Fwk0aSqZZ䋽4e)!x>5mڔP O 7Wbg@zS4+F\?wju4 p52=* Y>J6.|h"|]8/,Sn @ -JB^reυsAkB`^iM Ƙ}j2 ͭsJ*t :h F{U fΫƸjB>e|OA㺮,s;貪dbaՅɢ Lv@L0klb *t@%47AwX/1,@2roM4Qi39NH[&cP.y"QU}1 8 (>JgnDk/<$w AN23j@IگAuD[S(`"hB yUe>U{_"*:VDIԳ40Њ.j yAЏA;֚&`hOf+?vIM@ ҡ%]@Z1)zr D2E`O5]+WBϳ|?1Ĩ*!^H Aņ}S 3=Ov(Aг6޵5mK;VC")THeɋS[@K\$WO x[;ȕ`UYZ;g{Vd 3j [W vuΚR)NpdR"T=V,G-|a4cWjQ,0A(0* W!oMK"E቉u&'kRS}pkc:M$%wV'6 9b$1$(OhwYI`I)7#Xyb 힯y2}[LǏx6o_&c(|NgtU?q%C̦Ow6` i/ܤرTX'XK(7.Dpv%``%Fxr(J%"|US ~n%^~09UV0*Ƌ p;:^cUjw˳t͙Z61iHj$nNI.gqm]1[AuGwy4:Ml#je/0,nW >N^,H,LmBxC}p3~a*d1hP_5s5('b 㦊YGޖ= P!=KXmϋTυٖr5E1TO*T,w\47P~>28OA[ 5h/mٜl?|)@p-SB-,?93{9tؘ 9򗾝:%%oAȖ{A/z&-ɿ8BtϽ|ɿ bmrec3nga$˜gZp gᢦ¸eG[9g|BD'~u=]2=: O+8J}u5 ATƎ.r1ظ٠јd*}3i]" )GM*m?7Qn}Pꂈ+|b0RhepՏ:,yY;H7oʊ4q4cDo'?N]D޸>.ߒ [Uk:7~c7TϏ ދ7?u XKSg(pj`A iNG/H(mϾ" ρ`A!k\,>CE rd?3J/@+\kXRIA6)~b!>4SyebLOв1o7,8gqs΢HdQ\Ua]P`B+d4"`H ٞZegB(:u|O^. +bו*$;jKi9a"a5HqW'KOpTͅ*A߱O쥘z&w j ^\{CZR[jѭ̆YpSSe44C tC뺻gtj[fy%Mw̒FYHemǫ;t*2l׀+vp)NÄE ۝%"6OyДa^&+bb W+) G1đF,11LQp62/4RB/8ߔ}.ow|[\C8Yҥȩ}}ܰQ..' I n1/*m%x&IK aQN9>%E$hgObr:,0)&X'R<!.I *#ydٙ? W1ǁ{}2 {̆˧]^JBb|ic; -.+a| mQ#p9͜ CuL:(Xh 6%8_uxzfLF_^:vY]b;aYoqv ?]c/d\Z4(3POK@H[ u[<*JfJ(N1Vb#3aKq5$XXrJKx苂$"gXDHt)RD/|"`gCXJH%"% \Y W/p.ЮbRpb-ɗ{3[k|; lљo01>ǥ^׬Vπ}}=¯nZB^__W痒76{C蘁 zOjllOVjQbxrH7N2S_Xfj0|V{V0sb_w 9@_+՝3ŝ ͨYa{gwr'IœMȸ7/&#o#[j7 1l2%8!}P+">]7c5f/EG8o5A8wY6Zsce=T|A!$I"`-X?<(sG6&inPsAIB bH*8 K?17vD$N4ktjVSˎ(-ާh6= )ug%|lލ;0s\^3&i.pTiL҄_?" 
%mSLXÖd11?e2hTƥԋp!q#2Y=_0w4Nc6k[dOm״`7cݬLQ] %C6FFX$?cHhhg -p=̘ǯdIr4ޙY, lPg)dGH婏FE iijMĭEGޱˏpSezSEMUNq.ˎʹ5k{jZk3sA`F]3Q De7 둟 o{^~.Ņ#x6=~TH.e!mDuRPYaqј**"F bgD~"]za /3qk {9H%}B&R: _Ŕg,QBDt 8<4ns .ͨE9nRw^5U]sroN5JT'syӻ !Rw괿v8slXWbTojw:)3:I6xM-SjM=Z}P0YI';XYrt. :5 zΓ !9e|)˜cIcx`J BVXhBNT`S)YbN@w˩WL"-ЅY%fӦ:oH:Cg ֟PzAIjW &Mg] .FNm0'VzC_Ԯ7Ok"dٻ8ndW؞E/ fsŀ%GɺXOq4Zs=R23=MV}źtxlo.l~\B*@]LE@ϴ-fQQDȭ} Ҝo= kJ1om@:n&Tluͼ$*iZEq2>* GmN&";+:EfB0:X6{kCfs/H1г2鲳;P/' BiaUoQ9U1drrh٣@j۬"PVh=K\cZcf9.2 ڛё!Nn;n^ב!,!aot%) 'uÁKcz?' p-ZZft!sUCFPf_!> -S)dQTc̗vz.GQ{7%%|(|2dec;ӑ(Ze3MHݓ0DQ:"{4<6;na McfݢƂJ $ȍ,3 6o&PF@?β]Zп,(#OOL'\om.3dV|G_9\C)*|2CC%O?913Vdtt$H$#>xm1pZ+\cu Ka9^kSm%n\\jI~9fK.|bPd0/6 .i*]mc4Y&WDd(PmP+d@P8AvoWS 2VrJuWL;>cÄn/Mx~Vtye,KqPF";|Y&W*$.&I YJӰӵY5 t345˟5'Ә4Yz||mm AG}Y0?q=?Bu$,PncY{ ; vC7Wٯ]+8ϟ>O7v.s O2CB|p:qLe+"g.0h t;dUʭG&(mFYH+Y=1ryBVJ^ X\ɔi4]։dC*֛㋢VyNKr^3KWYxfjY veVsuMC 0CGIDr%c _ ->fYG;w m*ty7)Ϳ3cUtqoN~9p2p ;?OzH\p64Hې~ztWO)DqeF1 [vpxFւ,%a4ۙxs{adpΧEei.ÿc}Ohޛ9C\ F>MA!Ħ[θu㳆%*b} CGԃiwW\G_g{݁\onHD61[ܚuq_)"~\$}pqsԺ?*ykwZsM|r匤60})*xY(tvw!+po&3R-O}sfX/vGy`>3'5{[ь/h_(}ݿڇzHW_AEFsŹ>\ԑ0hOL) [_l⃓?1MvEnݍ7ŧ~C<9,_k6eݸ/?|<(\$]]*F!v:\\}9oSYc{5n8!mzJX%B$qwx*Irp=PD 7v!_@Y˸a 5;6Sl]?ޭW6٦˺@2a-/_?TJ&FRkRnTk-4-D}jAA[4: +lJ\{4Fz3{7B+Qĉ ^9nszдT&z:H#0zy[U:8 Uixby0pJj3΀vlLiY1ʓҒbF O^Ŕ"ITg&1x}JϚpeAKnL;zWZ:+Y%huy,ڀR|M%4`}u,)W*YQb{|"J>ExH--VXΊ.MK2 hvuS$.Nձ"SX╨ m +8W]AN+MUƯ<7Qܮw$3QFIͱ]6#jc뫏ȝA6mHYdr\ ¢D;tQtЧ*ofnʹEpF_F^+gn&--`l?IՋ%MlBQԂ;,v}oO$/+(Z"e&C|L'r\ff{3=Ӟ"͙=hBB#3Nx(\TZf0odZ(G&5HLarzYzb.r0T|VhMQ1Vs&mN->UԪ:hIiVz 8&4S"A9՝Q.gyX22*ۃڻ[\2#i5ΨƂal nL1#6ӴB_P׃8.2򯠨e4eԋ ;Lȑ0C_l}L{HBlz 䐍u6'k7|es#"}q`kvy>= O/d948& CZqqx36gsڀ c75I]1s*ycW݁!VZT2 &72kLrQF,lj5c܃Ao w} dh93nl!\_ڛ̢ ݝGS! 
F&tC45EhȔ@-{1A` 31!gzz<_>6$450̥"?{F_a^a6p'f'QȽj cm )cCw]<:# ~uU:uާF^cB9T/cs\ WYVX2bx'W8Xͫ Py5ϊ9MZbo o0 g%Lv̎o̖g(k%Pd)q9YR(urpFtW⼽Uk[ꅕ:S EQ滸M>-B-ѹ&uB<W5y4KPX5SNM$fL/t fp%7E!ʰ ԁNK]%QSĽaX8Zxc@"YYV8Y1D(\a8V>*bp!(Q[PXqIt5澮5c!V8?%PeK,Zf|.&RL6.8M+rbg +f0" G"RcUV^+,a סwCl+ K2 ٮ{h%3jfWU!O~ W*+78zUn4ka%0C+T9{ryuf٩JX SYٯ<73E:ɖ 4Q`ػS$u8ErJ$O%TTo7?8BG?jwl۫[^6$E`L\0M#y# !r7؈ZDC+ȨQ$Ӻ ЭܑL٧&b5Dj&k8HK!8a=НWB-q0U.'҄ e@H99hȴ`+[,FX IXtA:"d_ 4hQ85?:^mTGd\4ADUk4_R!FDd &{0RZ杖_ *K6=LwP9eZ?eq0> ;h+A8I9<D%v$}`i+FE3G67YlD,]`M V8*Yu̱@ R*+XVիK]={;X`g"D bZ\(-QsX)Ce>5ըJ22rm;+60ʗړ 'XP=|g_x5Ǎ({*,am3`XU4ʑ;o;T,MZT5'O|=ULK JHسFpbLqEgnoR R~j J,Գ1!E@`MqIL0)bNk#QƲDE[Fd-J*F65<91E(^Y]fHkE 䮦FVX m&4 Vr':`8Ń`sMa\_( #baCyל */mNblz%q{~']8K=j;ǃp!ZS(]TF-Qđ¥5Gʊ`ޔj0\{Q5;сS9PR;On*+N)ɭP@6TWLp""BnQ ZȘ8*2&N +/Xic zOD*(bP ` +)3)e{>ǂ'䕅[E,m[ Ͳ R]1{Ih[. !Y&cwvV"c PQH Zט"pN$*9ה}%TW)_nO㾗 c2ka #[H]2 (J;+1 N"*WY0䶘ȥ0T0T¢h?w3n%fp]qکMY)uuTe}Kf e)WˆW>eë< ؖnp4UP*7lQ6),jWV VN]+8UIK^2((Rŷ{˗jFJ7\25쬎P(9HOx/ǬE!p SS#cX* +Xrz[u^]o ;IRe~&kT4aW3JMeS($ VM%({W6h 8;-fm KZr"uWfK(qcQI~k<<ȚNzR-aV SЖ"J"(+ |?w)Ȫ0wvb0*!9~j$KA$]FDŅ^LQPR)t@s,\_*a`U Ыe{](,kwy V!")1Ct5tn0K^3Ҿ㽃&'(OaV&{󽲑;y1NNB[;bBI,@\bp/ieLB33[@mwVa+%uŁ@@\Sҟ8Un:{Ո MNg*-9TbE|aDV_񺦶*ڨLG0EW(':JƂ{2P8[+כ-,j1mn٤-M_-5Yc[nZ g)HI ga4]єWKI'^܁kDnl>ÌC`가Sz r;I>͎˯ipžR; ~Q[]δe]ݕB0Hn<8."5N`0Ж`U0\*R1GaG} `2އ 5}E58[DzM'vfz:180 $it?&=L{xO||0> ^FN]8f*ݸ`jZbĭMаZڕHy 3ǩWORRqObDr.pHWg;TKJUWJ"[-rZPFd%eU@z`.I@%FZܕj)$5Cm3f⯂gx+Vrm"*@+9(t!8b@;qAPndNෑybd`"cKtJJ N #%l3!1X@̉;DU^|!c! 
0[lpu@\kA7$Rda2l`NGj]pw{9->vnc[cWQb-L B yN: D[J%F0p#Tp0(jQT5߬ f͈6 XSD=\@T!C yqd% LU+{:wF#cdQ#bhoBz8a!ET"~7M%w%ˤǥbQ2xMS 'HADIvBS#q**^Ӊ^W'=d*+VW(Ęn"!"Q F"J|!4{^A./+8 p՗pt<<%1@{Ff924Lߞ?4  XL32}uӛJo/˯=DXD 0f4ܟOot >wM~  C'vNfo摜alp,\ϼ@kp+{Ȫ/gd/LQٳ/߿Gx̗W`\Ō7/[/'֎ۇOFy9#7f۝T$s:woWj4E;)'}̷_<>^6UvNgXxax7$:"vq0WKH8Xaϭn]ܵ׃'mz("tf_M$ApD,l'5i Œ?HYc۞ϮKoC3(fKzO&{={{yx9c!Rwws7.I=X>WO{:WaG30?yыn۫an9z9 rO,|i}<(=r9ٷ~ N@F6c=lmqy&X;8s[@`8<+ l!ݳ߯)߽C&/i k##.>JK__A?<Chg\m:l!ELMxǜ~rbVE k4)Imq˒88 c~< 3촆o_moyFVD c_jeR*Hu)|| HL(bcnypUaP2vf^|hcqu kоޫk5Wc}5Wh] ʵJ>u=4MNp\ky{˽N&aIKRa"+{OpFr7zA|)s)dr-竜L%҈BT+LWfNYw!:X  "F̱u[1H;%)!5AxJ`hīGe&#И И >3a^A!.TZ&_F_^k'h+:: c$&*鎥2X[p.KCQ˥~]4;_/M5l! KcJOS.)KR-+0&1.}pvZ7va?& ێ Ѱ Ѱ Ѱקa/~/*-OXR)H"\H1R'c+dK/vtV&B !ϾUJ.H.Fgc6T8IW)è )ڼLeu8\,c/DW\Cd/x:4d3UƷJ rDUV)S F]_C#ݴکGWlMGVA|݁ l|T0_Sަ;P}$Լj\G.c,R^XR0.՚gWWЄrE0VƬPTN1Й8B-ۑs:I9IgJRI,;3bN E媳l] 8hyBT~0BCuC>u1c:&,u*8d6MK&< "22Y(` !1V(\ZII9'JX7)Sq X0ft0|5l]>\Q{dEHJ ~:1x"  lI$1J3S0yeЈH)Sb]a%߈_ֹ~•< S;JXAwe4(mXs `43 ́}6ez]/FsT(Cw r4VLxB%qA+""mwxild@J\UW*A;wmj ^-J cW:.Q f%v^p#A=p1h j3QП ?d6ThNJuu8͔c_*^%J5u &]`]R.@-e`SXAhQtS*);Ϧ*Y:AXu8UJlF-Sي(Il(MIQ0ΪDVшj]"zz?Iz?ITV t%f#R%+}]o̱k]X)CӴR$KdAKP{JVM Ef-:$も8h2'#bΩ5sؑͩ(:HFS_A+nlOb(ob[R$IPb~l^J3Ղ4Wq~Xo]z!ߋ_ob;J@Q*}c,]li(Řr z} ._{T©킗*6嵣h}.כR= {]WBŽNmPƛd}TqO֚9fѺ'3RcN鵌֘dNv!ƂܸxpxxK5"%hO_OG}6qv|:3XɪұSgl.|}bUD%O|}N Ȍ֠sK/ CeZE;93LM>{wƀu4'uGO,1#\M5VEz=J:h!gBp4-|n#J~r&kf)}a#DO6~ndX-L%}5kip$6jR隁CD _ƱOcl@&RtPo:к&$6bɒmGlWZ4{JuYM7>ľ1F:_JT WuMo&YVu<:͘0Q~Lb5^/h(?ϊd!?o1µ gKf2Mq#޲Y?M:uTyМkDR$k& \lZ"[^hj=*U@抯# D2kR,81aZ.l :O'O_I>'F;i%I^& $iRw^,Dp句7vGܟ7޸Ț-c+rGGoxsEKyzWG_<yTCzݛ?_yx˃_^?^f~?{{ͻʺg>]}w5A_to\p}CzNHcrc80:i"qq0z{ic:Lv#AV} :(5{(q:kݙN}>~fFO =ݜW?ƓiV`h>Ͷ_ Ğ7b1>|6ؚ^ε?7֯0{f ]mp>.k'|=F bz Bg_T\{g vLg_6@Z+GF\`!}Pb1|:%и -|"tsB vLk?VySgm:U s&+r+s-d\ͪ\Tiq& Tl.]Ӱ?խ5WP[ƋkX~ 55FT>04'GQ"TdOlC y( [\9U]Ek'@+.F8+foDJ/5+W6[H1YΕ )vKNjocBMh}P3j͜b3@jΜ<*;RD $^)eH2F֥S4evOm>Ҝ^OREM*ࠛ-^e禫A])$a?3sGi-Q_A*dq S^&Hr9A9AruR"$UkXDc3f5 b16pdsSL>&?Avrhn&O.{Vzeo1"@m.lM>ҠYV|:*Vlc3eb_tQDn$-֦vN3.qO_d;EafX! 
c>$0gTv$NxzkzC31A'U+$A8e[B9C%HkB*eʍ1k,awm=;xypiƁ>Ń^kDIvYɶ"Rmv?r8G+oBr=3Qv~=ʌw#ߣr'!PϙaAj42?楎Zrҳ TrI$H *JreJ#N2Jd01%X}spӔ3X9(wcR'K*1Qn #jƢQ,H A]HBdV!4.˘! H \LJJ6{$H6a$;.R#KK:^(%ƽ7j< 6AKM VErR!Sbl3ŖUFx e䑡G,a@p%JQ!LH J/)]F t5>]@MHn 8ɠx[)TJ*.87Ix2LQl^%>XCS"RJ!NJEdd6/2m݁  !Ibdʶ;NG+c޹Skz~tjFÃ8d)Ƅ-<%ke)1}G|~#XsIi6V3f %O,lk7t8>ޠ7~wWseM$C0uXحhLKD֣j\Qq(V ?LkiMޣۼmS $J);$7NܨBbCYa,h0$('+<5'\)CH(w+&I #')XJ(Z̬&ʍ. $k>UeaFh8腻V٧CEC/lISiet_cF3$H L< , ×+1jGJagn iSV&DCzu$baä 6J $0A| `mS#1]Y؄{7-[_KQW0>p,(urIP=' !OЛc@XCl2V`bC>}^ ?Q뱵6UEFL߻&_deA!hAd'R1&F$YbN-H+1Bb{I;䙃H:N3o).\[gum)^*{,|22Fn#J-W3\ i-$km7D CĶfd UP07쾲%W:ޟ/?9hW9 R4r%S2YE9qH%pIY:4jLTV(= p6;̜s]0Zٚ^.sz{^Ӗe\?LO\MR`̚=*4PɆGEo3b4*K4QS"`cyIMcz:,֪ *=<zc3+?ub5_oLAj>rCHC1q3k9qLJ' GcCť֙!MB9"n~McVoPPp-@A~/ )i7aK4Ni "@zJ㚷MmJjRmR)iq֬mHz 4TPm66J70.#VIGR,YJCo&لG쉪m{I$eY6 Jo&D=r&'?O泞},`mVUɇ?c;KWGS)tP9XDAѵ樛 TI{#R>B"pmTe^qBzL%YNZ \eHNBOG%Dp:2t:<^gy ,~p XFaS Vaݗ`gfv"ecZ.\4f=jZ!j$ ?,sd[8uEC,K0#ΩUkI) M"+̉XdBM VZe_$RDP>mm'[X;e`[#+SK\6pͬAUYM8o2R #\6qh oA\Mɵ Գ@{ٖk4s<_?Y䰑o5hIV!F ?i!e5BmT%(d2-Rpc ;lxkn0e taَ<1b! 
@Hd r{R$"˛Y%F -&K孲0`4?Yq:Z o۴PtXrۆ6Qpd/G`םTlVڒiR%Y:+Jh7Aʙ(yl6RIAHM$knI&Q)E[B%y-ɕߑlQ(3H <COhxc ZHt@R'nluk暷 mZcW=eR-X"ȕ"V;2k1䐔FC)Me H(yF&[pQjb;#{W&8.!da&T[YhM6A&Vd2MFid JRސd!MoxЦeu26(@+v|TeN/Y CJɑ"CFB 6<1g 6e/'b}36y_WoZ)潩,f& e"tB3Y-yKYɕ/6 ^]gE Onъ}eWꜥ8.:\",kVmKIu(a]p''4ۇ{V5qt㝧eg)sL@axz1۬8NAYcRU}ߣiXq&nGU1*D\U <9瘍hM^Z e2/;=T8hLۨl~T-L?ޑ.7q~s.k[_Ŝݧ$x~;vގ~re380٩w':)AqD>2t՞]$1]_$/hB59Fj=?|O^?;zO?K?'Yu8ᓃ^Ξ_z޼<<~vt0{B'=?|w?ts?ы=}뻟{i8}<9Iorsyo< g;J~ڹoj>A 42}=I'E}4rnn7prg Һ%/⟿/{Tdw/}ӾٸzM}\ډ@aruyd?(,ַtwߞѷ4g.5$5U}uf mm]=Lh;Oot䖽,jku2ڟ?,F~ׯ_&~< ͗S2Cz|_z|~q8"[K)խXɛϯ|5 dp쫟A껋W>׻%7<\}I~iȽ:{; HF}]P jN0"y3O8ɻWL\ɵ罜ExRBLNnUvk!{bY~1$OTBn.OOݩ3H=Gч Mq:gUSg'iD)ۯY9=4OT>_p\:jkx~ NIB=_ȴ  K˽g>?iՄDvGfǯ_kCkW~MCh5״3 3xQzd:\jaϟNyrv_M,0Xy,BXSMìK@hVҖ W\6ˏʛ8JYWʸ$ou@%Y/[9k0tn"[!TjdhX?7UT!ٵoW +(CHsڸԚbn?~jQxKD*t7\qj]=ƌlzU,mR, qZs!D\Rm(nePF%ʺn؂.ι{ ip-ҽD ʔƈ-Qu;Vt%ZLH mnZP-Kv/{ db[صnC nM{t=.?4.ד!B o@`.{JʐmF.,B̕*)E*dPKCȆx)??CCCCC/?d4:;`υmL㙔,tPUٞ"gQx_;ō`-CUz{ p.(zd_Q.p[)D^Y`Lkm#G/=\| _&c7_|&:Ϗdf_%۲ԲMuKqIldՏŪb=L0, E%0 IӘ@ Z}@ [A߭Vw+;+hSE鼻HB:63: !BTN+֨Z >V|Chlmͦ+|Y/3U[Wn{8*Eͩe"&ȬUZ%&b~{88TIn;[w{'9p$ <8IQkTI8eD:m"Fu.?D[WzU]T< nQ mav͇2hNCѬJīgy$.R &67Uc*!J&!ؔrQX/*Jf>lyt!dk> [NGmAIbS^Ʃ!Ujmm&EJ Zd >ASm$7G4xNȫVk^ևm+e~MڒʵcVZElw =E{xm55!_% 7DL流FC=v}Rʼnx/:ew 7$aRrL:0(u95hB[Y&8IJXjU%}AE jW 1P3@_Ӫ FE ڣ$,j#[4 xԩdWE$"D(!k0P_{7 V<['.Oht "g,@/DPh߈  2(a.z[cn-sCfeB,|qA7dRW,6nzrXKd h-t҈ƈBamQp3[zc9AHQ*XHvzX=u8PzΟR-uڞh9eKdjZPGଅZOJ*4)m2QEܠ'1WC%02a^ (w.Z\NRp>`Guve S4oӄ =8*Jƺ A>DZ4h~(z6o[m;- 6`:&yy5<O@hLߘ$FΫ\H F E[mBm`:*T$[Dnl8%m7 x8jt$)*}cE176ׅW@qm@րhjѬx.){VG P7@=P>cUQ݇}k/هGmᙸc}mM*f4wP=ډ6-͜9k =~Wc+6/4JQ8AVpݡ|B Gg3([m;5TedS{F6M]Cc -p& ߂;czcDr p >ˆJtgw:7Ӵ wJ#* 폻;vSk4ZCK@-Tt2IOQrpU:>znٯfw;H]\޼> чW+2ѫY8g9{D}q` ѲH!y)iav_HmIx}tGdO!wH[mᢱܫF( (iBv%$:߼fu95yd\g+_/k+0YOwIDN%r-UKd5є z)8bI(-dD _AHiMpST;PUSq.Ҽ!ESReX4yZbAg~J'i/2qDU&. 
XJVXq"mju,?P IDJ|IA>I%I@Eg.Gg(D'AjZHj:--0vo LGKLݵHAC0K90TIJI?9I )[%DSu6|ʗ@qDRW3רu~S/6׌Wg:l|̗rqm ݈Ps`hMY%3 ;h2GIV,6'8ai$KH=la۰%(6ƞ`hna}P6ȿxhFeJ7s,e"^0Fe <O6ʽEF<`.dB\P'vT h筇hJ<@Hw򤡒0A\wu>koZChxcѰ*=E C=#%ʄ9,!*PJwDg> d15b;|Pf["u;r5R& J!xaSPx"pM*S1ϨU(µhNЊM4͐ӪsQkFn (V o (n+g: )f QI:Zp;0@kXHre@n1d{oHѶ㝨ͩoԶv-DB BWB$6̦>RiمOC8S @$!;v4Ygaa{kTZ=8>^+(l'F%*K"S(cePS,+f>- 7j&Yȩϰ4sM{ Ty]Sa<&=Vie[*+3W{ ޕO"aEMW2E?or0B_zZV!ŁNۆ{|Z–0`{oM2ɣڋoԇ\1("*d DE <) OYܮm>A7>FCC!mlֈ NhsQ[;^g;y/c+ziwޖYmH}n04l; [zU|z):#NWՉ\QzHN INt#FEM+nƽALU5Uv鹺}ڮ>2̩U12]r;Vr㬊h$Hl{p=G&^yLZX̴%nꇋ/;"Rۛx/%e 炮eq>|}ա>}zl񆿜쨊[h/޷tsNF=. sAse;c :V3kJzNVm)6N7yTi!iܦh&+&jEPٓTDnD$)ňb&܎xW"TuYu.f TzZ#Qmm}=٨ܗiQ6v͟е,",˼Aeqc,f*<\G]<7R~f'!o?VaywWxⵇf޳~ >ެrq#|”FoJ7GVKԎGQD12t^htBqVaJ874 ޣtA tj#hE OLU/T!8D0 1rt1Om,.~*;6rvqut.廌\/>sn 6Yf {ݭZFAWɐrk*JAROjSaR?q:0M 鞻=&5$T$\a#(-a@ >f !rʼ򴎄Z7CVsvz?v۪W7|qRRu>Q#TKpZ򹕐7CzVQ5Y9"2oIVlZDHGF<'#eYE`=CzAi8zhpZxzr(wOcW=8j&*_XѯE4~z&& z%bmW=# #皣6mM@ۇc~h秫 'Ùjcٹ򔃿  t[SͱWj L.R&/+_QO+W!!iV;ĝRWq$_)t~"2wrHn#\ZEO}-SOåIi\ƮO=8*5_i:.x*"`V'?zSAy߆jDĤhъ0կTՋЯumzӐ=D$2I34ёHǸM%KH4ˈI 4YGEwzͺUY͖n:V{̗J)H8E| =Yi=4*C.^zG(Dd ܒ猱%{Z7볆Փx^4(IOj橜.^*BqVa ޔnHCn4(N}/U|Ɩn -n@7*LivtЃӯAt*%gE Oty -n;@75LA!rcp)Qpe秢Jް% ),8ݍsqG nl͍\9t2gn"Iz LI!(d$QS$ Obp@>%JRͬdU6З2_#8 ؞yQ+4Eh(nLk\y`F6hj0`Yjخ>3D[K8]h.Q4=;\26京\a*t!V2Թ7|̾.;SebQ]&;f_⥡.$sKmLJ퍣 Zbi1``B{Ui`fC['Mg?CՉ71k`30^UMVD?'K!g:Wz kɁs [dPЏH=tL*iHrL ŻIum:Lj.TL,We;}׀Mdj{y cKe 5T"L8]Ȣ%.Q/ IN@9$=x!Oܾ tUWfennn?OO^CVA=`O~}+Fxfp>AP7wO&Ӯjkg.q*>OL>Nك>>TdwUVܕ|zgx9 w=j? 7OIZ 6ּuT ;Vf$ fF<â ;ok<'d!95x!>Ol$aP%!-?UUL ~atp_wux1+VV̟K[:)rsf,J[uUYʪA{;' &woj֫yzAuȭ pv8i3X[Z8c%c{B k6 Q _iælf)sA;u[+6 R_+qkQEާ0iA1ShxFI"һ jg@KP_%)+[WU *s# +-]H #lL0 ,$~M\02s?mľ]&V3c݉ Ξq~"YP-(/?y.{kdpzpݼ[n^{yݻkxʹ(p%af)YE+hՒ%=/a KjcGC]VYkXrZYrZP$)Kjw=ڒ,Q4-IjG[Tq>4hd>Xe V mӰj|0؂0I"fTkɿ{3/.{D'z^n Պ[fh_4E@6kA;Q䆾%"76,/OZWoBbmw/4ieҠ+LnSfTYie&Ea$9 xjcZﺘL{-^ kNQ,Z]B?l*9ۙ2"2sE7{zjMy#Htě 恁xpW{C0[,Ꮡ?ޯy~|VuRd~nHdcխxAZZk@BȞۗ5XI{ZYKaSVxz 4ctRN,a:Qsb8#.lisεˡ( 0vEcɨR"sA ћYݮlh *Qpd֖֛q). 
[binary data: gzip-compressed contents of zuul-output/logs/kubelet.log.gz — not decodable as text]
cclbq=T)Pq[#OD 1|Fuy\E_vI|EHH^ki5!91{mmm6mgd)z߭}QAd\3P˻/ 0A'4ui I'LK 0 mAlFj'ކ+4*,t..g_wbL""\S)ªl>H2sfT <2ܳ,g2WT2xSagurñW!tIz#S(WUVvn"U/~;%w~؂* bқ0K{|GnV;߭r>>.glYzg*&URrF\qlEeQ ) Rr8ʭq5$׊{gojGI7-,q ys[qa|}Ga9"0xCHhQ[xR`EΌ>$Q'G$džBQtRhڅ(Q{ʚ;--uF&Fm1ƹ5nzU"%]g_/&d-$l5rVOrwx`ۇ{xp K޾y~\z`Q4@ _}'pgn1AdR6#3|q`Gǡ¤!)siw?̮g/ܬPV,`qz|{tu=&O8 eNJFgK#62Rl$L%'J F\P *QfHzؔuH"ɩvTBWoch)%QJK±m8wyRr=sPPIxz‘$D ,R4yL{#ԁAZy "VAxeh F\ DR0*85|76F)IDD|O6ŗv(r"j8HIvaNRH䜣,BM/y*3V ̜:2B0ӝ9\-ꌵmW儑)#8vxA& N;U8nSkP;~ *p1WqvSS6#鿢򗻩]:x:UrIfk.S/rƖ}kPLIDٚI%Lͧ1&uxkL8^ CvVDaĩi3 ѭ$ECVV$D3AsHfȭ87K%ܰլuw]+qN\ :m0(U=NRN~TD9;]W s^kQ!w Pd0 33(9JQx[  ]($~&{2{rsVB!!`>qgF]SڂH΀qmc=׻ЭŮ V ~k^DDb9!\Wn tc7Ӡ~cHߢWXH/@dŻ"QR-smo@Y٩OkNۀUOt`&qfD1!&cgqs &&8D5a'Rq^oc??X{",)GSc/ٻ.h8%^Vwv4Kaxf`t?Sr$žJİk^|揣oR=Jbi|V7Eٚeǻ;;P?EanbDV/uq{1Q˥>.k?SRCMVr{X 23` ̞N &|L⊶E{0@HʻNx~ϛ& *.]#T3ѝ;uoK0b?meO}0iWW2)=t"u[1+>N4'+sQBLH ZjI& |O|W5rňm@8 J͖9=bmWSf( ҧC:?(êh@;.X]F_[P7^fm3BLҵk=4'ycn08]4 Be-W}6pio~ "t,,e:Yd"89%}&gI(%bow [T)mT-SD-ot)x"|n[ocВ/14nvDC^ 1}הF,[WSd/47{=MŔCX0NhBرWRZ(3Weq"<̓xs< I gpoo&]dPd&wni1\4d[X}HSۥR4Gg9-] g+d) d,fRw1 C%ѷ U١Iع, A_^j9 | I&FD!$*mb>XMvvL27Sȥ)(}xxCu˪u0\R~2kI29I!z(&kcEyTbMqx 1ň⸶o(rBfŴzN,UȂ0&eZc4BAP05"w!qԣsIxh3'fG(E`/3‹ sȺc0;dͷw&c~Q0ɚ ` zu%SCr~{kw_3YE8…WJC%_'/@1sdW07w_{$8"#vٓ#~JsERkقk527Mz4M>pI>v;<8I|3}H)&h ФǕp;^>'߳oM ?+]E{NYBTnh*an\9 Rk:(N+@WqYeHW.X?9)`ՉsBڌMڞ>xoJՄB!xFrvJ{6۟cm.hjسZKzYb%[l/S Dj\ ƌwsiWǥ8ZҨo0![2*]7qHj巙Cևh'~+xw/e .F7pHkR>Gm'z;Tё WΆ9G2iƾZ[H 3HD a@pP2Թu.AA"d =e;MV bΡnJJ)'KnۉZInɟ/o}h|E/ɼˢ%|e-5#_@1jXMP^wnE/_9/O\Ԁf= HԁA Txvu@+x32 /*XY2a M~T.2Gu[$R&u|z02 HӮ^axLuə/+ѭ,q$4L"991&2}: Z4b7|F gz0ײ Hd]K<4ߠ-yum3!JjBʉĔڅu I9ҏ0j" "Ֆ%4xQ('QPi >TO|Tޟ,{*Ɠk#XV$2smg=Kr GI,Tȏgn| - 0$l%A)@Ui [A'1Apr3rh0LCVPS8'x-~gB]*b)j&!rb@+ Jo/$rvP,"H`|",T%a ^e"*̇xw]eTI,h{%#s~Xs T yPvP%|R^&_>/m@3Dю)}Ɔ4 )Ap4{q>مUKQZ["Gek%/\a 7GGFx0-\od`+\J10QWcbEO{5{#jVP9qȓAOQaj*N'r2Xisz}hDz9ʣ,qQ+eR#c+!;r( ҪPYS znn7kh\*`-3 х-g8h9H"1䪨Of].Zѐ;B׏7A[%8i܊85sߍo޴z?zI i'sɨ<'"Fc< y+)UL#ItM+ =6XԦRY__xri@HQxO b6XJ}l[r_ ~a rT;{]hBcM@G<!-3;笔<12aA-!\9:$ϭ)0\{UlE~b9͗-ơ5kP 
/zww$W|$aHiԧ U$:MAn>c&^G5u FlIOn~&O @9zB˒iՂmѣ3hO%oKA gd._@-zxzG #=+A%ylqW$p@'P\ցjew F6%f"Ղ(sf9@4Aֻ X&[MRJa 0MMΟX8|Ѐ&@&u5^*"[tUhqB T%KQm!t^ov>{0 M#εĠMDa 'kDLrt1@%@Đ8ﴮ ZQ; jp—~ դjMxzO-RLdwycrMw7>D6[)MNXl ohP 0#*m o͠}6Twڨf^L~ͨMh0Zilu떋,iL+Q\*4X.ze(QB.=%tFU MAy $%ZOIw\A%er}i̚=kh k@ٲoVt"FP>4'햃hA~{swoOöWYO6sΔ}gb}f?آjQwuw[Duof?F? i7WE[G5ޮhY5m9ǜBo&LˏPOg^2Of:}Q0zӿYYZT/aVa8aGEL 6xkL[[ RDM=!{nDօrm#SPHJ!Ovd;] RT2b\z '~%0aK}f3~<֝mJ^AR}ʁ/WsqYp=vK]G~.a;Ldvd 2f`lYynϛNDWn~KQãG>2 Z*7xr{-( ݏk*WYJx2 w{M-Hz%Ͼ8wz ~uzDRApyfD!h 6d[X^:emoXԐ'q`{oDCDw.cfxIЯ:ؠ*ᳫiK:QC J7.R7DtԻ`cvtsw:Q؈òw^֡ 1t7%k T_裺Q(qaėY@"3F"ed@g ?;ɧ ǀݞ+_"9Amw]~^/N:-K`>O'(u/ٵÜ3J(- ݡHcne d_&޶`~]^Iie {)שu^Oi .ֻ`ZbZߡOJIh%@"渶 [! 9["DŽF+P!&y[r e-߹z9Y %/ u)=,1onTʼ*'3R40zM `0YhQ B0E a$ ]I @\Lb`=7N! 8*NSY@C7y©@0˵ ]^Dr~rԧ ㆙S#AGt\s{0GH#Oݔy:"FfR1bm,Ti\D#()CA>92B;V юס.H%INCkCۢ۱4ZR9]=hUKLJX ΊkHXh?"T3WL#bQ1uEK!\d>OQW: -NЬr͐ahAA&^PPsR V7)]b 8YuR)uN=`x*SD25?λuJp"Hq@h9)\XHGJ)&%t* ީxX4'K操bHv K$3HpsvS_e~UHۻn)ߊBi:ׂ"1|˨nn:jP/UAdzqaVt[R-U- ޮs^-OO3~Y뭷çIWnu>.l?I(ct]kq'@As˰|C}KY~̯VC2|2Fbj9Sb50Q3%[AdvQ$(]<.Wd^s{5\>z1y]켯^eRHɺlmJٳ)7xI' 'W9B#c]d$\3 )>"Ȩ\!k[Wo:fZ,|$o$)WEI6nv- \ZU< cuu <-F[U54|L\L6}>뵺wb0.mgvt;gB-YԌ%Ä7tB z띎^+z;*h-uF0!O&""UJ ! 0І"q bZؑF Ow&7?-v1#٣k<8JRyS#08wEU?S*dqQKD9/㐴-Zak~@JUd>_25dHt!;s6YHw@(C>*w Y6#xv;>-n8K!(zRg~96v(jӸJ' rS J(|hS5 \ަ>5&|`CH~`'*Ǝ;}ް<1jkpXſ눓tQ񌧠H Iu]V/l:ByN߁y? 
9A<[_rϛ3zM+~錯~}6㋠K[y< 5'WK%Bep{BP(qB3:&JqN0ͼ]EPvd3P8.QZ!Z{'rd Ȁ9BhP@4l`FJh3.0q2ls?$h p4_ qZsH1c(Pa $@XalC0T`vxbZB唨PVbZfi!>X=B/'NP c qKm )%g86AKZ°6hjto3~( KXPfP*wBԬn9K:x8y\mZoۿ}ҫ LbɶAo^o>.X-mO{ĦM㪬?}w~XKWWS8-'έZ,~YRs_<%[Υ'wׂNw>&BoM`~~nN|HՈuǸk,.~N#iБBw'HJJsA-@95neIWI`Ki>T[[$OI-Oxx4yIzgjZ bMλC~g+|Jw=-F[^pb_ mjf~`+-SM88oTKh[ʻ|4?d2z9 gXf֖m\r̩8kkr??p}IRq呪G]p[س~6֡5O6,J8OSNN=R!۸XErv~G{k'Ayys9@m86Uǝ*22}sv=`"YJ;rJ1M\ӂ"0.kK3 RF`(q(E @eN*dqQgǵ QNq,4z9@3j”`͜@@h- JB@MP YMw@цm,5JGK)|X9㛋A !^3xF}pl_9۰Dl*j]89xvav trQgvre1f׳[~?٭ MMImϗ  $\<'85q9j}M[n1_C[hK $ V|cnŃNsBN+ySir^ »Ǡr̘KBI\F+KXl|HCL(ilf ^_LE&xfuJ;v.N-YVL '?ΊV罫l}W_r2* Rџ޺*֨BRabJl `܏hN%C(~yz)zz)7},~!$+1b)Ĵ^Eg84n:M"0͈t _`bbHn&9nQztr:\RɅwL"-L2b]_5!OkucOk $O4ĜZ#Aw58R0xƨ lݠj5mrjw4*)=4|}p)i]B'5D5 #zįrRci=Ebr2x/UCKT:ǖhLBXmMa9P"Ѭmd}ޝO_|PR2NB^'$X5L?{Wܸ C/cR݇"ezaL':,%lN,R`SvG2++dmb)ŢIA=H>],zF)p;Y|~jf7Bz,>L V5!j5xP 4^k{$P0DxuYhŎ S-3̊iK⢈DJxWQ9, ߎӢ%M;nS1 8W[; ЕVaPVjAlcuN WxR FC.uyӉvK-NCiD vpڀq(J:HkM\b U 9Fff<2x8*OJH_B-hN8c)z{v>\hۅvs\%m%ERD۝۴GxVrL*oHA aM3Djaf~hHنnSfeʄjx5!B\؎h ZG4qXҖ\ G>GдxYV"E1eH%S<^noS 80eY˂Pj(5clycӔAM[eXB^aZ^}w*:adyXʨ{ xZbףrF6ιy( ;oXlv #?#πm:6M@CK$; J9 WAw@ ݱ_oErgn㢴%-aǢcC9ظ׫}KT;.IփPCPP aCE<@I.s"LVn0* A$iM{dHtJ`64)&PoݦJ"a}@뛝t\-[c0j)~aJzBJs"Nxi!pzmFӇՅ#7i5`7.~7ӫIy*>gj싌gRʖ>3O ) >a,?G?D0ZH!h応N_[~6f7|QǍ8nq6nVw}!&,%%YTJn va@<s}-kb>(w*71rnfE*k~&.{'1{*km-f@JڞMs *t:8K),`=2E N"v]aE0?! ٲzE9`D`!%q!x@[]Z!peKX:2-13*gKNx>) (pq(곕ł#YsCUS@T:ϥhJB?Lr}v@Y>;w-+. Vsfp15rb 55eG̐JY QViJ[Ťj!vP!!;_SR#qKy%o)p)Zj`k`i. 
be4X$tqpS(n1}nnWW-HKTV@SiV{$:cqʀHjQPH(FՆh#mTLJq6_)Qrӌ#rlq`ZW|40DDHQ}{Mg??B~şB׋?ܚ9FgW gEJwK s[J2)kܔ9$cr/мsTkD<] gb1DjKAf㞛r0WlV^T@P\ԚߎH0wXBil*kT"?ຶME'Dd^9mբKM'v4xjaU8\Ո>=Yht{oF 3qyVJAG^1W@ oTЬtLӹf[5݋뫫@"TK :VNCp AD)X`xw mc/އ 5Tp5TuH4\$  O`< z 36ia[ kϵ {(Sc#Ov."2z WsWΉ E%<" L.)c4a9:e&UHdiBD)N~%s7x[b)//\@!ﺥ^ ^Z괗tq]bsoJB۽t* c7Wu4kFQ8]"Fl4 UG|T E%io*W;l-t&o'nUɓ;?B#b!7Xڅ[U9acP}Ri/"+Amrb'U*QqUҘQcf9EƕUʓXz'`wL'(X:sM6` ўOqx8a{Ǽ; K,iIyǵ2H+ #BYjL(ʭ,穹- kR) : s;K!a4] O[Sb/u*uqD6\Xr|3G'W料U"j4X(L J(q3$FJ&UDhZ`"kUNx0E VJ82 E8^ .9@\)48ޒ($]` 1_YUQ4,Id[Orn9QL(a",!X SXkj4*{~xwKCj~*r-S¥嚼D rJ/9N .wB9TVd/u_ !+?^:Qѫ[SkTuɜ*wAOȪbT6J(SR(`,a,Xz4 F1,Ku4`z!k4i4dSyiOS,0U3r9|`_R,#9i8Re^K%` GS}F(c8%S|ՈVwSÔMDIKA*RU?EiӾMzB(ek*H` ;ͻo~x;_h>##|"gvS̼VR򫧊G|`fqx&W;ww|7a&Hء:?F4` `4=m6Y]ra MO+G9>MSK LJE nEUR ؽ]3ֲA 5^/o/o]-7Z {ok BW0 UaM8 ݑ@t@s^*_x$a+CuPo m5E~ ;n8wY naV"A;hm,lOVr.h-ƺ3lgv:yHN=Lu}!cJ_+tx|K&4`+4w LODzǰYx|۪ؽqf;mR'΢r!C1ꍮ?S`cȐWvDuVK|Ŋ+-`E˕tq-/Qqu*fR L[kR+zCg>6$HkF4^@)'/M僶;W֕si+&i+WbD[D$5w9o=HAvj5GUȄ${?!Og@ĄlL?Jun[½;~wO/d b X$z?_sg[-%máI39 9!ixgN=iwG#ԫy5bQZAa?Y[$QՊꛇ\Nշn/R[ o=Hllg#4mdogKZhⷀmF,L+Exe(zsFӇQՅHްo]\nWj5UK"/Z}K"PM _-$nFbuL8+}qW$Xifh hۉo>SOi8NqNtZB6RE!-+rKBx( "%R0%>hٷ,fm(r23b[y{qV~``U1{GKqI3fHc( -. sXf M5=v^㌥_n8eN9Zp`n)*RFKe\(6C +\)g>r28NQZSlLӠQZs * *kmH/{,=~L7A~f=IEdz&),cQ_UWUx[hH%-t>RYRDpUizZE"4+{d)-@P.Y9ag(4S',G(VjD)hQ,/8yRH8FsjrŬ} ixS-ʓ}wWe FF=~}GUԁ ZiAa[e31cl:HY"(j`p#ਲvRފs!k'ߞfIb5ލ~zB+}`l:h²EJ.R(fr#F̌R* I`i>|%dvW<\jiWj߇ʮOU+ǢQ~xɉ*Nf_Q*TDZdٱyzJz\†VM4V9G2ڑ ѻB 9T `꽑,W ؔ>GHfC)7N!ۯrNv(BJ,qE IA :'-5WR Ơ$15CAHۄb (׷z.w%Ԉ6o]qwQեXaya }T_3^q[O椠9mehC_"Ŵ-r{3#,qg?5%AFA Q JH6H%-;VLe {YlVPISi7V_2ChUҿw (Cm~W%>ER͞wf)͏NlU%ױeShem $Tɫc|W6ҕMOޕATWg=у sVC+q`=s:#94"9݅{ja3n9澂hsS]X9Zv8YA)Un g`MS!`FZf(pa&= 35: ~{,;rP*1- G^1m(K PCʳ)OAIAQAV 'kdÚgUB")'МSsД@*<1sc[#%% S UtPdj/{MqA7Uy ֛OqV!tU~ggMWY7*!L%W7WpըI&6 +y AYN(5L FWcU(UX$Af#yƁZ J NcOqH%L-WPP#0CTJu zJ"5Ў A9rFJ0B 1CE *(Y?7) üDpȹg\MJ73`S5ǃi(Rq,7bp.x~! 
"p [o7\K>gW1!R`aX /%I Hδ0r!)#9ID~1OeO)Jv [ӝJN4b4&P Tep9H(F} jpvm0T\ɽi9Q# NdY%Pe|3cS0 n4H`٨DK-7 iC ,"q;iP>A `X @>,[.Zk_TJ$A]ݾɊt0h>N+R4!y(^\P RWonOoO l˲W*qo Tx(cȐKěΪ'ĖEB֯WqH%Cjl!.Rʷ>Zv1z$ÀXF[fA*;otﶻ[jhTJTٻxn4C֟Z'^-5V L}xrD5kqojdPm->r# ls#T^<&.3%w ~3^2MFV:ך K[I<|W6'| l L"թ lyZq& n8eN˜Sa-a 1E0I* W9(}XB6 DPN =L n(;'Vx%p:,w+Ք}#6YlPx b"t묪vCf0|Hdnh &S 1 B\Iwd +r`WC[R H8q{1(0 u4}7`r}S8(E ^/~y}мGG)YR޽xVK , Q3ϔ.`94i{WfU,iYi龪jIY[+!,Ws-`GRN'Y\ۜhyHլ9Z@1$<9-K9Q3MMXgKH-iA$+P C}ttJbiPiz:P% ~GA% dԼB5$j.5dOyv E.zy H_+H*ջυ}tq5rcs7O'~#zy{GG:dQ^Y;w?ݛ.Nχ UGtCYyii #2CsB,lff:m^ܘ-/兰?|]9l満I:"d򇨳a2$9ri0!i, k(e{ޱFU:ڈsw%AL5?%,x]>(w>l5z|M_es+:>z$y<V ċC'n|NwT*-׀֕2mפ>q5Oq0JPW\o0Tt|n 2''ژD\)vis.QXEjq<$ j .V0qs@#N%串p?m ђU2S9Ӏп~XQy^pq_a JU\ygfO2nnnnj[OƏ8S 4|z[ 1W[Ff -=q\K%QynП_+h~ŏ]~)7fG 1Hgϓ@1PoeVs4\l^F76d%fCی׊3C'oH, _r/0⿈iaEK‰"کKBDձZ\6.hEsَ_t+;*VpVza+c;QzwF[tv^69dL5'ëxgfDVib_1Abۡ{_強 _ɿnEJJ{1d1A>y~Ě+Z&/ =xbQM.j\ .^RZ M[K3߫kkchKWkw^8.:w-©u4J1bg ^|޻^^F:aihsf_k["?C߮5nw=ܻwJѳ"VXA)=*uTH;ʘU`MLlNAҔ.ut HENA )"gzNq"R:ݫ%5b3/jtذ"G(dg3ZPOPRY1SIBHJ4i$RB`Q$'EcT8 w4{V"9RR 1aύp('a*tγi=4;*RK`Ofl T᚝T xHn=>[FxZ4׵*{&m.5z5S6*kx~ o7 K?i18Ⱥxz!?<=/*KQG>u| DDHV=]r"d:[Kj`fYgU;3,y\ <%r 4oDK~ K]⸣QB5\SӆEokZ$fRW%`JuzES T7LH˖7EÖLCEs;KAXEڡԵq q4մ/;$ѕTeHLҋ$K3wB/I ZC mڬ~^TȎӋbP[Jq4-Yi| *L~\1_M_-dJ6Pz?ZR:H}\Си{_Og$qJݸVXcRuӒL5tίMr[*[7nI6%-jXk>GIy3*F[K9 V%1<QK}GpAEesڰ7nI6U9~"$Kv̻]>cT *DSuc㍏haF{@"qM4ŦHdǭ-Qd-I}GviЩBytHֆqmlSB=R95xR8 gLxYkP T`#5kՉ|>JL;kѶ :ЬG*סppїػ6n%WTzٓ U~صR[ٸS. 
.*s߷!91pd)qٔ4htUR/_ү+&A!c*2qD˨I7d/C =~x F 8tѴu;x cMT3&|А?kV$Ew%/^Ԭ/_f5O|4X62=P}[dVv uv\VEfv_*DŽt]l;u uqbbk*Z\Ђ]5dcF%鹆/F[^ͮjY-S1R1R1RN]- !n%%cN☗^Sj4fdR+꘡Ȓi oO.{PVƏeM(.n =Hn]1lhq)MX۔7+}b7"Bse%RHp2fD.x2R D L:NvDZЊN9JlJ@s*.N&,xK`"I*R%+3"+ٌgd-X>K$|w\->ٜ+\ǩCgEJnz<<|8TI%+ % sF5yK[bD8̌aq)gЎ\@ |Wv1c-f>xĶa}iZ.]\C`-[m:)=[w H/k.89{2SlC/ fP+-HQ`D *)I{vŬ=B@O38aL5v2'GtVDΞ6UeGn*'jO!JEڡn:6 dA0.m­/gD3=%`"yx.|Yy!>䛵W 2TKߗ˪IX\60_j.H09o$ YxLXIp^rh1:ixLG"bҊ=5mR10J8U2%q% ,մ4XY%*u/۠jjGk[99[N+rom6H):,$sx Lj֯YQT3s Y$qS5I1Dp%eoHHVKl*8 xTiSpPMbESIq}|Bi'фK"9%WY!n)(jB L(m|)ׂj5S8\7lԈoޛceXsxVYE"*[SP33=nFYp}@:fX9ze:+en| sۮ$1j5D}tr- ABة2N]?XUojVH+ ﴇ}?nc_7&g>]Ni@۳mt;xxF)ͷl6u1.xt%)ȸ\0Ҹ n QzțIc㫑W=Տ'k HT2hb_YҠ$:gn@w xfz6&fX'ceX sx0JR"`{A!(mFFhJ_uAm [JK,VA9bldYOqsЫ4R` 5_ Tq&cE` F᝜ ADc~ܺS BO-(KtFF`o1U@)q Ns "%AD-{2hЍ ?}Ï7L}d9jWfV\]oW~b.ULQO'DCB+7;0Gd wkYF *fZJuCu5v㵿.޼Ko>4}~7O~=,?)890Ygrc$Zͅ\]u8V ,wX~0 9 |g@cN3!1 wm{>X3р Sl×]=Jw5S0qaS*Ş6ٖm `R`5xJ$ cTn'2D֋r(ǔ\+evuAC|Jq$5(]ϋm#O͗OМv].EuÍHhR "?Br:QOqe :MTxB8qx1 @H)^yk 847 ՖyXY(KKX,8RۍJc!"-)aL.KE\r+F0A6OK3%0D_h7׈}z5ϱ1ȏ{$Dӝ4RLi2izǃD5S!j˥sp.~}V k-/WsYa5KK`.#E$]-^`q[7`BWM`WŇW..L"*z{su Zc9Wl?>ørOzp~6&}uAS&MgI5z"xJ|/!̺RXԷw۽֬v 573PU E18u5B|{6rwVU6J|#%Ηzπ[$~V??A~ !6%|,nB8́^["}\ߐ6bZCJQW|Mȶ·)mHXtT-)tS0tscTf~mcTN'ɾl7a EئluD`dU!=M䔧-G)ei磻OB!bLS'H},eԆ(0FL%f=&:UQ"cˡH{pvaJ(Qvu G(31\>Ƚs*KG)x"KkЀ9GOpr>ɢ%H F%}kMjcf󏩍nuppGSVOhL]јpjѪ]m(#X 4 8Svs+o򜫞\u6wsN(2֝;d./|zI$[G j$O^8^UoFwD(/fb]T;Z @~b Tɩzb$T칔֗R ٟW'3~~ ɺmaFs@8nqWNU DAWP1V'[Nl#iZu_7S7D& C"" w.[ç QCI(7*%?l/`NUϒ nzVLP;}V]1!L=k$.&1 6/TO.A Fƌ u+l ׷ZFAQD${"-S1&vm5<:8w }Bޚc%pa3w~oaݤ's̾O(|NC+&BNG^Jtpj]B36N$>Sq-tS8۽YN[A@)y*gِ+ (M PZb+Six# G \PdJa`kXQ"ǩz\Bl~szͩ8s˜HΟSbΟS~{+@%;FJcQ"`'%ὧ1CzX[S6 Ms-Ȉ+5ICtH wF%4ے䌴=\H[!y)4Ui܍Hށc/uGdO*9W=*K /@K ~7'ɔSj&P\ 9Q L&Z>mCN9d[^3BnI)t#6ԁx[ Mp /!O/VRIB6:oӣ{a{do>R,UFk-~a_ kPw6nw#^Mk cr_?}=(֨vSdXO l8 ŀߥ%%!Ղ#dA4R8qruHT)U6)oVNEǹSd1kP\2YԊ8L{u2S;sfZ 3.Y5Hփ`-Lé]V Fe$ġ0PNgH$-wh,ǵ!wRjF!nLPtw媼ŗ݇KW۹[ǩFn_ŗ?_~rqͭ؛Mzy@QY}9A@V6G ˔VqT7?Ey%~H3"-c=nQ1햋AFh+rZsZ1Dta4+ eGQNc}=wR, 4IFrbO ==x'p`_U{\` 5@R^Q8Dt~ ^Νq 
6b3()cSP}]Rd (,-J(Th4:*!KU+?౴hƷһ `kԜ⌱ |=4ƨ+;q.Kηu '\~33dfOXiDNk^җǩ[͛?|qŏ׿~DñZo}ve^tJYdq}&h;N]F c,_>&vݐV4$Є5m[coקv"8!̃V4>3kp^<(BT)|4}+V~]?z춒>c_})zNAUTw\ w ˬkR/O:1¥hREx,eTYvJr$*J-h ֋.ǵZ(#@>H,c"{uaeMa%oXSFY)%b_ Orr<>]x~ҰLO09FW9c1hrC592(.W$(HmD׀1vW Ii&fuϞ+OꦤIy Xk֐J&IUT/3^(5T[^lF…6&Pv^G:U!Wq(!Blb1|Q?ݍP #QO>`5?Tr4)tYtR ;*3}?㫬~Ʒ)eû##c<9o}ΜV m߆r 2ZTJ01]= ܣT?=WY8/xBhZS;e6:hnN̄j7i'WL(oN(v8E4wXH!fF3!!k7}THFO!5Or>mJJ/U%2dcFx{??WLĊC<R,܍'DSxϩ>f%{dӼa+{`?JdlŶuj!#p.,FZ KE5+)ɧ|/k^FUj2<ф-VD '69pI.-'R:$DZ4iy RZjʒRHx~A4'zbsA2 3G6%0ۣǎ-VdG+]G4 iE9Ixc#HEQF?q(G1=*0]hN As6;&A<~O_Zba$AkC5sIs!(0);;ѐQ֚u}'QMEo!bRK5@P+Fz>AE8Tz)ۤJtr2('"r)X Y"lVEn\NvmmlXVJ'5Lό EIP)ʣ# xb{]4e gT6laJ)Q\Ua^c!TNeNx}s1NV-0y8_3X h˻F)rh%B9)EKmim,Vhev?]``x MPkC%_S5@үC$H]F=FL{ˮȶX0R. | !Qg;I\Rxpx]x;]~akEA"-TꅬB܇B䅞 .qQԬC&K9}vbEߑ"me \f˨ e#j/bUV-Cg2F^UY Qb_*sDiz!VP_,h[]8)+NھabIXU(V+Q jT(+F.o=Ċ^bkKN6XFm)#GڋX6 bEuXѩm%Q,VP̩Ժ4jq@K.`929!5(-[Z$NK @Pۣ҉DP$hF7·Y!H`:kA%QU`Z~Oe# ]JG!)GbHVUkLEg F,M=rjZ՘d[;b\i%À+MQs9睖CXqz( ڄb 5Q+Vpy@8n[lύHfBYWD_QLBBJT==Q( UWF!2joe/q%O\| dD<4kIA 1N1yP*LKn=5M)Ê jaۋے]zg6IY(U2g&3!P&t-P+ԲvWQ"P4-%w*&y)}DIUDθܷ..s/̽\:: MIN9 ΅IE{}}}1F?kcSsq8 ATtc_,EaD= kO蜈rn{~u(\n5dT2Ou65[+lDm>*׭ #0lZJaKɛ@c^^NBU-x<2*KՋL2j(gUoP̋( ;gHo2USzo%Ԝ%GuׅR}\JiUǴ>꾽>QfZz6XFm=:A^ P >džOw](UT)ܳʺo9F>Le ]X=81N+h<_-jRC0B6An0w]Pm{ ǩT3=ipwWG4dS<'_m yb{qMx\Ku,Ty}w{{?u# ~I7VnRq%)8ru~!%"a8/TbbO7 y,+M}Z|Ӌiq.NH9l"e<GbS"NV~e\@m!v\O/6cUo sN k=iW)$ɴ(}ߞS詭YE.קS,N˽C"Wu!!>ea/9"3_7o+_?^M{ExE& dH1}H62?ͫwN,叭%J#Ƒ'Rn-pCi"n ؤ(\ꁍwɍz21 &ƲJ@/M%c@M!E44yU:pBiB)Kmᰣq/c3lr4:M’,L e%EE4UzrL-F E²4e,Z7$3JsQ,(4^ DhH([Hic²ќ-y5,Djs'@FŅu*p^(UΘBH {(D9w;l&诟a;|`~ }a[RM+O}x%K4i^LMʳ:SWh!( yATׁ4?R T@߂ʂU0mVeR-r""sՁk0h0OL"쌂|!ڸtk7?)X9RryUνoNvGEaf ;C}#^Gz3DQDyR#>Y<ޟeT[W~q/9E)+}3h(= H-,?2? [9@Aw,$OzbhS8^! 
" ޤ s>m>Y= 5U.:D[Ԯѫލ3];ڣwkA}Gﶿ+T[ޭ MMY>sͻOޣwkA}GU[{rLօ|&Mie=0ۋ|M'kG2y$H~5;V䵉hC ˨ٴOԸh!Smϩ ]-/ R`R"^=ǒyxoc8  _LW@K{vqKtb-pGHLtEȔW$F&e;ǬVFM`L8Te0+B K(/=+,Se, iHn7:8ޮIY=t?Zau"(*S"+(@ C)<O/h{L%D-Feּ(P gѥ۫/ߓK\>8I7]^/߼x]<;-^._29&/^u|)wU"cpAxcLNJ;haZ#(\5}iLmqj5ʤ"FYQ$ؗGof[蝛P$:uAYZjgA;W uХfQ:\a]( 4'JIh,+m` `7ӋShͬ_X#oJ_Eh2iJ.RYJOOVS|&Z2sIAОhhՈ~iUȅFJJ/3"ZOiÆ`]H5 >pYmobzF+Un[\f.Ht}SRIjM&3jgJppE)352k(uʛUR `Xp-P(Q'"B,%uJ.5gJL8(P{j'KQs E捎МV0[h!JHY])oH3%!24e&p@N`J]-E &5i6,T4pd͒*X rUj)u7#7gTR[^+EtDDȳ*!*3HD<(i)MQ=-ɪk LUh[z;ZH1 ?~%{O {*,9R%YNO16_-ºJ`{WStMUR':P%wa]%G3J66UqI'(LhM=NfL:ˌvך1oU^ 4)zl@S 54eOBeX>A Z\'hfQ l`6!eWxWZ:D6z FȄf)i_NuǷ;F S_S܌2mPF}V ZKA:$ΡT W}iN]0\(L!(j0fnb@)ηQ{&r=ކԞY2cK -Ӗ$ayԞP&r9=:6sHz=/h1gP{[GFiJ<[hRڙe㸲gIjFM}L;d9\5bZ&/4m撷Lh ?)p < hs.n5\3db2B3\p[Ϙdt3 wDjL{i/{S W!)|vm L*v/r(=H7$5~_R _?mDTa;+R k2^ykFFrzbFf]LeiӇr 'm`M)t'rN@3X{qK}:iHٻvÛPǷޠNu$^`JS8@p F(4ON [~u^kӤQD/ 2/Vxm'=k}!-/my[+ȕcj]a64zznfTy%sQ7ӖdpT;lӃkqP7h)#-ևb/T2fj9URaP 0m˸-7!ێx*7fbm⎹UJZmd&No!%cwVy%{0ч *ZYjS}ɪԠkLUԋr"'dy:9 W}ak? Q_(b?mHE{gXBqyxoLJyfvvd*5=הۏEwFKs<4!}QV[ͣ__ڧꃗz7J)؇Ln ;Bl_Odc~O,|d́6*_G!%w% !2!~snh?}3Gbn߾nkH_\Dm N8k7ytGOnh?Ӌ |2vxAEtC o0^zwC~\yfKfrW~<at4xok{J5yhe. 
[uTA+:2¶,}&í|:r:r֘<[;8Qn %N?7E)T5X##+3dgu˨>KQ#yw({ʛVU2,﻽N>>NjQּ?wD"4RHW `8"Mqc`-xpss"gaUGpJkhY<  |tgA{S#cgvixcLV'׾i)hNeS UzRm:29] UuCQhgtѷ$=lMKNj"6$)p&Fk T BCTHu/=}v9lZxta/ $) rF 0&Ei4d!%5A֚.<"U NyW[Rd+R!9c˱JpFBM8ZZ-P0H%(M8}ٵzف~y81" JPY?n$XEv$ީPP)N.cj0:,Ԛ%`h%}\ڢun!&o9X 4Ǩ>LN?T&Ij;P7 wc,>r,O6x3͔|f|#l meǑY9鮓H* EM Vn+XRL1Eu'1娝k^8#UBb|n`m)YS)'ݝ9VuZ~#IY j2gR]ʡݽd.DvųsN9ʘ)?ST)Q ;d%)gRhʠ&UDr n1YD$^B}:pKD⫎HР'sR&V7FmBx4U !O6Äekm}8v͢[Jg|?rPMQS,sw5A47!ՙ ֺ @vA `` v-;)ʃV)q}ǥL5 \iz/SP}&oĖRy`;t ᢙ$$6\Qnd5&ڇo]BdH'^~ʘ-d##1y^{S-Ib𓺣fnrKlM;<M'Oʵx[O4?dϴ5_x+ >taY7~)b)%k:TehХ٥gArFqҜ"JÃlgy8Jk7~vsZ't/isq p@rTx&àbR@ό0pU7MlJ}d-J}M+e5x԰:jdJwd5 mY,<Y.Aү:H*[֜z 3^.c]2͋A6?E}źϽ2Cs13}/D}TA}> {&`} RTQ_8QµnۣTje6ubuL~2z1ZaQ4| 3pp%]^GKt-蓐XaTҜ$rkK)RL^1\*icD {ұ,LC,g .HNLr}(!,T}uTA&P/XWkBlY-^G ; Fu(dXH|,G J~Pt)o+\Cc"#dF=Fxu ]rhB a5* ֺh%陝EIԌ7դhk5aA0'HZKi٧ WWbVAL6tp";']FkC$ -tkYA)cFZ.ڢW_ؙt+!OlY`5b, n7Urd内t:㊂t,P۾ ^9^BQ3Pּίb#Y)VMK* 94؜{69g_{F\h]rt߽eZ"o @1,|P_o|zQNРͣ^GY1 K'޿{\@Ÿcl{OS>bW̿_Q,_=ej5tS4) xG<_)+N Luxɏ`r6wJ'e}JllJ+tj!uvd map=ӕP_oftzuws=" b[{w=@^? ̈́qa?vr:zfGyԥ/aYD(QODkJe%g+fa8"ǃ@t R菳[Xda ȹ ?{M\?EN8i<)oݳ7VE}QYFqY Т(c01ZRS'3rz2+g([.(.ԉU&84eF,L^SJi2ۢXLJ35MFFXcv)Qq4YNjsva[kO*v*s'F}W}^pWaGF6y<_:NSvp_ xW-P3cjfF-.jjf,ϙ݂b,j%mR%R%s4&:Qbo\gY3u*p`Eŵ˂ggQ/S`&53W3aTХ^vDF Txe` QyksGX\5떶:SJl3Y)5 dtB=aϺJ%e} 'at /EMֈ YJf-&Z!kb4U?$ {nrԈ$Tpm! 
A[L[`әyW{~*(6Oy!*ՄH!n-'01מk[gPƴ5b|DBsg-[ӗ/5#]DէDq[BmU 5+e!:s7@8 `elɄq;snS8q4-e426'?%o΂=~:8]ks۶+9RwC'vt^2MHNeI%MzHYh]@4e\`n+]ٯʪQĐ(cU&5їVjU~9T#A2&:* j#Y(+ 8Vz1ŖRQNaI u_rF1A6'*^J.i#qAFDTCjW ub,BPwx,0N"tTDRdi!"m Eq1M IDAHqTxN$(9X o,dR+yݱ@h P`~ʤVavb>,C(trH-2Wm@SBxbgI<5#LfԫdԪX77s)?]8{DV ex ɹP|='J&p_2]fF9gFTxgfQ!@eJ.n(I0$|| >X\:YAɧl t06zL'w{Vf{˃WD2U͆Xhcɣ W+V<NM1'34|N{݌ VKZeI#9"K:|HSoKtj6ۡ7 '&ϺF$I**.B MRLp;q}Z̤ν5msK1&i{`cS g~@Ŝg%֊O=M C]Hy-#ao] ޞQXj񪄀::@o*!YZGqx,HA5&Qh0 5<Ɯf45yuwr.*(RUy./W- QG [_moPQ;l:1Voh9D#E^φ0UR#dXR 8+ UT'3c$+%ISəIP\odܠ?o|'Pu_tgjY,g.fiE`ޔa /?̒hif2 /zX*Z5uX>A Hp!ycr>((R'8H)M AZ!%*)IL9l(0H0FH_YEx0Y%3c̖[uju3zWhq75WQ׫O1a %Oٹ0b'BK#& 16T@d>K靀BcF؟ {%>VfGyG1pc#vTi Htd".NRZRD袖ֱUs dE>v򄗫A#j)S˛oz-P^I?_L|ʫ~ѝuy^" BOWW֢Xrev}`6<5+@]쑞/ZY#T2fgϓcDqJ?,c,ױϓK<r„D^]wCKQ H\,3T=5&:g@Wev;Fq3e1N^ J(19]=}8dړRM=]E J%*}Z2_h{e aSrdo 3sg{;~ˤWU먹Xi5w`*煂.Mŝ];# 49Mg5raf;LMlYω*׫_ 򻋯Gwq_}O0Ie 0g&^vnbBtfud_i2ދe](hDr0eG(\sLճؾ|ZgZ竟k*D tVbSӊr6%( $DݰsD5`_@:/bj*d*lzw թ"ԂSWRK!kkIJS>iPsאB[aT;]bP3I.5/dr惸}Wςt6y VK[ն:_W`6d_^md񕛪7d9JK:fj~b ]NF}PC7̳74_\Lt;Z <&&Q>~Wm(gxg(󽳆h-NqՍyuFE[=Q;Y09MVuϐ8EkqJBg nnuuCb.8Lhͺ?Giݚ`_'s@Ŭ*X7WrP>]!F=39FW#MZNl>ڭn3Q^gףatz[^yIĐb̜$jqHNe%K}_Zɾ[ˬ*ȧI:* ~ʤVT=`_7QOۛ>x\ޓ4"œTNXa#ZD 4F)Xa060)M0h WqLF}J*S pjKJOA@"#5>ms6B#5 {Dܮ/źaĪH͉UcJj:^beb*EKŕfH"%jRDIDq<\3ipB9%X Э0Eξ, g_>/GRQRpY&9{L1Rx9?ϲ#ZC֑kEqgjU/]U3j2^X-I*6X^l'׊nӮW\IiLL6bU6blD޲Q {N#mo&@VU"z_M_~V %Ub~| >x߅qp~|0hwՍAvwfyЛ"+(OU:7+83L`r>CDM\_Wpys?WV0,w8PoꛫyMrtq4\XR NKy&5YF<Z+B> 4-Y&@6sNkbҀ}.g{"FH"II2n UYAPUKeAhBz@/f[ϑ՛ZTL7/\%Y+4i 0 6`{Jrw#Ҏ)iM8o%fXt ywO}{ Bg;zj0SXh]Ȫ*aR㳩|9֚\xnj,]y=[,LJEc+})ү#~aŜGa1meQL6LCrF S3\S<[Z+g}r\klXaGDz2Ž{[*Кn[ Cg"K/CQgzVSu"e'M6hh8*s kcZJR|\צ)uyEO>:b*%ޮ.Oeq-:f[}=7/W/?_ k/`S0#y>i1!TG!vTy|srwR6B^9D{aʺލxޭ$6mu5jtJ[by"yFmA =g 6IBvPӷL2 $ 3gjL|Pt.K|g<21bڄ%uVXɏuR);kp/UlzfǖAl>Elkѓ5Z2, j:X+B-[QiK) &9+ ȅP}AdJv9FVRY-r.箕([7%ڎuCk3֍G*}k=A#`mgKe$> Nl~[iC{y/h n= xrS}ω}1,U0K3Ty+e<±vX"՞dLH#y1ѭxXAZYww%33#asGP@j{^A^jnY]J;i7 dU} C UfYRoRB\,DiY>Bru"F} cjZ|E>cd(5ZAi5 wJMSj/DծƥV?3^Jj˺} #4:RrR ҜzI&Ҥp1i7M.(lZ!Jmݴ5]fa=fkeU&H2#2e{YW'a 
jNVwj $G}(DVǞR#^23PǸ;)} }Sj\GL ي[CY>YwT|n6;L'/kc 4Yi.3E֤h3U X6A'SN }‘ɬT*D&gX`*XxznG\G RoIm=.U@2TA<,P$CUڪzeA2N*kz2NJjFj\Tʉvj}N Ҳkѫ碯8ԛlͲZUdґ+$CUb\T $Cf7 QyPc*T2TaE qB%CUa!U$\W ,PEuU%5%0*@v}Cl]߰UJDAT=DD{cE躅u09jȫl:9?M>_>l_<^aqk|h 6+Vog &}~; n}]oFWJX/ꢸmc$qK a,yh,3,(E4Ԧ(9y(y?h %p`,?~l!P95bf3IEe&kbdu6f٧Ui+k$.w-}kf yo1f'[iK흹[ؼ`yJ)͆wf5m 6܁K +s|&^t~.t2$~9E+o > ܑa/c{Dξ|o7ެ?WowMrHŽ(酚Q" U*"Io;d>.fTte_eKaʊ?%é@sPA9PӃ?Ye!Ü.")/?P:MU FzɻbmVhZ<+}Xt(TG;oAk#yGHc'w"XbNY?2mA(rF0n ]eۗez6Fzз[ zx:"{D cbTb"kBij'< Q"8JG*he{ 诋 ?IdcO.]v 0X M|),6g 柍,eT~.fNAlOrI_X̰Zhy_HSR[qOME%I#'q%'I`$ZJ RD԰P&jK6 ^Y(; f_tol2ɖ)˻d/+|)c˻<<S\=|~/@ DZ;1two;=Dz~@zb b4 ;b1.'[ӈ ɳ`?.#})V+>>eLIP]%.cx M_#;g97]wCaZK%0).'lGJG]\rsϑg ;;.d;㣅s}n7 !xmqX,7!Z5@;M_Po/5-= )g5pp5ypGi_ ѦL9xa+eWn0"a rkfgeHM,{Б68a^=Wܜ)bN_O_2=L0%SzgzgE~t*V-dDy,?QF٬r<{`[{N&wGԤo"9X^f8 ŐOf@$!/\D+ɔ7|[ ۘh<(Nڭz vCB^V)JZqnk758vѩ2턫 =vꐐ.bY?!6n]<,[Xqz=2}0~{{lT2(><;=vGkO|=K󬟳Qܳ$|Fkd=@^%}nڂ=NH5ffʯZ9]V+wRver3Q 7ZDlhOrFy* *vaJ R4n$4P7z{]-k˱_תn5 Ȇ,-F51w;%ZbgBeCX>RH]K`LIٯT, qmFxq؈ GCVqk08Se%N8i4Eo8ɊLeKMX`e`*9JPD2¤2$\I̸ u=:o,P+Vn, ʭ~RE*y+1bkܬTq'UXҎ/tJ6I࿐Z>oR/ Ɗ@$]f)KKyq!I)M=v%⪞_^_; M"ln(C5pzP;;٬R%ZPVBŢ/E% :^I#+xNr.`ezEV] * 3IxJUF*pRTᬿ6RTYLuRըT'UVlLR]ETeTk䔐褪 OnQSJ~RQ f{Z@[چ*is&_ I{RHT T3$QOpoTa'UjIURK,S.3$mp]~w"sPDQ%=i܅ 5F gȺ CzPZc-$*)UY/Fq6Υ̱8E]e%cj?5jf.I5*..tOR٨3|7/>NHN5~f.2Vdd lUlc5N 5ܭl^I% ^_߼fO㻬 pO C*O J!)r*H@"<$5A!'LPv%tkF# %ҏVhL̙VOVA+D'03_4+y )eH&z*My/E1f"n^9M|5YjRzGc4ݽ /p%Cup/6.{P%sfv",fБ)QNJtکJpT7q8߇oeU3]x #x\hp,- m9GTXh_QKi*DFHa SF13!DD$yUeC, GmxVklwVU` LʎDD"vBH%U5I-ᥩ=}T0@NUL ȪL6?N@em{ITi-f MJA͠4 QLҐ#ujRڦKLCPJdCa&ř1#ƥBc[ex2H} 1b S2IUNJPH*$ QĢ$AZ$@qʰ!Cx_!iYv?zkϒ<kٹ&ʎ˻d/*|%cʻ<),YCwڱk[BJmk-n#mGq;յr,p}+}wmSRr T rhۅzEsMe#RH3T6%Z*6>Ӱ}HfgL`O\fB<`w܌A_|(J,cRxf&Cln0O})T B`&^MNéq7|wo5dKz~7~shi8ίAhwV@Ka98Zu(qw+Ԋ: v'j`K8۠Ir2v:GjqbeN#$'d!*(m.{g{ytv $CJV} 4ḙ-p^GESr,>sJ2Љhג0䁻͕_jΟ֝sZRC{VN:jۨ%f~y2JP9pVJ^y T+Z<{z3&:H`:=&̦f:W ~Y Ǔ(Gjn |<})[v;r.Óe[c,b~Emb'~HʌS^u>>5IvW{ǭ܁2?)IB^V)M[or|PU偏T}GvBK!pcڭzvCB^V)Zqni7X6h<(NL%mLUꐐ.D$|/=V^LhgqLF /at|Oډ?_0x9x&ԸqefVD3OQʰ5w[Du/z_^ 
`{_TzPKiP߱Ta]j٦ǩlQ%t}SJy_ "1+I&qN,&'^;(˰kAy u+|6p>k\C؆[ԑ*ʽ6$nSϒ <˂BWAHxy"SJmۓǀ)f8j1Eza@I,*Df=izG!VJ rZ|dcFg ]"Ja/c /[Kj"9jOŗ"wj෤Uap80OkOi%ƨh#lgKaT+cI)L4xQq.< s 3)UXa I0eHE T?{׶#dd F z`b eoS]K)PRb*6`wJ Fos5Psl*S&1;^"]$p* }xBUD!YKkgȹZն1V W``ͬK*:pѭ&u4eelօ Hʚ+(nt+)|QV Ѝ^FvPo+sRKn6*}%aB uƂ,ly2 c:vU|I` wF l(KG)n(W3JዏagWhzݤw^7:3T\3Jͱknetǀ/QeMZw fQ,eM`)[,+UݮTC((]jC267]+tިl\\8zKmdz˻RnYO ),N)TB.ܴ_{"cL>j埮XaN^fǎQ4[N%ƻ%^*FK#|bFkagx!\}>a)AN-3d {t(7cau̬%$ǃG[px\ <q@_; !ΎO{u2s|{`ouF\C4p쪓>p% dj6Ƽ-DOw#Sr7Vk!˞^ZqJ]AHNj",r i6V3,e} L F|1=@")e@RFC4Sӷe70@o nczD6[vKf1C4 Sf?r#!J;Wn}Ys cE#!tg%hs+#Aiܚ=>{7O{v?{؇}cQG|49370$h!9>4(AIiđ[$Рo1i:uyMYiO'׷!NO;1kcOwqkh1B3zUP(WjfW?6S|2;!@#ڃ(P'JhⷷO?~h)΃=3Ih&P;:\cH["uMuhOUjjo?zUU6*GP h;ݚ}zrorOA.W{GзS,4zV?Yg2z#41&454uMUT5 ;S{Uebo.CL!wYF^+coO)rڱ\Yt\v6 s%ʹlR=ĉUk0rLͥ/錍Y[&4\6lue݁ŵH\yK4[{}X }tCV{GTCo6ƵW$ ol*I龡.474}CMagÞ2ocP&; rF3XHu XYTG'SSsH;㖝 ?Kq3Y?'=kZwu|DP6&SVBjK/z8Pưh9ʸb٣x܌Q2E4jʰYFNE@@ ([+﹧B6}yv Pw9%Dya>t_.C-Mr诶ZJ&Νmzj")uF+9 rV3ςl9&o ![]V8.rAH<Br` ~>nؗAavK:n7 ;Ɵ2W !)gjY$IƐvU5/f1CjL)ad 9  ex4'ór}YBm{WYd~+aEELxo76kE!OV%uC`'<"j#:piF["5u )8$v3DBpd]֥vQJڄZ8 5>@S .C[2#pY^]#^sk"o{e)>RS\F""2؆_lL׮9-aj^s9ߵUz"fa"x&"'`z ) 5v:)%'{#ja>F%f͇2^vgs}EN=26xa +;'cb%}`en'U`ejnwxAU^T9!bk,FTpBTV\\F{Qth(5^RӝjY#JQ]b=ՙN$I!ݝHZQg^Pw>g-GT!ZYw6m[kҁ?!?^bҰ7M>Woj+@]ՊWOO+`pJ~_{[߂b_d𧻓.8*lBDg ޅw&|懿Mӛ߿8nԄÐQʭBlEBb! 
!afFʔpEopCx$2/=G&X4aaTB 0MTcc+~h9>Ek?֪l^oV|\JP+Q{ۃm;&5kP=L0J+NKj~^FpƩKHG)'־ǿ>mqJꜰ)ZK +,ɮ|v -'fދ_$"-Q[*a.da<(޻_}Py}șEݒ:c0cTlfPXZ_+2ECJ;>W$侻}$\S\"3,r Wդ`}Ys@l\7ޞ}V{s;]É纾kC۠؎^=a2gmm1sj IiNYSUYou&fݺ9NWm?L)ێr{q^giX]NweDJUVZ*kB kVEԪs=EPۿОs`pVȴijt;N_x ('E7>Pث"d!;Y]<k;"mG'{Uvf(( n(تаe(,.+@+W61=P[ Ijk,A0Y 4 9Ae2 X|,Pe UcAE -u1,PE CUg5߂Q•,l9s 7Vc#Bvv:gÞqf (jQY(Z#‡U2TaǮU2Ta\U$WFaBY*UUӣmLwҾp*#=bsoބO#܎f?(/w7?X hOeXEO2yI 44 h\oG}re6pA&%'kH繘bJrC( P HJ1@x NXcHram08ՄۼNB[ h‰-B(=\F ~,|SSP1Z򉈉 O$;Rry|(3J6d,iȀD'jM%!%#DCaI+Av <܉ 3ͪ(Uq^\>k5[}Ė'Hlڴ1喉Fhܮ!4@s106ME{ H3nmgÌ?lޠ YD'ЬN(SK5=jFZtΥÏcuK]Ҹۂ!BNA{tM6_}W25,+\Ok `Eb՛-_ߍ/R?ľc[}x6OZ8NW/<2z[|WkGg wy2^\_ڝ^9,odҐ?FF_s&znN;B[)kӺG4zZ&4OQ:=}fuGubbEp`g->QӺ5!rgL7jvb/KSy^F1NogZYY?h3pdMͫ퉂0ƒRĞڪkbA3_.S.O"OHQ8NZՅVQUL @H:S^6gzl^XEу2feSzGq?ryz|s} ?R\vͦj6ZSCBuS 02Nכ1p 5?=ŧqznOO^gr\u!,d݊g,yiJŇ$?l5XnRFB#N=ih$."(gG7ّu0dGITHQ&Tώڹ K$!a)Av1=mŨ,>Tk EA7;[GT{: @=!T2!V)QT0#X* RU"t\fD @ ¼E`,/ 0Π¨F5{FS֦=6)gd5OcKtnj'JGVae$F5P [ ]n@$ mS?)J(LD s&˚)LD"幋CлV zCU"qқ.a`~8'#!vk["D9S g/. & Ar0ŧQ"qY gHȅzfd!=+LlR0=$n )`y5My%Ɇ ;WKs)yܠKZo8$ m0Í ׮k|da#ٛt6́|Mf6fix\ۦeP7_cSV'6h*(b)>(Eb|U&N>lu"2"vD[gM / ٍxxsߢRID5,I#Zx;H$.'GpХ߬8vK> pWO(Ch۫9D%_-. 
;eښVW;Y$"\؁l:V?=NyUf*77Aq_gi_}S"~Gdc3×<`0%e}rm.t?4vm_ujOlpgl+p\;MZ|Џ3x6>ʶc&fBNK^} %:lowFbt]\x̌>>"TxBD۞e]9{iɮ6Sq@G5d߹:ND#SpR|ǐj8==,6%4^9@&,g +%U"M:ke^&0I&Ҝ'S\37C.y_"_IK [YSQZq10UcìGLY((Ǔ@䙖P3T81F@a47 !3D* 5H&({m#[ gjJ= Fhs>IxtF(U,^9'9R <׆|R]Jd*̰iYa@*Wln,sgYGgNrA'e6X2"ikڷ!o}4}\}ϗ[e{U@cQ6 ' g<#-CAt~)ޫG?| ^ZW/f`1meg ȇ?d9mZ,K{~2*p6Ol\~sҼEiEyAE{ γ!B0j2?e>yhMcvCW|ߍ2TRκӖ[yɠAǰ~g{w rAiJ8ZegЦ妷cUvRsgOW uy'} ߏ B[`r2HO]gHw)"b`a>H X߶`1j_4~6Z^;ϧ uR\hVHqwrl/8HRshphMw9Dh94BBv'VFCFcC]H:ak'0D Ik&.سAM`MJ048jU'~@Hx^7m1oy@OZV@|+V!5}sZB{f̉رYΚ `h]`{ ?G[A@o)Gb F)-6X:D+Q9Y4Xڝf$ 3Lbse͹)%JQřuE4s͉J IX BTf$RNQؠ4hEZ8g 6 SR?ݫ*@<c6Y"n׌#$fdnX$dR0gvحM8%n6N^6G3]r9{Xy(H]$9j@$r%?K@@ cѕ͍1Lh7ߩA1\jL?ɿHF]Sbf ہF+f;Ӿu[|Skm |3߀aۤ6dH:ӇwG@aQD<CK ͟b|UORϖKGDl*$Aj,PcM31gJ"n0/;3 r~`+OI$/-nlq'PYUX,֑ ?TD!@-۶1CqV 8f8dmp7 /%ўn;"ÁW@)s~XN$~EDqmb JkũꞳ265[S]dYT@-X d Ǐ%[Gh5ngZ{4uPpRGxpS3C5 Da5;D^ !(`5n sF-MB۰Dh5:[XHVά(Ogh6N]-&M糬śi:yQ !%kVXSQ Z۳Wj.p=qx= f3nEZ@2ːk=ꐀQ| __/p@Њ8yN@@Uw.#Yw UYwtF^gm*BsdlWsl4t}«c;o. 5̍\g3\, 6ʀ|%)Oyt^Bv.m@wL1SkQ,s) @Wgt5Kw H'yQ=_m5τPl?/^#h ɂ?6` NmaӬ& V}'˫giX*~`?7kK!'d付cxKϵq0 .:# ̛` x̰),Sҧe]%iʯ *Xo M2KK( 0 ńY#vI|*_SXs)hBϱ?FxRsDCe{,+%lpy9wyE<"vqgX. ]9!(Ғ`ȥۖHz0\PY_#gB}1N3=D!doh{iϞ}ew5K* .#m$s?dl~SO*6۠rnʹͪ |4PId" T8cNGJJI1˴R&ROO)u|Ǎ&fym@׽Ђa#^~Ut4/[cw6372?_~R߿ݷ_]bnOyU@mhv/ [a+]UwZ^Om onxvn]'y4pɈa 2WBNZ9` {1D@緃%_tBKgK.%ގے{yߌ y]}shuy׿_=x\ "s'~cph"rb$ř6v)j> QEʎ!ci贐@r)V#"1{As T0J$K pC}&"$h5S_ċq,Ix<Q@IK RGwL xX/i3Yܫu85mZQLjgGi?tnZX3fR`˳-+9: :[E? 
5z@+Bqj'12>T#݁]d^<& DžI$)%/o?k8Ye}y&.Y}mz&Pcc={wًhgR`og٧(6bu)n*wEw#*UU3^i4ޏ;U\|@J9RYe=#U«['HHٙzqYvjg3#PIn^wV8wi#ˮ7YqaY^۝qAL98; Dr# A($t2DI/O&QԼ(dntZT@ı&$(œBLj4r,i"1*"vtH32S1ҩbvg^h2Z!4 ?!HAuR8A!@4Bk CڢNPUЈ Ԁx~1Z{@le0 ¢kP+8qH:#HDb ~guAzfɲQ6[ibү7Uf臽ai&cqcަ6$N˹~y:pO,)ᔅ?} 78NлO3,W~[?즏M7}Zx><ǣǛ=ٿ򛵖&| pm^x$uS8d~C"UؗR"C gC J ~hWV8`LR 5XM$Z5,uJL;" eFcG"A 22~k@z4Hʳ@ gU@%A_GF$55~ϗ$@j1Zs74pX );-2K/G^R(46E_ XqfӔ@BrQ0TۡZd&i:{* ɱH\x) {f~H _y?9%in᛹Z΅{aSz,zs a(l3](;y=fe|hM6C@P0J wUPh<0褿<J?r.e@fm}&1z߶,ljdR g[U?ٞftC!(L0̀@' ,U.sdFk'M Q|4^_].cd -7 W8(40s+,NvLIsNn9HgaE"Y'< 94䠑CU\zI"tyc-I!; Xᒁ7@b3@:r 5M@iJ%$REcQߧEgG-ɎbSt564ND5d>%pV1^'/aP[ %G,[ OapIxrpe];?;ALz jJꎆ9YF_90pDRC'Bؔ"G !_:!NlRJ뵒(,ZJoٝs@R^Jy y;a1Ck\|.78AEsF)^qx@jv6ⲣ ;UE: 2QlQ d($0X'BJ B6sƭ 0NבvLhoVxvONz[8FQNĊ4f H`Rq=r?}ΘE{ƌyCf{Ef w^C2 mCrʪ<>x +s/Qr4d!i- /yp%cq2 c4rd ٿgo&Qy?ͬ^1l?[׈G_Wx=հdM!2㇏3{P[ۑ?' pQy{QYX<(y/ȼEf2ˆ1om絳i!6Oiq\Z{ &!DӦz2lK𒳓\2x滱Io25fykG~P;_O: @GJ@QáR80+ ECH9JŕjŮӏ,IyRd;3K1غ+N PaG h-1H fĠHC)a>rRM6 ;u\E i[ VY`Ŭcw9osu 2:@ЂQҸK P*/L0`(zbdJ5sTZ}10j3\ ZEDgU(U ތZv4]:ojZ.1aC [66 B,Ws2 DJ@b0GP^Ɖߥ1UV%^`|.( )/j/6Tu?"HJiFF0LVtG+R_t?@|}B[t?0yCv}|b%'$W5wW8y)”pD'|HkGZ,lEoVMx1gwAvhtL-nWj~5%W롯:Z5yY!+12 MmToh4Yڠ]J^;VnQ~ ~|ZDyjgJ۫ti֓]']z]x\ή CׂKq2_{qog'R7~y]bySd)P/aNo{]:n?[/dX"\q&5&@2*2іiv}a\n3:XSBVx:A%_^M'YCi:UjmTLʜ/T=iao&4Z뉲N2Cjd‘ a(A:H2.E]&K_M-[bqpwmbbᠲ"{P* yA)[?)á[|Csy.#+"2DwK|$2 do@R:rК30R `(*ϸs9< D61\ī^ B gq)tr+1Љ pNO4L+ip둃p&5l846m>6mxat<K;-HrLQD$oNR]|lR]\   7o"ܐ0!} {+q9A^4d@c2ǣo"'#dKL f#g0>0F!vjTxPqy}yltuK9bplpspԤ3ќT\#аf0XaY.^R qx+ݳ>Isk^XN!6xW]]2"E ʦ1oQu{{/'z͗eFR+3 T -#/VLO{p˕(8'^)֞l{]C(Q;W$*z]`MNP @kBBH+)tH!.d!2% q4)"Š\ s|?7 @#Չgp"B, ,2- ,h AT4:5 fazdr(8&]L2eKm9ȔO?4ʶ[X|z[m眕4q>ᛐ]qtMڧ؅?ج7& !d+ﻷExiEnmTiN/mXd>uJjHpp}!f̂e0Af>"yRhм4sWVRW3ZKQМ!EJw,_xwK><2,ĻϟHcۢϊ_o _nȮjfc={6Uug/&ֲ[ \8i]?~^R-Oj仿!'aMI߄BE^vF_ȅ;akL0B-ʯ)J}zu1mԂN咺 E嶵8s]BӑB=9x^+)2ʳ΃dcSUϓR yYː ->^{_#{NZC,ixgpeK(?El !vUݵyX<<\wߨgB'nǐ=ťoe.2T5Ў!h Zi?[SOU>o/H2d.w|dYYX) )qu5O HQu-!I $0ZGLO,6c;C;zUO(<;c}u(k5ݤ:Z;hѯ =#<9  ,y"HBz{ Hd4͟x=y*< J y 4c緓@g( ΡLntDf W֎{?576{ ځ?OL!{O5[?2}I1Jl& 
UzV "dSO1Yd%[~v[(JTUbV 6s1L:ԟ L.؟%)t}dEE d2oVUlf6o#ytC{=r.eӍ9*Қq-R+'Qk 0}:`vlQmVԝ/eSj B_$dQɩ4k ^gV$[;rq,:Vp2.pPw2JM55 Q1[l˼CZ bUx*КI[rd@cG7%&%3Nh-GAj VsE[q@SzdHex\4̣ UDC2BG+k:Hӂ^j$"4,J `謵4 !hf|ɫ.Rk`@ E*+Z%λqh46/Mh/^L*e 8\jP/Em+S}2hP9SUaC`kt~1.gͳ&-aܗr^\x-/iځ : 㰬y#Ǭt#Q`Nsj9IZZSfPk'7;d{V%k#OȆXt鷻T9L A F(mftFyM^l;֮r^ ;%@9&Bpk/5QY8f&E L%ymVHˆ[1P6g$cpty΃/>@eN XGHG:Jt!(d SdZ׈'ȴ$elCۿ4+k[jb7W]ϯ>5"&:-)X׳kL?Z[v39Ҷ[.EQ\%igجD4i(JdzZ<հfwKg(I}ِ<{JfNMUKr|\pv3IG+pr)(DEpd{7ů7ϫmjr>Ň;W_ͪyxry~{1a<^>֜ 8}=ο}_qN^wSFhk/bfz2/&O,Nob9fFR!C. Ly-REK[jN}\l4/6 ,(Uʊ* ჰtnF啰QJȼ& (H6 ?Oˆ4yzԏC;8U_7A;N~r_~)ZzEKIܒЗ^"]m@€|l>ŌOwEݿg!bŁf}|C) [yPm$o p3C/iǛyϰlLoіYxJ1Tp,yȴC=<>Q5>so+C@23gM\")9  k6ڐZ\)Ѝaje@ٟ'}>"LF ^^2}Il0饩 Ӈ c w,cȏ"?&XU^@Yޛ%&JpPB(Fk5pJO7`^L&%Et}~ i]3A"L❓Utq8 _ްi| \ vI5s)4{Up6Qee0O!޿y3'JU$rVɻ LksYil|1Y;>GLfFLT29b׈VXЊ#Ż3%hEӴ(ԃS)dy.kmFEގ4eXf'H6ٗ,Ȗ!ɳ3 -[do3c.VŪb}mc T2{k7xQNM}8q4dMzd^3ylvon7fqsy1Tl'-,0׷,΀ӝKq?{Icd~ A5NX8n1GPt`\:_U•s5J;N:%X@(9{ӉcO>u6]E@k6^ {Հ(>H #PF$/xQ=et2~gb9FP;[p5gm92zLoBT2lqzrf1%R<;oܿ?L2fy%!)궭b|?o1^21?'YjuMƗ96U" vBvtGZ-]LMj~qH=bV ,Ɲ~O^DYdҐ7!:E ~޵n=ѺEuBcLKm[jGքq )E=&pf-.)BPYo֭ y*Z_8)YoXzQ+/:jhxm#FG6Yn.}fl<)nY~ 6H509b'İ Uh+ô46Z(N9Z N\KHVsQKs9=V[f" ՖT3ĺ^6~?-Á&7d_?|鿐h#lR 8rrb-ɓra+}i4|lXs+R Xzo&VܥJGE]̫) d1D([}_JI0],AB!48K_C|,lOz8T*>qy;lz%Ŭ\I!{-_WEV/FD&8M8*G 4VfK /d]^BnA8e݉FKY'~bCT2~uor$x R__\X׷}ϲ %ٜw:`X$(ݿE9/W7R4-^$!1;go%>#0 ۟ThUd$!G*#$%8I"YXT*MDd iġ Moȶk2x"p}(7!$y~I"Hc 1'cl}&4řB\)8y>_*{-ᐍyS}S?' )ݯ΢"޿3w߭M&ndR< !2"+LH&"5hQh2D'9"slÖ(fhL ?(_fE)q!`@0TlJJ d,B, `JqlHJ"۰PEXFbOۍPofؓ76Q˓+cgTքGceVΗi+2J8fϿ&~smZC_Wٚ #2.;AZŏzabZK־* Bm=l¨m Y\}6T3Fm'&FfE ތe'o<#INNqQeqmG1oPb̷.j+gazjѺFGAjFkId xj0Su. 
ƒ6LkWDrag{?(cY!i3|my2~h}v`4 KYs"_DCrp9c]R/G\_qm>QoؖgPBJE1m_Yfgr߆EߪVvRHpO$M^j[\!`cO7$ߢaoH|k*Mg8H֋fvx3 DIv4 Lpp`wcyk!$v&4DBtKY?T&%.:}ۮ3<3$-8g3$-9Q]́TPz\qqI+YǴ4*L1yu>ʝ*U4w?Go`j5_ߚ남Jט3.5/bks\H$g1mxS|v?%r.i=o5xd&rR 9g381:{77gRYӖD!}+fsUd%g!};frISG5ITlTT/F&fMSs$ {KC79m_p60eEGS/gCWN@uR~܁Td-B57;,4 "Uӓ#(*})LTO_~r=x Qe3k/=?925EPsh&&GF 6ӟ؏5Q OŠlrIM"PƏfyQΣ^ɷ5,ݎoq`p :ATwTc" H)uY4CV}kےn_[׽Q\b;Dd0j)0gM)*3jMTr%HZr",%aD)zj)Ij0wׇ!QqڄXاV%"+܅B_|NCk{vS8 uR 9 5?s]J4Ҳ:ve3Q"+ uͧw"S3+ݸb; 9BFTKAG 2+ePӭm'L=ֻBAR N^L JB$f5H}Ne{ eg 8nui!t_%8Qq%1D=xlʣ9sj/霻bAP#M>!ڵ u߆!z9yE^n~-,~ >a>K j(64eZ #E0R9&ZdXd9Qx,%fَ؈G|Z+¡{z{tkB0E%9JR!u yٵ*(dz~0{&ܼJ%41ܖ wG.ܡKlii) 3ɞw<7LjB3"z:Ž="'ވT8ϧl` DPPAaU^Q 7t{~}J/Du㞖##;_Br,b`ꝅcC;poI:oC}5oL#<DW! n}nk_>|_A(yo7yt~}k^tjX|}N8.A2D]c eYnK @r4ez;N{s)<-YQCV"v\<<%KEI,ch"`šDX%e,RV;Z>XRc:b-hpjQn-TD-=rq:YDeS)d.o @ONRPukבkEUׂTp^Cctǵ+RA4 L) FU ^)UDz| DLBш+YT'#,:UjhXU|VL39P?[>D)wDcQdtvo:oLEytrBv0񡔒y6S-lB4Ӝ- vxը9@ QqՔ{X@,`JPT}5x5E(Q&]!oeHGh|ʘ[P_VTsL~Ҙ< 1RӊWPܖq]?_xJ {#t3XW8~NjS1H~8ǹ Byhu4Kns%<:/c$&yk17ɞw<;QyO;qrB5a7w!{ɼAY! Ob50q~ da]tq"T ?8udgv =Y^PS'YA"eǮ;`qe"e99/p0x+!R頬VRЊjQe,҆1 QSm5% W*"e;sLۜ AhrV!*,hDQ?ƒJ,sꇒKk5JCH,Ch.95ڬ/6lb*٘2O#($k$pᚔ DfڟH^0%01[QQhȩNc?Q(漫84A`F )gN+qE%Ңx!3mu{U^C\c÷2Z }i 0]e:@TwٟZ?w獮4@^$Mw%*CPR.(R'#]B*R#k9wUaE}#G* : XCk9b 4Uݻ&ٖjp}VŢF]TNjܭoba=l*GuS[Aq%@D*`*mH4Ve?]j$_ZW`iD:;jҬnfq.+½;޳4/ꜘeq}Z/&m+-M>\^Gz%+r=|'L;{'+<83HrǫG=Qdhn6gZN%6X-N{}bS6iA+M{}[i/;>\E-opNW{.ﭟ` TAQߋ_K3/m=79L"AsE(ZUE"toLGEBPճ^GC6۝_q(hcS[ápá8JouPK=}):xh[ruaɞxG~}~F[3/-q?U begt= 8׫{i~>bU]|HUOSwtk]Ob{v {u'7v4X7HYEd8{M#4fO( u#FD3ҭctCsS|ސw u#Fq4z2ֿ"P !9ni*D״OX~+瞧;^7.|(25ʏcxq,/nͫ/q䞿ܼLdRP-VX&o~rMۃW;Fl 1_[o$S/Xy{Dft$9~ W[0" ;)QkpVvt'];3]Mz@sVrѕw\uA:p+㓯J~BӿV]k!-/./6ch], .(l)J͢2jSJdPAswI#Dp$06 lgۆH98d-",3y)rZYAU12*3`QjJV*ʝeT4 $j)%q$Ot$ hLB0e̠XP(iY Yބ@bJJAZDL$VZ[cq]D- %D Q8}KAn}Ѓ!S*uJBM\ D\IYPUmhxh&OIT8ewT)']ET?j w- 3"DhVFuR˝TH5N~{mohrh?&EC4ỶVeDKR,*~۸kH!HCI@7L0Q/>ѯ8Vr"0FD3qth8v8ߩT lc.GΩ"cZmPػ:p RIn4Z̹dla!GXnl.P,=w>S6)oA}8;E'tT ?`2ETnABYB)+%T!ڝVF <3 3m;rnf `:v冢veT=n(J+:~ C * ?FӔ3[9*(,䄦N) URrYJ{"? 
{T\UJOWqzmPkY/L *Fv^ӣ=Nϕ1uqgconL*I2ɍc]~jqnGߣٹbu]ݦ=ŢNxo[sK0%{ڦ-qDUt﬍PO+_:X1${ȿ~d'uox񌀎@({4 2RC0 d9ҚΕ7R!u4-Z@ -Q?abAnT5gU;}GcષځZ5|>gD^oЦvV9 Ɍ5 y`A E:$LPkuNX@)dAnV=icQ ;_k4?x" MPGAGT"v>E&&s Q{lwBfvBNzxo Lx⡁ A(Jbm]Rd0`\XXM9^Ou1Ȥ8"s'R3}gLG#9ZۙڙΌR`#,Se_<]Hq_GA VwjG|@TP1hײH|LuAJuZyCnuҭH ɗ8~Dƴ<]Z?.1$bۄK&E")hV Qݼc7] Y=:?϶&R 8jI(>xYŎGZȣ+m!޾h vPK8QnƼ|γ"Rxmxi󥥇/µd)Ҩ=5n{61dcisń뮧Xw%oWL뮼pj.h_ eȍI0$b7 mzS1!Z+ͧTyNq=nEf$TS6jȞ ^Lt,Y0ɖr6EGdQ TEb_Dja(Ђ2U;a'sgxjy,_]/H^kb %vl>-Ἕvy!~#gKe Wxݓ [Es&;zmKO,M0f72n6(fE-Gg8(B hLV΅E,J:',%K'mu() }Q% )o V(2REl}ƹֺan=GA6  `rQ/QJJ2F*,'zD&( J Q?W1Bɑ܊aV&jJoO+Cfw}UeN6pd|7GQ}gUze$.ška .R:wu?؀=09sxk~e|⁢'>ŀ$Zm/c_[ݽXucI#V-8_ofq۩W~ԡ/ՙ=KH_oL_iw+H la00!3X0%򙐃ow1#z:e!0 MOC}=.Wf&Q/&Q|l}B+6zX*+➏_o ?R/zРot6s$$Gc'. iGK N: q_~dYK-F}G;-TI]![OVBcX|%j[z$Ȥ%H\TX74fsXUMVp=EB(X4ّKՁ$Si3{_Sbzbib wubm 5E~rI9$A“E]t0;ZH{_hhv?uD灃- Jr8-=rePhyaSWǾ/_wBtbюۯn7A.FaY@ZUS~`r #IlLٌżXN}vr43G%{8F'fQ&:ΝFqw 5ZƾБ,YYH3PDtd&zܗ6dghy z(d4: z1i13%R-4]sS RwL#<3RMx_\ܗRZ#.턌 =,dTH΂uGg_ $xOi):I 9hdqXZ+ SpsRk!|'i14W5@&PyX)cP -p]$6Xw\ȐC ĻC3-[Wi<]V.on'$%d,%_%dRB$ibFjw/U+n6)UiS\I6/guR}[24X̑MTHq $~bO\O'mٵ]TY@e.@bA272.Q errKfzK~ݨ{;!R`$ =e=\S\+h<0sb^9z2b.gxoOP)[ڍsF AT&YYI[tD"+ yflمv*(Di&7y}TY',>Srnz>UFӅOՎaNEͰ[@q6~4uGLhu ehr "1G8V/%%lDnEF'j;2 Q<Mv<* lhR()Jәl4g~jԎ4/Q{>j}-kK;I)X<˛tS%-/빯D Ξz'R8c}!ie3E(-pt_\\y2\O9 dנFv4%&ap\YFVgq]ϊ h/aDXq*9)y2CA&iXc(T IFť$('.f&3xpA:~'WO+4@f2w[f`ؾ-l1 Y*AI>Yt/Ū?^xRKv|\L!=46\w򂲎n~\ ~.S`˼LmuHesC1pk, 1B@F[v2AH -)"nY("ِPƍ]0N^%y_&FSݑJRBI!x*W2#ULɅ{6ܴGJ<<׏v|ZGA;ޅɣJWj"+5{BFNqY1Y#N'f֎'4^SbD<;ⲜQiWc%GpCmZ}}?-lT\:W^)23{RI qP% N'mI-rͷ?$@ -&(PH LXbT`%r3`\9ho\Ofwzw~(x)aV7<'Aʿ\ۆ>u ouK\4 y!LWp>gVtyfΐr22>ݳzCe qo`db;1Ԉ NYL'6Jo\{%?EJ~cA> Y*բXf1G#CG߰0^evB-)g4J:bE}:LtH5T0 6]tn! 
J>)[!W{jS..ݤIY d_F|!3:L8Uʈ%w1i;0cRFE10@JKi8^" pM΂;jG(Uۻ/Ou1%J3Adд̪bZ;Ҩ-UǸְ;R]Ƭn1Is- !8Y5n ;rAբ$.#<d56$p`Otw9ҲÁՒB3wܚbhl=Ihb Jv7[{ F,C0IIBP"T-hM&xWTo 0WeF 2 ~rN^ÜX2\đ$NMoȏl7\AVͮ(,x{OA M\GC=etj=Fa/nu&<q4e4ozi7sK!6@ 3$"([zkzj=(k+u@NJپa}aQ)P  H0bA@έ/׳WʃY{ [;(5B*؟Qyt̴!9y!.;kE֙dXDxGڰRa3D'հ~5k^8=g'6J:JvnZa:DW ~<듯fgSK fR$:CBJSlY|5k[|^mm}oW+{9ઙ |BÔ9ݷi'gmyUD-mC)][X7U+6[%iN<% ~19̉RmSHVqa:/qfT$"J#] `_3㼷k- -Mb$o9EJ6tmxɐ?ͧ>wĢWP\pԃkHFf? G^VU~O/VIS4L~бR*at$9q%@0^ws] Ny)\5)#o\4ܛ9~yE"]TF2R w=)%$ f1Q/l4֊Q7MI-$RWNS*l!Jlhk<E'B{3(Wdm )>|ag n$[dZ&Y ]&k9۹?q3z%HDryyv+ /(BZቧ)Jځ6vCeޜ629i5SЗU1 B10@ ZҴ_3cL3*p49U &ڐ^&QEbx7d԰"--ކI+Y6*=+/1DEO 'xȺox*+XR=pp559ŒQes(KKFIȴX >܉sŭ¿[k#|R(62:}h`9*PL3>%tT f>K7.QE D!'^$u.K=2k! 5Z <=C7}.4==7d.&Kr/s7<[ S0%SBno{HcBl$"T9Cv9N`7CXZ ) J)aJ1A6 BNl}nk%D9• z7;—J>.O𡮴MLX/WfPxfJ@!!&z|ڟaJU9aҎi۵+u}2"0*`5C h"ABmȅpmvHܸg& XGF<\TF] T &P8m@sn[k y]ߘz`44 V9~Ry](`?k'k%>=SrMA>0g-i9{gJ2',"?WWn. /ҡhg Z[,.eU׺[FPpJ;xA_`Tt vX/ 2Ǥ"Xn 9Dܖ|Pk+@>>g}ro֠i_5_?| #vlsYܟ359"L2vE+<_ɑ7W|:b#P/O|;"f1i>דseV'j7 +Y "y~wvq\Vcx~k͉ x]-_߶9t3k%8jeAqI#'@|<^Iڡ̟~J-; W˞ a qgG)?- ̶2\Rk^饥(_vƣƲ4XO f]ƶ߿;Ꮣ^˝b~ˮrT!X\Yxt)y/NaƟl242L3%z hY  A(-cl$UyMb7Ϝxv*bί bJ0=LБkaH,VDb]4;Վ%FvGٵ/ldw9U{ޚy7Ψ  մt鯞*{"]KKo#]~r|ZNy/l9ďӼs#GrM('u?~<2ͯn1;h@Ahյowo4]grFP?_;]&ՀguWb4 xjg$;,36~k<5׎ Uk(ˮgҘHacaBv=9%mL!ÿo2kd1{{e?}ncc3S6/w6P(!Y.jGz>|uNJE35̹z YkYtDž_n,_Ntpw#b4ߩaNREEKSs~(mr7^od[d[eX{3oN6 $C9/i07 sRw]SI:ZowZ1$bѵGJWjv.u6i IH, &,[r?/Gs'ZR @<]WUFWmhWtmwx߮tn8V?jO?X;qKHC%)R*:#pnpM#q话r6g$7TiU9gzJ{v.ޟ%~;oM]!qyveo1[0M"xdƖVdlXGYhHD\AFmo`ks̶o5mlIBkwiiֵv^Fڱ'v9+ɳ i)eT1_-y†;'҂X@:J9sr%4?RWwUxUӪ'o.~b3<4R{% ܏3kknc7EuR qŵKv+ͩdX``$u.NoI͐Zt!iro462SEq@)Fsü)0BUS;ƾYUx*]ft[\e+NS`~PK#%jҘFӽ ^lQ VkJ@fwwW=yrOc=k,"z(ی+W0"Ar|js!nw9xY^|Jشڥ?zi+*PJ v`ۤ>_~v>Z0UXH%;⩯T엋j" h^Uo.S{Wb-B-;]"L`׆ʣOITAIʘ^VUej`SO8Ԉ FrE}?kTX,qR*XF>E2i渫*&3p,-'wq.~/g3ʖ`ؕwIUT%^Jz`L:1qѵ;(2,)L$:ff6`JI# m9qOqf&O*W5<:?qRI&0Ei+ O:#;>nZh"Jcz1#t>~LJ* jM(!h6{dSN`+ k;ܮne"ópӧƟAcSbuoH,_[[zv5_S~? '3ѯߝ=e$Mg>.4_ޜY~廅p(b 5/{+Xa ֤^vLyuuټ_I<=dCvQ|0뀍D"W HL,R'-H }MAZ7ivkR+8s"t-D;R)\5qVK4kLeE!%zU6&P]pUHY]x Meآa^24YUUU8 ! 
5ha 9HYwSz:L$Ӗ)^NH鍐xNyYi֊FIt/GHk'N7.JAOg/ .w ^YwhKl+Eɭ4{uZKu! J>VC .vM1iƾ_ 5z*}s9% 9oH_'S4#ފaHRY[.6r- qra-4 w&lL&&LҹVIOJk ^*D{)wy6Xj]9GV˄4OZZBIbZOwBA&{N6jE$4d?].V}F|%IZ{O/3eR˟J6Нbr-9vhdGWOAtb4&6:$ ԎP֎%k+k`J/,3ub0=/ZA˕޸Iظo7GPWJ*]L`vf=v i;3Zb ̊7^d3?rROI.׻:,42,[.ZߍxszGg?]Y~? }^]7x O6sR!cO{\96 t|'t2Bfu1sP,ƶj^Ú(1ܽJZpW63ENql2)4#=sU+nP3f4).kY JI@ev`Jf M?RƸ]+X-:ޗU;Cf5m=SK(űgiKkJ{#gk+)t{Ec9s(@MDJXȁ|+Mar?ղI*:7~Ԏ@ϽU磷f׏|zn~?;K볇Ґ,EM ]/n9<+O"WyNL,%w?~xZϾ,>zYw->v6̔O>%F[UI(\/`~Okts- j()[usUi"䍇h|BnN{tjz6v$3Э{CBt#Bxv)=z 4|%mpKF ̮ѭG}}Lfv;͵tbXy@9ɼyj^iS`9sP뷢,䰕i{Y)vY3(UsYQR!sT#A8bPU[˕ETʸ?YWuSu~ų\HlXb˾tJ0~P& R nV6`JIqIzr\Vz:wǏJ*!pRRO|Ď7Ҙ^'0@M 0r˼p WR_V G4lV)DɤI}&m0ޞcp;eƥDR yY" dFCbbɺu¾>}6D9.'=]aTE%s+R7NwG/TZA`6W,Oط쓅3j~-}pF<5OwاxY*Wq`طMU')(*Xwjا0+uOk>eQ 3mY ;a^taT-*bPA`¨ZTĀϾEv5+m|Ej֝ȏ0JbW !.ºEXM (B5Ț]0=) +?H퀟j\} G9+;7rʩ6_11ouͿ.{ Q(+2BWWU>Y'NhXh(\q5Vh*Du,@5 E5ωy2hpJj]].d y*o"i mb|w[_-TH6~x`y_֊PP1YS( )0Ԉ>] VIn β#fnm\L@Q*Qe`8BCpWqj{J,bѫ%5uq?<3]\\|J>Qdom[PڍܮbZjmbV}?Y/6# -&K gsE]]/~k?ɵ^`E}Lɡ0R[}J"vD`f6ʑ5 {lcQ0(DO̡n˟.WW,*gti8f\]kMۻFrcW?%'*^ a1HyIgk[$g198[jlv#.V},b}ib@O@deFf~XX'q#LB3J%fg},v!+p}tE`8r&8 VDUAQQk2uPWJ_0;A*~uw߂0ĖӦ<ȇ4P{#Vٖ]gb/A8zw%󠯟p͆M٠Cybƒʢ Tp/AKBVVq)w{@o^V4`'?߁Y3?7W5GRk8hE? ףaFDOɵy}yИi*iN Le²>Va[k>V 3a7w&ޙH U*FHUa5RR3t,40}03Z)H蘱Ēi"%"7&*ܤ@LaL%\*ia*Fav8 Lb,H6dyo\hq  Pa!p.BUxFGxFUB&?O%C *:NVa'ƭWr2/&2N"y6~ǩzJ &ԃz ~|\Oy',^ϨC^.}ۇݶ\th'1Pl_ËOlT2G-\^!Z/v|mK2lU4?$drcG7C#oE_l7%"xgI7t7.[;E2M)HVbJE-dbv5v550,P)vpk7d=x%B5(r/ NDKhn67&K'j%ԁlpraѶQ@9/ZE3Z%~kN==p݋t??vWc<g51.18Ic/dܥ6EnI K 36I& )I +KQ+yVgm= KEO1:4}:)_HLgZsȭb0rM4aX#BLEl" +/3e)V$.-׻&?bHClA*~jzab7) \ t!DR'iJ81$d9fe&Y ΍fځ +hf\엣؎T % o@^-mFPƝ*R8riBsTM]9B@.UaUJ"όQfȶ6e>g;#< \_<=Koz_hB̴fFj-ccg1˲ WǫxswqЦLL!L>|b}Y;~eOߞ\fKgjf9EgdzLfFv;kl2h[jwA2B(mfdGELY> 77.!8 )+y$c_&50ԧږfzC Zh(mUVxYM꽲}dQ^ k=AmJ6՛c- nźG?. 
6xТXk@\_`$o3(de$,zx^Ud!yZ xmz"W^);EƢX'(߀#rGjDS^+G5qaud#?{m7癋cA NW }9U+T$)?0y<-C|($ZFU=3nW{sGzU]έ񏶦aueXKCnrꌏ_ܪb!ZOy3zǻ7r-RAЉ|G+v˂f8z@h}ݰR-RAЉ|G+vUq8}&Ի7l 4{&cz~!&2̂8C0[`1̵{Iu*;lRu9T='eMjՍ@fMbCU)Օb'T * blƙ"w^ezIW+uͻ$&/տO_?{/dvq')yjuwz?~n5|t_d2luW$K)AHEP@>w]~cٽWW_=(Qeϙ P:F1 Dfj ʩULeKfNeIFzd!LSE`@KGF10+6\ Fy`O(ǀ祾KIOU܄ok(PMxƫ UJ0T p]IB=|!_K% aO!ߡB>~.e-dIc(JFeIcx֍ p&$}iq4P۫J`/g {YO"N&i~]ٍcNE~ioza)(ݻkTl4#mI41I]ʱLjjO_c;Tm>䔘uDDS@1n4i:MAStR4L}ܚy%+;j 4UiC~_A&`. DqM[c#x3b|DdEoqz! בI@X,=ΛJV6A7Gqdb X5td0#Q}P3NĞҷn RWaCB\'*\. RmmJ H%DZX%y"wb9'wR÷Kz@h+1uRu^X'"\lr<55В/L1D oIەhY ԡrMƜ4qBfVIurn,SΥ -p:瘦nЉ/E]]R&x>ɶsvλܿ~FA{Eti"y}h:^>]kflk OrQp2V*r*ad(c z IRHV^(Ox6 -GZ\_8z􈃡Ԗ`$^ [ζ@ZWRH'i;_ADzP j?%r My,J5htgcSRʔֆXbµa)\!KRZ Z+'+.&"j;E0 ,-b38\…rMqN) [iVK\=dR^NJYKXHʘM(M$A$\6 m`_{g7uA_֡X`2qRװeC*EF^Vx,:@gv,]j-Iͣ%u j F:a}GBdS.`F1P Qy̷ݐ 6ľw;b4K=wOn} C4SB#]+)DGn:N;xc"H̻_Zn} C))=C4J(ö# iR2q(C祾KM8 *dDI# 8P%dJ'aQ%MdLDMU҄K U }D-㨺ԗ5'T *y΂ss%y/kR (O,jâ0T[q0Tr!N|J*Y|{c@ UrM}(cAvar/home/core/zuul-output/logs/kubelet.log0000644000000000000000006220534515145173131017705 0ustar rootrootFeb 17 20:08:43 crc systemd[1]: Starting Kubernetes Kubelet... 
Feb 17 20:08:43 crc restorecon[4681]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 17 20:08:43 crc restorecon[4681]:
/var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 20:08:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 20:08:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 20:08:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 20:08:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c263,c871 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 20:08:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 17 20:08:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc 
restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 20:08:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 
crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:43 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc 
restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 
crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc 
restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 20:08:44 crc restorecon[4681]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc 
restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc 
restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc 
restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 17 20:08:44 crc restorecon[4681]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 17 20:08:44 crc restorecon[4681]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0
Feb 17 20:08:45 crc kubenswrapper[4793]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 17 20:08:45 crc kubenswrapper[4793]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Feb 17 20:08:45 crc kubenswrapper[4793]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 17 20:08:45 crc kubenswrapper[4793]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 17 20:08:45 crc kubenswrapper[4793]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 17 20:08:45 crc kubenswrapper[4793]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.245477 4793 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249595 4793 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249622 4793 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249628 4793 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249633 4793 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249638 4793 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249642 4793 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249647 4793 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249651 4793 feature_gate.go:330] unrecognized feature gate: Example
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249657 4793 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249664 4793 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249669 4793 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249675 4793 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249681 4793 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249702 4793 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249706 4793 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249712 4793 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249718 4793 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249724 4793 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249728 4793 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249732 4793 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249737 4793 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249757 4793 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249763 4793 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249767 4793 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249771 4793 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249774 4793 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249778 4793 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249782 4793 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249786 4793 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249791 4793 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249795 4793 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249798 4793 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249803 4793 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249808 4793 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249812 4793 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249816 4793 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249821 4793 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249825 4793 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249830 4793 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249834 4793 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249838 4793 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249842 4793 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249848 4793 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249852 4793 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249857 4793 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249863 4793 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249868 4793 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249874 4793 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249879 4793 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249885 4793 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249890 4793 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249895 4793 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249899 4793 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249903 4793 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249907 4793 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249912 4793 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249916 4793 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249920 4793 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249924 4793 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249928 4793 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249932 4793 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249936 4793 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249941 4793 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249945 4793 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249949 4793 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249954 4793 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249957 4793 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249961 4793 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249964 4793 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249969 4793 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.249973 4793 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.250873 4793 flags.go:64] FLAG: --address="0.0.0.0"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.250894 4793 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.250903 4793 flags.go:64] FLAG: --anonymous-auth="true"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.250910 4793 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.250916 4793 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.250921 4793 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.250928 4793 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.250934 4793 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.250940 4793 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.250946 4793 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.250952 4793 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.250957 4793 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.250962 4793 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.250967 4793 flags.go:64] FLAG: --cgroup-root=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.250971 4793 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.250976 4793 flags.go:64] FLAG: --client-ca-file=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.250981 4793 flags.go:64] FLAG: --cloud-config=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.250985 4793 flags.go:64] FLAG: --cloud-provider=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.250990 4793 flags.go:64] FLAG: --cluster-dns="[]"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251324 4793 flags.go:64] FLAG: --cluster-domain=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251330 4793 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251335 4793 flags.go:64] FLAG: --config-dir=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251340 4793 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251345 4793 flags.go:64] FLAG: --container-log-max-files="5"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251353 4793 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251358 4793 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251363 4793 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251368 4793 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251373 4793 flags.go:64] FLAG: --contention-profiling="false"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251379 4793 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251384 4793 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251391 4793 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251396 4793 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251404 4793 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251410 4793 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251416 4793 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251427 4793 flags.go:64] FLAG: --enable-load-reader="false"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251433 4793 flags.go:64] FLAG: --enable-server="true"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251439 4793 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251448 4793 flags.go:64] FLAG: --event-burst="100"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251455 4793 flags.go:64] FLAG: --event-qps="50"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251461 4793 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251472 4793 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251478 4793 flags.go:64] FLAG: --eviction-hard=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251485 4793 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251491 4793 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251495 4793 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251501 4793 flags.go:64] FLAG: --eviction-soft=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251506 4793 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251512 4793 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251518 4793 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251523 4793 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251529 4793 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251535 4793 flags.go:64] FLAG: --fail-swap-on="true"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251540 4793 flags.go:64] FLAG: --feature-gates=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251548 4793 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251554 4793 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251560 4793 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251565 4793 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251572 4793 flags.go:64] FLAG: --healthz-port="10248"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251577 4793 flags.go:64] FLAG: --help="false"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251583 4793 flags.go:64] FLAG: --hostname-override=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251588 4793 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251593 4793 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251599 4793 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251605 4793 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251610 4793 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251616 4793 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251664 4793 flags.go:64] FLAG: --image-service-endpoint=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251670 4793 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251675 4793 flags.go:64] FLAG: --kube-api-burst="100"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251681 4793 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251703 4793 flags.go:64] FLAG: --kube-api-qps="50"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251711 4793 flags.go:64] FLAG: --kube-reserved=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251718 4793 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251723 4793 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251729 4793 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251734 4793 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251740 4793 flags.go:64] FLAG: --lock-file=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251745 4793 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251750 4793 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251755 4793 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251765 4793 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251770 4793 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251775 4793 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251781 4793 flags.go:64] FLAG: --logging-format="text"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251786 4793 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251793 4793 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251799 4793 flags.go:64] FLAG: --manifest-url=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251804 4793 flags.go:64] FLAG: --manifest-url-header=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251811 4793 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251817 4793 flags.go:64] FLAG: --max-open-files="1000000"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251824 4793 flags.go:64] FLAG: --max-pods="110"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251830 4793 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251835 4793 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251840 4793 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251846 4793 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251851 4793 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251856 4793 flags.go:64] FLAG: --node-ip="192.168.126.11"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251862 4793 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251881 4793 flags.go:64] FLAG: --node-status-max-images="50"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251886 4793 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251892 4793 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251897 4793 flags.go:64] FLAG: --pod-cidr=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251902 4793 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251914 4793 flags.go:64] FLAG: --pod-manifest-path=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251920 4793 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251926 4793 flags.go:64] FLAG: --pods-per-core="0"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251932 4793 flags.go:64] FLAG: --port="10250"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251941 4793 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251946 4793 flags.go:64] FLAG: --provider-id=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251951 4793 flags.go:64] FLAG: --qos-reserved=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251957 4793 flags.go:64] FLAG: --read-only-port="10255"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251962 4793 flags.go:64] FLAG: --register-node="true"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251968 4793 flags.go:64] FLAG: --register-schedulable="true"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251973 4793 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251982 4793 flags.go:64] FLAG: --registry-burst="10"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251987 4793 flags.go:64] FLAG: --registry-qps="5"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251993 4793 flags.go:64] FLAG: --reserved-cpus=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.251998 4793 flags.go:64] FLAG: --reserved-memory=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.252005 4793 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.252010 4793 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.252015 4793 flags.go:64] FLAG: --rotate-certificates="false"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.252021 4793 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.252025 4793 flags.go:64] FLAG: --runonce="false"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.252030 4793 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.252036 4793 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.252041 4793 flags.go:64] FLAG: --seccomp-default="false"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.252047 4793 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.252052 4793 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.252057 4793 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.252063 4793 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.252072 4793 flags.go:64] FLAG: --storage-driver-password="root"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.252078 4793 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.252083 4793 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.252088 4793 flags.go:64] FLAG: --storage-driver-user="root"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.252093 4793 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.252098 4793 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.252103 4793 flags.go:64] FLAG: --system-cgroups=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.252109 4793 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.252118 4793 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.252124 4793 flags.go:64] FLAG: --tls-cert-file=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.252129 4793 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.252141 4793 flags.go:64] FLAG: --tls-min-version=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.252146 4793 flags.go:64] FLAG: --tls-private-key-file=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.252152 4793 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.252157 4793 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.252163 4793 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.252168 4793 flags.go:64] FLAG: --v="2"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.252176 4793 flags.go:64] FLAG: --version="false"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.252183 4793 flags.go:64] FLAG: --vmodule=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.252190 4793 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.252195 4793 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252377 4793 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252385 4793 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252391 4793 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252396 4793 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252402 4793 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252409 4793 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252416 4793 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252421 4793 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252426 4793 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252430 4793 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252434 4793 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252444 4793 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252448 4793 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252451 4793 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252455 4793 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252458 4793
feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252462 4793 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252466 4793 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252472 4793 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252476 4793 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252480 4793 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252483 4793 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252487 4793 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252490 4793 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252494 4793 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252498 4793 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252501 4793 feature_gate.go:330] unrecognized feature gate: Example Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252505 4793 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252510 4793 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252513 4793 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 17 20:08:45 crc kubenswrapper[4793]: 
W0217 20:08:45.252517 4793 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252520 4793 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252524 4793 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252527 4793 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252531 4793 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252535 4793 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252538 4793 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252542 4793 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252545 4793 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252549 4793 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252552 4793 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252556 4793 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252560 4793 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252566 4793 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252571 4793 feature_gate.go:330] unrecognized feature gate: 
AdditionalRoutingCapabilities Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252575 4793 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252579 4793 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252583 4793 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252588 4793 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252592 4793 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252596 4793 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252601 4793 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252605 4793 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252609 4793 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252613 4793 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252617 4793 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252621 4793 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252625 4793 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252629 4793 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 17 
20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252633 4793 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252636 4793 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252640 4793 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252643 4793 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252647 4793 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252652 4793 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252657 4793 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252661 4793 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252665 4793 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252669 4793 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252674 4793 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.252677 4793 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.252699 4793 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.267874 4793 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.267941 4793 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268077 4793 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268101 4793 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268111 4793 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268121 4793 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268130 4793 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268140 4793 
feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268148 4793 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268156 4793 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268164 4793 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268172 4793 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268180 4793 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268187 4793 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268196 4793 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268203 4793 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268211 4793 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268219 4793 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268227 4793 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268234 4793 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268243 4793 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268251 4793 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 
20:08:45.268259 4793 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268266 4793 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268275 4793 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268283 4793 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268292 4793 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268301 4793 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268312 4793 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268323 4793 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268333 4793 feature_gate.go:330] unrecognized feature gate: Example Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268342 4793 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268352 4793 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268363 4793 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268398 4793 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268409 4793 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268422 4793 
feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268432 4793 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268442 4793 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268452 4793 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268461 4793 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268470 4793 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268479 4793 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268489 4793 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268498 4793 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268507 4793 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268517 4793 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268527 4793 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268537 4793 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268547 4793 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268556 4793 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 17 
20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268566 4793 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268575 4793 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268584 4793 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268594 4793 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268604 4793 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268618 4793 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268630 4793 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268640 4793 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268650 4793 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268663 4793 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268681 4793 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268726 4793 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268737 4793 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268747 4793 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268757 4793 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268766 4793 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268775 4793 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268788 4793 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268802 4793 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268812 4793 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268821 4793 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.268831 4793 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.268846 4793 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269077 4793 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269092 4793 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269101 4793 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269109 4793 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269117 4793 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269126 4793 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269136 4793 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269145 4793 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269153 4793 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269161 4793 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269170 4793 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269178 4793 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269186 4793 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269193 4793 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269201 4793 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 17 20:08:45 crc 
kubenswrapper[4793]: W0217 20:08:45.269209 4793 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269217 4793 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269225 4793 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269234 4793 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269242 4793 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269250 4793 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269259 4793 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269266 4793 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269274 4793 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269281 4793 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269289 4793 feature_gate.go:330] unrecognized feature gate: Example Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269297 4793 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269304 4793 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269312 4793 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269320 4793 
feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269330 4793 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269340 4793 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269350 4793 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269363 4793 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269378 4793 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269389 4793 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269401 4793 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269412 4793 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269422 4793 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269432 4793 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269441 4793 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269452 4793 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269462 4793 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269472 4793 feature_gate.go:330] 
unrecognized feature gate: ManagedBootImagesAWS Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269482 4793 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269491 4793 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269502 4793 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269513 4793 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269522 4793 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269531 4793 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269542 4793 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269552 4793 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269562 4793 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269572 4793 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269582 4793 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269592 4793 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269600 4793 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269608 4793 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 
20:08:45.269616 4793 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269625 4793 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269635 4793 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269645 4793 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269655 4793 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269665 4793 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269678 4793 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269722 4793 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269732 4793 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269742 4793 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269752 4793 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269764 4793 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.269777 4793 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.269793 4793 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true 
DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.271090 4793 server.go:940] "Client rotation is on, will bootstrap in background" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.278665 4793 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.278984 4793 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.280673 4793 server.go:997] "Starting client certificate rotation" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.280747 4793 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.281700 4793 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-15 17:26:24.512207779 +0000 UTC Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.281879 4793 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.314307 4793 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 17 20:08:45 crc kubenswrapper[4793]: E0217 20:08:45.317987 4793 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create 
certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.155:6443: connect: connection refused" logger="UnhandledError" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.320328 4793 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.348496 4793 log.go:25] "Validated CRI v1 runtime API" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.385924 4793 log.go:25] "Validated CRI v1 image API" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.388021 4793 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.393329 4793 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-17-20-03-57-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.393380 4793 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.422809 4793 manager.go:217] Machine: {Timestamp:2026-02-17 20:08:45.418007825 +0000 UTC m=+0.709706176 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 
AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:6761f953-4396-4ffc-8ccb-0bad99a4cc8e BootID:9899edd8-40d9-4d30-b914-dcde4645fb8b Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:f7:6a:2d Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:f7:6a:2d Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:b9:6c:26 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:7c:f3:47 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:60:0a:b3 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:27:58:78 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:aa:66:50:d4:f7:f1 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:8a:91:2a:e6:92:23 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 
Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data 
Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.423233 4793 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.423468 4793 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.424943 4793 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.425305 4793 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.425372 4793 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.425756 4793 topology_manager.go:138] "Creating topology manager with none policy" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.425780 4793 container_manager_linux.go:303] "Creating device plugin manager" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.426369 4793 manager.go:142] "Creating Device Plugin manager" 
path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.426432 4793 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.426828 4793 state_mem.go:36] "Initialized new in-memory state store" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.426990 4793 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.431605 4793 kubelet.go:418] "Attempting to sync node with API server" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.431655 4793 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.431746 4793 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.431784 4793 kubelet.go:324] "Adding apiserver pod source" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.432007 4793 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.439967 4793 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.155:6443: connect: connection refused Feb 17 20:08:45 crc kubenswrapper[4793]: E0217 20:08:45.440079 4793 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.155:6443: connect: connection refused" logger="UnhandledError" Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.440380 4793 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.155:6443: connect: connection refused Feb 17 20:08:45 crc kubenswrapper[4793]: E0217 20:08:45.440468 4793 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.155:6443: connect: connection refused" logger="UnhandledError" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.440606 4793 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.441751 4793 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.445508 4793 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.446956 4793 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.447057 4793 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.447142 4793 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.447206 4793 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.447311 4793 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.447380 4793 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.447440 4793 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.447512 4793 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.447583 4793 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.447649 4793 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.447761 4793 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.447828 4793 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.450924 4793 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.451659 4793 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.155:6443: connect: connection refused Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.452243 4793 server.go:1280] "Started kubelet" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.453266 4793 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.453259 4793 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.454221 4793 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 17 20:08:45 crc systemd[1]: Started Kubernetes Kubelet. Feb 17 20:08:45 crc kubenswrapper[4793]: E0217 20:08:45.464277 4793 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.155:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1895218ec14aeaa9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 20:08:45.452143273 +0000 UTC m=+0.743841614,LastTimestamp:2026-02-17 20:08:45.452143273 +0000 UTC m=+0.743841614,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.466407 4793 server.go:460] "Adding debug handlers to kubelet server" Feb 17 20:08:45 crc 
kubenswrapper[4793]: I0217 20:08:45.468462 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.468549 4793 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.468786 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 18:05:18.213181457 +0000 UTC Feb 17 20:08:45 crc kubenswrapper[4793]: E0217 20:08:45.469193 4793 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.469452 4793 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.469525 4793 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.469815 4793 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 17 20:08:45 crc kubenswrapper[4793]: E0217 20:08:45.469993 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.155:6443: connect: connection refused" interval="200ms" Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.470887 4793 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.155:6443: connect: connection refused Feb 17 20:08:45 crc kubenswrapper[4793]: E0217 20:08:45.471016 4793 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.155:6443: connect: connection refused" logger="UnhandledError" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.474813 4793 factory.go:153] Registering CRI-O factory Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.475001 4793 factory.go:221] Registration of the crio container factory successfully Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.475288 4793 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.475442 4793 factory.go:55] Registering systemd factory Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.475565 4793 factory.go:221] Registration of the systemd container factory successfully Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.475751 4793 factory.go:103] Registering Raw factory Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.475931 4793 manager.go:1196] Started watching for new ooms in manager Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.477972 4793 manager.go:319] Starting recovery of all containers Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.479076 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.479137 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 17 20:08:45 crc 
kubenswrapper[4793]: I0217 20:08:45.479150 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.479160 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.479170 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.479180 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.479189 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.479226 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.479237 4793 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.479253 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.479265 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.479277 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.479288 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.479300 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.479310 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.479318 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.479339 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.479348 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.479357 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.479367 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.479406 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.479418 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.479433 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.479448 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.479460 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.479471 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.479489 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.479503 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.479514 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.479525 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.479567 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.479580 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.479592 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.479606 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.479618 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.479649 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.479662 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.479674 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.479752 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.479765 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.479777 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.479790 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.479802 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.479817 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.479829 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.479841 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.479855 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.479901 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.479917 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.479929 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.479944 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.479957 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.479975 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.479989 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480001 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480014 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480027 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480040 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480053 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480065 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480077 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480091 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480102 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480113 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480125 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480136 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480147 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480159 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480172 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480183 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480193 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480206 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480218 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480231 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480244 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480255 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480267 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480280 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480291 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480302 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480314 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480332 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480344 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480356 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480367 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480378 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480415 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480427 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480437 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480447 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480458 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480505 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480517 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480528 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480549 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480561 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480572 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480584 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480594 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480605 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480616 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480626 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480636 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480676 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480719 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480733 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480744 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480756 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480769 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480781 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480792 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480840 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480855 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480869 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480881 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480896 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480906 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480915 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480926 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480936 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480945 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480956 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480966 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480976 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480988 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.480999 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.481010 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.481030 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.481051 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.481065 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.481077 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.481087 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.481097 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.481108 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.481117 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.481128 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.481139 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.481148 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.481160 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.481170 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.481180 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.481190 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.481199 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.481209 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the
actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.481221 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.481233 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.481243 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.481254 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.481264 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.481275 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" 
volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.481284 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.481295 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.481306 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.481316 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.481326 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.481337 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.481347 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.481357 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.481366 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.481375 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.481426 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.483776 4793 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" 
deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.483875 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.483901 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.483920 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.483941 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.483958 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.483977 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" 
volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.483992 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.484010 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.484025 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.484041 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.484055 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.484070 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.484089 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.484105 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.484123 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.484142 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.484159 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.484175 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.484189 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.484204 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.484217 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.484232 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.484262 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.484315 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.484335 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.484352 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.484368 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.484385 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.484402 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.484418 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.484438 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.484455 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.484472 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.484486 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.484502 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.484518 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" 
seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.484533 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.484548 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.484564 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.484578 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.484592 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.484609 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.484623 4793 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.484637 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.484652 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.484667 4793 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.484681 4793 reconstruct.go:97] "Volume reconstruction finished" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.484709 4793 reconciler.go:26] "Reconciler: start to sync state" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.505350 4793 manager.go:324] Recovery completed Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.517103 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.519279 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.519327 4793 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.519342 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.520581 4793 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.520621 4793 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.520665 4793 state_mem.go:36] "Initialized new in-memory state store" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.535053 4793 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.537306 4793 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.537382 4793 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.537444 4793 kubelet.go:2335] "Starting kubelet main sync loop" Feb 17 20:08:45 crc kubenswrapper[4793]: E0217 20:08:45.537721 4793 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 17 20:08:45 crc kubenswrapper[4793]: W0217 20:08:45.538477 4793 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.155:6443: connect: connection refused Feb 17 20:08:45 crc kubenswrapper[4793]: E0217 20:08:45.538549 4793 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.155:6443: connect: connection refused" logger="UnhandledError" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.551183 4793 policy_none.go:49] "None policy: Start" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.552509 4793 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.552606 4793 state_mem.go:35] "Initializing new in-memory state store" Feb 17 20:08:45 crc kubenswrapper[4793]: E0217 20:08:45.569793 4793 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.612228 4793 manager.go:334] "Starting Device Plugin manager" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.612290 4793 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.612308 4793 server.go:79] "Starting device plugin registration server" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.612937 4793 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.612962 4793 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.613111 4793 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.613222 4793 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.613235 4793 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 17 20:08:45 crc kubenswrapper[4793]: E0217 20:08:45.625462 4793 eviction_manager.go:285] "Eviction 
manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.638759 4793 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.638867 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.640351 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.640389 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.640401 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.640533 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.640980 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.641040 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.641561 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.641584 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.641596 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.641745 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.641867 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.641897 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.641967 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.642009 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.642021 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.642518 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.642550 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.642564 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.642668 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.642810 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.642815 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.642861 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.642825 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.642981 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.643872 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.643912 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.644164 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.644408 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.644498 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.644527 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.644613 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.646755 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.647020 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.649430 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.649466 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.649481 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.649736 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.649780 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.649859 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.649894 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.649906 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.650941 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.650975 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.650993 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:08:45 crc kubenswrapper[4793]: E0217 20:08:45.671626 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.155:6443: connect: connection refused" interval="400ms" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.687443 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.687480 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.687503 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.687522 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.687540 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.687558 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.687612 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.687765 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.687829 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 
20:08:45.687888 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.687928 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.687953 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.687981 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.687999 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.688017 4793 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.713499 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.716068 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.716111 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.716125 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.716157 4793 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 20:08:45 crc kubenswrapper[4793]: E0217 20:08:45.716707 4793 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.155:6443: connect: connection refused" node="crc" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.788836 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.788898 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.788936 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.788966 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.788998 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.789026 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.789052 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.789066 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.789131 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.789149 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.789082 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.789187 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.789206 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.789233 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.789240 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.789237 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.789264 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.789274 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" 
(UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.789258 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.789300 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.789308 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.789316 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.789336 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.789344 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") 
pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.789320 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.789374 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.789404 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.789440 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.789501 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 
17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.789397 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.917518 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.918988 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.919028 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.919045 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.919077 4793 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 20:08:45 crc kubenswrapper[4793]: E0217 20:08:45.919553 4793 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.155:6443: connect: connection refused" node="crc" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.969205 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.975975 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 17 20:08:45 crc kubenswrapper[4793]: I0217 20:08:45.995403 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 20:08:46 crc kubenswrapper[4793]: W0217 20:08:46.020785 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-3211fc3d072911ba5c188584a91ee972012c68b6f3ddb0f146942a1741a7a6e1 WatchSource:0}: Error finding container 3211fc3d072911ba5c188584a91ee972012c68b6f3ddb0f146942a1741a7a6e1: Status 404 returned error can't find the container with id 3211fc3d072911ba5c188584a91ee972012c68b6f3ddb0f146942a1741a7a6e1 Feb 17 20:08:46 crc kubenswrapper[4793]: I0217 20:08:46.022768 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 20:08:46 crc kubenswrapper[4793]: W0217 20:08:46.025212 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-85f458e570e6230021b793cb08f80ce6e108cf7a48d24fb6d2f732009d52c8bd WatchSource:0}: Error finding container 85f458e570e6230021b793cb08f80ce6e108cf7a48d24fb6d2f732009d52c8bd: Status 404 returned error can't find the container with id 85f458e570e6230021b793cb08f80ce6e108cf7a48d24fb6d2f732009d52c8bd Feb 17 20:08:46 crc kubenswrapper[4793]: I0217 20:08:46.027578 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 20:08:46 crc kubenswrapper[4793]: W0217 20:08:46.032877 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-ccbf9bbc1730b517727f9cf3462d6576b47b572f3a4ab6ba1f286339cfc0f323 WatchSource:0}: Error finding container ccbf9bbc1730b517727f9cf3462d6576b47b572f3a4ab6ba1f286339cfc0f323: Status 404 returned error can't find the container with id ccbf9bbc1730b517727f9cf3462d6576b47b572f3a4ab6ba1f286339cfc0f323 Feb 17 20:08:46 crc kubenswrapper[4793]: W0217 20:08:46.042074 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-6b7366ee0d73846afcba8ed781985f1316770489010773d807043f2be4af44eb WatchSource:0}: Error finding container 6b7366ee0d73846afcba8ed781985f1316770489010773d807043f2be4af44eb: Status 404 returned error can't find the container with id 6b7366ee0d73846afcba8ed781985f1316770489010773d807043f2be4af44eb Feb 17 20:08:46 crc kubenswrapper[4793]: W0217 20:08:46.046393 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-81444bb2d9778044a8336cfa81e693f96fe9ec449d638c19d9b333b250606f74 WatchSource:0}: Error finding container 81444bb2d9778044a8336cfa81e693f96fe9ec449d638c19d9b333b250606f74: Status 404 returned error can't find the container with id 81444bb2d9778044a8336cfa81e693f96fe9ec449d638c19d9b333b250606f74 Feb 17 20:08:46 crc kubenswrapper[4793]: E0217 20:08:46.073355 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.155:6443: connect: connection 
refused" interval="800ms" Feb 17 20:08:46 crc kubenswrapper[4793]: I0217 20:08:46.320150 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 20:08:46 crc kubenswrapper[4793]: I0217 20:08:46.322099 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:08:46 crc kubenswrapper[4793]: I0217 20:08:46.322150 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:08:46 crc kubenswrapper[4793]: I0217 20:08:46.322179 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:08:46 crc kubenswrapper[4793]: I0217 20:08:46.322215 4793 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 20:08:46 crc kubenswrapper[4793]: E0217 20:08:46.322933 4793 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.155:6443: connect: connection refused" node="crc" Feb 17 20:08:46 crc kubenswrapper[4793]: W0217 20:08:46.340640 4793 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.155:6443: connect: connection refused Feb 17 20:08:46 crc kubenswrapper[4793]: E0217 20:08:46.340750 4793 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.155:6443: connect: connection refused" logger="UnhandledError" Feb 17 20:08:46 crc kubenswrapper[4793]: W0217 20:08:46.359725 4793 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.155:6443: connect: connection refused Feb 17 20:08:46 crc kubenswrapper[4793]: E0217 20:08:46.359813 4793 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.155:6443: connect: connection refused" logger="UnhandledError" Feb 17 20:08:46 crc kubenswrapper[4793]: I0217 20:08:46.452927 4793 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.155:6443: connect: connection refused Feb 17 20:08:46 crc kubenswrapper[4793]: W0217 20:08:46.468551 4793 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.155:6443: connect: connection refused Feb 17 20:08:46 crc kubenswrapper[4793]: E0217 20:08:46.468652 4793 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.155:6443: connect: connection refused" logger="UnhandledError" Feb 17 20:08:46 crc kubenswrapper[4793]: I0217 20:08:46.469590 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 14:21:58.870976251 +0000 UTC Feb 17 20:08:46 crc kubenswrapper[4793]: I0217 20:08:46.543100 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6b7366ee0d73846afcba8ed781985f1316770489010773d807043f2be4af44eb"} Feb 17 20:08:46 crc kubenswrapper[4793]: I0217 20:08:46.544285 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ccbf9bbc1730b517727f9cf3462d6576b47b572f3a4ab6ba1f286339cfc0f323"} Feb 17 20:08:46 crc kubenswrapper[4793]: I0217 20:08:46.547185 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"85f458e570e6230021b793cb08f80ce6e108cf7a48d24fb6d2f732009d52c8bd"} Feb 17 20:08:46 crc kubenswrapper[4793]: I0217 20:08:46.548797 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"3211fc3d072911ba5c188584a91ee972012c68b6f3ddb0f146942a1741a7a6e1"} Feb 17 20:08:46 crc kubenswrapper[4793]: I0217 20:08:46.549920 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"81444bb2d9778044a8336cfa81e693f96fe9ec449d638c19d9b333b250606f74"} Feb 17 20:08:46 crc kubenswrapper[4793]: W0217 20:08:46.802621 4793 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.155:6443: connect: connection refused Feb 17 20:08:46 crc kubenswrapper[4793]: E0217 20:08:46.802732 4793 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.155:6443: connect: connection refused" logger="UnhandledError" Feb 17 20:08:46 crc kubenswrapper[4793]: E0217 20:08:46.875365 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.155:6443: connect: connection refused" interval="1.6s" Feb 17 20:08:47 crc kubenswrapper[4793]: I0217 20:08:47.123279 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 20:08:47 crc kubenswrapper[4793]: I0217 20:08:47.125557 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:08:47 crc kubenswrapper[4793]: I0217 20:08:47.125641 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:08:47 crc kubenswrapper[4793]: I0217 20:08:47.125661 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:08:47 crc kubenswrapper[4793]: I0217 20:08:47.125735 4793 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 20:08:47 crc kubenswrapper[4793]: E0217 20:08:47.126433 4793 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.155:6443: connect: connection refused" node="crc" Feb 17 20:08:47 crc kubenswrapper[4793]: I0217 20:08:47.453521 4793 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.155:6443: connect: connection refused Feb 17 20:08:47 crc 
kubenswrapper[4793]: I0217 20:08:47.469867 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 03:51:50.729742819 +0000 UTC Feb 17 20:08:47 crc kubenswrapper[4793]: I0217 20:08:47.481118 4793 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 17 20:08:47 crc kubenswrapper[4793]: E0217 20:08:47.482547 4793 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.155:6443: connect: connection refused" logger="UnhandledError" Feb 17 20:08:47 crc kubenswrapper[4793]: I0217 20:08:47.556035 4793 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="3b29ead32fd49e8340218337140e825fcfa624a00b777951da99b492b5567d30" exitCode=0 Feb 17 20:08:47 crc kubenswrapper[4793]: I0217 20:08:47.556103 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"3b29ead32fd49e8340218337140e825fcfa624a00b777951da99b492b5567d30"} Feb 17 20:08:47 crc kubenswrapper[4793]: I0217 20:08:47.556141 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 20:08:47 crc kubenswrapper[4793]: I0217 20:08:47.557491 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:08:47 crc kubenswrapper[4793]: I0217 20:08:47.557538 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:08:47 crc kubenswrapper[4793]: I0217 20:08:47.557555 
4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:08:47 crc kubenswrapper[4793]: I0217 20:08:47.559941 4793 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="24e25a10a183950aa779c83aae5f4fe88a4e27d2ff7bad883e1e222400a16b44" exitCode=0 Feb 17 20:08:47 crc kubenswrapper[4793]: I0217 20:08:47.560089 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 20:08:47 crc kubenswrapper[4793]: I0217 20:08:47.560069 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"24e25a10a183950aa779c83aae5f4fe88a4e27d2ff7bad883e1e222400a16b44"} Feb 17 20:08:47 crc kubenswrapper[4793]: I0217 20:08:47.565717 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:08:47 crc kubenswrapper[4793]: I0217 20:08:47.565799 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:08:47 crc kubenswrapper[4793]: I0217 20:08:47.566830 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:08:47 crc kubenswrapper[4793]: I0217 20:08:47.570067 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6cee203033ea02d7ee023522404b8632f64c130e254a84ee96c37222561e4ac1"} Feb 17 20:08:47 crc kubenswrapper[4793]: I0217 20:08:47.570126 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0fe1dc649f7ca667a9fa57e0f96921b449396310fdec0d23dd39d5695ff3c0e7"} Feb 17 20:08:47 crc kubenswrapper[4793]: I0217 20:08:47.570150 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"df6bf4bf0fcbe5648621dada82af560bd7eac65ce800c8974c59d777f3dc6cf3"} Feb 17 20:08:47 crc kubenswrapper[4793]: I0217 20:08:47.570170 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"57a18cf839c267df5d9a6d343fbc38dbe195fad6d9fff89663d0958a0492a335"} Feb 17 20:08:47 crc kubenswrapper[4793]: I0217 20:08:47.570287 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 20:08:47 crc kubenswrapper[4793]: I0217 20:08:47.572031 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:08:47 crc kubenswrapper[4793]: I0217 20:08:47.572070 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:08:47 crc kubenswrapper[4793]: I0217 20:08:47.572088 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:08:47 crc kubenswrapper[4793]: I0217 20:08:47.574916 4793 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022" exitCode=0 Feb 17 20:08:47 crc kubenswrapper[4793]: I0217 20:08:47.575042 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022"} Feb 17 20:08:47 crc kubenswrapper[4793]: I0217 20:08:47.575162 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 20:08:47 crc kubenswrapper[4793]: I0217 20:08:47.576261 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:08:47 crc kubenswrapper[4793]: I0217 20:08:47.576298 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:08:47 crc kubenswrapper[4793]: I0217 20:08:47.576315 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:08:47 crc kubenswrapper[4793]: I0217 20:08:47.580685 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0"} Feb 17 20:08:47 crc kubenswrapper[4793]: I0217 20:08:47.581035 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 20:08:47 crc kubenswrapper[4793]: I0217 20:08:47.581172 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 20:08:47 crc kubenswrapper[4793]: I0217 20:08:47.580476 4793 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0" exitCode=0 Feb 17 20:08:47 crc kubenswrapper[4793]: I0217 20:08:47.584548 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:08:47 crc kubenswrapper[4793]: I0217 20:08:47.584608 4793 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:08:47 crc kubenswrapper[4793]: I0217 20:08:47.584628 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:08:47 crc kubenswrapper[4793]: I0217 20:08:47.585465 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:08:47 crc kubenswrapper[4793]: I0217 20:08:47.585527 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:08:47 crc kubenswrapper[4793]: I0217 20:08:47.585553 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:08:48 crc kubenswrapper[4793]: W0217 20:08:48.229416 4793 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.155:6443: connect: connection refused Feb 17 20:08:48 crc kubenswrapper[4793]: E0217 20:08:48.229530 4793 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.155:6443: connect: connection refused" logger="UnhandledError" Feb 17 20:08:48 crc kubenswrapper[4793]: I0217 20:08:48.452767 4793 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.155:6443: connect: connection refused Feb 17 20:08:48 crc kubenswrapper[4793]: I0217 20:08:48.470061 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 
09:57:27.369907414 +0000 UTC Feb 17 20:08:48 crc kubenswrapper[4793]: E0217 20:08:48.476963 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.155:6443: connect: connection refused" interval="3.2s" Feb 17 20:08:48 crc kubenswrapper[4793]: W0217 20:08:48.495102 4793 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.155:6443: connect: connection refused Feb 17 20:08:48 crc kubenswrapper[4793]: E0217 20:08:48.495170 4793 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.155:6443: connect: connection refused" logger="UnhandledError" Feb 17 20:08:48 crc kubenswrapper[4793]: I0217 20:08:48.591783 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140"} Feb 17 20:08:48 crc kubenswrapper[4793]: I0217 20:08:48.591855 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276"} Feb 17 20:08:48 crc kubenswrapper[4793]: I0217 20:08:48.591877 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9"} Feb 17 20:08:48 crc kubenswrapper[4793]: I0217 20:08:48.591907 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401"} Feb 17 20:08:48 crc kubenswrapper[4793]: I0217 20:08:48.594163 4793 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9" exitCode=0 Feb 17 20:08:48 crc kubenswrapper[4793]: I0217 20:08:48.594300 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9"} Feb 17 20:08:48 crc kubenswrapper[4793]: I0217 20:08:48.594417 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 20:08:48 crc kubenswrapper[4793]: I0217 20:08:48.595572 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:08:48 crc kubenswrapper[4793]: I0217 20:08:48.595619 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:08:48 crc kubenswrapper[4793]: I0217 20:08:48.595637 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:08:48 crc kubenswrapper[4793]: I0217 20:08:48.596364 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"0392a7b5708589df63cc28b06b4fa8d5e853138742986663160183821ad654d4"} Feb 17 20:08:48 crc kubenswrapper[4793]: I0217 20:08:48.596597 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 20:08:48 crc kubenswrapper[4793]: I0217 20:08:48.598732 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:08:48 crc kubenswrapper[4793]: I0217 20:08:48.598768 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:08:48 crc kubenswrapper[4793]: I0217 20:08:48.598785 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:08:48 crc kubenswrapper[4793]: I0217 20:08:48.601295 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 20:08:48 crc kubenswrapper[4793]: I0217 20:08:48.601313 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 20:08:48 crc kubenswrapper[4793]: I0217 20:08:48.601763 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"958dffa9d0fbd97a0ee01a79d9623a298e9ce287b43e1347b4edeb0cfe175ad9"} Feb 17 20:08:48 crc kubenswrapper[4793]: I0217 20:08:48.606931 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7b96eb4ff437a9dba392060ab762c6d87450a24f0dcc50ef6f90dfa54aa66e22"} Feb 17 20:08:48 crc kubenswrapper[4793]: I0217 20:08:48.606954 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"8527c428573648481388a1d99e88a9c73658862866d5f6e36691a0b36868c7d4"} Feb 17 20:08:48 crc kubenswrapper[4793]: I0217 20:08:48.607955 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:08:48 crc kubenswrapper[4793]: I0217 20:08:48.607984 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:08:48 crc kubenswrapper[4793]: I0217 20:08:48.607994 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:08:48 crc kubenswrapper[4793]: I0217 20:08:48.611304 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:08:48 crc kubenswrapper[4793]: I0217 20:08:48.611352 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:08:48 crc kubenswrapper[4793]: I0217 20:08:48.611365 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:08:48 crc kubenswrapper[4793]: I0217 20:08:48.726892 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 20:08:48 crc kubenswrapper[4793]: I0217 20:08:48.728031 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:08:48 crc kubenswrapper[4793]: I0217 20:08:48.728073 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:08:48 crc kubenswrapper[4793]: I0217 20:08:48.728089 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:08:48 crc kubenswrapper[4793]: I0217 20:08:48.728124 4793 kubelet_node_status.go:76] "Attempting to register node" node="crc" 
Feb 17 20:08:48 crc kubenswrapper[4793]: E0217 20:08:48.729189 4793 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.155:6443: connect: connection refused" node="crc" Feb 17 20:08:49 crc kubenswrapper[4793]: W0217 20:08:49.026401 4793 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.155:6443: connect: connection refused Feb 17 20:08:49 crc kubenswrapper[4793]: E0217 20:08:49.026498 4793 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.155:6443: connect: connection refused" logger="UnhandledError" Feb 17 20:08:49 crc kubenswrapper[4793]: W0217 20:08:49.037269 4793 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.155:6443: connect: connection refused Feb 17 20:08:49 crc kubenswrapper[4793]: E0217 20:08:49.037348 4793 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.155:6443: connect: connection refused" logger="UnhandledError" Feb 17 20:08:49 crc kubenswrapper[4793]: I0217 20:08:49.470337 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 05:54:01.490863679 +0000 UTC Feb 17 20:08:49 
crc kubenswrapper[4793]: I0217 20:08:49.606236 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b6f3722f78f83475af38958c1c6a4b0593a4c51be7d8dcbd0d32d3b068dee61b"} Feb 17 20:08:49 crc kubenswrapper[4793]: I0217 20:08:49.606343 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 20:08:49 crc kubenswrapper[4793]: I0217 20:08:49.607374 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:08:49 crc kubenswrapper[4793]: I0217 20:08:49.607417 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:08:49 crc kubenswrapper[4793]: I0217 20:08:49.607432 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:08:49 crc kubenswrapper[4793]: I0217 20:08:49.608560 4793 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01" exitCode=0 Feb 17 20:08:49 crc kubenswrapper[4793]: I0217 20:08:49.608630 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 20:08:49 crc kubenswrapper[4793]: I0217 20:08:49.608663 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 20:08:49 crc kubenswrapper[4793]: I0217 20:08:49.608717 4793 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 20:08:49 crc kubenswrapper[4793]: I0217 20:08:49.608757 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 20:08:49 crc kubenswrapper[4793]: I0217 20:08:49.609404 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01"} Feb 17 20:08:49 crc kubenswrapper[4793]: I0217 20:08:49.609501 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:08:49 crc kubenswrapper[4793]: I0217 20:08:49.609519 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:08:49 crc kubenswrapper[4793]: I0217 20:08:49.609528 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:08:49 crc kubenswrapper[4793]: I0217 20:08:49.609581 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:08:49 crc kubenswrapper[4793]: I0217 20:08:49.609603 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:08:49 crc kubenswrapper[4793]: I0217 20:08:49.609612 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:08:49 crc kubenswrapper[4793]: I0217 20:08:49.609751 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:08:49 crc kubenswrapper[4793]: I0217 20:08:49.609774 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:08:49 crc kubenswrapper[4793]: I0217 20:08:49.609789 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:08:49 crc kubenswrapper[4793]: I0217 20:08:49.742334 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 20:08:49 crc kubenswrapper[4793]: I0217 20:08:49.874395 4793 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 20:08:49 crc kubenswrapper[4793]: I0217 20:08:49.874612 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 20:08:49 crc kubenswrapper[4793]: I0217 20:08:49.877501 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:08:49 crc kubenswrapper[4793]: I0217 20:08:49.877565 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:08:49 crc kubenswrapper[4793]: I0217 20:08:49.877591 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:08:49 crc kubenswrapper[4793]: I0217 20:08:49.882886 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 20:08:50 crc kubenswrapper[4793]: I0217 20:08:50.470508 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 17:38:25.114033008 +0000 UTC Feb 17 20:08:50 crc kubenswrapper[4793]: I0217 20:08:50.616175 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"00a42458e37611ca7ec204f13fd4703a5b1e01189a162f0180be137d614a0379"} Feb 17 20:08:50 crc kubenswrapper[4793]: I0217 20:08:50.616234 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a3d8c33489e77add820819b72d67bd6eca73a11e3fd4a99ae4c5c0136f6257c5"} Feb 17 20:08:50 crc kubenswrapper[4793]: I0217 20:08:50.616253 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3405184556f0b16f2610e42360345bb094d79fe3d713f9dec7c6d28a5124851e"} Feb 17 20:08:50 crc kubenswrapper[4793]: I0217 20:08:50.616265 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"16cf925bf79279e3553a50dd773c3e6a88f3e8e86837c457c43f1b82c6fd291b"} Feb 17 20:08:50 crc kubenswrapper[4793]: I0217 20:08:50.616279 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 20:08:50 crc kubenswrapper[4793]: I0217 20:08:50.616358 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 20:08:50 crc kubenswrapper[4793]: I0217 20:08:50.617334 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:08:50 crc kubenswrapper[4793]: I0217 20:08:50.617401 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:08:50 crc kubenswrapper[4793]: I0217 20:08:50.617422 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:08:50 crc kubenswrapper[4793]: I0217 20:08:50.617495 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:08:50 crc kubenswrapper[4793]: I0217 20:08:50.617543 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:08:50 crc kubenswrapper[4793]: I0217 20:08:50.617561 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:08:50 crc kubenswrapper[4793]: I0217 20:08:50.658268 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 
20:08:51 crc kubenswrapper[4793]: I0217 20:08:51.471468 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 22:03:06.418794422 +0000 UTC Feb 17 20:08:51 crc kubenswrapper[4793]: I0217 20:08:51.622560 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"982055260eb2f38b638b6ee268993806f9a0bf999f21b065659a64144394bc02"} Feb 17 20:08:51 crc kubenswrapper[4793]: I0217 20:08:51.622655 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 20:08:51 crc kubenswrapper[4793]: I0217 20:08:51.622655 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 20:08:51 crc kubenswrapper[4793]: I0217 20:08:51.623864 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:08:51 crc kubenswrapper[4793]: I0217 20:08:51.623896 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:08:51 crc kubenswrapper[4793]: I0217 20:08:51.623905 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:08:51 crc kubenswrapper[4793]: I0217 20:08:51.623985 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:08:51 crc kubenswrapper[4793]: I0217 20:08:51.624043 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:08:51 crc kubenswrapper[4793]: I0217 20:08:51.624065 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:08:51 crc kubenswrapper[4793]: I0217 20:08:51.870473 4793 certificate_manager.go:356] 
kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 17 20:08:51 crc kubenswrapper[4793]: I0217 20:08:51.930198 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 20:08:51 crc kubenswrapper[4793]: I0217 20:08:51.931953 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:08:51 crc kubenswrapper[4793]: I0217 20:08:51.932003 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:08:51 crc kubenswrapper[4793]: I0217 20:08:51.932022 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:08:51 crc kubenswrapper[4793]: I0217 20:08:51.932054 4793 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 20:08:52 crc kubenswrapper[4793]: I0217 20:08:52.472771 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 19:54:09.214314943 +0000 UTC Feb 17 20:08:52 crc kubenswrapper[4793]: I0217 20:08:52.579073 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 20:08:52 crc kubenswrapper[4793]: I0217 20:08:52.579305 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 20:08:52 crc kubenswrapper[4793]: I0217 20:08:52.580993 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:08:52 crc kubenswrapper[4793]: I0217 20:08:52.581043 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:08:52 crc kubenswrapper[4793]: I0217 20:08:52.581062 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID"
Feb 17 20:08:52 crc kubenswrapper[4793]: I0217 20:08:52.625328 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 20:08:52 crc kubenswrapper[4793]: I0217 20:08:52.625396 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 20:08:52 crc kubenswrapper[4793]: I0217 20:08:52.626517 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 20:08:52 crc kubenswrapper[4793]: I0217 20:08:52.626557 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 20:08:52 crc kubenswrapper[4793]: I0217 20:08:52.626571 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 20:08:52 crc kubenswrapper[4793]: I0217 20:08:52.626518 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 20:08:52 crc kubenswrapper[4793]: I0217 20:08:52.626654 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 20:08:52 crc kubenswrapper[4793]: I0217 20:08:52.626666 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 20:08:53 crc kubenswrapper[4793]: I0217 20:08:53.267456 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 17 20:08:53 crc kubenswrapper[4793]: I0217 20:08:53.267682 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 20:08:53 crc kubenswrapper[4793]: I0217 20:08:53.269139 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 20:08:53 crc kubenswrapper[4793]: I0217 20:08:53.269181 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 20:08:53 crc kubenswrapper[4793]: I0217 20:08:53.269213 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 20:08:53 crc kubenswrapper[4793]: I0217 20:08:53.473191 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 06:06:20.014225832 +0000 UTC
Feb 17 20:08:53 crc kubenswrapper[4793]: I0217 20:08:53.749318 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 20:08:53 crc kubenswrapper[4793]: I0217 20:08:53.749563 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 20:08:53 crc kubenswrapper[4793]: I0217 20:08:53.751285 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 20:08:53 crc kubenswrapper[4793]: I0217 20:08:53.751318 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 20:08:53 crc kubenswrapper[4793]: I0217 20:08:53.751334 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 20:08:54 crc kubenswrapper[4793]: I0217 20:08:54.473663 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 16:04:07.743439092 +0000 UTC
Feb 17 20:08:55 crc kubenswrapper[4793]: I0217 20:08:55.018576 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 17 20:08:55 crc kubenswrapper[4793]: I0217 20:08:55.018899 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 20:08:55 crc kubenswrapper[4793]: I0217 20:08:55.020532 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 20:08:55 crc kubenswrapper[4793]: I0217 20:08:55.020593 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 20:08:55 crc kubenswrapper[4793]: I0217 20:08:55.020605 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 20:08:55 crc kubenswrapper[4793]: I0217 20:08:55.414069 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 17 20:08:55 crc kubenswrapper[4793]: I0217 20:08:55.414349 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 20:08:55 crc kubenswrapper[4793]: I0217 20:08:55.415790 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 20:08:55 crc kubenswrapper[4793]: I0217 20:08:55.415841 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 20:08:55 crc kubenswrapper[4793]: I0217 20:08:55.415854 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 20:08:55 crc kubenswrapper[4793]: I0217 20:08:55.474045 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 06:17:31.796748997 +0000 UTC
Feb 17 20:08:55 crc kubenswrapper[4793]: E0217 20:08:55.626008 4793 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 17 20:08:55 crc kubenswrapper[4793]: I0217 20:08:55.681380 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Feb 17 20:08:55 crc kubenswrapper[4793]: I0217 20:08:55.681582 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 20:08:55 crc kubenswrapper[4793]: I0217 20:08:55.682632 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 20:08:55 crc kubenswrapper[4793]: I0217 20:08:55.682669 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 20:08:55 crc kubenswrapper[4793]: I0217 20:08:55.682684 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 20:08:56 crc kubenswrapper[4793]: I0217 20:08:56.202550 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Feb 17 20:08:56 crc kubenswrapper[4793]: I0217 20:08:56.267903 4793 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 17 20:08:56 crc kubenswrapper[4793]: I0217 20:08:56.268023 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 17 20:08:56 crc kubenswrapper[4793]: I0217 20:08:56.475143 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 21:52:40.917380112 +0000 UTC
Feb 17 20:08:56 crc kubenswrapper[4793]: I0217 20:08:56.635491 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 20:08:56 crc kubenswrapper[4793]: I0217 20:08:56.636651 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 20:08:56 crc kubenswrapper[4793]: I0217 20:08:56.636882 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 20:08:56 crc kubenswrapper[4793]: I0217 20:08:56.636930 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 20:08:57 crc kubenswrapper[4793]: I0217 20:08:57.476000 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 12:05:01.762118267 +0000 UTC
Feb 17 20:08:58 crc kubenswrapper[4793]: I0217 20:08:58.476833 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 12:57:28.643601984 +0000 UTC
Feb 17 20:08:59 crc kubenswrapper[4793]: I0217 20:08:59.402935 4793 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:46806->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Feb 17 20:08:59 crc kubenswrapper[4793]: I0217 20:08:59.403044 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:46806->192.168.126.11:17697: read: connection reset by peer"
Feb 17 20:08:59 crc kubenswrapper[4793]: I0217 20:08:59.457459 4793 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Feb 17 20:08:59 crc kubenswrapper[4793]: I0217 20:08:59.476984 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 20:29:31.38002038 +0000 UTC
Feb 17 20:08:59 crc kubenswrapper[4793]: I0217 20:08:59.646904 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Feb 17 20:08:59 crc kubenswrapper[4793]: I0217 20:08:59.648671 4793 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b6f3722f78f83475af38958c1c6a4b0593a4c51be7d8dcbd0d32d3b068dee61b" exitCode=255
Feb 17 20:08:59 crc kubenswrapper[4793]: I0217 20:08:59.648738 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"b6f3722f78f83475af38958c1c6a4b0593a4c51be7d8dcbd0d32d3b068dee61b"}
Feb 17 20:08:59 crc kubenswrapper[4793]: I0217 20:08:59.648894 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 20:08:59 crc kubenswrapper[4793]: I0217 20:08:59.650375 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 20:08:59 crc kubenswrapper[4793]: I0217 20:08:59.650401 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 20:08:59 crc kubenswrapper[4793]: I0217 20:08:59.650411 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 20:08:59 crc kubenswrapper[4793]: I0217 20:08:59.650917 4793 scope.go:117] "RemoveContainer" containerID="b6f3722f78f83475af38958c1c6a4b0593a4c51be7d8dcbd0d32d3b068dee61b"
Feb 17 20:08:59 crc kubenswrapper[4793]: I0217 20:08:59.735055 4793 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 17 20:08:59 crc kubenswrapper[4793]: I0217 20:08:59.735582 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 17 20:08:59 crc kubenswrapper[4793]: I0217 20:08:59.754366 4793 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Feb 17 20:08:59 crc kubenswrapper[4793]: [+]log ok
Feb 17 20:08:59 crc kubenswrapper[4793]: [+]etcd ok
Feb 17 20:08:59 crc kubenswrapper[4793]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Feb 17 20:08:59 crc kubenswrapper[4793]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Feb 17 20:08:59 crc kubenswrapper[4793]: [+]poststarthook/start-apiserver-admission-initializer ok
Feb 17 20:08:59 crc kubenswrapper[4793]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Feb 17 20:08:59 crc kubenswrapper[4793]: [+]poststarthook/openshift.io-api-request-count-filter ok
Feb 17 20:08:59 crc kubenswrapper[4793]: [+]poststarthook/openshift.io-startkubeinformers ok
Feb 17 20:08:59 crc kubenswrapper[4793]: [+]poststarthook/generic-apiserver-start-informers ok
Feb 17 20:08:59 crc kubenswrapper[4793]: [+]poststarthook/priority-and-fairness-config-consumer ok
Feb 17 20:08:59 crc kubenswrapper[4793]: [+]poststarthook/priority-and-fairness-filter ok
Feb 17 20:08:59 crc kubenswrapper[4793]: [+]poststarthook/storage-object-count-tracker-hook ok
Feb 17 20:08:59 crc kubenswrapper[4793]: [+]poststarthook/start-apiextensions-informers ok
Feb 17 20:08:59 crc kubenswrapper[4793]: [-]poststarthook/start-apiextensions-controllers failed: reason withheld
Feb 17 20:08:59 crc kubenswrapper[4793]: [-]poststarthook/crd-informer-synced failed: reason withheld
Feb 17 20:08:59 crc kubenswrapper[4793]: [+]poststarthook/start-system-namespaces-controller ok
Feb 17 20:08:59 crc kubenswrapper[4793]: [+]poststarthook/start-cluster-authentication-info-controller ok
Feb 17 20:08:59 crc kubenswrapper[4793]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Feb 17 20:08:59 crc kubenswrapper[4793]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Feb 17 20:08:59 crc kubenswrapper[4793]: [+]poststarthook/start-legacy-token-tracking-controller ok
Feb 17 20:08:59 crc kubenswrapper[4793]: [+]poststarthook/start-service-ip-repair-controllers ok
Feb 17 20:08:59 crc kubenswrapper[4793]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
Feb 17 20:08:59 crc kubenswrapper[4793]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
Feb 17 20:08:59 crc kubenswrapper[4793]: [-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
Feb 17 20:08:59 crc kubenswrapper[4793]: [-]poststarthook/bootstrap-controller failed: reason withheld
Feb 17 20:08:59 crc kubenswrapper[4793]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Feb 17 20:08:59 crc kubenswrapper[4793]: [+]poststarthook/start-kube-aggregator-informers ok
Feb 17 20:08:59 crc kubenswrapper[4793]: [+]poststarthook/apiservice-status-local-available-controller ok
Feb 17 20:08:59 crc kubenswrapper[4793]: [+]poststarthook/apiservice-status-remote-available-controller ok
Feb 17 20:08:59 crc kubenswrapper[4793]: [-]poststarthook/apiservice-registration-controller failed: reason withheld
Feb 17 20:08:59 crc kubenswrapper[4793]: [+]poststarthook/apiservice-wait-for-first-sync ok
Feb 17 20:08:59 crc kubenswrapper[4793]: [-]poststarthook/apiservice-discovery-controller failed: reason withheld
Feb 17 20:08:59 crc kubenswrapper[4793]: [+]poststarthook/kube-apiserver-autoregistration ok
Feb 17 20:08:59 crc kubenswrapper[4793]: [+]autoregister-completion ok
Feb 17 20:08:59 crc kubenswrapper[4793]: [+]poststarthook/apiservice-openapi-controller ok
Feb 17 20:08:59 crc kubenswrapper[4793]: [+]poststarthook/apiservice-openapiv3-controller ok
Feb 17 20:08:59 crc kubenswrapper[4793]: livez check failed
Feb 17 20:08:59 crc kubenswrapper[4793]: I0217 20:08:59.755582 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 20:09:00 crc kubenswrapper[4793]: I0217 20:09:00.477129 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 13:40:53.798626987 +0000 UTC
Feb 17 20:09:00 crc kubenswrapper[4793]: I0217 20:09:00.654291 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Feb 17 20:09:00 crc kubenswrapper[4793]: I0217 20:09:00.657038 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c"}
Feb 17 20:09:00 crc kubenswrapper[4793]: I0217 20:09:00.657264 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 20:09:00 crc kubenswrapper[4793]: I0217 20:09:00.658932 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 20:09:00 crc kubenswrapper[4793]: I0217 20:09:00.658985 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 20:09:00 crc kubenswrapper[4793]: I0217 20:09:00.659003 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 20:09:01 crc kubenswrapper[4793]: I0217 20:09:01.477951 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 23:35:22.458081599 +0000 UTC
Feb 17 20:09:02 crc kubenswrapper[4793]: I0217 20:09:02.478452 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 03:38:03.340641751 +0000 UTC
Feb 17 20:09:02 crc kubenswrapper[4793]: I0217 20:09:02.588070 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 17 20:09:02 crc kubenswrapper[4793]: I0217 20:09:02.588466 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 20:09:02 crc kubenswrapper[4793]: I0217 20:09:02.589925 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 20:09:02 crc kubenswrapper[4793]: I0217 20:09:02.589995 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 20:09:02 crc kubenswrapper[4793]: I0217 20:09:02.590016 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 20:09:03 crc kubenswrapper[4793]: I0217 20:09:03.479327 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 14:48:36.133537037 +0000 UTC
Feb 17 20:09:03 crc kubenswrapper[4793]: I0217 20:09:03.756561 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 20:09:03 crc kubenswrapper[4793]: I0217 20:09:03.756808 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 20:09:03 crc kubenswrapper[4793]: I0217 20:09:03.757004 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 20:09:03 crc kubenswrapper[4793]: I0217 20:09:03.758607 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 20:09:03 crc kubenswrapper[4793]: I0217 20:09:03.758670 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 20:09:03 crc kubenswrapper[4793]: I0217 20:09:03.758683 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 20:09:03 crc kubenswrapper[4793]: I0217 20:09:03.763989 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 20:09:04 crc kubenswrapper[4793]: I0217 20:09:04.480262 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 11:53:59.222727181 +0000 UTC
Feb 17 20:09:04 crc kubenswrapper[4793]: I0217 20:09:04.670374 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 20:09:04 crc kubenswrapper[4793]: I0217 20:09:04.671446 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 20:09:04 crc kubenswrapper[4793]: I0217 20:09:04.671508 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 20:09:04 crc kubenswrapper[4793]: I0217 20:09:04.671524 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 20:09:04 crc kubenswrapper[4793]: E0217 20:09:04.733278 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Feb 17 20:09:04 crc kubenswrapper[4793]: I0217 20:09:04.739029 4793 trace.go:236] Trace[666807888]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (17-Feb-2026 20:08:53.118) (total time: 11620ms):
Feb 17 20:09:04 crc kubenswrapper[4793]: Trace[666807888]: ---"Objects listed" error: 11620ms (20:09:04.738)
Feb 17 20:09:04 crc kubenswrapper[4793]: Trace[666807888]: [11.620199314s] [11.620199314s] END
Feb 17 20:09:04 crc kubenswrapper[4793]: I0217 20:09:04.739107 4793 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 17 20:09:04 crc kubenswrapper[4793]: I0217 20:09:04.739476 4793 trace.go:236] Trace[567126821]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (17-Feb-2026 20:08:54.525) (total time: 10214ms):
Feb 17 20:09:04 crc kubenswrapper[4793]: Trace[567126821]: ---"Objects listed" error: 10214ms (20:09:04.739)
Feb 17 20:09:04 crc kubenswrapper[4793]: Trace[567126821]: [10.214223765s] [10.214223765s] END
Feb 17 20:09:04 crc kubenswrapper[4793]: I0217 20:09:04.739517 4793 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 17 20:09:04 crc kubenswrapper[4793]: I0217 20:09:04.740070 4793 trace.go:236] Trace[1707103462]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (17-Feb-2026 20:08:52.311) (total time: 12428ms):
Feb 17 20:09:04 crc kubenswrapper[4793]: Trace[1707103462]: ---"Objects listed" error: 12428ms (20:09:04.739)
Feb 17 20:09:04 crc kubenswrapper[4793]: Trace[1707103462]: [12.428106332s] [12.428106332s] END
Feb 17 20:09:04 crc kubenswrapper[4793]: I0217 20:09:04.740097 4793 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 17 20:09:04 crc kubenswrapper[4793]: I0217 20:09:04.740129 4793 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Feb 17 20:09:04 crc kubenswrapper[4793]: E0217 20:09:04.741040 4793 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Feb 17 20:09:04 crc kubenswrapper[4793]: I0217 20:09:04.742712 4793 trace.go:236] Trace[1200391061]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (17-Feb-2026 20:08:54.180) (total time: 10562ms):
Feb 17 20:09:04 crc kubenswrapper[4793]: Trace[1200391061]: ---"Objects listed" error: 10561ms (20:09:04.742)
Feb 17 20:09:04 crc kubenswrapper[4793]: Trace[1200391061]: [10.562052758s] [10.562052758s] END
Feb 17 20:09:04 crc kubenswrapper[4793]: I0217 20:09:04.742768 4793 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 17 20:09:04 crc kubenswrapper[4793]: I0217 20:09:04.748423 4793 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.426388 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.435080 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.445827 4793 apiserver.go:52] "Watching apiserver"
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.448648 4793 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.449096 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"]
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.449624 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.449760 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 20:09:05 crc kubenswrapper[4793]: E0217 20:09:05.449842 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.449870 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 20:09:05 crc kubenswrapper[4793]: E0217 20:09:05.450554 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.450832 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.451290 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.452408 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 20:09:05 crc kubenswrapper[4793]: E0217 20:09:05.452526 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.456110 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.456967 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.457059 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.457225 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.457358 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.457508 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.457635 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.457680 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.457808 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.470811 4793 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.480372 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 07:55:29.485655265 +0000 UTC
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.495048 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.507861 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cad772-a024-49d5-8204-560587b056eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6bf4bf0fcbe5648621dada82af560bd7eac65ce800c8974c59d777f3dc6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57a18cf839c267df5d9a6d343fbc38dbe195fad6d9fff89663d0958a0492a335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe1dc649f7ca667a9fa57e0f96921b449396310fdec0d23dd39d5695ff3c0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cee203033ea02d7ee023522404b8632f64c130e254a84ee96c37222561e4ac1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.517856 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.533581 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.544963 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.546224 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.546295 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.546344 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: 
\"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.546398 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.546452 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.546497 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.546542 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.546586 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 
17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.546676 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.546758 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.546769 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.546769 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.547024 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.547125 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.547134 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.547212 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.547407 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.547623 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.547928 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.548142 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.548214 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.547534 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.548372 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.548961 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.549041 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.549092 
4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.549141 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.549186 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.549229 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.549272 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.549316 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" 
(UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.549389 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.549437 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.549510 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.549558 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.549601 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.549646 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" 
(UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.549726 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.549778 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.549854 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.549902 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.549949 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: 
\"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.549992 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.550051 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.550097 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.550144 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.550190 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.550234 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.548866 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.550986 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.551026 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.551090 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.551150 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.551229 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.551243 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.551304 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.551333 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.551404 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.551452 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.551508 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.551544 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.551584 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.551744 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.551799 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.551866 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.550294 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.552145 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.552195 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.552218 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.552242 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.552288 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.552334 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.552377 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.552464 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.552472 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.552543 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.552836 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.552844 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.552995 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.553166 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.553172 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.553198 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.553274 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.553356 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.553414 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.553462 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.553510 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.553555 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.553600 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.553646 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.553729 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.553778 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.553841 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.553888 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.553932 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.553977 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.554025 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.554069 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.554118 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.554166 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.554215 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.554261 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.554305 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.554351 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.554399 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.554444 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.554496 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.554541 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.554588 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.554632 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.554758 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.554829 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.554884 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.554931 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.554978 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.555024 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.555076 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.555125 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.555171 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.555217 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.555261 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.555307 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.555351 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.555398 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.555449 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.555495 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.555540 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.555588 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.555635 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.555714 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.556054 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.556091 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.556287 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.556322 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.556432 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.556651 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.556703 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.556900 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.556932 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.557003 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.557049 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.557212 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.557242 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.557463 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.557546 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.557753 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.557832 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.557833 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.558020 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.558067 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.558280 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.558353 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.558510 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.558559 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.558817 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.558838 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.559150 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.559445 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.559510 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.559579 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.560120 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.560199 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.560590 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.560654 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.560741 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.560795 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.560840 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.560887 4793 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.560934 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.561539 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.561573 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.561586 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.561650 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.561672 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.561730 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.561752 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.561780 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.561801 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.561817 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.561833 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.561849 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.561870 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.561888 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.561904 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.561921 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.561938 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.561960 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.562066 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.562092 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.562095 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.562113 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.562135 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.562155 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: 
\"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.562176 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.562198 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.562216 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.562238 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.562258 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.562305 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.562331 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.562352 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.562375 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.562395 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.562416 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 17 20:09:05 crc 
kubenswrapper[4793]: I0217 20:09:05.562451 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.562469 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.562484 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.562499 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.562514 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.562530 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod 
\"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.562546 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.562563 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.563197 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.563428 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.563625 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.564184 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.564404 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.564899 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.565025 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.565339 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.565599 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.565628 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.565667 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.566091 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.566156 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.566107 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.566482 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.566672 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.566788 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.566785 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.566914 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.566952 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.566975 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.567205 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.567025 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.567735 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.567756 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.567792 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.567894 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.568523 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.569122 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.567417 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.569337 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.569356 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.569379 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.567829 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.569441 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.569362 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.569582 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.569545 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.569663 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.570031 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.570161 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.570199 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.570659 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.571326 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.571486 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.571478 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: E0217 20:09:05.571512 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 20:09:06.071475334 +0000 UTC m=+21.363173685 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.571576 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.571633 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.571670 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.571743 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.571849 4793 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.570753 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.572312 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.572385 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.571176 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.572707 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.572753 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.572785 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod 
\"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.572814 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.572843 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.573094 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.573167 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.573224 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 
20:09:05.573286 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.573350 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.573401 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.573426 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.573461 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.573522 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.573572 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.573627 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.573723 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.573756 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.573791 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.573831 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.573847 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.573977 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.573953 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.574072 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.574108 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.574170 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.574217 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: 
\"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.574245 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.574272 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.574270 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.574362 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.574381 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.574356 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.574414 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.574483 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.574523 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 
20:09:05.574563 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.574590 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.574613 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.574602 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.574641 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.574736 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.574765 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.574933 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.574991 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 
20:09:05.575003 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.575058 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.575123 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.575284 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.575295 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). 
InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.575183 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.575423 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.575497 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.575743 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.576081 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.576151 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.576199 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.576245 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.576299 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.576305 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.576343 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.576377 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.576413 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.576440 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.576471 4793 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.576491 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.576540 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.576552 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.576560 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.576578 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.576960 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.577021 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.579391 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.577094 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.577598 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.577752 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.577776 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.577823 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.577864 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.578090 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.578188 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.578458 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.578816 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.577296 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.578903 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.578911 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.579021 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.579215 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.579531 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.579487 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.579880 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.579874 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.579917 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.579947 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.579973 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.580005 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.580032 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod 
\"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.580064 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.580091 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.580117 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.580142 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.580266 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.584359 4793 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.584395 4793 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.584409 4793 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.584425 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.584440 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.584454 4793 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.584647 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" 
DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.584662 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.584675 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.584713 4793 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.584726 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.584739 4793 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.584750 4793 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.584761 4793 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 
20:09:05.584774 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.584787 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.584800 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.584814 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.584826 4793 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.584837 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.584849 4793 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.584860 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.584872 4793 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.584884 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.584896 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.584910 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.584923 4793 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.584936 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.584948 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.584960 4793 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.584973 4793 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.584988 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.585001 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.585015 4793 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.585029 4793 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.585040 4793 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 
crc kubenswrapper[4793]: I0217 20:09:05.585053 4793 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.585065 4793 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.585081 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.585094 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.585106 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.585119 4793 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.585130 4793 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.585142 4793 reconciler_common.go:293] "Volume 
detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.585154 4793 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.585165 4793 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.585177 4793 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.585187 4793 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.585199 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.585211 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.585221 4793 reconciler_common.go:293] "Volume detached for volume \"certs\" 
(UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.585231 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.585242 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.585253 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.585264 4793 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.585276 4793 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.585287 4793 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.585280 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.582648 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.582851 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.585316 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.582862 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.582933 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.583062 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.582681 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.583304 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.583391 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: E0217 20:09:05.583474 4793 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.583587 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.583629 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.583857 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.583954 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.584006 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: E0217 20:09:05.584282 4793 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.584330 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.583425 4793 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.584512 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.585300 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.584354 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.584459 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.584756 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). 
InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.584927 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.585377 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.585451 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: E0217 20:09:05.585494 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 20:09:06.085475325 +0000 UTC m=+21.377173746 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.585573 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.585630 4793 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.585651 4793 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.585669 4793 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.585708 4793 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.585725 4793 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.585742 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: E0217 20:09:05.585822 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 20:09:06.085749532 +0000 UTC m=+21.377447963 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.585844 4793 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.585840 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.585867 4793 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.585887 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.585901 4793 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.585915 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.585930 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.585946 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.585960 4793 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.585975 4793 reconciler_common.go:293] "Volume detached for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.585987 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.585998 4793 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586010 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586023 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586035 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586049 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586061 4793 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" 
Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586074 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586087 4793 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586099 4793 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586112 4793 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586124 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586137 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586149 4793 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586161 4793 reconciler_common.go:293] "Volume 
detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586173 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586185 4793 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586198 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586210 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586223 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586236 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586247 4793 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586258 4793 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586269 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586282 4793 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586294 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586308 4793 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586321 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586341 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586354 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586366 4793 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586378 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586389 4793 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586400 4793 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586411 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586422 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: 
I0217 20:09:05.586433 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586448 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586459 4793 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586470 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586482 4793 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586493 4793 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586505 4793 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586516 4793 reconciler_common.go:293] 
"Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586526 4793 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586538 4793 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586551 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586564 4793 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586578 4793 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586593 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586606 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586619 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586632 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586644 4793 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586656 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586668 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586679 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586710 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" 
DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586723 4793 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586764 4793 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586777 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586788 4793 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586801 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586813 4793 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586823 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 
20:09:05.586834 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586845 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586861 4793 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586873 4793 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586885 4793 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586879 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586898 4793 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.586978 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.587009 4793 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.587079 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.587105 4793 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.587131 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.587155 4793 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc 
kubenswrapper[4793]: I0217 20:09:05.587181 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.587206 4793 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.587230 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.587254 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.587280 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.587306 4793 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.587332 4793 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.587680 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.587841 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.588068 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.588563 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.588595 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.588670 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.588754 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.590468 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.627601 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.628241 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.628413 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.628619 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.635591 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: E0217 20:09:05.640251 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 20:09:05 crc kubenswrapper[4793]: E0217 20:09:05.640289 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 20:09:05 crc kubenswrapper[4793]: E0217 20:09:05.640310 4793 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 20:09:05 crc kubenswrapper[4793]: E0217 20:09:05.640384 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 20:09:06.14036241 +0000 UTC m=+21.432060741 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 20:09:05 crc kubenswrapper[4793]: E0217 20:09:05.642437 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 20:09:05 crc kubenswrapper[4793]: E0217 20:09:05.642470 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 20:09:05 crc kubenswrapper[4793]: E0217 20:09:05.642489 4793 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.642498 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: E0217 20:09:05.642540 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 20:09:06.142522404 +0000 UTC m=+21.434220735 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.647889 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.652522 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.656488 4793 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.660492 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.665501 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.668609 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.674173 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.674238 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:09:05 crc kubenswrapper[4793]: E0217 20:09:05.677622 4793 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.678631 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.682869 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.687421 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cad772-a024-49d5-8204-560587b056eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6bf4bf0fcbe5648621dada82af560bd7eac65ce800c8974c59d777f3dc6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57a18cf839c267df5d9a6d343fbc38dbe195fad6d9fff89663d0958a0492a335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe1dc649f7ca667a9fa57e0f96921b449396310fdec0d23dd39d5695ff3c0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cee203033ea02d7ee023522404b8632f64c130e254a84ee96c37222561e4ac1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.687817 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.687917 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.687848 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " 
pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.688010 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.688031 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.688660 4793 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.688676 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.688709 4793 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.688731 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.688746 4793 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.688757 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.688767 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.688775 4793 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.688783 4793 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.688792 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.688801 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.688810 4793 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 
20:09:05.688820 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.688830 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.688840 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.688851 4793 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.688862 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.688871 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.688880 4793 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.688888 4793 
reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.688897 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.688906 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.688914 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.688925 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.688934 4793 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.688943 4793 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.688951 4793 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.688960 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.688969 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.688978 4793 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.688987 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.688995 4793 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.689004 4793 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.689013 4793 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.689021 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.695601 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.703547 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.706780 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.713007 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.721013 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.724737 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9208cc24-8e8d-4027-af06-291b3b46ee59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f3722f78f83475af38958c1c6a4b0593a4c51be7d8dcbd0d32d3b068dee61b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T20:08:59Z\\\",\\\"message\\\":\\\"W0217 20:08:48.786864 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 20:08:48.787257 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771358928 cert, and key in /tmp/serving-cert-1841715996/serving-signer.crt, /tmp/serving-cert-1841715996/serving-signer.key\\\\nI0217 20:08:49.003324 1 observer_polling.go:159] Starting file observer\\\\nW0217 20:08:49.008336 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 20:08:49.008526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 20:08:49.009879 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1841715996/tls.crt::/tmp/serving-cert-1841715996/tls.key\\\\\\\"\\\\nF0217 20:08:59.398239 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.734373 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cad772-a024-49d5-8204-560587b056eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6bf4bf0fcbe5648621dada82af560bd7eac65ce800c8974c59d777f3dc6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57a18cf839c267df5d9a6d343fbc38dbe195fad6d9fff89663d0958a0492a335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe1dc649f7ca667a9fa57e0f96921b449396310fdec0d23dd39d5695ff3c0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cee203033ea02d7ee023522404b8632f64c130e254a84ee96c37222561e4ac1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.745542 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.755773 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.767719 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.781153 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.782455 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.786388 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.796075 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 20:09:05 crc kubenswrapper[4793]: W0217 20:09:05.796072 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-41fb9eb3a38120d5508e83b4628e70432e16ad3a4c5af51a54692f1f24890ad0 WatchSource:0}: Error finding container 41fb9eb3a38120d5508e83b4628e70432e16ad3a4c5af51a54692f1f24890ad0: Status 404 returned error can't find the container with id 41fb9eb3a38120d5508e83b4628e70432e16ad3a4c5af51a54692f1f24890ad0 Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.796156 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.806612 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.836072 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.857589 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.869613 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.883927 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.910915 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c8d46b-bddf-41fd-bfe9-829889b911c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3405184556f0b16f2610e42360345bb094d79fe3d713f9dec7c6d28a5124851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d8c33489e77add820819b72d67bd6eca73a11e3fd4a99ae4c5c0136f6257c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a42458e37611ca7ec204f13fd4703a5b1e01189a162f0180be137d614a0379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982055260eb2f38b638b6ee268993806f9a0bf999f21b065659a64144394bc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16cf925bf79279e3553a50dd773c3e6a88f3e8e86837c457c43f1b82c6fd291b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.930205 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9208cc24-8e8d-4027-af06-291b3b46ee59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f3722f78f83475af38958c1c6a4b0593a4c51be7d8dcbd0d32d3b068dee61b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T20:08:59Z\\\",\\\"message\\\":\\\"W0217 20:08:48.786864 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 20:08:48.787257 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771358928 cert, and key in /tmp/serving-cert-1841715996/serving-signer.crt, /tmp/serving-cert-1841715996/serving-signer.key\\\\nI0217 20:08:49.003324 1 observer_polling.go:159] Starting file observer\\\\nW0217 20:08:49.008336 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 20:08:49.008526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 20:08:49.009879 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1841715996/tls.crt::/tmp/serving-cert-1841715996/tls.key\\\\\\\"\\\\nF0217 20:08:59.398239 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.941643 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cad772-a024-49d5-8204-560587b056eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6bf4bf0fcbe5648621dada82af560bd7eac65ce800c8974c59d777f3dc6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57a18cf839c267df5d9a6d343fbc38dbe195fad6d9fff89663d0958a0492a335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe1dc649f7ca667a9fa57e0f96921b449396310fdec0d23dd39d5695ff3c0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cee203033ea02d7ee023522404b8632f64c130e254a84ee96c37222561e4ac1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.954797 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.963537 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 20:09:05 crc kubenswrapper[4793]: I0217 20:09:05.974215 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 20:09:06 crc kubenswrapper[4793]: I0217 20:09:06.092397 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 20:09:06 crc kubenswrapper[4793]: I0217 20:09:06.092487 4793 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:09:06 crc kubenswrapper[4793]: I0217 20:09:06.092534 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:09:06 crc kubenswrapper[4793]: E0217 20:09:06.092579 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 20:09:07.092554897 +0000 UTC m=+22.384253208 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 20:09:06 crc kubenswrapper[4793]: E0217 20:09:06.092651 4793 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 20:09:06 crc kubenswrapper[4793]: E0217 20:09:06.092673 4793 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 20:09:06 crc kubenswrapper[4793]: E0217 20:09:06.092735 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 20:09:07.092718172 +0000 UTC m=+22.384416473 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 20:09:06 crc kubenswrapper[4793]: E0217 20:09:06.092780 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-17 20:09:07.092753572 +0000 UTC m=+22.384451923 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 20:09:06 crc kubenswrapper[4793]: I0217 20:09:06.193563 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:09:06 crc kubenswrapper[4793]: I0217 20:09:06.193605 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:09:06 crc kubenswrapper[4793]: E0217 20:09:06.193768 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 20:09:06 crc kubenswrapper[4793]: E0217 20:09:06.193784 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 20:09:06 crc kubenswrapper[4793]: E0217 20:09:06.193795 4793 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 20:09:06 crc kubenswrapper[4793]: E0217 20:09:06.193840 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 20:09:07.193826924 +0000 UTC m=+22.485525235 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 20:09:06 crc kubenswrapper[4793]: E0217 20:09:06.193840 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 20:09:06 crc kubenswrapper[4793]: E0217 20:09:06.193913 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 20:09:06 crc kubenswrapper[4793]: E0217 20:09:06.193936 4793 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 20:09:06 crc kubenswrapper[4793]: E0217 20:09:06.194027 4793 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 20:09:07.194002059 +0000 UTC m=+22.485700400 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 20:09:06 crc kubenswrapper[4793]: I0217 20:09:06.481225 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 03:16:51.205984187 +0000 UTC Feb 17 20:09:06 crc kubenswrapper[4793]: I0217 20:09:06.676612 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"1acd87f6c5816a4a64e52d19360fa2c4ed2c49c4f2fa7bdedb95976705e6c4eb"} Feb 17 20:09:06 crc kubenswrapper[4793]: I0217 20:09:06.679227 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"3a159b0be901199a25ed9008e15370b1a4bf1858a4b20bfb4b6b7736f1f3d42b"} Feb 17 20:09:06 crc kubenswrapper[4793]: I0217 20:09:06.679497 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"41fb9eb3a38120d5508e83b4628e70432e16ad3a4c5af51a54692f1f24890ad0"} Feb 17 20:09:06 crc kubenswrapper[4793]: I0217 20:09:06.682792 4793 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"48cdc9966494a4fb1df14123aaf12571b1d71f9d0349963dd1b3719966abd20c"} Feb 17 20:09:06 crc kubenswrapper[4793]: I0217 20:09:06.682850 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"ad2071f67c2f1c343d9ad571fe62df1e820675df683aa583fa2563de751f2897"} Feb 17 20:09:06 crc kubenswrapper[4793]: I0217 20:09:06.682883 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"62c8a942e9db7e3ed5d6105459a7b9b12d2f67316010b83e7d1183d66da3a9b9"} Feb 17 20:09:06 crc kubenswrapper[4793]: E0217 20:09:06.696736 4793 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Feb 17 20:09:06 crc kubenswrapper[4793]: I0217 20:09:06.699286 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:06Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:06 crc kubenswrapper[4793]: I0217 20:09:06.719882 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:06Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:06 crc kubenswrapper[4793]: I0217 20:09:06.736519 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:06Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:06 crc kubenswrapper[4793]: I0217 20:09:06.758076 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cad772-a024-49d5-8204-560587b056eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6bf4bf0fcbe5648621dada82af560bd7eac65ce800c8974c59d777f3dc6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57a18cf839c267df5d9a6d343fbc38dbe195fad6d9fff89663d0958a0492a335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe1dc649f7ca667a9fa57e0f96921b449396310fdec0d23dd39d5695ff3c0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cee203033ea02d7ee023522404b8632f64c130e254a84ee96c37222561e4ac1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:06Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:06 crc kubenswrapper[4793]: I0217 20:09:06.772807 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a159b0be901199a25ed9008e15370b1a4bf1858a4b20bfb4b6b7736f1f3d42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:06Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:06 crc kubenswrapper[4793]: I0217 20:09:06.789202 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:06Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:06 crc kubenswrapper[4793]: I0217 20:09:06.805600 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:06Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:06 crc kubenswrapper[4793]: I0217 20:09:06.832359 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c8d46b-bddf-41fd-bfe9-829889b911c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3405184556f0b16f2610e42360345bb094d79fe3d713f9dec7c6d28a5124851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d8c33489e77add820819b72d67bd6eca73a11e3fd4a99ae4c5c0136f6257c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a42458e37611ca7ec204f13fd4703a5b1e01189a162f0180be137d614a0379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982055260eb2f38b638b6ee268993806f9a0bf999f21b065659a64144394bc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16cf925bf79279e3553a50dd773c3e6a88f3e8e86837c457c43f1b82c6fd291b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:06Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:06 crc kubenswrapper[4793]: I0217 20:09:06.846719 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9208cc24-8e8d-4027-af06-291b3b46ee59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f3722f78f83475af38958c1c6a4b0593a4c51be7d8dcbd0d32d3b068dee61b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T20:08:59Z\\\",\\\"message\\\":\\\"W0217 20:08:48.786864 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 20:08:48.787257 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771358928 cert, and key in /tmp/serving-cert-1841715996/serving-signer.crt, /tmp/serving-cert-1841715996/serving-signer.key\\\\nI0217 20:08:49.003324 1 observer_polling.go:159] Starting file observer\\\\nW0217 20:08:49.008336 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 20:08:49.008526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 20:08:49.009879 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1841715996/tls.crt::/tmp/serving-cert-1841715996/tls.key\\\\\\\"\\\\nF0217 20:08:59.398239 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:06Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:06 crc kubenswrapper[4793]: I0217 20:09:06.860619 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:06Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:06 crc kubenswrapper[4793]: I0217 20:09:06.878349 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:06Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:06 crc kubenswrapper[4793]: I0217 20:09:06.900177 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:06Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:06 crc kubenswrapper[4793]: I0217 20:09:06.915901 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a159b0be901199a25ed9008e15370b1a4bf1858a4b20bfb4b6b7736f1f3d42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:06Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:06 crc kubenswrapper[4793]: I0217 20:09:06.932214 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:06Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:06 crc kubenswrapper[4793]: I0217 20:09:06.945350 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48cdc9966494a4fb1df14123aaf12571b1d71f9d0349963dd1b3719966abd20c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2071f67c2f1c343d9ad571fe62df1e820675df683aa583fa2563de751f2897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:06Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:06 crc kubenswrapper[4793]: I0217 20:09:06.971888 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c8d46b-bddf-41fd-bfe9-829889b911c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3405184556f0b16f2610e42360345bb094d79fe3d713f9dec7c6d28a5124851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d8c33489e77add820819b72d67bd6eca73a11e3fd4a99ae4c5c0136f6257c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a42458e37611ca7ec204f13fd4703a5b1e01189a162f0180be137d614a0379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982055260eb2f38b638b6ee268993806f9a0bf999f21b065659a64144394bc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16cf925bf79279e3553a50dd773c3e6a88f3e8e86837c457c43f1b82c6fd291b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:06Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:06 crc kubenswrapper[4793]: I0217 20:09:06.988542 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9208cc24-8e8d-4027-af06-291b3b46ee59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f3722f78f83475af38958c1c6a4b0593a4c51be7d8dcbd0d32d3b068dee61b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T20:08:59Z\\\",\\\"message\\\":\\\"W0217 20:08:48.786864 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 20:08:48.787257 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771358928 cert, and key in /tmp/serving-cert-1841715996/serving-signer.crt, /tmp/serving-cert-1841715996/serving-signer.key\\\\nI0217 20:08:49.003324 1 observer_polling.go:159] Starting file observer\\\\nW0217 20:08:49.008336 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 20:08:49.008526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 20:08:49.009879 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1841715996/tls.crt::/tmp/serving-cert-1841715996/tls.key\\\\\\\"\\\\nF0217 20:08:59.398239 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:06Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.001684 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cad772-a024-49d5-8204-560587b056eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6bf4bf0fcbe5648621dada82af560bd7eac65ce800c8974c59d777f3dc6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57a18cf839c267df5d9a6d343fbc38dbe195fad6d9fff89663d0958a0492a335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe1dc649f7ca667a9fa57e0f96921b449396310fdec0d23dd39d5695ff3c0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cee203033ea02d7ee023522404b8632f64c130e254a84ee96c37222561e4ac1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:06Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.101394 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.101508 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.101531 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:09:07 crc kubenswrapper[4793]: E0217 20:09:07.101611 4793 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 20:09:07 crc kubenswrapper[4793]: E0217 20:09:07.101646 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 20:09:09.101614985 +0000 UTC m=+24.393313296 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 20:09:07 crc kubenswrapper[4793]: E0217 20:09:07.101770 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 20:09:09.101753148 +0000 UTC m=+24.393451599 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 20:09:07 crc kubenswrapper[4793]: E0217 20:09:07.101778 4793 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 20:09:07 crc kubenswrapper[4793]: E0217 20:09:07.101889 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 20:09:09.10185278 +0000 UTC m=+24.393551121 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.202851 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.202912 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:09:07 crc kubenswrapper[4793]: E0217 20:09:07.203141 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 20:09:07 crc kubenswrapper[4793]: E0217 20:09:07.203231 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 20:09:07 crc kubenswrapper[4793]: E0217 20:09:07.203151 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 20:09:07 crc kubenswrapper[4793]: E0217 20:09:07.203262 4793 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 20:09:07 crc kubenswrapper[4793]: E0217 20:09:07.203285 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 20:09:07 crc kubenswrapper[4793]: E0217 20:09:07.203301 4793 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 20:09:07 crc kubenswrapper[4793]: E0217 
20:09:07.203575 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 20:09:09.203536678 +0000 UTC m=+24.495235059 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 20:09:07 crc kubenswrapper[4793]: E0217 20:09:07.203619 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 20:09:09.203599739 +0000 UTC m=+24.495298180 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.481657 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 20:15:23.252938099 +0000 UTC Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.538526 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.538605 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:09:07 crc kubenswrapper[4793]: E0217 20:09:07.538661 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 20:09:07 crc kubenswrapper[4793]: E0217 20:09:07.538764 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.538844 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:09:07 crc kubenswrapper[4793]: E0217 20:09:07.538897 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.551894 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.552390 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.553598 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.554395 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.555578 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.556163 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.556789 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.557761 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.558447 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.559384 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.559964 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.561007 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.561508 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.562105 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.563013 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.563547 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.564510 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.564990 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.565534 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.566472 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.566977 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.567951 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.568493 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.569813 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.570307 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.571099 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.572303 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.572968 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.573995 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.574520 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.575377 4793 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.575495 4793 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.577082 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.578061 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.578439 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.579906 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.580533 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.581387 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.582074 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.583035 4793 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.583525 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.584464 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.585121 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.586113 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.586550 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.587416 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.587941 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.588987 4793 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.589432 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.590264 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.590784 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.591668 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.592238 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 17 20:09:07 crc kubenswrapper[4793]: I0217 20:09:07.592684 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 17 20:09:08 crc kubenswrapper[4793]: I0217 20:09:08.474605 4793 csr.go:261] certificate signing request csr-8prj5 is approved, waiting to be issued Feb 17 20:09:08 crc kubenswrapper[4793]: I0217 20:09:08.481935 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 
05:53:03 +0000 UTC, rotation deadline is 2026-01-01 02:44:55.067870374 +0000 UTC Feb 17 20:09:08 crc kubenswrapper[4793]: I0217 20:09:08.499661 4793 csr.go:257] certificate signing request csr-8prj5 is issued Feb 17 20:09:08 crc kubenswrapper[4793]: I0217 20:09:08.689747 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"9bf61cc2856b360d3341089d4472c27f8dcf6c7c55831eea511dcf5a931b7a57"} Feb 17 20:09:08 crc kubenswrapper[4793]: I0217 20:09:08.710357 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a159b0be901199a25ed9008e15370b1a4bf1858a4b20bfb4b6b7736f1f3d42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:08Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:08 crc kubenswrapper[4793]: I0217 20:09:08.724308 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:08Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:08 crc kubenswrapper[4793]: I0217 20:09:08.737238 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48cdc9966494a4fb1df14123aaf12571b1d71f9d0349963dd1b3719966abd20c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2071f67c2f1c343d9ad571fe62df1e820675df683aa583fa2563de751f2897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:08Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:08 crc kubenswrapper[4793]: I0217 20:09:08.754225 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c8d46b-bddf-41fd-bfe9-829889b911c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3405184556f0b16f2610e42360345bb094d79fe3d713f9dec7c6d28a5124851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d8c33489e77add820819b72d67bd6eca73a11e3fd4a99ae4c5c0136f6257c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a42458e37611ca7ec204f13fd4703a5b1e01189a162f0180be137d614a0379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982055260eb2f38b638b6ee268993806f9a0bf999f21b065659a64144394bc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16cf925bf79279e3553a50dd773c3e6a88f3e8e86837c457c43f1b82c6fd291b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:08Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:08 crc kubenswrapper[4793]: I0217 20:09:08.768937 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9208cc24-8e8d-4027-af06-291b3b46ee59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f3722f78f83475af38958c1c6a4b0593a4c51be7d8dcbd0d32d3b068dee61b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T20:08:59Z\\\",\\\"message\\\":\\\"W0217 20:08:48.786864 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 20:08:48.787257 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771358928 cert, and key in /tmp/serving-cert-1841715996/serving-signer.crt, /tmp/serving-cert-1841715996/serving-signer.key\\\\nI0217 20:08:49.003324 1 observer_polling.go:159] Starting file observer\\\\nW0217 20:08:49.008336 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 20:08:49.008526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 20:08:49.009879 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1841715996/tls.crt::/tmp/serving-cert-1841715996/tls.key\\\\\\\"\\\\nF0217 20:08:59.398239 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:08Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:08 crc kubenswrapper[4793]: I0217 20:09:08.781642 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cad772-a024-49d5-8204-560587b056eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6bf4bf0fcbe5648621dada82af560bd7eac65ce800c8974c59d777f3dc6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57a18cf839c267df5d9a6d343fbc38dbe195fad6d9fff89663d0958a0492a335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe1dc649f7ca667a9fa57e0f96921b449396310fdec0d23dd39d5695ff3c0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cee203033ea02d7ee023522404b8632f64c130e254a84ee96c37222561e4ac1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:08Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:08 crc kubenswrapper[4793]: I0217 20:09:08.798507 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:08Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:08 crc kubenswrapper[4793]: I0217 20:09:08.820155 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf61cc2856b360d3341089d4472c27f8dcf6c7c55831eea511dcf5a931b7a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T20:09:08Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:08 crc kubenswrapper[4793]: I0217 20:09:08.838871 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:08Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:08 crc kubenswrapper[4793]: I0217 20:09:08.916118 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-9pkqd"] Feb 17 20:09:08 crc kubenswrapper[4793]: I0217 20:09:08.916466 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-9pkqd" Feb 17 20:09:08 crc kubenswrapper[4793]: I0217 20:09:08.918774 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-jnwtf"] Feb 17 20:09:08 crc kubenswrapper[4793]: I0217 20:09:08.919230 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" Feb 17 20:09:08 crc kubenswrapper[4793]: W0217 20:09:08.919746 4793 reflector.go:561] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": failed to list *v1.Secret: secrets "node-resolver-dockercfg-kz9s7" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Feb 17 20:09:08 crc kubenswrapper[4793]: E0217 20:09:08.919785 4793 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"node-resolver-dockercfg-kz9s7\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"node-resolver-dockercfg-kz9s7\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 20:09:08 crc kubenswrapper[4793]: W0217 20:09:08.919837 4793 reflector.go:561] object-"openshift-dns"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Feb 17 20:09:08 crc kubenswrapper[4793]: E0217 20:09:08.919854 4793 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 20:09:08 crc kubenswrapper[4793]: W0217 20:09:08.919882 4793 reflector.go:561] object-"openshift-dns"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" 
is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Feb 17 20:09:08 crc kubenswrapper[4793]: E0217 20:09:08.919923 4793 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 20:09:08 crc kubenswrapper[4793]: I0217 20:09:08.923656 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 17 20:09:08 crc kubenswrapper[4793]: I0217 20:09:08.924094 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 17 20:09:08 crc kubenswrapper[4793]: I0217 20:09:08.924514 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 17 20:09:08 crc kubenswrapper[4793]: I0217 20:09:08.924788 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 17 20:09:08 crc kubenswrapper[4793]: I0217 20:09:08.924845 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 17 20:09:08 crc kubenswrapper[4793]: I0217 20:09:08.953226 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48cdc9966494a4fb1df14123aaf12571b1d71f9d0349963dd1b3719966abd20c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2071f67c2f1c343d9ad571fe62df1e820675df683aa583fa2563de751f2897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:08Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:08 crc kubenswrapper[4793]: I0217 20:09:08.969866 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c8d46b-bddf-41fd-bfe9-829889b911c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3405184556f0b16f2610e42360345bb094d79fe3d713f9dec7c6d28a5124851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d8c33489e77add820819b72d67bd6eca73a11e3fd4a99ae4c5c0136f6257c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a42458e37611ca7ec204f13fd4703a5b1e01189a162f0180be137d614a0379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982055260eb2f38b638b6ee268993806f9a0bf999f21b065659a64144394bc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16cf925bf79279e3553a50dd773c3e6a88f3e8e86837c457c43f1b82c6fd291b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:08Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:08 crc kubenswrapper[4793]: I0217 20:09:08.986572 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9208cc24-8e8d-4027-af06-291b3b46ee59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f3722f78f83475af38958c1c6a4b0593a4c51be7d8dcbd0d32d3b068dee61b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T20:08:59Z\\\",\\\"message\\\":\\\"W0217 20:08:48.786864 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 20:08:48.787257 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771358928 cert, and key in /tmp/serving-cert-1841715996/serving-signer.crt, /tmp/serving-cert-1841715996/serving-signer.key\\\\nI0217 20:08:49.003324 1 observer_polling.go:159] Starting file observer\\\\nW0217 20:08:49.008336 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 20:08:49.008526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 20:08:49.009879 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1841715996/tls.crt::/tmp/serving-cert-1841715996/tls.key\\\\\\\"\\\\nF0217 20:08:59.398239 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:08Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.007197 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cad772-a024-49d5-8204-560587b056eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6bf4bf0fcbe5648621dada82af560bd7eac65ce800c8974c59d777f3dc6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57a18cf839c267df5d9a6d343fbc38dbe195fad6d9fff89663d0958a0492a335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe1dc649f7ca667a9fa57e0f96921b449396310fdec0d23dd39d5695ff3c0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cee203033ea02d7ee023522404b8632f64c130e254a84ee96c37222561e4ac1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.018573 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7a786034-a3c6-4693-965a-3bd39bce6caa-proxy-tls\") pod \"machine-config-daemon-jnwtf\" (UID: \"7a786034-a3c6-4693-965a-3bd39bce6caa\") " pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.018777 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/db67d891-29db-4ee4-a70c-624cb9af6677-hosts-file\") pod \"node-resolver-9pkqd\" (UID: \"db67d891-29db-4ee4-a70c-624cb9af6677\") " pod="openshift-dns/node-resolver-9pkqd" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.018914 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7a786034-a3c6-4693-965a-3bd39bce6caa-rootfs\") pod 
\"machine-config-daemon-jnwtf\" (UID: \"7a786034-a3c6-4693-965a-3bd39bce6caa\") " pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.019017 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7a786034-a3c6-4693-965a-3bd39bce6caa-mcd-auth-proxy-config\") pod \"machine-config-daemon-jnwtf\" (UID: \"7a786034-a3c6-4693-965a-3bd39bce6caa\") " pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.019122 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f9fm\" (UniqueName: \"kubernetes.io/projected/7a786034-a3c6-4693-965a-3bd39bce6caa-kube-api-access-4f9fm\") pod \"machine-config-daemon-jnwtf\" (UID: \"7a786034-a3c6-4693-965a-3bd39bce6caa\") " pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.019230 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv9g8\" (UniqueName: \"kubernetes.io/projected/db67d891-29db-4ee4-a70c-624cb9af6677-kube-api-access-sv9g8\") pod \"node-resolver-9pkqd\" (UID: \"db67d891-29db-4ee4-a70c-624cb9af6677\") " pod="openshift-dns/node-resolver-9pkqd" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.026479 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a159b0be901199a25ed9008e15370b1a4bf1858a4b20bfb4b6b7736f1f3d42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.058214 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.075592 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9pkqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db67d891-29db-4ee4-a70c-624cb9af6677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv9g8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9pkqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.086279 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.096724 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf61cc2856b360d3341089d4472c27f8dcf6c7c55831eea511dcf5a931b7a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T20:09:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.107222 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.120263 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.120340 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7a786034-a3c6-4693-965a-3bd39bce6caa-rootfs\") pod \"machine-config-daemon-jnwtf\" (UID: \"7a786034-a3c6-4693-965a-3bd39bce6caa\") " pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.120364 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7a786034-a3c6-4693-965a-3bd39bce6caa-mcd-auth-proxy-config\") pod \"machine-config-daemon-jnwtf\" (UID: \"7a786034-a3c6-4693-965a-3bd39bce6caa\") " pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.120378 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4f9fm\" (UniqueName: \"kubernetes.io/projected/7a786034-a3c6-4693-965a-3bd39bce6caa-kube-api-access-4f9fm\") pod \"machine-config-daemon-jnwtf\" (UID: \"7a786034-a3c6-4693-965a-3bd39bce6caa\") " pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.120397 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.120415 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sv9g8\" (UniqueName: \"kubernetes.io/projected/db67d891-29db-4ee4-a70c-624cb9af6677-kube-api-access-sv9g8\") pod \"node-resolver-9pkqd\" (UID: \"db67d891-29db-4ee4-a70c-624cb9af6677\") " pod="openshift-dns/node-resolver-9pkqd" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.120441 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 
20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.120457 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7a786034-a3c6-4693-965a-3bd39bce6caa-proxy-tls\") pod \"machine-config-daemon-jnwtf\" (UID: \"7a786034-a3c6-4693-965a-3bd39bce6caa\") " pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.120473 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/db67d891-29db-4ee4-a70c-624cb9af6677-hosts-file\") pod \"node-resolver-9pkqd\" (UID: \"db67d891-29db-4ee4-a70c-624cb9af6677\") " pod="openshift-dns/node-resolver-9pkqd" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.120541 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/db67d891-29db-4ee4-a70c-624cb9af6677-hosts-file\") pod \"node-resolver-9pkqd\" (UID: \"db67d891-29db-4ee4-a70c-624cb9af6677\") " pod="openshift-dns/node-resolver-9pkqd" Feb 17 20:09:09 crc kubenswrapper[4793]: E0217 20:09:09.120823 4793 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 20:09:09 crc kubenswrapper[4793]: E0217 20:09:09.120866 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 20:09:13.120855166 +0000 UTC m=+28.412553477 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 20:09:09 crc kubenswrapper[4793]: E0217 20:09:09.121004 4793 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 20:09:09 crc kubenswrapper[4793]: E0217 20:09:09.121033 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 20:09:13.12102595 +0000 UTC m=+28.412724261 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.121175 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7a786034-a3c6-4693-965a-3bd39bce6caa-mcd-auth-proxy-config\") pod \"machine-config-daemon-jnwtf\" (UID: \"7a786034-a3c6-4693-965a-3bd39bce6caa\") " pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.121256 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7a786034-a3c6-4693-965a-3bd39bce6caa-rootfs\") pod \"machine-config-daemon-jnwtf\" 
(UID: \"7a786034-a3c6-4693-965a-3bd39bce6caa\") " pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" Feb 17 20:09:09 crc kubenswrapper[4793]: E0217 20:09:09.121704 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 20:09:13.121667406 +0000 UTC m=+28.413365767 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.131670 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7a786034-a3c6-4693-965a-3bd39bce6caa-proxy-tls\") pod \"machine-config-daemon-jnwtf\" (UID: \"7a786034-a3c6-4693-965a-3bd39bce6caa\") " pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.131650 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a159b0be901199a25ed9008e15370b1a4bf1858a4b20bfb4b6b7736f1f3d42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.140322 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4f9fm\" (UniqueName: \"kubernetes.io/projected/7a786034-a3c6-4693-965a-3bd39bce6caa-kube-api-access-4f9fm\") pod \"machine-config-daemon-jnwtf\" (UID: \"7a786034-a3c6-4693-965a-3bd39bce6caa\") " pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.145503 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.163333 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48cdc9966494a4fb1df14123aaf12571b1d71f9d0349963dd1b3719966abd20c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2071f67c2f1c343d9ad571fe62df1e820675df683aa583fa2563de751f2897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.181565 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c8d46b-bddf-41fd-bfe9-829889b911c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3405184556f0b16f2610e42360345bb094d79fe3d713f9dec7c6d28a5124851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d8c33489e77add820819b72d67bd6eca73a11e3fd4a99ae4c5c0136f6257c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a42458e37611ca7ec204f13fd4703a5b1e01189a162f0180be137d614a0379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982055260eb2f38b638b6ee268993806f9a0bf999f21b065659a64144394bc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16cf925bf79279e3553a50dd773c3e6a88f3e8e86837c457c43f1b82c6fd291b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.197308 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9208cc24-8e8d-4027-af06-291b3b46ee59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f3722f78f83475af38958c1c6a4b0593a4c51be7d8dcbd0d32d3b068dee61b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T20:08:59Z\\\",\\\"message\\\":\\\"W0217 20:08:48.786864 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 20:08:48.787257 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771358928 cert, and key in /tmp/serving-cert-1841715996/serving-signer.crt, /tmp/serving-cert-1841715996/serving-signer.key\\\\nI0217 20:08:49.003324 1 observer_polling.go:159] Starting file observer\\\\nW0217 20:08:49.008336 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 20:08:49.008526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 20:08:49.009879 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1841715996/tls.crt::/tmp/serving-cert-1841715996/tls.key\\\\\\\"\\\\nF0217 20:08:59.398239 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.208918 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cad772-a024-49d5-8204-560587b056eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6bf4bf0fcbe5648621dada82af560bd7eac65ce800c8974c59d777f3dc6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57a18cf839c267df5d9a6d343fbc38dbe195fad6d9fff89663d0958a0492a335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe1dc649f7ca667a9fa57e0f96921b449396310fdec0d23dd39d5695ff3c0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cee203033ea02d7ee023522404b8632f64c130e254a84ee96c37222561e4ac1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.221668 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.221718 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:09:09 crc kubenswrapper[4793]: E0217 20:09:09.221842 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not 
registered Feb 17 20:09:09 crc kubenswrapper[4793]: E0217 20:09:09.221858 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 20:09:09 crc kubenswrapper[4793]: E0217 20:09:09.221869 4793 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 20:09:09 crc kubenswrapper[4793]: E0217 20:09:09.221908 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 20:09:13.221895267 +0000 UTC m=+28.513593578 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 20:09:09 crc kubenswrapper[4793]: E0217 20:09:09.222181 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 20:09:09 crc kubenswrapper[4793]: E0217 20:09:09.222195 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 20:09:09 crc kubenswrapper[4793]: E0217 20:09:09.222202 4793 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 20:09:09 crc kubenswrapper[4793]: E0217 20:09:09.222223 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 20:09:13.222216975 +0000 UTC m=+28.513915286 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.223043 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9pkqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db67d891-29db-4ee4-a70c-624cb9af6677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv9g8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9pkqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.249132 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.253187 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:09 crc kubenswrapper[4793]: W0217 20:09:09.262736 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a786034_a3c6_4693_965a_3bd39bce6caa.slice/crio-8cca48b14cb79d6de40574e8f5932ee56fdbd52708d36d51ecad96f20c8663a9 WatchSource:0}: Error finding container 8cca48b14cb79d6de40574e8f5932ee56fdbd52708d36d51ecad96f20c8663a9: Status 404 returned error can't find the container with id 8cca48b14cb79d6de40574e8f5932ee56fdbd52708d36d51ecad96f20c8663a9 Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.271111 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf61cc2856b360d3341089d4472c27f8dcf6c7c55831eea511dcf5a931b7a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T20:09:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.292049 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.312328 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a786034-a3c6-4693-965a-3bd39bce6caa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.333187 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-kpl4r"] Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.333726 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-kpl4r" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.335228 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-n2fmv"] Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.335919 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-ztwxl"] Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.336323 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-ztwxl" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.336641 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.338281 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.338400 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.338643 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.338815 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.338898 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.339551 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.339671 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.339759 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.339855 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.343505 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.343606 4793 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.348577 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.348797 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.348853 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.358661 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cad772-a024-49d5-8204-560587b056eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6bf4bf0fcbe5648621dada82af560bd7eac65ce800c8974c59d777f3dc6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57a18cf839c267df5d9a6d343fbc38dbe195fad6d9fff89663d0958a0492a335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe1dc649f7ca667a9fa57e0f96921b449396310fdec0d23dd39d5695ff3c0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cee203033ea02d7ee023522404b8632f64c130e254a84ee96c37222561e4ac1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.370422 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.381553 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48cdc9966494a4fb1df14123aaf12571b1d71f9d0349963dd1b3719966abd20c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2071f67c2f1c343d9ad571fe62df1e820675df683aa583fa2563de751f2897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.396515 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kpl4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61219b87-834d-490d-bf8e-1657a4081739\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\
\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kpl4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.410483 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.422866 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.423182 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-log-socket\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.423207 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b2b13cca-b775-4fc5-8ad8-41bfd70c857c-host-run-k8s-cni-cncf-io\") pod \"multus-ztwxl\" (UID: \"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\") " pod="openshift-multus/multus-ztwxl" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.423225 4793 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b2b13cca-b775-4fc5-8ad8-41bfd70c857c-multus-conf-dir\") pod \"multus-ztwxl\" (UID: \"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\") " pod="openshift-multus/multus-ztwxl" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.423241 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b2b13cca-b775-4fc5-8ad8-41bfd70c857c-host-run-multus-certs\") pod \"multus-ztwxl\" (UID: \"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\") " pod="openshift-multus/multus-ztwxl" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.423258 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/61219b87-834d-490d-bf8e-1657a4081739-tuning-conf-dir\") pod \"multus-additional-cni-plugins-kpl4r\" (UID: \"61219b87-834d-490d-bf8e-1657a4081739\") " pod="openshift-multus/multus-additional-cni-plugins-kpl4r" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.423275 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-var-lib-openvswitch\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.423290 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-node-log\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.423355 4793 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4qp9\" (UniqueName: \"kubernetes.io/projected/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-kube-api-access-w4qp9\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.423373 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-env-overrides\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.423426 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/61219b87-834d-490d-bf8e-1657a4081739-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-kpl4r\" (UID: \"61219b87-834d-490d-bf8e-1657a4081739\") " pod="openshift-multus/multus-additional-cni-plugins-kpl4r" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.423441 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b2b13cca-b775-4fc5-8ad8-41bfd70c857c-cnibin\") pod \"multus-ztwxl\" (UID: \"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\") " pod="openshift-multus/multus-ztwxl" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.423454 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b2b13cca-b775-4fc5-8ad8-41bfd70c857c-hostroot\") pod \"multus-ztwxl\" (UID: \"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\") " pod="openshift-multus/multus-ztwxl" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.423503 4793 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-systemd-units\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.423521 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-host-cni-netd\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.423536 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/61219b87-834d-490d-bf8e-1657a4081739-cnibin\") pod \"multus-additional-cni-plugins-kpl4r\" (UID: \"61219b87-834d-490d-bf8e-1657a4081739\") " pod="openshift-multus/multus-additional-cni-plugins-kpl4r" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.423580 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b2b13cca-b775-4fc5-8ad8-41bfd70c857c-etc-kubernetes\") pod \"multus-ztwxl\" (UID: \"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\") " pod="openshift-multus/multus-ztwxl" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.423595 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-host-kubelet\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.423611 4793 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-run-ovn\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.423703 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b2b13cca-b775-4fc5-8ad8-41bfd70c857c-system-cni-dir\") pod \"multus-ztwxl\" (UID: \"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\") " pod="openshift-multus/multus-ztwxl" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.423732 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-ovnkube-script-lib\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.423752 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b2b13cca-b775-4fc5-8ad8-41bfd70c857c-host-run-netns\") pod \"multus-ztwxl\" (UID: \"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\") " pod="openshift-multus/multus-ztwxl" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.423771 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-run-systemd\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.423788 4793 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.423861 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/61219b87-834d-490d-bf8e-1657a4081739-cni-binary-copy\") pod \"multus-additional-cni-plugins-kpl4r\" (UID: \"61219b87-834d-490d-bf8e-1657a4081739\") " pod="openshift-multus/multus-additional-cni-plugins-kpl4r" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.423941 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-host-run-ovn-kubernetes\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.423964 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b2b13cca-b775-4fc5-8ad8-41bfd70c857c-multus-cni-dir\") pod \"multus-ztwxl\" (UID: \"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\") " pod="openshift-multus/multus-ztwxl" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.423979 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-etc-openvswitch\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.423993 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-run-openvswitch\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.424007 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b2b13cca-b775-4fc5-8ad8-41bfd70c857c-os-release\") pod \"multus-ztwxl\" (UID: \"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\") " pod="openshift-multus/multus-ztwxl" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.424020 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-host-run-netns\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.424036 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/61219b87-834d-490d-bf8e-1657a4081739-system-cni-dir\") pod \"multus-additional-cni-plugins-kpl4r\" (UID: \"61219b87-834d-490d-bf8e-1657a4081739\") " pod="openshift-multus/multus-additional-cni-plugins-kpl4r" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.424050 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-host-cni-bin\") pod \"ovnkube-node-n2fmv\" (UID: 
\"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.424063 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-ovn-node-metrics-cert\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.424086 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b2b13cca-b775-4fc5-8ad8-41bfd70c857c-host-var-lib-cni-bin\") pod \"multus-ztwxl\" (UID: \"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\") " pod="openshift-multus/multus-ztwxl" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.424101 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b2b13cca-b775-4fc5-8ad8-41bfd70c857c-multus-daemon-config\") pod \"multus-ztwxl\" (UID: \"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\") " pod="openshift-multus/multus-ztwxl" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.424113 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b2b13cca-b775-4fc5-8ad8-41bfd70c857c-cni-binary-copy\") pod \"multus-ztwxl\" (UID: \"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\") " pod="openshift-multus/multus-ztwxl" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.424138 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ndkg\" (UniqueName: \"kubernetes.io/projected/b2b13cca-b775-4fc5-8ad8-41bfd70c857c-kube-api-access-8ndkg\") pod \"multus-ztwxl\" (UID: 
\"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\") " pod="openshift-multus/multus-ztwxl" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.424152 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-host-slash\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.424180 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/61219b87-834d-490d-bf8e-1657a4081739-os-release\") pod \"multus-additional-cni-plugins-kpl4r\" (UID: \"61219b87-834d-490d-bf8e-1657a4081739\") " pod="openshift-multus/multus-additional-cni-plugins-kpl4r" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.424198 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b2b13cca-b775-4fc5-8ad8-41bfd70c857c-host-var-lib-cni-multus\") pod \"multus-ztwxl\" (UID: \"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\") " pod="openshift-multus/multus-ztwxl" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.424211 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b2b13cca-b775-4fc5-8ad8-41bfd70c857c-host-var-lib-kubelet\") pod \"multus-ztwxl\" (UID: \"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\") " pod="openshift-multus/multus-ztwxl" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.424225 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d6vm\" (UniqueName: \"kubernetes.io/projected/61219b87-834d-490d-bf8e-1657a4081739-kube-api-access-4d6vm\") pod 
\"multus-additional-cni-plugins-kpl4r\" (UID: \"61219b87-834d-490d-bf8e-1657a4081739\") " pod="openshift-multus/multus-additional-cni-plugins-kpl4r" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.424248 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b2b13cca-b775-4fc5-8ad8-41bfd70c857c-multus-socket-dir-parent\") pod \"multus-ztwxl\" (UID: \"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\") " pod="openshift-multus/multus-ztwxl" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.424263 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-ovnkube-config\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.461427 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c8d46b-bddf-41fd-bfe9-829889b911c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3405184556f0b16f2610e42360345bb094d79fe3d713f9dec7c6d28a5124851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d8c33489e77add820819b72d67bd6eca73a11e3fd4a99ae4c5c0136f6257c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a42458e37611ca7ec204f13fd4703a5b1e01189a162f0180be137d614a0379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982055260eb2f38b638b6ee268993806f9a0bf999f21b065659a64144394bc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16cf925bf79279e3553a50dd773c3e6a88f3e8e86837c457c43f1b82c6fd291b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.482320 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 05:59:22.839356866 +0000 UTC Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.490569 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9208cc24-8e8d-4027-af06-291b3b46ee59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f3722f78f83475af38958c1c6a4b0593a4c51be7d8dcbd0d32d3b068dee61b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T20:08:59Z\\\",\\\"message\\\":\\\"W0217 20:08:48.786864 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 20:08:48.787257 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771358928 cert, and key in /tmp/serving-cert-1841715996/serving-signer.crt, /tmp/serving-cert-1841715996/serving-signer.key\\\\nI0217 20:08:49.003324 1 observer_polling.go:159] Starting file observer\\\\nW0217 20:08:49.008336 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 20:08:49.008526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 20:08:49.009879 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1841715996/tls.crt::/tmp/serving-cert-1841715996/tls.key\\\\\\\"\\\\nF0217 20:08:59.398239 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.501106 4793 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-17 20:04:08 +0000 UTC, rotation deadline is 2026-11-05 19:54:53.42395708 +0000 UTC Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.501348 4793 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6263h45m43.922611923s for next certificate rotation Feb 17 
20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.525048 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b2b13cca-b775-4fc5-8ad8-41bfd70c857c-cni-binary-copy\") pod \"multus-ztwxl\" (UID: \"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\") " pod="openshift-multus/multus-ztwxl" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.525249 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ndkg\" (UniqueName: \"kubernetes.io/projected/b2b13cca-b775-4fc5-8ad8-41bfd70c857c-kube-api-access-8ndkg\") pod \"multus-ztwxl\" (UID: \"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\") " pod="openshift-multus/multus-ztwxl" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.525323 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-host-slash\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.525412 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b2b13cca-b775-4fc5-8ad8-41bfd70c857c-host-var-lib-kubelet\") pod \"multus-ztwxl\" (UID: \"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\") " pod="openshift-multus/multus-ztwxl" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.525461 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b2b13cca-b775-4fc5-8ad8-41bfd70c857c-host-var-lib-kubelet\") pod \"multus-ztwxl\" (UID: \"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\") " pod="openshift-multus/multus-ztwxl" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.525480 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/61219b87-834d-490d-bf8e-1657a4081739-os-release\") pod \"multus-additional-cni-plugins-kpl4r\" (UID: \"61219b87-834d-490d-bf8e-1657a4081739\") " pod="openshift-multus/multus-additional-cni-plugins-kpl4r" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.525562 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b2b13cca-b775-4fc5-8ad8-41bfd70c857c-host-var-lib-cni-multus\") pod \"multus-ztwxl\" (UID: \"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\") " pod="openshift-multus/multus-ztwxl" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.525579 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-ovnkube-config\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.525427 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-host-slash\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.525601 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4d6vm\" (UniqueName: \"kubernetes.io/projected/61219b87-834d-490d-bf8e-1657a4081739-kube-api-access-4d6vm\") pod \"multus-additional-cni-plugins-kpl4r\" (UID: \"61219b87-834d-490d-bf8e-1657a4081739\") " pod="openshift-multus/multus-additional-cni-plugins-kpl4r" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.525631 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/b2b13cca-b775-4fc5-8ad8-41bfd70c857c-host-var-lib-cni-multus\") pod \"multus-ztwxl\" (UID: \"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\") " pod="openshift-multus/multus-ztwxl" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.525712 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b2b13cca-b775-4fc5-8ad8-41bfd70c857c-multus-socket-dir-parent\") pod \"multus-ztwxl\" (UID: \"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\") " pod="openshift-multus/multus-ztwxl" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.525731 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b2b13cca-b775-4fc5-8ad8-41bfd70c857c-host-run-multus-certs\") pod \"multus-ztwxl\" (UID: \"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\") " pod="openshift-multus/multus-ztwxl" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.525746 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-log-socket\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.525761 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b2b13cca-b775-4fc5-8ad8-41bfd70c857c-host-run-k8s-cni-cncf-io\") pod \"multus-ztwxl\" (UID: \"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\") " pod="openshift-multus/multus-ztwxl" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.525774 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b2b13cca-b775-4fc5-8ad8-41bfd70c857c-multus-conf-dir\") pod \"multus-ztwxl\" 
(UID: \"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\") " pod="openshift-multus/multus-ztwxl" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.525789 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-node-log\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.525804 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/61219b87-834d-490d-bf8e-1657a4081739-tuning-conf-dir\") pod \"multus-additional-cni-plugins-kpl4r\" (UID: \"61219b87-834d-490d-bf8e-1657a4081739\") " pod="openshift-multus/multus-additional-cni-plugins-kpl4r" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.525820 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-var-lib-openvswitch\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.525836 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4qp9\" (UniqueName: \"kubernetes.io/projected/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-kube-api-access-w4qp9\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.525854 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-host-cni-netd\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.525871 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-env-overrides\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.525884 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b2b13cca-b775-4fc5-8ad8-41bfd70c857c-multus-conf-dir\") pod \"multus-ztwxl\" (UID: \"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\") " pod="openshift-multus/multus-ztwxl" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.525885 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/61219b87-834d-490d-bf8e-1657a4081739-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-kpl4r\" (UID: \"61219b87-834d-490d-bf8e-1657a4081739\") " pod="openshift-multus/multus-additional-cni-plugins-kpl4r" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.525942 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b2b13cca-b775-4fc5-8ad8-41bfd70c857c-cnibin\") pod \"multus-ztwxl\" (UID: \"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\") " pod="openshift-multus/multus-ztwxl" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.525964 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b2b13cca-b775-4fc5-8ad8-41bfd70c857c-hostroot\") pod \"multus-ztwxl\" (UID: \"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\") " pod="openshift-multus/multus-ztwxl" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.525989 4793 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-systemd-units\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.526010 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-run-ovn\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.526030 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/61219b87-834d-490d-bf8e-1657a4081739-cnibin\") pod \"multus-additional-cni-plugins-kpl4r\" (UID: \"61219b87-834d-490d-bf8e-1657a4081739\") " pod="openshift-multus/multus-additional-cni-plugins-kpl4r" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.526056 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b2b13cca-b775-4fc5-8ad8-41bfd70c857c-etc-kubernetes\") pod \"multus-ztwxl\" (UID: \"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\") " pod="openshift-multus/multus-ztwxl" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.526077 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-host-kubelet\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.526085 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/b2b13cca-b775-4fc5-8ad8-41bfd70c857c-cnibin\") pod \"multus-ztwxl\" (UID: \"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\") " pod="openshift-multus/multus-ztwxl" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.526109 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-ovnkube-script-lib\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.526118 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b2b13cca-b775-4fc5-8ad8-41bfd70c857c-host-run-multus-certs\") pod \"multus-ztwxl\" (UID: \"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\") " pod="openshift-multus/multus-ztwxl" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.526140 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b2b13cca-b775-4fc5-8ad8-41bfd70c857c-system-cni-dir\") pod \"multus-ztwxl\" (UID: \"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\") " pod="openshift-multus/multus-ztwxl" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.526164 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b2b13cca-b775-4fc5-8ad8-41bfd70c857c-host-run-k8s-cni-cncf-io\") pod \"multus-ztwxl\" (UID: \"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\") " pod="openshift-multus/multus-ztwxl" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.526166 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b2b13cca-b775-4fc5-8ad8-41bfd70c857c-host-run-netns\") pod \"multus-ztwxl\" (UID: 
\"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\") " pod="openshift-multus/multus-ztwxl" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.526190 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-run-systemd\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.526183 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a159b0be901199a25ed9008e15370b1a4bf1858a4b20bfb4b6b7736f1f3d42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/k
ubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.526224 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.526240 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-ovnkube-config\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.526258 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-host-run-ovn-kubernetes\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.526281 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/61219b87-834d-490d-bf8e-1657a4081739-cni-binary-copy\") pod \"multus-additional-cni-plugins-kpl4r\" (UID: \"61219b87-834d-490d-bf8e-1657a4081739\") " pod="openshift-multus/multus-additional-cni-plugins-kpl4r" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.526317 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-var-lib-openvswitch\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.526328 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b2b13cca-b775-4fc5-8ad8-41bfd70c857c-multus-cni-dir\") pod \"multus-ztwxl\" (UID: \"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\") " pod="openshift-multus/multus-ztwxl" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.526360 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-host-cni-netd\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.526361 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-etc-openvswitch\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc 
kubenswrapper[4793]: I0217 20:09:09.526397 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-run-openvswitch\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.526416 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/61219b87-834d-490d-bf8e-1657a4081739-system-cni-dir\") pod \"multus-additional-cni-plugins-kpl4r\" (UID: \"61219b87-834d-490d-bf8e-1657a4081739\") " pod="openshift-multus/multus-additional-cni-plugins-kpl4r" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.526433 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b2b13cca-b775-4fc5-8ad8-41bfd70c857c-os-release\") pod \"multus-ztwxl\" (UID: \"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\") " pod="openshift-multus/multus-ztwxl" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.526448 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-host-run-netns\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.526464 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b2b13cca-b775-4fc5-8ad8-41bfd70c857c-host-var-lib-cni-bin\") pod \"multus-ztwxl\" (UID: \"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\") " pod="openshift-multus/multus-ztwxl" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.526478 4793 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-host-cni-bin\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.526481 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b2b13cca-b775-4fc5-8ad8-41bfd70c857c-system-cni-dir\") pod \"multus-ztwxl\" (UID: \"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\") " pod="openshift-multus/multus-ztwxl" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.526492 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-ovn-node-metrics-cert\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.526533 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b2b13cca-b775-4fc5-8ad8-41bfd70c857c-multus-daemon-config\") pod \"multus-ztwxl\" (UID: \"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\") " pod="openshift-multus/multus-ztwxl" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.526794 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/61219b87-834d-490d-bf8e-1657a4081739-tuning-conf-dir\") pod \"multus-additional-cni-plugins-kpl4r\" (UID: \"61219b87-834d-490d-bf8e-1657a4081739\") " pod="openshift-multus/multus-additional-cni-plugins-kpl4r" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.526833 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-host-run-ovn-kubernetes\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.526144 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-log-socket\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.526863 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b2b13cca-b775-4fc5-8ad8-41bfd70c857c-host-run-netns\") pod \"multus-ztwxl\" (UID: \"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\") " pod="openshift-multus/multus-ztwxl" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.526905 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-run-systemd\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.526933 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.526959 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-run-ovn\") pod 
\"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.526971 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-ovnkube-script-lib\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.526980 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b2b13cca-b775-4fc5-8ad8-41bfd70c857c-hostroot\") pod \"multus-ztwxl\" (UID: \"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\") " pod="openshift-multus/multus-ztwxl" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.527015 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-systemd-units\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.527031 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/61219b87-834d-490d-bf8e-1657a4081739-cnibin\") pod \"multus-additional-cni-plugins-kpl4r\" (UID: \"61219b87-834d-490d-bf8e-1657a4081739\") " pod="openshift-multus/multus-additional-cni-plugins-kpl4r" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.527068 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b2b13cca-b775-4fc5-8ad8-41bfd70c857c-etc-kubernetes\") pod \"multus-ztwxl\" (UID: \"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\") " pod="openshift-multus/multus-ztwxl" Feb 17 20:09:09 crc 
kubenswrapper[4793]: I0217 20:09:09.527105 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-host-kubelet\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.527139 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b2b13cca-b775-4fc5-8ad8-41bfd70c857c-host-var-lib-cni-bin\") pod \"multus-ztwxl\" (UID: \"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\") " pod="openshift-multus/multus-ztwxl" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.527156 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b2b13cca-b775-4fc5-8ad8-41bfd70c857c-multus-daemon-config\") pod \"multus-ztwxl\" (UID: \"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\") " pod="openshift-multus/multus-ztwxl" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.527171 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-host-run-netns\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.527204 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-host-cni-bin\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.526395 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-etc-openvswitch\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.527245 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-run-openvswitch\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.527277 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b2b13cca-b775-4fc5-8ad8-41bfd70c857c-os-release\") pod \"multus-ztwxl\" (UID: \"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\") " pod="openshift-multus/multus-ztwxl" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.527297 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b2b13cca-b775-4fc5-8ad8-41bfd70c857c-multus-cni-dir\") pod \"multus-ztwxl\" (UID: \"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\") " pod="openshift-multus/multus-ztwxl" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.527309 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/61219b87-834d-490d-bf8e-1657a4081739-system-cni-dir\") pod \"multus-additional-cni-plugins-kpl4r\" (UID: \"61219b87-834d-490d-bf8e-1657a4081739\") " pod="openshift-multus/multus-additional-cni-plugins-kpl4r" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.525847 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b2b13cca-b775-4fc5-8ad8-41bfd70c857c-multus-socket-dir-parent\") pod \"multus-ztwxl\" 
(UID: \"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\") " pod="openshift-multus/multus-ztwxl" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.526284 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-node-log\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.527463 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/61219b87-834d-490d-bf8e-1657a4081739-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-kpl4r\" (UID: \"61219b87-834d-490d-bf8e-1657a4081739\") " pod="openshift-multus/multus-additional-cni-plugins-kpl4r" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.527740 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-env-overrides\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.527747 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/61219b87-834d-490d-bf8e-1657a4081739-cni-binary-copy\") pod \"multus-additional-cni-plugins-kpl4r\" (UID: \"61219b87-834d-490d-bf8e-1657a4081739\") " pod="openshift-multus/multus-additional-cni-plugins-kpl4r" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.527894 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/61219b87-834d-490d-bf8e-1657a4081739-os-release\") pod \"multus-additional-cni-plugins-kpl4r\" (UID: \"61219b87-834d-490d-bf8e-1657a4081739\") " 
pod="openshift-multus/multus-additional-cni-plugins-kpl4r" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.528039 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b2b13cca-b775-4fc5-8ad8-41bfd70c857c-cni-binary-copy\") pod \"multus-ztwxl\" (UID: \"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\") " pod="openshift-multus/multus-ztwxl" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.532018 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-ovn-node-metrics-cert\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.538144 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.538218 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:09:09 crc kubenswrapper[4793]: E0217 20:09:09.538251 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.538338 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:09:09 crc kubenswrapper[4793]: E0217 20:09:09.538339 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 20:09:09 crc kubenswrapper[4793]: E0217 20:09:09.538570 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.539980 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9pkqd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db67d891-29db-4ee4-a70c-624cb9af6677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv9g8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9pkqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.546271 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4d6vm\" (UniqueName: \"kubernetes.io/projected/61219b87-834d-490d-bf8e-1657a4081739-kube-api-access-4d6vm\") pod \"multus-additional-cni-plugins-kpl4r\" (UID: \"61219b87-834d-490d-bf8e-1657a4081739\") " pod="openshift-multus/multus-additional-cni-plugins-kpl4r" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.546421 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4qp9\" (UniqueName: 
\"kubernetes.io/projected/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-kube-api-access-w4qp9\") pod \"ovnkube-node-n2fmv\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.548286 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ndkg\" (UniqueName: \"kubernetes.io/projected/b2b13cca-b775-4fc5-8ad8-41bfd70c857c-kube-api-access-8ndkg\") pod \"multus-ztwxl\" (UID: \"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\") " pod="openshift-multus/multus-ztwxl" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.550555 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf61cc2856b360d3341089d4472c27f8dcf6c7c55831eea511dcf5a931b7a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\
\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.560507 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a786034-a3c6-4693-965a-3bd39bce6caa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.571883 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.583051 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.603218 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n2fmv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.630546 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c8d46b-bddf-41fd-bfe9-829889b911c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3405184556f0b16f2610e42360345bb094d79fe3d713f9dec7c6d28a5124851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\
":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d8c33489e77add820819b72d67bd6eca73a11e3fd4a99ae4c5c0136f6257c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a42458e37611ca7ec204f13fd4703a5b1e01189a162f0180be137d614a0379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982055260eb2f38b638b6ee268993806f
9a0bf999f21b065659a64144394bc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16cf925bf79279e3553a50dd773c3e6a88f3e8e86837c457c43f1b82c6fd291b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.646682 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9208cc24-8e8d-4027-af06-291b3b46ee59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f3722f78f83475af38958c1c6a4b0593a4c51be7d8dcbd0d32d3b068dee61b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T20:08:59Z\\\",\\\"message\\\":\\\"W0217 20:08:48.786864 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 20:08:48.787257 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771358928 cert, and key in /tmp/serving-cert-1841715996/serving-signer.crt, 
/tmp/serving-cert-1841715996/serving-signer.key\\\\nI0217 20:08:49.003324 1 observer_polling.go:159] Starting file observer\\\\nW0217 20:08:49.008336 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 20:08:49.008526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 20:08:49.009879 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1841715996/tls.crt::/tmp/serving-cert-1841715996/tls.key\\\\\\\"\\\\nF0217 20:08:59.398239 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.655432 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-kpl4r" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.663149 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-ztwxl" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.667645 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a159b0be901199a25ed9008e15370b1a4bf1858a4b20bfb4b6b7736f1f3d42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" 
for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.672416 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.680342 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9pkqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db67d891-29db-4ee4-a70c-624cb9af6677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv9g8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9pkqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:09 crc kubenswrapper[4793]: W0217 20:09:09.685747 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2b13cca_b775_4fc5_8ad8_41bfd70c857c.slice/crio-a54e55875579548cd0bec76bc3e570a80c1300a2b04ed7d776cd51a36531dd9b WatchSource:0}: Error finding container a54e55875579548cd0bec76bc3e570a80c1300a2b04ed7d776cd51a36531dd9b: Status 404 returned error can't find the container with id a54e55875579548cd0bec76bc3e570a80c1300a2b04ed7d776cd51a36531dd9b Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 
20:09:09.700984 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf61cc2856b360d3341089d4472c27f8dcf6c7c55831eea511dcf5a931b7a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.701987 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ztwxl" event={"ID":"b2b13cca-b775-4fc5-8ad8-41bfd70c857c","Type":"ContainerStarted","Data":"a54e55875579548cd0bec76bc3e570a80c1300a2b04ed7d776cd51a36531dd9b"} Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.703023 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kpl4r" event={"ID":"61219b87-834d-490d-bf8e-1657a4081739","Type":"ContainerStarted","Data":"e9a9f2839e539ac3c7f2166a6e00295cf4ca495e5e4fe7c8846ffb62f10ac6e4"} Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.706219 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerStarted","Data":"5c1911a561802d8e92f91e2ca7448754e32dcb5f18cae0906dc070e1d752be38"} Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.706247 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerStarted","Data":"c883db680c5e015621e98e5b3c06351d9dd6d33ea28b502241344af8df0fd1e7"} Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.706258 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerStarted","Data":"8cca48b14cb79d6de40574e8f5932ee56fdbd52708d36d51ecad96f20c8663a9"} Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.713266 4793 
status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a786034-a3c6-4693-965a-3bd39bce6caa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.728839 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ztwxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ndkg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ztwxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.744307 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cad772-a024-49d5-8204-560587b056eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6bf4bf0fcbe5648621dada82af560bd7eac65ce800c8974c59d777f3dc6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57a18cf839c267df5d9a6d343fbc38dbe195fad6d9fff89663d0958a0492a335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe1dc649f7ca667a9fa57e0f96921b449396310fdec0d23dd39d5695ff3c0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cee203033ea02d7ee023522404b8632f64c130e254a84ee96c37222561e4ac1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.748388 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.759237 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.771083 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48cdc9966494a4fb1df14123aaf12571b1d71f9d0349963dd1b3719966abd20c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2071f67c2f1c343d9ad571fe62df1e820675df683aa583fa2563de751f2897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.787783 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kpl4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61219b87-834d-490d-bf8e-1657a4081739\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\
\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kpl4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.800376 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf61cc2856b360d3341089d4472c27f8dcf6c7c55831eea511dcf5a931b7a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T20:09:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.811331 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a786034-a3c6-4693-965a-3bd39bce6caa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c1911a561802d8e92f91e2ca7448754e32dcb5f18cae0906dc070e1d752be38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c883db680c5e015621e98e5b3c06351d9dd6d33ea28b502241344af8df0fd1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.822282 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ztwxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ndkg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ztwxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.837535 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48cdc9966494a4fb1df14123aaf12571b1d71f9d0349963dd1b3719966abd20c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a
2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2071f67c2f1c343d9ad571fe62df1e820675df683aa583fa2563de751f2897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-17T20:09:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.854670 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kpl4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61219b87-834d-490d-bf8e-1657a4081739\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kpl4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.871189 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cad772-a024-49d5-8204-560587b056eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6bf4bf0fcbe5648621dada82af560bd7eac65ce800c8974c59d777f3dc6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed
21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57a18cf839c267df5d9a6d343fbc38dbe195fad6d9fff89663d0958a0492a335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe1dc649f7ca667a9fa57e0f96921b449396310fdec0d23dd39d5695ff3c0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cee203033ea02d7ee023522404b8632f64c130e254a84ee96c37222561e4ac1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.884573 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.896300 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.908681 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.925094 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a159b0be901199a25ed9008e15370b1a4bf1858a4b20bfb4b6b7736f1f3d42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.937005 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9pkqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db67d891-29db-4ee4-a70c-624cb9af6677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv9g8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9pkqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:09 crc kubenswrapper[4793]: I0217 20:09:09.964873 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n2fmv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:10 crc kubenswrapper[4793]: I0217 20:09:10.012507 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c8d46b-bddf-41fd-bfe9-829889b911c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3405184556f0b16f2610e42360345bb094d79fe3d713f9dec7c6d28a5124851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\
":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d8c33489e77add820819b72d67bd6eca73a11e3fd4a99ae4c5c0136f6257c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a42458e37611ca7ec204f13fd4703a5b1e01189a162f0180be137d614a0379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982055260eb2f38b638b6ee268993806f
9a0bf999f21b065659a64144394bc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16cf925bf79279e3553a50dd773c3e6a88f3e8e86837c457c43f1b82c6fd291b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:10Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:10 crc kubenswrapper[4793]: I0217 20:09:10.044630 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9208cc24-8e8d-4027-af06-291b3b46ee59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f3722f78f83475af38958c1c6a4b0593a4c51be7d8dcbd0d32d3b068dee61b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T20:08:59Z\\\"
,\\\"message\\\":\\\"W0217 20:08:48.786864 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 20:08:48.787257 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771358928 cert, and key in /tmp/serving-cert-1841715996/serving-signer.crt, /tmp/serving-cert-1841715996/serving-signer.key\\\\nI0217 20:08:49.003324 1 observer_polling.go:159] Starting file observer\\\\nW0217 20:08:49.008336 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 20:08:49.008526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 20:08:49.009879 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1841715996/tls.crt::/tmp/serving-cert-1841715996/tls.key\\\\\\\"\\\\nF0217 20:08:59.398239 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:10Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:10 crc kubenswrapper[4793]: E0217 20:09:10.150130 4793 projected.go:288] Couldn't get configMap openshift-dns/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 17 20:09:10 crc kubenswrapper[4793]: I0217 20:09:10.164860 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 17 20:09:10 crc kubenswrapper[4793]: E0217 20:09:10.171109 4793 projected.go:194] Error preparing data for projected volume kube-api-access-sv9g8 for pod openshift-dns/node-resolver-9pkqd: failed to sync configmap cache: timed out waiting for the condition Feb 17 20:09:10 crc kubenswrapper[4793]: E0217 20:09:10.171185 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/db67d891-29db-4ee4-a70c-624cb9af6677-kube-api-access-sv9g8 podName:db67d891-29db-4ee4-a70c-624cb9af6677 nodeName:}" failed. No retries permitted until 2026-02-17 20:09:10.671165056 +0000 UTC m=+25.962863367 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-sv9g8" (UniqueName: "kubernetes.io/projected/db67d891-29db-4ee4-a70c-624cb9af6677-kube-api-access-sv9g8") pod "node-resolver-9pkqd" (UID: "db67d891-29db-4ee4-a70c-624cb9af6677") : failed to sync configmap cache: timed out waiting for the condition Feb 17 20:09:10 crc kubenswrapper[4793]: I0217 20:09:10.245182 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 17 20:09:10 crc kubenswrapper[4793]: I0217 20:09:10.324998 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 17 20:09:10 crc kubenswrapper[4793]: I0217 20:09:10.483470 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 18:27:06.577355927 +0000 UTC Feb 17 20:09:10 crc kubenswrapper[4793]: I0217 20:09:10.710633 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ztwxl" event={"ID":"b2b13cca-b775-4fc5-8ad8-41bfd70c857c","Type":"ContainerStarted","Data":"0c5b745e67f1da59c6b1df97c69bfcbfdc00f4b51a6788751079465a5ce6c177"} Feb 17 20:09:10 crc kubenswrapper[4793]: I0217 20:09:10.711991 4793 generic.go:334] "Generic (PLEG): container finished" podID="61219b87-834d-490d-bf8e-1657a4081739" containerID="0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11" exitCode=0 Feb 17 20:09:10 crc kubenswrapper[4793]: I0217 20:09:10.712055 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kpl4r" event={"ID":"61219b87-834d-490d-bf8e-1657a4081739","Type":"ContainerDied","Data":"0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11"} Feb 17 20:09:10 crc kubenswrapper[4793]: I0217 20:09:10.713437 4793 generic.go:334] "Generic (PLEG): container finished" podID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" 
containerID="761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b" exitCode=0 Feb 17 20:09:10 crc kubenswrapper[4793]: I0217 20:09:10.713498 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" event={"ID":"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2","Type":"ContainerDied","Data":"761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b"} Feb 17 20:09:10 crc kubenswrapper[4793]: I0217 20:09:10.713553 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" event={"ID":"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2","Type":"ContainerStarted","Data":"f9b142c9d7d5d063f7f322fe857382f60a4c54c27c62ab30578b42d0418823d9"} Feb 17 20:09:10 crc kubenswrapper[4793]: I0217 20:09:10.728814 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:10Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:10 crc kubenswrapper[4793]: I0217 20:09:10.744726 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sv9g8\" (UniqueName: \"kubernetes.io/projected/db67d891-29db-4ee4-a70c-624cb9af6677-kube-api-access-sv9g8\") pod \"node-resolver-9pkqd\" (UID: \"db67d891-29db-4ee4-a70c-624cb9af6677\") " pod="openshift-dns/node-resolver-9pkqd" Feb 17 20:09:10 crc kubenswrapper[4793]: I0217 20:09:10.747305 4793 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48cdc9966494a4fb1df14123aaf12571b1d71f9d0349963dd1b3719966abd20c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2071f67c2f1c343d9ad571fe62df1e820675df683aa583fa2563de751f2897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:10Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:10 crc kubenswrapper[4793]: I0217 20:09:10.756368 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sv9g8\" (UniqueName: \"kubernetes.io/projected/db67d891-29db-4ee4-a70c-624cb9af6677-kube-api-access-sv9g8\") pod \"node-resolver-9pkqd\" (UID: \"db67d891-29db-4ee4-a70c-624cb9af6677\") " pod="openshift-dns/node-resolver-9pkqd" Feb 17 20:09:10 crc kubenswrapper[4793]: I0217 20:09:10.764458 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kpl4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61219b87-834d-490d-bf8e-1657a4081739\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kpl4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:10Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:10 crc kubenswrapper[4793]: I0217 20:09:10.782357 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cad772-a024-49d5-8204-560587b056eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6bf4bf0fcbe5648621dada82af560bd7eac65ce800c8974c59d777f3dc6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed
21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57a18cf839c267df5d9a6d343fbc38dbe195fad6d9fff89663d0958a0492a335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe1dc649f7ca667a9fa57e0f96921b449396310fdec0d23dd39d5695ff3c0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cee203033ea02d7ee023522404b8632f64c130e254a84ee96c37222561e4ac1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:10Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:10 crc kubenswrapper[4793]: I0217 20:09:10.795843 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:10Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:10 crc kubenswrapper[4793]: I0217 20:09:10.812430 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:10Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:10 crc kubenswrapper[4793]: I0217 20:09:10.828756 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a159b0be901199a25ed9008e15370b1a4bf1858a4b20bfb4b6b7736f1f3d42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:10Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:10 crc kubenswrapper[4793]: I0217 20:09:10.843673 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9pkqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db67d891-29db-4ee4-a70c-624cb9af6677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv9g8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9pkqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:10Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:10 crc kubenswrapper[4793]: I0217 20:09:10.865255 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n2fmv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:10Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:10 crc kubenswrapper[4793]: I0217 20:09:10.890152 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c8d46b-bddf-41fd-bfe9-829889b911c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3405184556f0b16f2610e42360345bb094d79fe3d713f9dec7c6d28a5124851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\
":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d8c33489e77add820819b72d67bd6eca73a11e3fd4a99ae4c5c0136f6257c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a42458e37611ca7ec204f13fd4703a5b1e01189a162f0180be137d614a0379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982055260eb2f38b638b6ee268993806f
9a0bf999f21b065659a64144394bc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16cf925bf79279e3553a50dd773c3e6a88f3e8e86837c457c43f1b82c6fd291b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:10Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:10 crc kubenswrapper[4793]: I0217 20:09:10.915994 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9208cc24-8e8d-4027-af06-291b3b46ee59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f3722f78f83475af38958c1c6a4b0593a4c51be7d8dcbd0d32d3b068dee61b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T20:08:59Z\\\"
,\\\"message\\\":\\\"W0217 20:08:48.786864 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 20:08:48.787257 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771358928 cert, and key in /tmp/serving-cert-1841715996/serving-signer.crt, /tmp/serving-cert-1841715996/serving-signer.key\\\\nI0217 20:08:49.003324 1 observer_polling.go:159] Starting file observer\\\\nW0217 20:08:49.008336 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 20:08:49.008526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 20:08:49.009879 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1841715996/tls.crt::/tmp/serving-cert-1841715996/tls.key\\\\\\\"\\\\nF0217 20:08:59.398239 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:10Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:10 crc kubenswrapper[4793]: I0217 20:09:10.940464 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ztwxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c5b745e67f1da59c6b1df97c69bfcbfdc00f4b51a6788751079465a5ce6c177\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ndkg\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ztwxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:10Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:10 crc kubenswrapper[4793]: I0217 20:09:10.965942 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf61cc2856b360d3341089d4472c27f8dcf6c7c55831eea511dcf5a931b7a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:10Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:10 crc kubenswrapper[4793]: I0217 20:09:10.981512 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a786034-a3c6-4693-965a-3bd39bce6caa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c1911a561802d8e92f91e2ca7448754e32dcb5f18cae0906dc070e1d752be38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c883db680c5e015621e98e5b3c06351d9dd6d33e
a28b502241344af8df0fd1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:10Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:10 crc kubenswrapper[4793]: I0217 20:09:10.998388 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a159b0be901199a25ed9008e15370b1a4bf1858a4b20bfb4b6b7736f1f3d42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:10Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.012824 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9pkqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db67d891-29db-4ee4-a70c-624cb9af6677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv9g8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9pkqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:11Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.032742 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n2fmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:11Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.037020 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-9pkqd" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.054458 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c8d46b-bddf-41fd-bfe9-829889b911c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3405184556f0b16f2610e42360345bb094d79fe3d713f9dec7c6d28a5124851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d8c33489e77add820819b72d67bd6eca73a11e3fd4a99ae4c5c0136f6257c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a42458e37611ca7ec204f13fd4703a5b1e01189a162f0180be137d614a0379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982055260eb2f38b638b6ee268993806f9a0bf999f21b065659a64144394bc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16cf925bf79279e3553a50dd773c3e6a88f3e8e86837c457c43f1b82c6fd291b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:11Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.069057 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9208cc24-8e8d-4027-af06-291b3b46ee59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f3722f78f83475af38958c1c6a4b0593a4c51be7d8dcbd0d32d3b068dee61b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T20:08:59Z\\\",\\\"message\\\":\\\"W0217 20:08:48.786864 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 20:08:48.787257 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771358928 cert, and key in /tmp/serving-cert-1841715996/serving-signer.crt, /tmp/serving-cert-1841715996/serving-signer.key\\\\nI0217 20:08:49.003324 1 observer_polling.go:159] Starting file observer\\\\nW0217 20:08:49.008336 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 20:08:49.008526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 20:08:49.009879 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1841715996/tls.crt::/tmp/serving-cert-1841715996/tls.key\\\\\\\"\\\\nF0217 20:08:59.398239 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based 
request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o:/
/93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:11Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.089292 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ztwxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c5b745e67f1da59c6b1df97c69bfcbfdc00f4b51a6788751079465a5ce6c177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ndkg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ztwxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:11Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.101561 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf61cc2856b360d3341089d4472c27f8dcf6c7c55831eea511dcf5a931b7a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-17T20:09:11Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.113622 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a786034-a3c6-4693-965a-3bd39bce6caa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c1911a561802d8e92f91e2ca7448754e32dcb5f18cae0906dc070e1d752be38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\"
:\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c883db680c5e015621e98e5b3c06351d9dd6d33ea28b502241344af8df0fd1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:11Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.129929 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:11Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.141798 4793 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.143804 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.143843 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.143855 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.143945 4793 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.146635 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48cdc9966494a4fb1df14123aaf12571b1d71f9d0349963dd1b3719966abd20c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2071f67c2f1c343d9ad571fe62df1e820675df683aa583fa2563de751f2897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:11Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.150875 4793 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.151168 4793 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.152159 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.152186 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.152195 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.152211 4793 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.152222 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:11Z","lastTransitionTime":"2026-02-17T20:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:11 crc kubenswrapper[4793]: E0217 20:09:11.169499 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9899edd8-40d9-4d30-b914-dcde4645fb8b\\\",\\\"systemUUID\\\":\\\"6761f953-4396-4ffc-8ccb-0bad99a4cc8e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:11Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.171564 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kpl4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61219b87-834d-490d-bf8e-1657a4081739\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kpl4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:11Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.180326 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.180372 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.180384 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.180400 4793 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.180415 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:11Z","lastTransitionTime":"2026-02-17T20:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.191147 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cad772-a024-49d5-8204-560587b056eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6bf4bf0fcbe5648621dada82af560bd7eac65ce800c8974c59d777f3dc6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6
a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57a18cf839c267df5d9a6d343fbc38dbe195fad6d9fff89663d0958a0492a335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe1dc649f7ca667a9fa57e0f96921b449396310fdec0d23dd39d5695ff3c0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/st
atic-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cee203033ea02d7ee023522404b8632f64c130e254a84ee96c37222561e4ac1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:11Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:11 crc kubenswrapper[4793]: E0217 20:09:11.197941 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9899edd8-40d9-4d30-b914-dcde4645fb8b\\\",\\\"systemUUID\\\":\\\"6761f953-4396-4ffc-8ccb-0bad99a4cc8e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:11Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.201267 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.201315 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.201363 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.201384 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.201396 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:11Z","lastTransitionTime":"2026-02-17T20:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:11 crc kubenswrapper[4793]: E0217 20:09:11.213878 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9899edd8-40d9-4d30-b914-dcde4645fb8b\\\",\\\"systemUUID\\\":\\\"6761f953-4396-4ffc-8ccb-0bad99a4cc8e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:11Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.217059 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.217087 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.217095 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.217125 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.217135 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:11Z","lastTransitionTime":"2026-02-17T20:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.224678 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:11Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:11 crc kubenswrapper[4793]: E0217 20:09:11.230209 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9899edd8-40d9-4d30-b914-dcde4645fb8b\\\",\\\"systemUUID\\\":\\\"6761f953-4396-4ffc-8ccb-0bad99a4cc8e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:11Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.236089 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.236116 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.236126 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.236138 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.236147 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:11Z","lastTransitionTime":"2026-02-17T20:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:11 crc kubenswrapper[4793]: E0217 20:09:11.249192 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9899edd8-40d9-4d30-b914-dcde4645fb8b\\\",\\\"systemUUID\\\":\\\"6761f953-4396-4ffc-8ccb-0bad99a4cc8e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:11Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:11 crc kubenswrapper[4793]: E0217 20:09:11.249334 4793 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.251982 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.252010 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.252020 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.252033 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.252043 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:11Z","lastTransitionTime":"2026-02-17T20:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.259879 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:11Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.354965 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.355366 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.355374 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.355389 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.355403 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:11Z","lastTransitionTime":"2026-02-17T20:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.458778 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.458815 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.458830 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.458846 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.458858 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:11Z","lastTransitionTime":"2026-02-17T20:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.483614 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 06:47:44.091061139 +0000 UTC Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.538933 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.538985 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:09:11 crc kubenswrapper[4793]: E0217 20:09:11.539074 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.539100 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:09:11 crc kubenswrapper[4793]: E0217 20:09:11.539153 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 20:09:11 crc kubenswrapper[4793]: E0217 20:09:11.539199 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.562070 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.562108 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.562117 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.562138 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.562149 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:11Z","lastTransitionTime":"2026-02-17T20:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.664930 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.664973 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.664990 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.665011 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.665023 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:11Z","lastTransitionTime":"2026-02-17T20:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.718497 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-9pkqd" event={"ID":"db67d891-29db-4ee4-a70c-624cb9af6677","Type":"ContainerStarted","Data":"d09e75a7c6bcf350e2d5833b1a9b6679042aa5d6699c1194c32320c5f0c8d208"} Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.718547 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-9pkqd" event={"ID":"db67d891-29db-4ee4-a70c-624cb9af6677","Type":"ContainerStarted","Data":"36fb5fb47cc45c0c33685a5814e4f92c59af1fd6bc1db3d56c9eefed6bd93e8b"} Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.721126 4793 generic.go:334] "Generic (PLEG): container finished" podID="61219b87-834d-490d-bf8e-1657a4081739" containerID="7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f" exitCode=0 Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.721216 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kpl4r" event={"ID":"61219b87-834d-490d-bf8e-1657a4081739","Type":"ContainerDied","Data":"7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f"} Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.730512 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" event={"ID":"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2","Type":"ContainerStarted","Data":"a66eb3dc7a2ab1a3e5e720718416aba285d92566aa472515bc983f0794f48314"} Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.730602 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" event={"ID":"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2","Type":"ContainerStarted","Data":"61506b0fdf8d54ab45b280e5cf461ab69bc233211c90bc60b339eef643ebb058"} Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.730625 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" event={"ID":"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2","Type":"ContainerStarted","Data":"32e4375a429735c20aa6093586974f36afdd51b82d65270a693d14aefe0b0ac9"} Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.730644 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" event={"ID":"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2","Type":"ContainerStarted","Data":"3403f83a9b35ceeb1a09a45b62e25e74a10d89d309f00819598790ba6c84a0a0"} Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.730660 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" event={"ID":"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2","Type":"ContainerStarted","Data":"5b08b2c72ecc14b8e31ad5d7fe12851c01bf38ff98fe406dc26218d892ee9f0e"} Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.730676 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" event={"ID":"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2","Type":"ContainerStarted","Data":"09885fdf835300247d4a89f7e2ded3ab350786be47130106f4cc679a5477c451"} Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.735951 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a786034-a3c6-4693-965a-3bd39bce6caa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c1911a561802d8e92f91e2ca7448754e32dcb5f18cae0906dc070e1d752be38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c883db680c5e015621e98e5b3c06351d9dd6d33e
a28b502241344af8df0fd1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:11Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.758383 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ztwxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c5b745e67f1da59c6b1df97c69bfcbfdc00f4b51a6788751079465a5ce6c177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ndkg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ztwxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:11Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.767970 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:11 crc 
kubenswrapper[4793]: I0217 20:09:11.768030 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.768043 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.768061 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.768073 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:11Z","lastTransitionTime":"2026-02-17T20:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.771152 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf61cc2856b360d3341089d4472c27f8dcf6c7c55831eea511dcf5a931b7a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T20:09:11Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.787766 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cad772-a024-49d5-8204-560587b056eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6bf4bf0fcbe5648621dada82af560bd7eac65ce800c8974c59d777f3dc6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57a18cf839c267df5d9a6d343fbc38dbe195fad6d9fff89663d0958a0492a335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe1dc649f7ca667a9fa57e0f96921b449396310fdec0d23dd39d5695ff3c0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cee203033ea02d7ee023522404b8632f64c130e254a84ee96c37222561e4ac1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:11Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.805513 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:11Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.821179 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48cdc9966494a4fb1df14123aaf12571b1d71f9d0349963dd1b3719966abd20c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2071f67c2f1c343d9ad571fe62df1e820675df683aa583fa2563de751f2897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:11Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.838947 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kpl4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61219b87-834d-490d-bf8e-1657a4081739\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kpl4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:11Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.850583 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:11Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.862636 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:11Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.872193 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.872228 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.872237 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.872252 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.872263 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:11Z","lastTransitionTime":"2026-02-17T20:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.874590 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9208cc24-8e8d-4027-af06-291b3b46ee59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f3722f78f83475af38958c1c6a4b0593a4c51be7d8dcbd0d32d3b068dee61b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T20:08:59Z\\\"
,\\\"message\\\":\\\"W0217 20:08:48.786864 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 20:08:48.787257 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771358928 cert, and key in /tmp/serving-cert-1841715996/serving-signer.crt, /tmp/serving-cert-1841715996/serving-signer.key\\\\nI0217 20:08:49.003324 1 observer_polling.go:159] Starting file observer\\\\nW0217 20:08:49.008336 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 20:08:49.008526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 20:08:49.009879 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1841715996/tls.crt::/tmp/serving-cert-1841715996/tls.key\\\\\\\"\\\\nF0217 20:08:59.398239 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:11Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.887125 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a159b0be901199a25ed9008e15370b1a4bf1858a4b20bfb4b6b7736f1f3d42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:11Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.898383 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9pkqd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db67d891-29db-4ee4-a70c-624cb9af6677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d09e75a7c6bcf350e2d5833b1a9b6679042aa5d6699c1194c32320c5f0c8d208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv9g8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9pkqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:11Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.919963 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n2fmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:11Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.938272 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c8d46b-bddf-41fd-bfe9-829889b911c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3405184556f0b16f2610e42360345bb094d79fe3d713f9dec7c6d28a5124851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d8c33489e77add820819b72d67bd6eca73a11e3fd4a99ae4c5c0136f6257c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a42458e37611ca7ec204f13fd4703a5b1e01189a162f0180be137d614a0379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982055260eb2f38b638b6ee268993806f9a0bf999f21b065659a64144394bc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16cf925bf79279e3553a50dd773c3e6a88f3e8e86837c457c43f1b82c6fd291b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:11Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.949621 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf61cc2856b360d3341089d4472c27f8dcf6c7c55831eea511dcf5a931b7a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T20:09:11Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.958379 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a786034-a3c6-4693-965a-3bd39bce6caa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c1911a561802d8e92f91e2ca7448754e32dcb5f18cae0906dc070e1d752be38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c883db680c5e015621e98e5b3c06351d9dd6d33ea28b502241344af8df0fd1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:11Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.972453 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ztwxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c5b745e67f1da59c6b1df97c69bfcbfdc00f4b51a6788751079465a5ce6c177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ndkg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ztwxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:11Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.973990 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:11 crc 
kubenswrapper[4793]: I0217 20:09:11.974021 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.974032 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.974045 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.974054 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:11Z","lastTransitionTime":"2026-02-17T20:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:11 crc kubenswrapper[4793]: I0217 20:09:11.989293 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cad772-a024-49d5-8204-560587b056eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6bf4bf0fcbe5648621dada82af560bd7eac65ce800c8974c59d777f3dc6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57a18cf839c267df5d9a6d343fbc38dbe195fad6d9fff89663d0958a0492a335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe1dc649f7ca667a9fa57e0f96921b449396310fdec0d23dd39d5695ff3c0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cee203033ea02d7ee023522404b8632f64c130e254a84ee96c37222561e4ac1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:11Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.020803 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:12Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.061284 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48cdc9966494a4fb1df14123aaf12571b1d71f9d0349963dd1b3719966abd20c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2071f67c2f1c343d9ad571fe62df1e820675df683aa583fa2563de751f2897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:12Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.075901 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.075931 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.075938 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.075952 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.075962 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:12Z","lastTransitionTime":"2026-02-17T20:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.103859 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kpl4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61219b87-834d-490d-bf8e-1657a4081739\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kpl4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:12Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.148877 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:12Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.178882 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.178964 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.178977 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.178994 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.179006 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:12Z","lastTransitionTime":"2026-02-17T20:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.184149 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:12Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.230154 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n2fmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:12Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.270035 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c8d46b-bddf-41fd-bfe9-829889b911c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3405184556f0b16f2610e42360345bb094d79fe3d713f9dec7c6d28a5124851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d8c33489e77add820819b72d67bd6eca73a11e3fd4a99ae4c5c0136f6257c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a42458e37611ca7ec204f13fd4703a5b1e01189a162f0180be137d614a0379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982055260eb2f38b638b6ee268993806f9a0bf999f21b065659a64144394bc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16cf925bf79279e3553a50dd773c3e6a88f3e8e86837c457c43f1b82c6fd291b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:12Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.281207 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.281243 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.281255 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.281271 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.281283 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:12Z","lastTransitionTime":"2026-02-17T20:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.285123 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-z9xgq"] Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.285798 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-z9xgq" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.302910 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9208cc24-8e8d-4027-af06-291b3b46ee59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f3722f78f83475af38958c1c6a4b0593a4c51be7d8dcbd0d32d3b068dee61b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T20:08:59Z\\\"
,\\\"message\\\":\\\"W0217 20:08:48.786864 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 20:08:48.787257 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771358928 cert, and key in /tmp/serving-cert-1841715996/serving-signer.crt, /tmp/serving-cert-1841715996/serving-signer.key\\\\nI0217 20:08:49.003324 1 observer_polling.go:159] Starting file observer\\\\nW0217 20:08:49.008336 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 20:08:49.008526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 20:08:49.009879 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1841715996/tls.crt::/tmp/serving-cert-1841715996/tls.key\\\\\\\"\\\\nF0217 20:08:59.398239 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:12Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.312589 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.332715 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.352736 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.362012 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3073cc3b-c430-444d-b751-5e1416feafa9-host\") pod \"node-ca-z9xgq\" (UID: \"3073cc3b-c430-444d-b751-5e1416feafa9\") " pod="openshift-image-registry/node-ca-z9xgq" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.362103 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjvzf\" (UniqueName: \"kubernetes.io/projected/3073cc3b-c430-444d-b751-5e1416feafa9-kube-api-access-gjvzf\") pod \"node-ca-z9xgq\" (UID: \"3073cc3b-c430-444d-b751-5e1416feafa9\") " 
pod="openshift-image-registry/node-ca-z9xgq" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.362213 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3073cc3b-c430-444d-b751-5e1416feafa9-serviceca\") pod \"node-ca-z9xgq\" (UID: \"3073cc3b-c430-444d-b751-5e1416feafa9\") " pod="openshift-image-registry/node-ca-z9xgq" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.372569 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.383770 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.383821 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.383833 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.383851 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.383863 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:12Z","lastTransitionTime":"2026-02-17T20:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.430584 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a159b0be901199a25ed9008e15370b1a4bf1858a4b20bfb4b6b7736f1f3d42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:12Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.463019 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjvzf\" (UniqueName: \"kubernetes.io/projected/3073cc3b-c430-444d-b751-5e1416feafa9-kube-api-access-gjvzf\") pod \"node-ca-z9xgq\" (UID: \"3073cc3b-c430-444d-b751-5e1416feafa9\") " pod="openshift-image-registry/node-ca-z9xgq" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.463083 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3073cc3b-c430-444d-b751-5e1416feafa9-serviceca\") pod \"node-ca-z9xgq\" (UID: \"3073cc3b-c430-444d-b751-5e1416feafa9\") " pod="openshift-image-registry/node-ca-z9xgq" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.463109 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3073cc3b-c430-444d-b751-5e1416feafa9-host\") pod \"node-ca-z9xgq\" (UID: \"3073cc3b-c430-444d-b751-5e1416feafa9\") " pod="openshift-image-registry/node-ca-z9xgq" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.463179 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3073cc3b-c430-444d-b751-5e1416feafa9-host\") pod \"node-ca-z9xgq\" (UID: \"3073cc3b-c430-444d-b751-5e1416feafa9\") " pod="openshift-image-registry/node-ca-z9xgq" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.464160 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" 
(UniqueName: \"kubernetes.io/configmap/3073cc3b-c430-444d-b751-5e1416feafa9-serviceca\") pod \"node-ca-z9xgq\" (UID: \"3073cc3b-c430-444d-b751-5e1416feafa9\") " pod="openshift-image-registry/node-ca-z9xgq" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.464808 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9pkqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db67d891-29db-4ee4-a70c-624cb9af6677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d09e75a7c6bcf350e2d5833b1a9b6679042aa5d6699c1194c32320c5f0c8d208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv9g8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9pkqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:12Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.483927 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 06:24:22.179472059 +0000 UTC Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.485824 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.485890 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.485904 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.485925 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.485938 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:12Z","lastTransitionTime":"2026-02-17T20:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.491489 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjvzf\" (UniqueName: \"kubernetes.io/projected/3073cc3b-c430-444d-b751-5e1416feafa9-kube-api-access-gjvzf\") pod \"node-ca-z9xgq\" (UID: \"3073cc3b-c430-444d-b751-5e1416feafa9\") " pod="openshift-image-registry/node-ca-z9xgq" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.522727 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:12Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.561585 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:12Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.588922 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.588960 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.588970 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.588985 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.588996 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:12Z","lastTransitionTime":"2026-02-17T20:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.608679 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-z9xgq" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.613475 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd 
nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},
\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIni
tializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"
},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n2fmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:12Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.639651 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z9xgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3073cc3b-c430-444d-b751-5e1416feafa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z9xgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:12Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.686264 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c8d46b-bddf-41fd-bfe9-829889b911c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3405184556f0b16f2610e42360345bb094d79fe3d713f9dec7c6d28a5124851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d8c33489e77add820819b72d67bd6eca73a11e3fd4a99ae4c5c0136f6257c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a42458e37611ca7ec204f13fd4703a5b1e01189a162f0180be137d614a0379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982055260eb2f38b638b6ee268993806f9a0bf999f21b065659a64144394bc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16cf925bf79279e3553a50dd773c3e6a88f3e8e86837c457c43f1b82c6fd291b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:12Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.693477 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.693507 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.693516 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.693528 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.693537 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:12Z","lastTransitionTime":"2026-02-17T20:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.722523 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9208cc24-8e8d-4027-af06-291b3b46ee59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f3722f78f83475af38958c1c6a4b0593a4c51be7d8dcbd0d32d3b068dee61b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T20:08:59Z\\\",\\\"message\\\":\\\"W0217 20:08:48.786864 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 20:08:48.787257 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771358928 cert, and key in /tmp/serving-cert-1841715996/serving-signer.crt, /tmp/serving-cert-1841715996/serving-signer.key\\\\nI0217 20:08:49.003324 1 observer_polling.go:159] Starting file observer\\\\nW0217 20:08:49.008336 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 20:08:49.008526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 20:08:49.009879 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1841715996/tls.crt::/tmp/serving-cert-1841715996/tls.key\\\\\\\"\\\\nF0217 20:08:59.398239 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:12Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.738929 4793 generic.go:334] "Generic (PLEG): container finished" podID="61219b87-834d-490d-bf8e-1657a4081739" containerID="1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10" exitCode=0 Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.738992 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kpl4r" event={"ID":"61219b87-834d-490d-bf8e-1657a4081739","Type":"ContainerDied","Data":"1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10"} Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.741507 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-z9xgq" event={"ID":"3073cc3b-c430-444d-b751-5e1416feafa9","Type":"ContainerStarted","Data":"2882d8648b284cb3428143f981b63d12f063af50315706918c2edc7d7d38cbae"} Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.761265 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a159b0be901199a25ed9008e15370b1a4bf1858a4b20bfb4b6b7736f1f3d42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:12Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.797003 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.797044 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.797055 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.797070 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.797081 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:12Z","lastTransitionTime":"2026-02-17T20:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.799444 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9pkqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db67d891-29db-4ee4-a70c-624cb9af6677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d09e75a7c6bcf350e2d5833b1a9b6679042aa5d6699c1194c32320c5f0c8d208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv9g8\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9pkqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:12Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.839620 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf61cc2856b360d3341089d4472c27f8dcf6c7c55831eea511dcf5a931b7a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:12Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.880171 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a786034-a3c6-4693-965a-3bd39bce6caa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c1911a561802d8e92f91e2ca7448754e32dcb5f18cae0906dc070e1d752be38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c883db680c5e015621e98e5b3c06351d9dd6d33e
a28b502241344af8df0fd1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:12Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.899621 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.899651 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.899659 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:12 crc 
kubenswrapper[4793]: I0217 20:09:12.899675 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.899702 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:12Z","lastTransitionTime":"2026-02-17T20:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.921302 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ztwxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c5b745e67f1da59c6b1df97c69bfcbfdc00f4b51a6788751079465a5ce6c177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ndkg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ztwxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:12Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:12 crc kubenswrapper[4793]: I0217 20:09:12.967548 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cad772-a024-49d5-8204-560587b056eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6bf4bf0fcbe5648621dada82af560bd7eac65ce800c8974c59d777f3dc6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageI
D\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57a18cf839c267df5d9a6d343fbc38dbe195fad6d9fff89663d0958a0492a335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe1dc649f7ca667a9fa57e0f96921b449396310fdec0d23dd39d5695ff3c0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cee203033ea02d7ee023522404b8632f64c130e254a84ee96c37222561e4ac1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:12Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.002489 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 
20:09:13.002528 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.002537 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.002552 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.002561 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:13Z","lastTransitionTime":"2026-02-17T20:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.003061 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:13Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.055457 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48cdc9966494a4fb1df14123aaf12571b1d71f9d0349963dd1b3719966abd20c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2071f67c2f1c343d9ad571fe62df1e820675df683aa583fa2563de751f2897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:13Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.088900 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kpl4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61219b87-834d-490d-bf8e-1657a4081739\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kpl4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-17T20:09:13Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.104917 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.104945 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.104954 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.104967 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.104977 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:13Z","lastTransitionTime":"2026-02-17T20:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.119833 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a159b0be901199a25ed9008e15370b1a4bf1858a4b20bfb4b6b7736f1f3d42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:13Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.160614 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9pkqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db67d891-29db-4ee4-a70c-624cb9af6677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d09e75a7c6bcf350e2d5833b1a9b6679042aa5d6699c1194c32320c5f0c8d208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv9g8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9pkqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:13Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.169924 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 20:09:13 crc kubenswrapper[4793]: E0217 20:09:13.170077 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 20:09:21.170035587 +0000 UTC m=+36.461733908 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.170131 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.170178 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:09:13 crc kubenswrapper[4793]: E0217 20:09:13.170283 4793 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 20:09:13 crc kubenswrapper[4793]: E0217 20:09:13.170306 4793 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 20:09:13 crc kubenswrapper[4793]: E0217 20:09:13.170350 4793 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 20:09:21.170332704 +0000 UTC m=+36.462031025 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 20:09:13 crc kubenswrapper[4793]: E0217 20:09:13.170369 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 20:09:21.170360665 +0000 UTC m=+36.462058986 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.207946 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.207972 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.207979 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.207991 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.208003 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:13Z","lastTransitionTime":"2026-02-17T20:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.208728 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n2fmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:13Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.242289 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z9xgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3073cc3b-c430-444d-b751-5e1416feafa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z9xgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:13Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.271169 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.271217 4793 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:09:13 crc kubenswrapper[4793]: E0217 20:09:13.271355 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 20:09:13 crc kubenswrapper[4793]: E0217 20:09:13.271374 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 20:09:13 crc kubenswrapper[4793]: E0217 20:09:13.271388 4793 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 20:09:13 crc kubenswrapper[4793]: E0217 20:09:13.271427 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 20:09:13 crc kubenswrapper[4793]: E0217 20:09:13.271464 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 20:09:13 crc kubenswrapper[4793]: E0217 20:09:13.271479 4793 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 20:09:13 crc kubenswrapper[4793]: E0217 20:09:13.271437 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 20:09:21.271422656 +0000 UTC m=+36.563120977 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 20:09:13 crc kubenswrapper[4793]: E0217 20:09:13.271562 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 20:09:21.271540999 +0000 UTC m=+36.563239420 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.287441 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c8d46b-bddf-41fd-bfe9-829889b911c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3405184556f0b16f2610e42360345bb094d79fe3d713f9dec7c6d28a5124851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d8c33489e77add820819b72d67bd6eca73a11e3fd4a99ae4c5c0136f6257c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a42458e37611ca7ec204f13fd4703a5b1e01189a162f0180be137d614a0379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPa
th\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982055260eb2f38b638b6ee268993806f9a0bf999f21b065659a64144394bc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16cf925bf79279e3553a50dd773c3e6a88f3e8e86837c457c43f1b82c6fd291b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a
1d83a00966f1790092bc1d562f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:13Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.310425 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.310478 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.310503 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.310535 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.310558 4793 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:13Z","lastTransitionTime":"2026-02-17T20:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.324849 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9208cc24-8e8d-4027-af06-291b3b46ee59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"
containerID\\\":\\\"cri-o://1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f3722f78f83475af38958c1c6a4b0593a4c51be7d8dcbd0d32d3b068dee61b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T20:08:59Z\\\",\\\"message\\\":\\\"W0217 20:08:48.786864 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 20:08:48.787257 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771358928 cert, and key in /tmp/serving-cert-1841715996/serving-signer.crt, /tmp/serving-cert-1841715996/serving-signer.key\\\\nI0217 20:08:49.003324 1 observer_polling.go:159] Starting file observer\\\\nW0217 20:08:49.008336 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 20:08:49.008526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 20:08:49.009879 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1841715996/tls.crt::/tmp/serving-cert-1841715996/tls.key\\\\\\\"\\\\nF0217 20:08:59.398239 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:13Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.366291 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf61cc2856b360d3341089d4472c27f8dcf6c7c55831eea511dcf5a931b7a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:13Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.408429 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a786034-a3c6-4693-965a-3bd39bce6caa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c1911a561802d8e92f91e2ca7448754e32dcb5f18cae0906dc070e1d752be38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c883db680c5e015621e98e5b3c06351d9dd6d33e
a28b502241344af8df0fd1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:13Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.413242 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.413327 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.413350 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:13 crc 
kubenswrapper[4793]: I0217 20:09:13.413382 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.413405 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:13Z","lastTransitionTime":"2026-02-17T20:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.449078 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ztwxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c5b745e67f1da59c6b1df97c69bfcbfdc00f4b51a6788751079465a5ce6c177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ndkg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ztwxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:13Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.480954 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48cdc9966494a4fb1df14123aaf12571b1d71f9d0349963dd1b3719966abd20c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2071f67c2f1c343d9ad571fe62df1e820675df683aa583fa2563de751f2897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:13Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.484068 
4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 00:41:30.196042955 +0000 UTC Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.516818 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.516868 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.516882 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.516900 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.516915 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:13Z","lastTransitionTime":"2026-02-17T20:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.531101 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kpl4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61219b87-834d-490d-bf8e-1657a4081739\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kpl4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:13Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 
20:09:13.538234 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.538304 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:09:13 crc kubenswrapper[4793]: E0217 20:09:13.538433 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.538467 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:09:13 crc kubenswrapper[4793]: E0217 20:09:13.538712 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 20:09:13 crc kubenswrapper[4793]: E0217 20:09:13.538881 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.568301 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cad772-a024-49d5-8204-560587b056eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6bf4bf0fcbe5648621dada82af560bd7eac65ce800c8974c59d777f3dc6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cer
ts\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57a18cf839c267df5d9a6d343fbc38dbe195fad6d9fff89663d0958a0492a335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe1dc649f7ca667a9fa57e0f96921b449396310fdec0d23dd39d5695ff3c0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cee203033ea02d7ee023522404b8632f64c130e254a84ee96c37222561e4ac1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":
\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:13Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.602282 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:13Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.618773 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.618818 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:13 
crc kubenswrapper[4793]: I0217 20:09:13.618829 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.618845 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.618856 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:13Z","lastTransitionTime":"2026-02-17T20:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.646043 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:13Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.688238 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:13Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.722838 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.723285 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.723468 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.723646 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.723830 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:13Z","lastTransitionTime":"2026-02-17T20:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.750374 4793 generic.go:334] "Generic (PLEG): container finished" podID="61219b87-834d-490d-bf8e-1657a4081739" containerID="43cff5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937" exitCode=0 Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.750444 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kpl4r" event={"ID":"61219b87-834d-490d-bf8e-1657a4081739","Type":"ContainerDied","Data":"43cff5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937"} Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.753375 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-z9xgq" event={"ID":"3073cc3b-c430-444d-b751-5e1416feafa9","Type":"ContainerStarted","Data":"51236e50a5c628e38556b1fc91434f66c7a2e8885a331b358825bcea2c3d9ce7"} Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.772511 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ztwxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c5b745e67f1da59c6b1df97c69bfcbfdc00f4b51a6788751079465a5ce6c177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ndkg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ztwxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:13Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.798538 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf61cc2856b360d3341089d4472c27f8dcf6c7c55831eea511dcf5a931b7a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-17T20:09:13Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.813085 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a786034-a3c6-4693-965a-3bd39bce6caa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c1911a561802d8e92f91e2ca7448754e32dcb5f18cae0906dc070e1d752be38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\"
:\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c883db680c5e015621e98e5b3c06351d9dd6d33ea28b502241344af8df0fd1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:13Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.827802 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:13 crc kubenswrapper[4793]: 
I0217 20:09:13.827880 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.827906 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.827937 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.827960 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:13Z","lastTransitionTime":"2026-02-17T20:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.844558 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:13Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.882916 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48cdc9966494a4fb1df14123aaf12571b1d71f9d0349963dd1b3719966abd20c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2071f67c2f1c343d9ad571fe62df1e820675df683aa583fa2563de751f2897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:13Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.923633 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kpl4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61219b87-834d-490d-bf8e-1657a4081739\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43cff5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43cff5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\
\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kpl4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:13Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.932408 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.932453 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.932468 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.932485 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.932497 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:13Z","lastTransitionTime":"2026-02-17T20:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:13 crc kubenswrapper[4793]: I0217 20:09:13.965013 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cad772-a024-49d5-8204-560587b056eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6bf4bf0fcbe5648621dada82af560bd7eac65ce800c8974c59d777f3dc6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57a18cf839c267df5d9a6d343fbc38dbe195fad6d9fff89663d0958a0492a335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe1dc649f7ca667a9fa57e0f96921b449396310fdec0d23dd39d5695ff3c0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cee203033ea02d7ee023522404b8632f64c130e254a84ee96c37222561e4ac1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/
crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:13Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.001997 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:14Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.035356 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.035436 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.035458 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.035485 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.035506 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:14Z","lastTransitionTime":"2026-02-17T20:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.045577 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:14Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.085816 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a159b0be901199a25ed9008e15370b1a4bf1858a4b20bfb4b6b7736f1f3d42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:14Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.122373 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9pkqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db67d891-29db-4ee4-a70c-624cb9af6677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d09e75a7c6bcf350e2d5833b1a9b6679042aa5d6699c1194c32320c5f0c8d208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv9g8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9pkqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:14Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.137461 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.137499 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.137510 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.137526 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.137536 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:14Z","lastTransitionTime":"2026-02-17T20:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.178657 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n2fmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:14Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.202741 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z9xgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3073cc3b-c430-444d-b751-5e1416feafa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z9xgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:14Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.240041 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.240076 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.240087 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 
20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.240103 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.240114 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:14Z","lastTransitionTime":"2026-02-17T20:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.265000 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c8d46b-bddf-41fd-bfe9-829889b911c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3405184556f0b16f2610e42360345bb094d79fe3d713f9dec7c6d28a5124851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d8c33489e77add820819b72d67bd6eca73a11e3fd4a99ae4c5c0136f6257c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a42458e37611ca7ec204f13fd4703a5b1e01189a162f0180be137d614a0379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982055260eb2f38b638b6ee268993806f9a0bf999f21b065659a64144394bc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16cf925bf79279e3553a50dd773c3e6a88f3e8e86837c457c43f1b82c6fd291b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/
var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:14Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.287147 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9208cc24-8e8d-4027-af06-291b3b46ee59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f3722f78f83475af38958c1c6a4b0593a4c51be7d8dcbd0d32d3b068dee61b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T20:08:59Z\\\"
,\\\"message\\\":\\\"W0217 20:08:48.786864 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 20:08:48.787257 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771358928 cert, and key in /tmp/serving-cert-1841715996/serving-signer.crt, /tmp/serving-cert-1841715996/serving-signer.key\\\\nI0217 20:08:49.003324 1 observer_polling.go:159] Starting file observer\\\\nW0217 20:08:49.008336 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 20:08:49.008526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 20:08:49.009879 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1841715996/tls.crt::/tmp/serving-cert-1841715996/tls.key\\\\\\\"\\\\nF0217 20:08:59.398239 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:14Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.324030 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cad772-a024-49d5-8204-560587b056eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6bf4bf0fcbe5648621dada82af560bd7eac65ce800c8974c59d777f3dc6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57a18cf839c267df5d9a6d343fbc38dbe195fad6d9fff89663d0958a0492a335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe1dc649f7ca667a9fa57e0f96921b449396310fdec0d23dd39d5695ff3c0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cee203033ea02d7ee023522404b8632f64c130e254a84ee96c37222561e4ac1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:14Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.345873 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.345932 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.345950 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.345975 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.345993 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:14Z","lastTransitionTime":"2026-02-17T20:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.362646 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:14Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.406125 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48cdc9966494a4fb1df14123aaf12571b1d71f9d0349963dd1b3719966abd20c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2071f67c2f1c343d9ad571fe62df1e820675df683aa583fa2563de751f2897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:14Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.448308 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.448353 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.448363 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.448378 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.448390 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:14Z","lastTransitionTime":"2026-02-17T20:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.450702 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kpl4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61219b87-834d-490d-bf8e-1657a4081739\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43cff5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43cff5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-kpl4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:14Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.484717 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 20:40:06.606190632 +0000 UTC Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.486488 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:14Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.530603 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:14Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.551127 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.551163 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.551176 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.551190 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.551201 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:14Z","lastTransitionTime":"2026-02-17T20:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.606758 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c8d46b-bddf-41fd-bfe9-829889b911c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3405184556f0b16f2610e42360345bb094d79fe3d713f9dec7c6d28a5124851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d8c33489e77add820819b72d67bd6eca73a11e3fd4a99ae4c5c0136f6257c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a42458e37611ca7ec204f13fd4703a5b1e01189a162f0180be137d614a0379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982055260eb2f38b638b6ee268993806f9a0bf999f21b065659a64144394bc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16cf925bf79279e3553a50dd773c3e6a88f3e8e86837c457c43f1b82c6fd291b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:14Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.624448 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9208cc24-8e8d-4027-af06-291b3b46ee59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f3722f78f83475af38958c1c6a4b0593a4c51be7d8dcbd0d32d3b068dee61b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T20:08:59Z\\\"
,\\\"message\\\":\\\"W0217 20:08:48.786864 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 20:08:48.787257 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771358928 cert, and key in /tmp/serving-cert-1841715996/serving-signer.crt, /tmp/serving-cert-1841715996/serving-signer.key\\\\nI0217 20:08:49.003324 1 observer_polling.go:159] Starting file observer\\\\nW0217 20:08:49.008336 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 20:08:49.008526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 20:08:49.009879 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1841715996/tls.crt::/tmp/serving-cert-1841715996/tls.key\\\\\\\"\\\\nF0217 20:08:59.398239 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:14Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.648612 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a159b0be901199a25ed9008e15370b1a4bf1858a4b20bfb4b6b7736f1f3d42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:14Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.654210 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.654240 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.654251 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.654266 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.654277 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:14Z","lastTransitionTime":"2026-02-17T20:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.680329 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9pkqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db67d891-29db-4ee4-a70c-624cb9af6677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d09e75a7c6bcf350e2d5833b1a9b6679042aa5d6699c1194c32320c5f0c8d208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv9g8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9pkqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:14Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.731120 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n2fmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:14Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.757868 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.757922 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.757940 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.757964 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.757983 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:14Z","lastTransitionTime":"2026-02-17T20:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.761421 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" event={"ID":"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2","Type":"ContainerStarted","Data":"f3161d31a16ce66a75b2928487822f0c1e627deb1564e7f792e96420ebde1ae7"} Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.765871 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z9xgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3073cc3b-c430-444d-b751-5e1416feafa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51236e50a5c628e38556b1fc91434f66c7a2e8885a331b358825bcea2c3d9ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z9xgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:14Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.766557 4793 generic.go:334] "Generic (PLEG): container finished" podID="61219b87-834d-490d-bf8e-1657a4081739" containerID="b57bb0575dd8e0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0" exitCode=0 Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.766737 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kpl4r" event={"ID":"61219b87-834d-490d-bf8e-1657a4081739","Type":"ContainerDied","Data":"b57bb0575dd8e0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0"} Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.807772 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf61cc2856b360d3341089d4472c27f8dcf6c7c55831eea511dcf5a931b7a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T20:09:14Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.842864 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a786034-a3c6-4693-965a-3bd39bce6caa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c1911a561802d8e92f91e2ca7448754e32dcb5f18cae0906dc070e1d752be38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c883db680c5e015621e98e5b3c06351d9dd6d33ea28b502241344af8df0fd1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:14Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.860265 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 
20:09:14.860305 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.860317 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.860332 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.860343 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:14Z","lastTransitionTime":"2026-02-17T20:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.885247 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ztwxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c5b745e67f1da59c6b1df97c69bfcbfdc00f4b51a6788751079465a5ce6c177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ndkg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ztwxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:14Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.932871 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n2fmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:14Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.960497 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z9xgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3073cc3b-c430-444d-b751-5e1416feafa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51236e50a5c628e38556b1fc91434f66c7a2e8885a331b358825bcea2c3d9ce7\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z9xgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:14Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.962329 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.962360 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.962368 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 
20:09:14.962383 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:14 crc kubenswrapper[4793]: I0217 20:09:14.962395 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:14Z","lastTransitionTime":"2026-02-17T20:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.013210 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c8d46b-bddf-41fd-bfe9-829889b911c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3405184556f0b16f2610e42360345bb094d79fe3d713f9dec7c6d28a5124851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d8c33489e77add820819b72d67bd6eca73a11e3fd4a99ae4c5c0136f6257c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a42458e37611ca7ec204f13fd4703a5b1e01189a162f0180be137d614a0379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982055260eb2f38b638b6ee268993806f9a0bf999f21b065659a64144394bc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16cf925bf79279e3553a50dd773c3e6a88f3e8e86837c457c43f1b82c6fd291b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-
dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ec
d6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:15Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.049801 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9208cc24-8e8d-4027-af06-291b3b46ee59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f3722f78f83475af38958c1c6a4b0593a4c51be7d8dcbd0d32d3b068dee61b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T20:08:59Z\\\"
,\\\"message\\\":\\\"W0217 20:08:48.786864 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 20:08:48.787257 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771358928 cert, and key in /tmp/serving-cert-1841715996/serving-signer.crt, /tmp/serving-cert-1841715996/serving-signer.key\\\\nI0217 20:08:49.003324 1 observer_polling.go:159] Starting file observer\\\\nW0217 20:08:49.008336 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 20:08:49.008526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 20:08:49.009879 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1841715996/tls.crt::/tmp/serving-cert-1841715996/tls.key\\\\\\\"\\\\nF0217 20:08:59.398239 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:15Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.065259 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.065345 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.065371 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.065404 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.065422 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:15Z","lastTransitionTime":"2026-02-17T20:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.083860 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a159b0be901199a25ed9008e15370b1a4bf1858a4b20bfb4b6b7736f1f3d42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:15Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.121746 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9pkqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db67d891-29db-4ee4-a70c-624cb9af6677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d09e75a7c6bcf350e2d5833b1a9b6679042aa5d6699c1194c32320c5f0c8d208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv9g8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9pkqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:15Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.162957 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf61cc2856b360d3341089d4472c27f8dcf6c7c55831eea511dcf5a931b7a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T20:09:15Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.167750 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.167902 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.167990 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.168149 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.168268 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:15Z","lastTransitionTime":"2026-02-17T20:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.204299 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a786034-a3c6-4693-965a-3bd39bce6caa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c1911a561802d8e92f91e2ca7448754e32dcb5f18cae0906dc070e1d752be38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c883db680c5e015621e98e5b3c06351d9dd6d33ea28b502241344af8df0fd1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:15Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.246942 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ztwxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c5b745e67f1da59c6b1df97c69bfcbfdc00f4b51a6788751079465a5ce6c177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ndkg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ztwxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:15Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.271290 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:15 crc 
kubenswrapper[4793]: I0217 20:09:15.271347 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.271365 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.271389 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.271407 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:15Z","lastTransitionTime":"2026-02-17T20:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.281473 4793 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.288297 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cad772-a024-49d5-8204-560587b056eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6bf4bf0fcbe5648621dada82af560bd7eac65ce800c8974c59d777f3dc6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57a18cf839c267df5d9a6d343fbc38dbe195fad6d9fff89663d0958a0492a335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe1dc649f7ca667a9fa57e0f96921b449396310fdec0d23dd39d5695ff3c0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cee203033ea02d7ee023522404b8632f64c130e254a84ee96c37222561e4ac1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc/status\": read tcp 38.102.83.155:50176->38.102.83.155:6443: use of closed network connection" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.326781 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:15Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.363741 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48cdc9966494a4fb1df14123aaf12571b1d71f9d0349963dd1b3719966abd20c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2071f67c2f1c343d9ad571fe62df1e820675df683aa583fa2563de751f2897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:15Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.374169 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.374194 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.374204 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.374217 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.374227 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:15Z","lastTransitionTime":"2026-02-17T20:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.409410 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kpl4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61219b87-834d-490d-bf8e-1657a4081739\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43cff5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43cff5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57bb0575dd8e0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b57bb0575dd8e0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kpl4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:15Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.442004 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:15Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.476768 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.476806 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.476817 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:15 crc 
kubenswrapper[4793]: I0217 20:09:15.476834 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.476845 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:15Z","lastTransitionTime":"2026-02-17T20:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.485088 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 21:24:39.921869333 +0000 UTC Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.487621 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:15Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.538317 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:09:15 crc kubenswrapper[4793]: E0217 20:09:15.538512 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.538632 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:09:15 crc kubenswrapper[4793]: E0217 20:09:15.538787 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.538937 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:09:15 crc kubenswrapper[4793]: E0217 20:09:15.539016 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.560926 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a786034-a3c6-4693-965a-3bd39bce6caa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c1911a561802d8e92f91e2ca7448754e32dcb5f18cae0906dc070e1d752be38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mo
untPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c883db680c5e015621e98e5b3c06351d9dd6d33ea28b502241344af8df0fd1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:15Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.583774 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ztwxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c5b745e67f1da59c6b1df97c69bfcbfdc00f4b51a6788751079465a5ce6c177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ndkg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ztwxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:15Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.584341 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:15 crc 
kubenswrapper[4793]: I0217 20:09:15.584420 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.584443 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.584474 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.584498 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:15Z","lastTransitionTime":"2026-02-17T20:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.606808 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf61cc2856b360d3341089d4472c27f8dcf6c7c55831eea511dcf5a931b7a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T20:09:15Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.646241 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cad772-a024-49d5-8204-560587b056eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6bf4bf0fcbe5648621dada82af560bd7eac65ce800c8974c59d777f3dc6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57a18cf839c267df5d9a6d343fbc38dbe195fad6d9fff89663d0958a0492a335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe1dc649f7ca667a9fa57e0f96921b449396310fdec0d23dd39d5695ff3c0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cee203033ea02d7ee023522404b8632f64c130e254a84ee96c37222561e4ac1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:15Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.682354 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:15Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.686761 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.686792 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:15 crc 
kubenswrapper[4793]: I0217 20:09:15.686803 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.686819 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.686830 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:15Z","lastTransitionTime":"2026-02-17T20:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.725309 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48cdc9966494a4fb1df14123aaf12571b1d71f9d0349963dd1b3719966abd20c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2071f67c2f1c343d9ad571fe62df1e820675df683aa583fa2563de751f2897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:15Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.762838 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kpl4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61219b87-834d-490d-bf8e-1657a4081739\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43cff5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43cff5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57bb0575dd8e0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b57bb0575dd8e0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kpl4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:15Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.775999 4793 generic.go:334] "Generic (PLEG): container finished" podID="61219b87-834d-490d-bf8e-1657a4081739" containerID="8a556ed90653e74034af578772bca81976205e4bba300489eb17d80cdd84989f" exitCode=0 Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.776044 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kpl4r" event={"ID":"61219b87-834d-490d-bf8e-1657a4081739","Type":"ContainerDied","Data":"8a556ed90653e74034af578772bca81976205e4bba300489eb17d80cdd84989f"} Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.792210 
4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.792283 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.792307 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.792337 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.792358 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:15Z","lastTransitionTime":"2026-02-17T20:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.811412 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:15Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.847664 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:15Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.883733 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9208cc24-8e8d-4027-af06-291b3b46ee59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f3722f78f83475af38958c1c6a4b0593a4c51be7d8dcbd0d32d3b068dee61b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T20:08:59Z\\\"
,\\\"message\\\":\\\"W0217 20:08:48.786864 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 20:08:48.787257 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771358928 cert, and key in /tmp/serving-cert-1841715996/serving-signer.crt, /tmp/serving-cert-1841715996/serving-signer.key\\\\nI0217 20:08:49.003324 1 observer_polling.go:159] Starting file observer\\\\nW0217 20:08:49.008336 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 20:08:49.008526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 20:08:49.009879 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1841715996/tls.crt::/tmp/serving-cert-1841715996/tls.key\\\\\\\"\\\\nF0217 20:08:59.398239 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:15Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.895777 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.895810 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.895820 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.895834 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.895843 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:15Z","lastTransitionTime":"2026-02-17T20:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.926066 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a159b0be901199a25ed9008e15370b1a4bf1858a4b20bfb4b6b7736f1f3d42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:15Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.964265 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9pkqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db67d891-29db-4ee4-a70c-624cb9af6677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d09e75a7c6bcf350e2d5833b1a9b6679042aa5d6699c1194c32320c5f0c8d208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv9g8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9pkqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:15Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.998428 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.998454 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.998462 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.998475 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:15 crc kubenswrapper[4793]: I0217 20:09:15.998484 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:15Z","lastTransitionTime":"2026-02-17T20:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.016457 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n2fmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:16Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.042225 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z9xgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3073cc3b-c430-444d-b751-5e1416feafa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51236e50a5c628e38556b1fc91434f66c7a2e8885a331b358825bcea2c3d9ce7\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z9xgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:16Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.098974 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c8d46b-bddf-41fd-bfe9-829889b911c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3405184556f0b16f2610e42360345bb094d79fe3d713f9dec7c6d28a5124851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d8c33489e77add820819b72d67bd6eca73a11e3fd4a99ae4c5c0136f6257c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a42458e37611ca7ec204f13fd4703a5b1e01189a162f0180be137d614a0379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982055260eb2f38b638b6ee268993806f9a0bf999f21b065659a64144394bc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16cf925bf79279e3553a50dd773c3e6a88f3e8e86837c457c43f1b82c6fd291b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:16Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.100858 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.100889 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.100900 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.100926 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.100941 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:16Z","lastTransitionTime":"2026-02-17T20:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.125577 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:16Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.165939 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:16Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.203593 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.203620 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.203628 4793 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.203641 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.203649 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:16Z","lastTransitionTime":"2026-02-17T20:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.221629 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c8d46b-bddf-41fd-bfe9-829889b911c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3405184556f0b16f2610e42360345bb094d79fe3d713f9dec7c6d28a5124851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d8c33489e77add820819b72d67bd6eca73a11e3fd4a99ae4c5c0136f6257c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a42458e37611ca7ec204f13fd4703a5b1e01189a162f0180be137d614a0379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982055260eb2f38b638b6ee268993806f9a0bf999f21b065659a64144394bc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16cf925bf79279e3553a50dd773c3e6a88f3e8e86837c457c43f1b82c6fd291b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c96
b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:16Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.250823 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9208cc24-8e8d-4027-af06-291b3b46ee59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f3722f78f83475af38958c1c6a4b0593a4c51be7d8dcbd0d32d3b068dee61b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T20:08:59Z\\\"
,\\\"message\\\":\\\"W0217 20:08:48.786864 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 20:08:48.787257 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771358928 cert, and key in /tmp/serving-cert-1841715996/serving-signer.crt, /tmp/serving-cert-1841715996/serving-signer.key\\\\nI0217 20:08:49.003324 1 observer_polling.go:159] Starting file observer\\\\nW0217 20:08:49.008336 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 20:08:49.008526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 20:08:49.009879 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1841715996/tls.crt::/tmp/serving-cert-1841715996/tls.key\\\\\\\"\\\\nF0217 20:08:59.398239 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:16Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.289752 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a159b0be901199a25ed9008e15370b1a4bf1858a4b20bfb4b6b7736f1f3d42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:16Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.305850 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.305874 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.305882 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.305896 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.305905 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:16Z","lastTransitionTime":"2026-02-17T20:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.324267 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9pkqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db67d891-29db-4ee4-a70c-624cb9af6677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d09e75a7c6bcf350e2d5833b1a9b6679042aa5d6699c1194c32320c5f0c8d208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv9g8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9pkqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:16Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.369718 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n2fmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:16Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.403258 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z9xgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3073cc3b-c430-444d-b751-5e1416feafa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51236e50a5c628e38556b1fc91434f66c7a2e8885a331b358825bcea2c3d9ce7\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z9xgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:16Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.408605 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.408633 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.408643 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 
20:09:16.408663 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.408674 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:16Z","lastTransitionTime":"2026-02-17T20:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.442084 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf61cc2856b360d3341089d4472c27f8dcf6c7c55831eea511dcf5a931b7a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:16Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.484882 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a786034-a3c6-4693-965a-3bd39bce6caa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c1911a561802d8e92f91e2ca7448754e32dcb5f18cae0906dc070e1d752be38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c883db680c5e015621e98e5b3c06351d9dd6d33e
a28b502241344af8df0fd1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:16Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.485954 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 18:46:38.470393405 +0000 UTC Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.512427 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.512477 4793 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.512501 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.512533 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.512556 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:16Z","lastTransitionTime":"2026-02-17T20:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.526989 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ztwxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c5b745e67f1da59c6b1df97c69bfcbfdc00f4b51a6788751079465a5ce6c177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ndkg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ztwxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:16Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.561990 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cad772-a024-49d5-8204-560587b056eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6bf4bf0fcbe5648621dada82af560bd7eac65ce800c8974c59d777f3dc6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57a18cf839c267df5d9a6d343fbc38dbe195fad6d9fff89663d0958a0492a335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe1dc649f7ca667a9fa57e0f96921b449396310fdec0d23dd39d5695ff3c0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cee203033ea02d7ee023522404b8632f64c130e254a84ee96c37222561e4ac1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:16Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.608060 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:16Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.614914 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.614961 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.614975 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 
20:09:16.614994 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.615009 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:16Z","lastTransitionTime":"2026-02-17T20:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.639043 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48cdc9966494a4fb1df14123aaf12571b1d71f9d0349963dd1b3719966abd20c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2071f67c2f1c343d9ad571fe62df1e820675df683aa583fa2563de751f2897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:16Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.681405 4793 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kpl4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61219b87-834d-490d-bf8e-1657a4081739\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43cff5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43cff5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57bb0575dd8e0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b57bb0575dd8e0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a556ed90653e74034af578772bca81976205e4bba300489eb17d80cdd84989f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a556ed90653e74034af578772bca81976205e4bba300489eb17d80cdd84989f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kpl4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:16Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.717488 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.717532 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.717550 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.717575 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.717594 4793 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:16Z","lastTransitionTime":"2026-02-17T20:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.786224 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" event={"ID":"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2","Type":"ContainerStarted","Data":"9f1200f1be71bc7aefbd86e4498bec601b56379395d6d22553c6bf4582a11251"} Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.786849 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.795268 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kpl4r" event={"ID":"61219b87-834d-490d-bf8e-1657a4081739","Type":"ContainerStarted","Data":"9f6b9b20c5ff10a449eac5b6396cbcab582a5bbf692d7c9ac880e3cdfa3a63cb"} Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.807084 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cad772-a024-49d5-8204-560587b056eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6bf4bf0fcbe5648621dada82af560bd7eac65ce800c8974c59d777f3dc6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57a18cf839c267df5d9a6d343fbc38dbe195fad6d9fff89663d0958a0492a335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe1dc649f7ca667a9fa57e0f96921b449396310fdec0d23dd39d5695ff3c0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cee203033ea02d7ee023522404b8632f64c130e254a84ee96c37222561e4ac1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:16Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.820925 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.821004 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.821028 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.821069 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.821095 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:16Z","lastTransitionTime":"2026-02-17T20:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.831439 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:16Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.833702 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.850929 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48cdc9966494a4fb1df14123aaf12571b1d71f9d0349963dd1b3719966abd20c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2071f67c2f1c343d9ad571fe62df1e820675df683aa583fa2563de751f2897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:16Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.873397 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kpl4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61219b87-834d-490d-bf8e-1657a4081739\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43cff5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43cff5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57bb0575dd8e0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b57bb0575dd8e0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a556ed90653e74034af578772bca81976205e4bba300489eb17d80cdd84989f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a556ed90653e74034af578772bca81976205e4bba300489eb17d80cdd84989f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kpl4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:16Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.890512 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:16Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.923383 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:16Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.924086 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.924141 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.924159 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.924183 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.924200 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:16Z","lastTransitionTime":"2026-02-17T20:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:16 crc kubenswrapper[4793]: I0217 20:09:16.967358 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c8d46b-bddf-41fd-bfe9-829889b911c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3405184556f0b16f2610e42360345bb094d79fe3d713f9dec7c6d28a5124851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d8c33489e77add820819b72d67bd6eca73a11e3fd4a99ae4c5c0136f6257c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a42458e37611ca7ec204f13fd4703a5b1e01189a162f0180be137d614a0379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982055260eb2f38b638b6ee268993806f9a0bf999f21b065659a64144394bc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16cf925bf79279e3553a50dd773c3e6a88f3e8e86837c457c43f1b82c6fd291b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:16Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.011988 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9208cc24-8e8d-4027-af06-291b3b46ee59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f3722f78f83475af38958c1c6a4b0593a4c51be7d8dcbd0d32d3b068dee61b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T20:08:59Z\\\"
,\\\"message\\\":\\\"W0217 20:08:48.786864 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 20:08:48.787257 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771358928 cert, and key in /tmp/serving-cert-1841715996/serving-signer.crt, /tmp/serving-cert-1841715996/serving-signer.key\\\\nI0217 20:08:49.003324 1 observer_polling.go:159] Starting file observer\\\\nW0217 20:08:49.008336 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 20:08:49.008526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 20:08:49.009879 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1841715996/tls.crt::/tmp/serving-cert-1841715996/tls.key\\\\\\\"\\\\nF0217 20:08:59.398239 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:17Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.026398 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.026445 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.026462 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.026483 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.026499 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:17Z","lastTransitionTime":"2026-02-17T20:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.044211 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a159b0be901199a25ed9008e15370b1a4bf1858a4b20bfb4b6b7736f1f3d42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:17Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.082137 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9pkqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db67d891-29db-4ee4-a70c-624cb9af6677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d09e75a7c6bcf350e2d5833b1a9b6679042aa5d6699c1194c32320c5f0c8d208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv9g8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9pkqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:17Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.129227 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.129277 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.129295 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.129322 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.129346 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:17Z","lastTransitionTime":"2026-02-17T20:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.134371 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3403f83a9b35ceeb1a09a45b62e25e74a10d89d309f00819598790ba6c84a0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e4375a429735c20aa6093586974f36afdd51b82d65270a693d14aefe0b0ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a66eb3dc7a2ab1a3e5e720718416aba285d92566aa472515bc983f0794f48314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61506b0fdf8d54ab45b280e5cf461ab69bc233211c90bc60b339eef643ebb058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b08b2c72ecc14b8e31ad5d7fe12851c01bf38ff98fe406dc26218d892ee9f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09885fdf835300247d4a89f7e2ded3ab350786be47130106f4cc679a5477c451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f1200f1be71bc7aefbd86e4498bec601b56379395d6d22553c6bf4582a11251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3161d31a16ce66a75b2928487822f0c1e627deb1564e7f792e96420ebde1ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n2fmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:17Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.162760 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z9xgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3073cc3b-c430-444d-b751-5e1416feafa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51236e50a5c628e38556b1fc91434f66c7a2e8885a331b358825bcea2c3d9ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z9xgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:17Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.203745 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf61cc2856b360d3341089d4472c27f8dcf6c7c55831eea511dcf5a931b7a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T20:09:17Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.232410 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.232484 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.232538 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.232573 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.232637 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:17Z","lastTransitionTime":"2026-02-17T20:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.246437 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a786034-a3c6-4693-965a-3bd39bce6caa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c1911a561802d8e92f91e2ca7448754e32dcb5f18cae0906dc070e1d752be38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c883db680c5e015621e98e5b3c06351d9dd6d33ea28b502241344af8df0fd1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:17Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.290099 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ztwxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c5b745e67f1da59c6b1df97c69bfcbfdc00f4b51a6788751079465a5ce6c177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ndkg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ztwxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:17Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.325204 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf61cc2856b360d3341089d4472c27f8dcf6c7c55831eea511dcf5a931b7a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-17T20:09:17Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.339216 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.339316 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.339346 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.339379 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.339406 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:17Z","lastTransitionTime":"2026-02-17T20:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.366900 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a786034-a3c6-4693-965a-3bd39bce6caa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c1911a561802d8e92f91e2ca7448754e32dcb5f18cae0906dc070e1d752be38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c883db680c5e015621e98e5b3c06351d9dd6d33ea28b502241344af8df0fd1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:17Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.418425 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ztwxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c5b745e67f1da59c6b1df97c69bfcbfdc00f4b51a6788751079465a5ce6c177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ndkg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ztwxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:17Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.442669 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:17 crc 
kubenswrapper[4793]: I0217 20:09:17.442752 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.442765 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.442783 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.442795 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:17Z","lastTransitionTime":"2026-02-17T20:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.452830 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cad772-a024-49d5-8204-560587b056eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6bf4bf0fcbe5648621dada82af560bd7eac65ce800c8974c59d777f3dc6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57a18cf839c267df5d9a6d343fbc38dbe195fad6d9fff89663d0958a0492a335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe1dc649f7ca667a9fa57e0f96921b449396310fdec0d23dd39d5695ff3c0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cee203033ea02d7ee023522404b8632f64c130e254a84ee96c37222561e4ac1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:17Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.486578 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:17Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.486628 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 09:01:37.435697933 +0000 UTC Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.522845 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48cdc9966494a4fb1df14123aaf12571b1d71f9d0349963dd1b3719966abd20c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2071f67c2f1c343d9ad571fe62df1e820675df683aa583fa2563de751f2897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:17Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.537935 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.538032 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.538115 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:09:17 crc kubenswrapper[4793]: E0217 20:09:17.538112 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 20:09:17 crc kubenswrapper[4793]: E0217 20:09:17.538217 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 20:09:17 crc kubenswrapper[4793]: E0217 20:09:17.538351 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.544534 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.544590 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.544609 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.544631 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.544650 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:17Z","lastTransitionTime":"2026-02-17T20:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.570832 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kpl4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61219b87-834d-490d-bf8e-1657a4081739\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6b9b20c5ff10a449eac5b6396cbcab582a5bbf692d7c9ac880e3cdfa3a63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43cff5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43cff5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57bb0575dd8e0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b57bb0575dd8e0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a556ed90653e74034af578772bca81976205e4bba300489eb17d80cdd84989f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a556ed90653e74034af578772bca81976205e4bba300489eb17d80cdd84989f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kpl4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:17Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.608483 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:17Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.639384 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:17Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.647138 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.647215 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.647240 4793 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.647272 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.647297 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:17Z","lastTransitionTime":"2026-02-17T20:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.697723 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c8d46b-bddf-41fd-bfe9-829889b911c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3405184556f0b16f2610e42360345bb094d79fe3d713f9dec7c6d28a5124851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d8c33489e77add820819b72d67bd6eca73a11e3fd4a99ae4c5c0136f6257c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a42458e37611ca7ec204f13fd4703a5b1e01189a162f0180be137d614a0379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982055260eb2f38b638b6ee268993806f9a0bf999f21b065659a64144394bc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16cf925bf79279e3553a50dd773c3e6a88f3e8e86837c457c43f1b82c6fd291b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c96
b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:17Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.731009 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9208cc24-8e8d-4027-af06-291b3b46ee59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f3722f78f83475af38958c1c6a4b0593a4c51be7d8dcbd0d32d3b068dee61b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T20:08:59Z\\\"
,\\\"message\\\":\\\"W0217 20:08:48.786864 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 20:08:48.787257 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771358928 cert, and key in /tmp/serving-cert-1841715996/serving-signer.crt, /tmp/serving-cert-1841715996/serving-signer.key\\\\nI0217 20:08:49.003324 1 observer_polling.go:159] Starting file observer\\\\nW0217 20:08:49.008336 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 20:08:49.008526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 20:08:49.009879 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1841715996/tls.crt::/tmp/serving-cert-1841715996/tls.key\\\\\\\"\\\\nF0217 20:08:59.398239 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:17Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.750211 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.750282 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.750303 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.750329 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.750347 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:17Z","lastTransitionTime":"2026-02-17T20:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.769562 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a159b0be901199a25ed9008e15370b1a4bf1858a4b20bfb4b6b7736f1f3d42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:17Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.798308 4793 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.798886 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.805630 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9pkqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db67d891-29db-4ee4-a70c-624cb9af6677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d09e75a7c6bcf350e2d5833b1a9b6679042aa5d6699c1194c32320c5f0c8d208\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv9g8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9pkqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:17Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.830790 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.854094 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.854159 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.854177 4793 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.854204 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.854222 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:17Z","lastTransitionTime":"2026-02-17T20:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.858273 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3403f83a9b35ceeb1a09a45b62e25e74a10d89d309f00819598790ba6c84a0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e4375a429735c20aa6093586974f36afdd51b82d65270a693d14aefe0b0ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a66eb3dc7a2ab1a3e5e720718416aba285d92566aa472515bc983f0794f48314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61506b0fdf8d54ab45b280e5cf461ab69bc233211c90bc60b339eef643ebb058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b08b2c72ecc14b8e31ad5d7fe12851c01bf38ff98fe406dc26218d892ee9f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09885fdf835300247d4a89f7e2ded3ab350786be47130106f4cc679a5477c451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f1200f1be71bc7aefbd86e4498bec601b56379395d6d22553c6bf4582a11251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3161d31a16ce66a75b2928487822f0c1e627deb1564e7f792e96420ebde1ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n2fmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:17Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.880829 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z9xgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3073cc3b-c430-444d-b751-5e1416feafa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51236e50a5c628e38556b1fc91434f66c7a2e8885a331b358825bcea2c3d9ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z9xgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:17Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.925479 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:17Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.957593 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:17 crc 
kubenswrapper[4793]: I0217 20:09:17.957631 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.957640 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.957655 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.957666 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:17Z","lastTransitionTime":"2026-02-17T20:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:17 crc kubenswrapper[4793]: I0217 20:09:17.963511 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:17Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.012361 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9208cc24-8e8d-4027-af06-291b3b46ee59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f3722f78f83475af38958c1c6a4b0593a4c51be7d8dcbd0d32d3b068dee61b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T20:08:59Z\\\"
,\\\"message\\\":\\\"W0217 20:08:48.786864 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 20:08:48.787257 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771358928 cert, and key in /tmp/serving-cert-1841715996/serving-signer.crt, /tmp/serving-cert-1841715996/serving-signer.key\\\\nI0217 20:08:49.003324 1 observer_polling.go:159] Starting file observer\\\\nW0217 20:08:49.008336 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 20:08:49.008526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 20:08:49.009879 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1841715996/tls.crt::/tmp/serving-cert-1841715996/tls.key\\\\\\\"\\\\nF0217 20:08:59.398239 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:18Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.043435 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a159b0be901199a25ed9008e15370b1a4bf1858a4b20bfb4b6b7736f1f3d42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:18Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.060084 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.060112 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.060120 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.060132 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.060143 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:18Z","lastTransitionTime":"2026-02-17T20:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.081433 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9pkqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db67d891-29db-4ee4-a70c-624cb9af6677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d09e75a7c6bcf350e2d5833b1a9b6679042aa5d6699c1194c32320c5f0c8d208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv9g8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9pkqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:18Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.137195 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3403f83a9b35ceeb1a09a45b62e25e74a10d89d309f00819598790ba6c84a0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e4375a429735c20aa6093586974f36afdd51b82d65270a693d14aefe0b0ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a66eb3dc7a2ab1a3e5e720718416aba285d92566aa472515bc983f0794f48314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61506b0fdf8d54ab45b280e5cf461ab69bc233211c90bc60b339eef643ebb058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b08b2c72ecc14b8e31ad5d7fe12851c01bf38ff98fe406dc26218d892ee9f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09885fdf835300247d4a89f7e2ded3ab350786be47130106f4cc679a5477c451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f1200f1be71bc7aefbd86e4498bec601b56379395d6d22553c6bf4582a11251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3161d31a16ce66a75b2928487822f0c1e627deb1564e7f792e96420ebde1ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n2fmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:18Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.163238 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.163293 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.163308 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.163326 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.163341 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:18Z","lastTransitionTime":"2026-02-17T20:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.172786 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z9xgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3073cc3b-c430-444d-b751-5e1416feafa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51236e50a5c628e38556b1fc91434f66c7a2e8885a331b358825bcea2c3d9ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z9xgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:18Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.222210 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c8d46b-bddf-41fd-bfe9-829889b911c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3405184556f0b16f2610e42360345bb094d79fe3d713f9dec7c6d28a5124851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d8c33489e77add820819b72d67bd6eca73a11e3fd4a99ae4c5c0136f6257c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a42458e37611ca7ec204f13fd4703a5b1e01189a162f0180be137d614a0379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982055260eb2f38b638b6ee268993806f9a0bf999f21b065659a64144394bc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16cf925bf79279e3553a50dd773c3e6a88f3e8e86837c457c43f1b82c6fd291b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c96
b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:18Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.245482 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a786034-a3c6-4693-965a-3bd39bce6caa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c1911a561802d8e92f91e2ca7448754e32dcb5f18cae0906dc070e1d752be38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c883db680c5e015621e98e5b3c06351d9dd6d33e
a28b502241344af8df0fd1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:18Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.271362 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.271435 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.271453 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:18 crc 
kubenswrapper[4793]: I0217 20:09:18.271479 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.271497 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:18Z","lastTransitionTime":"2026-02-17T20:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.290324 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ztwxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c5b745e67f1da59c6b1df97c69bfcbfdc00f4b51a6788751079465a5ce6c177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ndkg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ztwxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:18Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.330463 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf61cc2856b360d3341089d4472c27f8dcf6c7c55831eea511dcf5a931b7a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:18Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.367882 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cad772-a024-49d5-8204-560587b056eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6bf4bf0fcbe5648621dada82af560bd7eac65ce800c8974c59d777f3dc6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57a18cf839c267df5d9a6d343fbc38dbe195fad6d9fff89663d0958a0492a335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe1dc649f7ca667a9fa57e0f96921b449396310fdec0d23dd39d5695ff3c0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cee203033ea02d7ee023522404b8632f64c130e254a84ee96c37222561e4ac1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:18Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.373947 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.374006 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.374025 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.374055 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.374073 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:18Z","lastTransitionTime":"2026-02-17T20:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.405893 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:18Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.452466 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48cdc9966494a4fb1df14123aaf12571b1d71f9d0349963dd1b3719966abd20c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2071f67c2f1c343d9ad571fe62df1e820675df683aa583fa2563de751f2897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:18Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.476943 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.476988 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.477001 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.477018 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.477031 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:18Z","lastTransitionTime":"2026-02-17T20:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.485007 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kpl4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61219b87-834d-490d-bf8e-1657a4081739\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6b9b20c5ff10a449eac5b6396cbcab582a5bbf692d7c9ac880e3cdfa3a63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43cff5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43cff5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57bb0575dd8e0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b57bb0575dd8e0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a556ed90653e74034af578772bca81976205e4bba300489eb17d80cdd84989f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a556ed90653e74034af578772bca81976205e4bba300489eb17d80cdd84989f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kpl4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:18Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.486941 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 03:54:41.436178684 +0000 UTC Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.580010 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.580056 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.580066 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.580079 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.580088 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:18Z","lastTransitionTime":"2026-02-17T20:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.682898 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.682970 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.682991 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.683017 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.683037 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:18Z","lastTransitionTime":"2026-02-17T20:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.785321 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.785363 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.785375 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.785391 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.785403 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:18Z","lastTransitionTime":"2026-02-17T20:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.800240 4793 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.887764 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.887814 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.887827 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.887845 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.887856 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:18Z","lastTransitionTime":"2026-02-17T20:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.990585 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.990644 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.990661 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.990711 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:18 crc kubenswrapper[4793]: I0217 20:09:18.990731 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:18Z","lastTransitionTime":"2026-02-17T20:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.093396 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.093449 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.093464 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.093482 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.093494 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:19Z","lastTransitionTime":"2026-02-17T20:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.197114 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.197166 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.197182 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.197205 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.197221 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:19Z","lastTransitionTime":"2026-02-17T20:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.303724 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.303780 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.303793 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.303816 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.303831 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:19Z","lastTransitionTime":"2026-02-17T20:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.414849 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.414903 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.414919 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.414938 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.414954 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:19Z","lastTransitionTime":"2026-02-17T20:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.487706 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 13:19:12.80848773 +0000 UTC Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.517306 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.517361 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.517373 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.517391 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.517403 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:19Z","lastTransitionTime":"2026-02-17T20:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.538520 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.538533 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:09:19 crc kubenswrapper[4793]: E0217 20:09:19.538726 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 20:09:19 crc kubenswrapper[4793]: E0217 20:09:19.538763 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.538527 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:09:19 crc kubenswrapper[4793]: E0217 20:09:19.538838 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.620482 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.620538 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.620554 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.620573 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.620587 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:19Z","lastTransitionTime":"2026-02-17T20:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.723529 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.723586 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.723603 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.723627 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.723644 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:19Z","lastTransitionTime":"2026-02-17T20:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.806559 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n2fmv_4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2/ovnkube-controller/0.log" Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.809980 4793 generic.go:334] "Generic (PLEG): container finished" podID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerID="9f1200f1be71bc7aefbd86e4498bec601b56379395d6d22553c6bf4582a11251" exitCode=1 Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.810037 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" event={"ID":"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2","Type":"ContainerDied","Data":"9f1200f1be71bc7aefbd86e4498bec601b56379395d6d22553c6bf4582a11251"} Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.811235 4793 scope.go:117] "RemoveContainer" containerID="9f1200f1be71bc7aefbd86e4498bec601b56379395d6d22553c6bf4582a11251" Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.826737 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.826813 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.826837 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.826904 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.826928 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:19Z","lastTransitionTime":"2026-02-17T20:09:19Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.836256 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf61cc2856b360d3341089d4472c27f8dcf6c7c55831eea511dcf5a931b7a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\
\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:19Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.853509 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a786034-a3c6-4693-965a-3bd39bce6caa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c1911a561802d8e92f91e2ca7448754e32dcb5f18cae0906dc070e1d752be38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c883db680c5e015621e98e5b3c06351d9dd6d33ea28b502241344af8df0fd1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:19Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.875574 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ztwxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c5b745e67f1da59c6b1df97c69bfcbfdc00f4b51a6788751079465a5ce6c177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"na
me\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ndkg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ztwxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:19Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.895245 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cad772-a024-49d5-8204-560587b056eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6bf4bf0fcbe5648621dada82af560bd7eac65ce800c8974c59d777f3dc6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57a18cf839c267df5d9a6d343fbc38dbe195fad6d9fff89663d0958a0492a335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe1dc649f7ca667a9fa57e0f96921b449396310fdec0d23dd39d5695ff3c0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cee203033ea02d7ee023522404b8632f64c130e254a84ee96c37222561e4ac1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:19Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.909232 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:19Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.929666 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.929754 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.929800 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.929826 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.929846 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:19Z","lastTransitionTime":"2026-02-17T20:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.929850 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48cdc9966494a4fb1df14123aaf12571b1d71f9d0349963dd1b3719966abd20c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://ad2071f67c2f1c343d9ad571fe62df1e820675df683aa583fa2563de751f2897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:19Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.954225 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kpl4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61219b87-834d-490d-bf8e-1657a4081739\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6b9b20c5ff10a449eac5b6396cbcab582a5bbf692d7c9ac880e3cdfa3a63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43cff
5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43cff5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57bb0575dd8e0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b57bb0575dd8e0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:14Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a556ed90653e74034af578772bca81976205e4bba300489eb17d80cdd84989f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a556ed90653e74034af578772bca81976205e4bba300489eb17d80cdd84989f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kpl4r\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:19Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.971767 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:19Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:19 crc kubenswrapper[4793]: I0217 20:09:19.985031 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:19Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.009903 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c8d46b-bddf-41fd-bfe9-829889b911c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3405184556f0b16f2610e42360345bb094d79fe3d713f9dec7c6d28a5124851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d8c33489e77add820819b72d67bd6eca73a11e3fd4a99ae4c5c0136f6257c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a42458e37611ca7ec204f13fd4703a5b1e01189a162f0180be137d614a0379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982055260eb2f38b638b6ee268993806f9a0bf999f21b065659a64144394bc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16cf925bf79279e3553a50dd773c3e6a88f3e8e86837c457c43f1b82c6fd291b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:20Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.026662 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9208cc24-8e8d-4027-af06-291b3b46ee59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f3722f78f83475af38958c1c6a4b0593a4c51be7d8dcbd0d32d3b068dee61b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T20:08:59Z\\\",\\\"message\\\":\\\"W0217 20:08:48.786864 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 20:08:48.787257 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771358928 cert, and key in /tmp/serving-cert-1841715996/serving-signer.crt, /tmp/serving-cert-1841715996/serving-signer.key\\\\nI0217 20:08:49.003324 1 observer_polling.go:159] Starting file observer\\\\nW0217 20:08:49.008336 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 20:08:49.008526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 20:08:49.009879 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1841715996/tls.crt::/tmp/serving-cert-1841715996/tls.key\\\\\\\"\\\\nF0217 20:08:59.398239 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based 
request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o:/
/93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:20Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.032542 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.032581 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.032597 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.032619 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.032632 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:20Z","lastTransitionTime":"2026-02-17T20:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.041215 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a159b0be901199a25ed9008e15370b1a4bf1858a4b20bfb4b6b7736f1f3d42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\
" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:20Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.053004 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9pkqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db67d891-29db-4ee4-a70c-624cb9af6677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d09e75a7c6bcf350e2d5833b1a9b6679042aa5d6699c1194c32320c5f0c8d208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv9g8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9pkqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:20Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.072279 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3403f83a9b35ceeb1a09a45b62e25e74a10d89d309f00819598790ba6c84a0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e4375a429735c20aa6093586974f36afdd51b82d65270a693d14aefe0b0ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a66eb3dc7a2ab1a3e5e720718416aba285d92566aa472515bc983f0794f48314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61506b0fdf8d54ab45b280e5cf461ab69bc233211c90bc60b339eef643ebb058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b08b2c72ecc14b8e31ad5d7fe12851c01bf38ff98fe406dc26218d892ee9f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09885fdf835300247d4a89f7e2ded3ab350786be47130106f4cc679a5477c451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f1200f1be71bc7aefbd86e4498bec601b56379395d6d22553c6bf4582a11251\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f1200f1be71bc7aefbd86e4498bec601b56379395d6d22553c6bf4582a11251\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T20:09:19Z\\\",\\\"message\\\":\\\":19.494003 6083 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0217 20:09:19.494010 6083 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0217 20:09:19.494024 6083 handler.go:190] Sending 
*v1.NetworkPolicy event handler 4 for removal\\\\nI0217 20:09:19.494023 6083 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0217 20:09:19.494060 6083 factory.go:656] Stopping watch factory\\\\nI0217 20:09:19.494078 6083 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 20:09:19.494081 6083 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 20:09:19.494092 6083 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 20:09:19.494084 6083 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 20:09:19.494102 6083 handler.go:208] Removed *v1.Node event handler 7\\\\nI0217 20:09:19.494124 6083 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0217 20:09:19.494214 6083 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 20:09:19.494394 6083 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overr
ides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3161d31a16ce66a75b2928487822f0c1e627deb1564e7f792e96420ebde1ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n2fmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:20Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.084920 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z9xgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3073cc3b-c430-444d-b751-5e1416feafa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51236e50a5c628e38556b1fc91434f66c7a2e8885a331b358825bcea2c3d9ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z9xgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:20Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.135199 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.135228 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.135236 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.135250 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.135260 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:20Z","lastTransitionTime":"2026-02-17T20:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.265308 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.265378 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.265395 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.265420 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.265441 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:20Z","lastTransitionTime":"2026-02-17T20:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.368623 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.368677 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.368713 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.368734 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.368749 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:20Z","lastTransitionTime":"2026-02-17T20:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.470997 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.471351 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.471368 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.471424 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.471441 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:20Z","lastTransitionTime":"2026-02-17T20:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.488244 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 16:42:51.85954857 +0000 UTC Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.573734 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.573776 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.573789 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.573805 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.573817 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:20Z","lastTransitionTime":"2026-02-17T20:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.675950 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.675997 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.676011 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.676031 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.676047 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:20Z","lastTransitionTime":"2026-02-17T20:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.778883 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.778937 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.778956 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.778978 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.778996 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:20Z","lastTransitionTime":"2026-02-17T20:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.815500 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n2fmv_4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2/ovnkube-controller/0.log" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.818440 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" event={"ID":"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2","Type":"ContainerStarted","Data":"f456a9579b920edfa769a7b86efdbffa8b58addb6a24d9e1c5b4ccb5ecfd5421"} Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.818549 4793 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.835853 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:20Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.848912 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48cdc9966494a4fb1df14123aaf12571b1d71f9d0349963dd1b3719966abd20c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2071f67c2f1c343d9ad571fe62df1e820675df683aa583fa2563de751f2897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:20Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.863373 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kpl4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61219b87-834d-490d-bf8e-1657a4081739\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6b9b20c5ff10a449eac5b6396cbcab582a5bbf692d7c9ac880e3cdfa3a63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43cff
5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43cff5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57bb0575dd8e0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b57bb0575dd8e0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:14Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a556ed90653e74034af578772bca81976205e4bba300489eb17d80cdd84989f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a556ed90653e74034af578772bca81976205e4bba300489eb17d80cdd84989f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kpl4r\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:20Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.879015 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cad772-a024-49d5-8204-560587b056eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6bf4bf0fcbe5648621dada82af560bd7eac65ce800c8974c59d777f3dc6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57a18cf839c267df5d9a6d343fbc38dbe195fad6d9fff89663d0958a0492a335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe1dc649f7ca667a9fa57e0f96921b449396310fdec0d23dd39d5695ff3c0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cee203033ea02d7ee023522404b8632f64c130e254a84ee96c37222561e4ac1\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:20Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.881588 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.881652 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.881676 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.881756 
4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.881783 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:20Z","lastTransitionTime":"2026-02-17T20:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.896814 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:20Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.915346 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:20Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.934984 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a159b0be901199a25ed9008e15370b1a4bf1858a4b20bfb4b6b7736f1f3d42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:20Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.949094 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9pkqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db67d891-29db-4ee4-a70c-624cb9af6677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d09e75a7c6bcf350e2d5833b1a9b6679042aa5d6699c1194c32320c5f0c8d208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv9g8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9pkqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:20Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.976117 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3403f83a9b35ceeb1a09a45b62e25e74a10d89d309f00819598790ba6c84a0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e4375a429735c20aa6093586974f36afdd51b82d65270a693d14aefe0b0ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ov
n-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a66eb3dc7a2ab1a3e5e720718416aba285d92566aa472515bc983f0794f48314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61506b0fdf8d54ab45b280e5cf461ab69bc233211c90bc60b339eef643ebb058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b08b2c72ecc14b8e31ad5d7fe12851c01bf38ff98fe406dc26218d892ee9f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09885fdf835300247d4a89f7e2ded3ab350786be47130106f4cc679a5477c451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f456a9579b920edfa769a7b86efdbffa8b58addb6a24d9e1c5b4ccb5ecfd5421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f1200f1be71bc7aefbd86e4498bec601b56379395d6d22553c6bf4582a11251\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T20:09:19Z\\\",\\\"message\\\":\\\":19.494003 6083 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0217 20:09:19.494010 6083 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0217 20:09:19.494024 6083 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 20:09:19.494023 6083 handler.go:208] Removed 
*v1.EgressIP event handler 8\\\\nI0217 20:09:19.494060 6083 factory.go:656] Stopping watch factory\\\\nI0217 20:09:19.494078 6083 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 20:09:19.494081 6083 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 20:09:19.494092 6083 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 20:09:19.494084 6083 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 20:09:19.494102 6083 handler.go:208] Removed *v1.Node event handler 7\\\\nI0217 20:09:19.494124 6083 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0217 20:09:19.494214 6083 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 20:09:19.494394 6083 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run
-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3161d31a16ce66a75b2928487822f0c1e627deb1564e7f792e96420ebde1ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\
\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n2fmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:20Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.985321 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.985396 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.985424 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.985458 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.985485 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:20Z","lastTransitionTime":"2026-02-17T20:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:20 crc kubenswrapper[4793]: I0217 20:09:20.986534 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z9xgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3073cc3b-c430-444d-b751-5e1416feafa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51236e50a5c628e38556b1fc91434f66c7a2e8885a331b358825bcea2c3d9ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z9xgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:20Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.012350 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c8d46b-bddf-41fd-bfe9-829889b911c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3405184556f0b16f2610e42360345bb094d79fe3d713f9dec7c6d28a5124851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d8c33489e77add820819b72d67bd6eca73a11e3fd4a99ae4c5c0136f6257c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a42458e37611ca7ec204f13fd4703a5b1e01189a162f0180be137d614a0379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982055260eb2f38b638b6ee268993806f9a0bf999f21b065659a64144394bc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16cf925bf79279e3553a50dd773c3e6a88f3e8e86837c457c43f1b82c6fd291b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c96
b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:21Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.028050 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9208cc24-8e8d-4027-af06-291b3b46ee59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f3722f78f83475af38958c1c6a4b0593a4c51be7d8dcbd0d32d3b068dee61b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T20:08:59Z\\\"
,\\\"message\\\":\\\"W0217 20:08:48.786864 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 20:08:48.787257 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771358928 cert, and key in /tmp/serving-cert-1841715996/serving-signer.crt, /tmp/serving-cert-1841715996/serving-signer.key\\\\nI0217 20:08:49.003324 1 observer_polling.go:159] Starting file observer\\\\nW0217 20:08:49.008336 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 20:08:49.008526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 20:08:49.009879 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1841715996/tls.crt::/tmp/serving-cert-1841715996/tls.key\\\\\\\"\\\\nF0217 20:08:59.398239 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:21Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.045773 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ztwxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c5b745e67f1da59c6b1df97c69bfcbfdc00f4b51a6788751079465a5ce6c177\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ndkg\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ztwxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:21Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.060098 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf61cc2856b360d3341089d4472c27f8dcf6c7c55831eea511dcf5a931b7a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:21Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.072681 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a786034-a3c6-4693-965a-3bd39bce6caa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c1911a561802d8e92f91e2ca7448754e32dcb5f18cae0906dc070e1d752be38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c883db680c5e015621e98e5b3c06351d9dd6d33e
a28b502241344af8df0fd1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:21Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.087356 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.087405 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.087417 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:21 crc 
kubenswrapper[4793]: I0217 20:09:21.087436 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.087449 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:21Z","lastTransitionTime":"2026-02-17T20:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.175489 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.175595 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:09:21 crc kubenswrapper[4793]: E0217 20:09:21.175659 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 20:09:37.175637076 +0000 UTC m=+52.467335397 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.175711 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:09:21 crc kubenswrapper[4793]: E0217 20:09:21.175754 4793 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 20:09:21 crc kubenswrapper[4793]: E0217 20:09:21.175773 4793 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 20:09:21 crc kubenswrapper[4793]: E0217 20:09:21.175808 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 20:09:37.17579759 +0000 UTC m=+52.467495901 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 20:09:21 crc kubenswrapper[4793]: E0217 20:09:21.175825 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 20:09:37.17581821 +0000 UTC m=+52.467516521 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.189926 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.189972 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.189987 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.190006 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.190019 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:21Z","lastTransitionTime":"2026-02-17T20:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.275202 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.275251 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.275262 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.275280 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.275292 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:21Z","lastTransitionTime":"2026-02-17T20:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.276760 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.276791 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:09:21 crc kubenswrapper[4793]: E0217 20:09:21.276905 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 20:09:21 crc kubenswrapper[4793]: E0217 20:09:21.276926 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 20:09:21 crc kubenswrapper[4793]: E0217 20:09:21.276937 4793 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 20:09:21 crc kubenswrapper[4793]: E0217 20:09:21.276932 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 
20:09:21 crc kubenswrapper[4793]: E0217 20:09:21.276975 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 20:09:21 crc kubenswrapper[4793]: E0217 20:09:21.276992 4793 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 20:09:21 crc kubenswrapper[4793]: E0217 20:09:21.276976 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 20:09:37.276963654 +0000 UTC m=+52.568661965 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 20:09:21 crc kubenswrapper[4793]: E0217 20:09:21.277064 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 20:09:37.277045246 +0000 UTC m=+52.568743587 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 20:09:21 crc kubenswrapper[4793]: E0217 20:09:21.287404 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9899edd8-40d9-4d30-b914-dcde4645fb8b\\\",\\\"systemUUID\\\":\\\"6761f953-4396-4ffc-8ccb-0bad99a4cc8e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:21Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.291137 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.291176 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.291184 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.291199 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.291212 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:21Z","lastTransitionTime":"2026-02-17T20:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:21 crc kubenswrapper[4793]: E0217 20:09:21.302155 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9899edd8-40d9-4d30-b914-dcde4645fb8b\\\",\\\"systemUUID\\\":\\\"6761f953-4396-4ffc-8ccb-0bad99a4cc8e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:21Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.305872 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.305901 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.305912 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.305925 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.305934 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:21Z","lastTransitionTime":"2026-02-17T20:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:21 crc kubenswrapper[4793]: E0217 20:09:21.317319 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9899edd8-40d9-4d30-b914-dcde4645fb8b\\\",\\\"systemUUID\\\":\\\"6761f953-4396-4ffc-8ccb-0bad99a4cc8e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:21Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.320373 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.320405 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.320416 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.320431 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.320448 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:21Z","lastTransitionTime":"2026-02-17T20:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:21 crc kubenswrapper[4793]: E0217 20:09:21.331014 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9899edd8-40d9-4d30-b914-dcde4645fb8b\\\",\\\"systemUUID\\\":\\\"6761f953-4396-4ffc-8ccb-0bad99a4cc8e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:21Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.334846 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.334893 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.334904 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.334920 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.334929 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:21Z","lastTransitionTime":"2026-02-17T20:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:21 crc kubenswrapper[4793]: E0217 20:09:21.347069 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9899edd8-40d9-4d30-b914-dcde4645fb8b\\\",\\\"systemUUID\\\":\\\"6761f953-4396-4ffc-8ccb-0bad99a4cc8e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:21Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:21 crc kubenswrapper[4793]: E0217 20:09:21.347192 4793 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.348621 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.348674 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.348723 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.348749 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.348766 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:21Z","lastTransitionTime":"2026-02-17T20:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.451360 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.451391 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.451401 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.451415 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.451424 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:21Z","lastTransitionTime":"2026-02-17T20:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.489221 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 08:12:49.336023706 +0000 UTC Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.538077 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.538120 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.538099 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:09:21 crc kubenswrapper[4793]: E0217 20:09:21.538274 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 20:09:21 crc kubenswrapper[4793]: E0217 20:09:21.538199 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 20:09:21 crc kubenswrapper[4793]: E0217 20:09:21.538414 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.554102 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.554149 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.554160 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.554181 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.554200 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:21Z","lastTransitionTime":"2026-02-17T20:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.657533 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.657770 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.657808 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.657841 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.657864 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:21Z","lastTransitionTime":"2026-02-17T20:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.761290 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.761341 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.761352 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.761368 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.761380 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:21Z","lastTransitionTime":"2026-02-17T20:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.824680 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n2fmv_4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2/ovnkube-controller/1.log" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.830522 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n2fmv_4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2/ovnkube-controller/0.log" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.834556 4793 generic.go:334] "Generic (PLEG): container finished" podID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerID="f456a9579b920edfa769a7b86efdbffa8b58addb6a24d9e1c5b4ccb5ecfd5421" exitCode=1 Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.834613 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" event={"ID":"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2","Type":"ContainerDied","Data":"f456a9579b920edfa769a7b86efdbffa8b58addb6a24d9e1c5b4ccb5ecfd5421"} Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.834658 4793 scope.go:117] "RemoveContainer" containerID="9f1200f1be71bc7aefbd86e4498bec601b56379395d6d22553c6bf4582a11251" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.837147 4793 scope.go:117] "RemoveContainer" containerID="f456a9579b920edfa769a7b86efdbffa8b58addb6a24d9e1c5b4ccb5ecfd5421" Feb 17 20:09:21 crc kubenswrapper[4793]: E0217 20:09:21.837571 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-n2fmv_openshift-ovn-kubernetes(4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" podUID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.860054 4793 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:21Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.867136 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.867247 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.867267 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.867806 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.867862 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:21Z","lastTransitionTime":"2026-02-17T20:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.884449 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:21Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.899063 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9pkqd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db67d891-29db-4ee4-a70c-624cb9af6677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d09e75a7c6bcf350e2d5833b1a9b6679042aa5d6699c1194c32320c5f0c8d208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv9g8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9pkqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:21Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.934890 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3403f83a9b35ceeb1a09a45b62e25e74a10d89d309f00819598790ba6c84a0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e4375a429735c20aa6093586974f36afdd51b82d65270a693d14aefe0b0ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a66eb3dc7a2ab1a3e5e720718416aba285d92566aa472515bc983f0794f48314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61506b0fdf8d54ab45b280e5cf461ab69bc233211c90bc60b339eef643ebb058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b08b2c72ecc14b8e31ad5d7fe12851c01bf38ff98fe406dc26218d892ee9f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09885fdf835300247d4a89f7e2ded3ab350786be47130106f4cc679a5477c451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f456a9579b920edfa769a7b86efdbffa8b58addb6a24d9e1c5b4ccb5ecfd5421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f1200f1be71bc7aefbd86e4498bec601b56379395d6d22553c6bf4582a11251\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T20:09:19Z\\\",\\\"message\\\":\\\":19.494003 6083 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0217 20:09:19.494010 6083 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0217 20:09:19.494024 6083 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 20:09:19.494023 6083 handler.go:208] Removed *v1.EgressIP event 
handler 8\\\\nI0217 20:09:19.494060 6083 factory.go:656] Stopping watch factory\\\\nI0217 20:09:19.494078 6083 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 20:09:19.494081 6083 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 20:09:19.494092 6083 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 20:09:19.494084 6083 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 20:09:19.494102 6083 handler.go:208] Removed *v1.Node event handler 7\\\\nI0217 20:09:19.494124 6083 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0217 20:09:19.494214 6083 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 20:09:19.494394 6083 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f456a9579b920edfa769a7b86efdbffa8b58addb6a24d9e1c5b4ccb5ecfd5421\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T20:09:20Z\\\",\\\"message\\\":\\\"\\\\nI0217 20:09:20.684710 6224 services_controller.go:360] Finished syncing service metrics on namespace openshift-network-operator for network=default : 343.649µs\\\\nI0217 20:09:20.684720 6224 services_controller.go:356] Processing sync for service openshift-operator-lifecycle-manager/packageserver-service for network=default\\\\nI0217 20:09:20.684638 6224 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} 
options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3161d31a16ce66a75b2928487822f0c1e627deb1564e7f792e96420ebde1ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\
\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n2fmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:21Z is after 
2025-08-24T17:21:41Z" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.950947 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z9xgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3073cc3b-c430-444d-b751-5e1416feafa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51236e50a5c628e38556b1fc91434f66c7a2e8885a331b358825bcea2c3d9ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-gjvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z9xgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:21Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.970936 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.970983 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.970994 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.971011 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.971027 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:21Z","lastTransitionTime":"2026-02-17T20:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.975149 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c8d46b-bddf-41fd-bfe9-829889b911c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3405184556f0b16f2610e42360345bb094d79fe3d713f9dec7c6d28a5124851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d8c33489e77add820819b72d67bd6eca73a11e3fd4a99ae4c5c0136f6257c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a42458e37611ca7ec204f13fd4703a5b1e01189a162f0180be137d614a0379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982055260eb2f38b638b6ee268993806f9a0bf999f21b065659a64144394bc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16cf925bf79279e3553a50dd773c3e6a88f3e8e86837c457c43f1b82c6fd291b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:21Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:21 crc kubenswrapper[4793]: I0217 20:09:21.997090 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9208cc24-8e8d-4027-af06-291b3b46ee59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f3722f78f83475af38958c1c6a4b0593a4c51be7d8dcbd0d32d3b068dee61b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T20:08:59Z\\\"
,\\\"message\\\":\\\"W0217 20:08:48.786864 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 20:08:48.787257 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771358928 cert, and key in /tmp/serving-cert-1841715996/serving-signer.crt, /tmp/serving-cert-1841715996/serving-signer.key\\\\nI0217 20:08:49.003324 1 observer_polling.go:159] Starting file observer\\\\nW0217 20:08:49.008336 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 20:08:49.008526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 20:08:49.009879 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1841715996/tls.crt::/tmp/serving-cert-1841715996/tls.key\\\\\\\"\\\\nF0217 20:08:59.398239 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:21Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.017591 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a159b0be901199a25ed9008e15370b1a4bf1858a4b20bfb4b6b7736f1f3d42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:22Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.031679 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf61cc2856b360d3341089d4472c27f8dcf6c7c55831eea511dcf5a931b7a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T20:09:22Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.047287 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a786034-a3c6-4693-965a-3bd39bce6caa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c1911a561802d8e92f91e2ca7448754e32dcb5f18cae0906dc070e1d752be38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c883db680c5e015621e98e5b3c06351d9dd6d33ea28b502241344af8df0fd1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:22Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.061095 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ztwxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c5b745e67f1da59c6b1df97c69bfcbfdc00f4b51a6788751079465a5ce6c177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ndkg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ztwxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:22Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.070516 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cl2v7"] Feb 17 
20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.070981 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cl2v7" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.073049 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.073127 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.073935 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.073965 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.073997 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.074012 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.074022 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:22Z","lastTransitionTime":"2026-02-17T20:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.083106 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kpl4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61219b87-834d-490d-bf8e-1657a4081739\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6b9b20c5ff10a449eac5b6396cbcab582a5bbf692d7c9ac880e3cdfa3a63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43cff5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43cff5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57bb0575dd8e0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b57bb0575dd8e0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a556ed90653e74034af578772bca81976205e4bba300489eb17d80cdd84989f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a556ed90653e74034af578772bca81976205e4bba300489eb17d80cdd84989f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kpl4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:22Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.100023 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cad772-a024-49d5-8204-560587b056eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6bf4bf0fcbe5648621dada82af560bd7eac65ce800c8974c59d777f3dc6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c02
6b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57a18cf839c267df5d9a6d343fbc38dbe195fad6d9fff89663d0958a0492a335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe1dc649f7ca667a9fa57e0f96921b449396310fdec0d23dd39d5695ff3c0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\
"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cee203033ea02d7ee023522404b8632f64c130e254a84ee96c37222561e4ac1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:22Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.112180 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:22Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.124612 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48cdc9966494a4fb1df14123aaf12571b1d71f9d0349963dd1b3719966abd20c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2071f67c2f1c343d9ad571fe62df1e820675df683aa583fa2563de751f2897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:22Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.139167 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cad772-a024-49d5-8204-560587b056eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6bf4bf0fcbe5648621dada82af560bd7eac65ce800c8974c59d777f3dc6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57a18cf839c267df5d9a6d343fbc38dbe195fad6d9fff89663d0958a0492a335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe1dc649f7ca667a9fa57e0f96921b449396310fdec0d23dd39d5695ff3c0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cee203033ea02d7ee023522404b8632f64c130e254a84ee96c37222561e4ac1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:22Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.156145 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:22Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.171360 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48cdc9966494a4fb1df14123aaf12571b1d71f9d0349963dd1b3719966abd20c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2071f67c2f1c343d9ad571fe62df1e820675df683aa583fa2563de751f2897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:22Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.176312 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.176362 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.176380 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.176403 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.176419 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:22Z","lastTransitionTime":"2026-02-17T20:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.185821 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2b672111-22f9-4dd8-b116-385907278ad1-env-overrides\") pod \"ovnkube-control-plane-749d76644c-cl2v7\" (UID: \"2b672111-22f9-4dd8-b116-385907278ad1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cl2v7" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.185866 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j42l\" (UniqueName: \"kubernetes.io/projected/2b672111-22f9-4dd8-b116-385907278ad1-kube-api-access-2j42l\") pod \"ovnkube-control-plane-749d76644c-cl2v7\" (UID: \"2b672111-22f9-4dd8-b116-385907278ad1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cl2v7" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.185892 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2b672111-22f9-4dd8-b116-385907278ad1-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-cl2v7\" (UID: \"2b672111-22f9-4dd8-b116-385907278ad1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cl2v7" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.185916 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/2b672111-22f9-4dd8-b116-385907278ad1-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-cl2v7\" (UID: \"2b672111-22f9-4dd8-b116-385907278ad1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cl2v7" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.188852 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kpl4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61219b87-834d-490d-bf8e-1657a4081739\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6b9b20c5ff10a449eac5b6396cbcab582a5bbf692d7c9ac880e3cdfa3a63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-02-17T20:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a16
88df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni
-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43cff5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43cff5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57bb0575dd8e0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b57bb0575dd8e0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a556ed90653e74034af578772bca81976205e4bba300489eb17d80cdd84989f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a556ed90653e74034af578772bca81976205e4bba300489eb17d80cdd84989f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\
\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kpl4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:22Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.201119 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:22Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.215384 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:22Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.227321 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cl2v7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b672111-22f9-4dd8-b116-385907278ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2j42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2j42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cl2v7\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:22Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.253467 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c8d46b-bddf-41fd-bfe9-829889b911c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3405184556f0b16f2610e42360345bb094d79fe3d713f9dec7c6d28a5124851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\
"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d8c33489e77add820819b72d67bd6eca73a11e3fd4a99ae4c5c0136f6257c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a42458e37611ca7ec204f13fd4703a5b1e01189a162f0180be137d614a0379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982055260eb2f38b638b6ee26
8993806f9a0bf999f21b065659a64144394bc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16cf925bf79279e3553a50dd773c3e6a88f3e8e86837c457c43f1b82c6fd291b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9
0092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"t
erminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:22Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.271109 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9208cc24-8e8d-4027-af06-291b3b46ee59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f3722f78f83475af38958c1c6a4b0593a4c51be7d8dcbd0d32d3b068dee61b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T20:08:59Z\\\"
,\\\"message\\\":\\\"W0217 20:08:48.786864 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 20:08:48.787257 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771358928 cert, and key in /tmp/serving-cert-1841715996/serving-signer.crt, /tmp/serving-cert-1841715996/serving-signer.key\\\\nI0217 20:08:49.003324 1 observer_polling.go:159] Starting file observer\\\\nW0217 20:08:49.008336 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 20:08:49.008526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 20:08:49.009879 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1841715996/tls.crt::/tmp/serving-cert-1841715996/tls.key\\\\\\\"\\\\nF0217 20:08:59.398239 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:22Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.278917 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.278952 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.278967 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.278987 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.279002 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:22Z","lastTransitionTime":"2026-02-17T20:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.286594 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2b672111-22f9-4dd8-b116-385907278ad1-env-overrides\") pod \"ovnkube-control-plane-749d76644c-cl2v7\" (UID: \"2b672111-22f9-4dd8-b116-385907278ad1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cl2v7" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.286647 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2j42l\" (UniqueName: \"kubernetes.io/projected/2b672111-22f9-4dd8-b116-385907278ad1-kube-api-access-2j42l\") pod \"ovnkube-control-plane-749d76644c-cl2v7\" (UID: \"2b672111-22f9-4dd8-b116-385907278ad1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cl2v7" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.286714 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2b672111-22f9-4dd8-b116-385907278ad1-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-cl2v7\" (UID: \"2b672111-22f9-4dd8-b116-385907278ad1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cl2v7" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.286751 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2b672111-22f9-4dd8-b116-385907278ad1-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-cl2v7\" (UID: \"2b672111-22f9-4dd8-b116-385907278ad1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cl2v7" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.287315 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/2b672111-22f9-4dd8-b116-385907278ad1-env-overrides\") pod \"ovnkube-control-plane-749d76644c-cl2v7\" (UID: \"2b672111-22f9-4dd8-b116-385907278ad1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cl2v7" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.287557 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2b672111-22f9-4dd8-b116-385907278ad1-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-cl2v7\" (UID: \"2b672111-22f9-4dd8-b116-385907278ad1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cl2v7" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.293127 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a159b0be901199a25ed9008e15370b1a4bf1858a4b20bfb4b6b7736f1f3d42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-ope
rator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:22Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.295790 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2b672111-22f9-4dd8-b116-385907278ad1-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-cl2v7\" (UID: \"2b672111-22f9-4dd8-b116-385907278ad1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cl2v7" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.308170 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9pkqd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db67d891-29db-4ee4-a70c-624cb9af6677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d09e75a7c6bcf350e2d5833b1a9b6679042aa5d6699c1194c32320c5f0c8d208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv9g8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9pkqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:22Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.315780 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j42l\" (UniqueName: \"kubernetes.io/projected/2b672111-22f9-4dd8-b116-385907278ad1-kube-api-access-2j42l\") pod \"ovnkube-control-plane-749d76644c-cl2v7\" (UID: \"2b672111-22f9-4dd8-b116-385907278ad1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cl2v7" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.334405 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3403f83a9b35ceeb1a09a45b62e25e74a10d89d309f00819598790ba6c84a0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e4375a429735c20aa6093586974f36afdd51b82d65270a693d14aefe0b0ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a66eb3dc7a2ab1a3e5e720718416aba285d92566aa472515bc983f0794f48314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61506b0fdf8d54ab45b280e5cf461ab69bc233211c90bc60b339eef643ebb058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b08b2c72ecc14b8e31ad5d7fe12851c01bf38ff98fe406dc26218d892ee9f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09885fdf835300247d4a89f7e2ded3ab350786be47130106f4cc679a5477c451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f456a9579b920edfa769a7b86efdbffa8b58addb6a24d9e1c5b4ccb5ecfd5421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f1200f1be71bc7aefbd86e4498bec601b56379395d6d22553c6bf4582a11251\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T20:09:19Z\\\",\\\"message\\\":\\\":19.494003 6083 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0217 
20:09:19.494010 6083 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0217 20:09:19.494024 6083 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 20:09:19.494023 6083 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0217 20:09:19.494060 6083 factory.go:656] Stopping watch factory\\\\nI0217 20:09:19.494078 6083 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 20:09:19.494081 6083 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 20:09:19.494092 6083 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 20:09:19.494084 6083 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 20:09:19.494102 6083 handler.go:208] Removed *v1.Node event handler 7\\\\nI0217 20:09:19.494124 6083 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0217 20:09:19.494214 6083 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 20:09:19.494394 6083 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f456a9579b920edfa769a7b86efdbffa8b58addb6a24d9e1c5b4ccb5ecfd5421\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T20:09:20Z\\\",\\\"message\\\":\\\"\\\\nI0217 20:09:20.684710 6224 services_controller.go:360] Finished syncing service metrics on namespace openshift-network-operator for network=default : 343.649µs\\\\nI0217 20:09:20.684720 6224 services_controller.go:356] Processing sync for service 
openshift-operator-lifecycle-manager/packageserver-service for network=default\\\\nI0217 20:09:20.684638 6224 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{c02bd945-d57b-49ff-9cd3-202ed3574b\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3161d31a16ce66a75b2928487822f0c1e627deb1564e7f792e96420ebde1ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6afbcad4efd90a12b
f10b1279148bcaf2f51505483aff300b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n2fmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:22Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.350309 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z9xgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3073cc3b-c430-444d-b751-5e1416feafa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51236e50a5c628e38556b1fc91434f66c7a2e8885a331b358825bcea2c3d9ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z9xgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:22Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.366422 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf61cc2856b360d3341089d4472c27f8dcf6c7c55831eea511dcf5a931b7a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T2
0:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:22Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.381707 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.381974 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.382059 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.382195 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.382275 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:22Z","lastTransitionTime":"2026-02-17T20:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.385005 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a786034-a3c6-4693-965a-3bd39bce6caa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c1911a561802d8e92f91e2ca7448754e32dcb5f18cae0906dc070e1d752be38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c883db680c5e015621e98e5b3c06351d9dd6d33ea28b502241344af8df0fd1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:22Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.394728 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cl2v7" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.399671 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ztwxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c5b745e67f1da59c6b1df97c69bfcbfdc00f4b51a6788751079465a5ce6c177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ndkg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ztwxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T20:09:22Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:22 crc kubenswrapper[4793]: W0217 20:09:22.409825 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b672111_22f9_4dd8_b116_385907278ad1.slice/crio-c41a6377b5a94f0696ba23f681c467d00ada186484b8722219ec308b8b76a49f WatchSource:0}: Error finding container c41a6377b5a94f0696ba23f681c467d00ada186484b8722219ec308b8b76a49f: Status 404 returned error can't find the container with id c41a6377b5a94f0696ba23f681c467d00ada186484b8722219ec308b8b76a49f Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.484906 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.484975 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.484986 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.485002 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.485012 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:22Z","lastTransitionTime":"2026-02-17T20:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.490390 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 01:08:52.111863978 +0000 UTC Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.587905 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.587945 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.587955 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.587971 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.587982 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:22Z","lastTransitionTime":"2026-02-17T20:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.690313 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.690341 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.690349 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.690362 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.690373 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:22Z","lastTransitionTime":"2026-02-17T20:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.792649 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.792705 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.792720 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.792740 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.792755 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:22Z","lastTransitionTime":"2026-02-17T20:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.839372 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cl2v7" event={"ID":"2b672111-22f9-4dd8-b116-385907278ad1","Type":"ContainerStarted","Data":"7694ca2ced8356ef8dc44687cd124ebd50f8c2916a8c27619b2ab970bcb31d5a"} Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.839423 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cl2v7" event={"ID":"2b672111-22f9-4dd8-b116-385907278ad1","Type":"ContainerStarted","Data":"39feb0d026694f42ac0934ebdf1749f25d62c1fec2ae68e9af9189fb4cbfe99b"} Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.839440 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cl2v7" event={"ID":"2b672111-22f9-4dd8-b116-385907278ad1","Type":"ContainerStarted","Data":"c41a6377b5a94f0696ba23f681c467d00ada186484b8722219ec308b8b76a49f"} Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.841785 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n2fmv_4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2/ovnkube-controller/1.log" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.844452 4793 scope.go:117] "RemoveContainer" containerID="f456a9579b920edfa769a7b86efdbffa8b58addb6a24d9e1c5b4ccb5ecfd5421" Feb 17 20:09:22 crc kubenswrapper[4793]: E0217 20:09:22.844631 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-n2fmv_openshift-ovn-kubernetes(4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" podUID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.860536 4793 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:22Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.874553 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:22Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.886351 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cl2v7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b672111-22f9-4dd8-b116-385907278ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39feb0d026694f42ac0934ebdf1749f25d62c1fec2ae68e9af9189fb4cbfe99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2j42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7694ca2ced8356ef8dc44687cd124ebd50f8c
2916a8c27619b2ab970bcb31d5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2j42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cl2v7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:22Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.895090 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.895126 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.895137 4793 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.895154 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.895165 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:22Z","lastTransitionTime":"2026-02-17T20:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.907198 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c8d46b-bddf-41fd-bfe9-829889b911c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3405184556f0b16f2610e42360345bb094d79fe3d713f9dec7c6d28a5124851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e
33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d8c33489e77add820819b72d67bd6eca73a11e3fd4a99ae4c5c0136f6257c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a42458e37611ca7ec204f13fd4703a5b1e01189a162f0180be137d614a0379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866
be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982055260eb2f38b638b6ee268993806f9a0bf999f21b065659a64144394bc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16cf925bf79279e3553a50dd773c3e6a88f3e8e86837c457c43f1b82c6fd291b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/stati
c-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa0
8e97bba86bc5ccad0e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:22Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.921216 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9208cc24-8e8d-4027-af06-291b3b46ee59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f3722f78f83475af38958c1c6a4b0593a4c51be7d8dcbd0d32d3b068dee61b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T20:08:59Z\\\"
,\\\"message\\\":\\\"W0217 20:08:48.786864 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 20:08:48.787257 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771358928 cert, and key in /tmp/serving-cert-1841715996/serving-signer.crt, /tmp/serving-cert-1841715996/serving-signer.key\\\\nI0217 20:08:49.003324 1 observer_polling.go:159] Starting file observer\\\\nW0217 20:08:49.008336 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 20:08:49.008526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 20:08:49.009879 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1841715996/tls.crt::/tmp/serving-cert-1841715996/tls.key\\\\\\\"\\\\nF0217 20:08:59.398239 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:22Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.935845 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a159b0be901199a25ed9008e15370b1a4bf1858a4b20bfb4b6b7736f1f3d42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:22Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.947774 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9pkqd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db67d891-29db-4ee4-a70c-624cb9af6677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d09e75a7c6bcf350e2d5833b1a9b6679042aa5d6699c1194c32320c5f0c8d208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv9g8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9pkqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:22Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.967419 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3403f83a9b35ceeb1a09a45b62e25e74a10d89d309f00819598790ba6c84a0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e4375a429735c20aa6093586974f36afdd51b82d65270a693d14aefe0b0ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a66eb3dc7a2ab1a3e5e720718416aba285d92566aa472515bc983f0794f48314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61506b0fdf8d54ab45b280e5cf461ab69bc233211c90bc60b339eef643ebb058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b08b2c72ecc14b8e31ad5d7fe12851c01bf38ff98fe406dc26218d892ee9f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09885fdf835300247d4a89f7e2ded3ab350786be47130106f4cc679a5477c451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f456a9579b920edfa769a7b86efdbffa8b58addb6a24d9e1c5b4ccb5ecfd5421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f1200f1be71bc7aefbd86e4498bec601b56379395d6d22553c6bf4582a11251\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T20:09:19Z\\\",\\\"message\\\":\\\":19.494003 6083 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0217 20:09:19.494010 6083 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0217 20:09:19.494024 6083 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 20:09:19.494023 6083 handler.go:208] Removed *v1.EgressIP event 
handler 8\\\\nI0217 20:09:19.494060 6083 factory.go:656] Stopping watch factory\\\\nI0217 20:09:19.494078 6083 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 20:09:19.494081 6083 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 20:09:19.494092 6083 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 20:09:19.494084 6083 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 20:09:19.494102 6083 handler.go:208] Removed *v1.Node event handler 7\\\\nI0217 20:09:19.494124 6083 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0217 20:09:19.494214 6083 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 20:09:19.494394 6083 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f456a9579b920edfa769a7b86efdbffa8b58addb6a24d9e1c5b4ccb5ecfd5421\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T20:09:20Z\\\",\\\"message\\\":\\\"\\\\nI0217 20:09:20.684710 6224 services_controller.go:360] Finished syncing service metrics on namespace openshift-network-operator for network=default : 343.649µs\\\\nI0217 20:09:20.684720 6224 services_controller.go:356] Processing sync for service openshift-operator-lifecycle-manager/packageserver-service for network=default\\\\nI0217 20:09:20.684638 6224 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} 
options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3161d31a16ce66a75b2928487822f0c1e627deb1564e7f792e96420ebde1ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\
\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n2fmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:22Z is after 
2025-08-24T17:21:41Z" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.977196 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z9xgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3073cc3b-c430-444d-b751-5e1416feafa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51236e50a5c628e38556b1fc91434f66c7a2e8885a331b358825bcea2c3d9ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-gjvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z9xgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:22Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.987839 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf61cc2856b360d3341089d4472c27f8dcf6c7c55831eea511dcf5a931b7a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c
20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:22Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.997969 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.998030 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.998048 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.998072 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:22 crc kubenswrapper[4793]: I0217 20:09:22.998091 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:22Z","lastTransitionTime":"2026-02-17T20:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:22.999990 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a786034-a3c6-4693-965a-3bd39bce6caa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c1911a561802d8e92f91e2ca7448754e32dcb5f18cae0906dc070e1d752be38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c883db680c5e015621e98e5b3c06351d9dd6d33ea28b502241344af8df0fd1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-17T20:09:22Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.010520 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ztwxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c5b745e67f1da59c6b1df97c69bfcbfdc00f4b51a6788751079465a5ce6c177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/h
ost/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ndkg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ztwxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-17T20:09:23Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.021533 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cad772-a024-49d5-8204-560587b056eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6bf4bf0fcbe5648621dada82af560bd7eac65ce800c8974c59d777f3dc6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-
o://57a18cf839c267df5d9a6d343fbc38dbe195fad6d9fff89663d0958a0492a335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe1dc649f7ca667a9fa57e0f96921b449396310fdec0d23dd39d5695ff3c0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cee203033ea02d7ee023522404b8632f64c130e254a84ee96c37222561e4ac1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-oper
ator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:23Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.035040 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:23Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.047578 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48cdc9966494a4fb1df14123aaf12571b1d71f9d0349963dd1b3719966abd20c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2071f67c2f1c343d9ad571fe62df1e820675df683aa583fa2563de751f2897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:23Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.066137 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kpl4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61219b87-834d-490d-bf8e-1657a4081739\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6b9b20c5ff10a449eac5b6396cbcab582a5bbf692d7c9ac880e3cdfa3a63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43cff
5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43cff5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57bb0575dd8e0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b57bb0575dd8e0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:14Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a556ed90653e74034af578772bca81976205e4bba300489eb17d80cdd84989f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a556ed90653e74034af578772bca81976205e4bba300489eb17d80cdd84989f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kpl4r\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:23Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.078041 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf61cc2856b360d3341089d4472c27f8dcf6c7c55831eea511dcf5a931b7a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:23Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.094487 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a786034-a3c6-4693-965a-3bd39bce6caa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c1911a561802d8e92f91e2ca7448754e32dcb5f18cae0906dc070e1d752be38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08a
af09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c883db680c5e015621e98e5b3c06351d9dd6d33ea28b502241344af8df0fd1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:23Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.100518 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.100560 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.100572 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.100590 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.100603 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:23Z","lastTransitionTime":"2026-02-17T20:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.131477 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ztwxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c5b745e67f1da59c6b1df97c69bfcbfdc00f4b51a6788751079465a5ce6c177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ndkg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ztwxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:23Z 
is after 2025-08-24T17:21:41Z" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.158233 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cad772-a024-49d5-8204-560587b056eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6bf4bf0fcbe5648621dada82af560bd7eac65ce800c8974c59d777f3dc6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57a18cf839c267df5
d9a6d343fbc38dbe195fad6d9fff89663d0958a0492a335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe1dc649f7ca667a9fa57e0f96921b449396310fdec0d23dd39d5695ff3c0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cee203033ea02d7ee023522404b8632f64c130e254a84ee96c37222561e4ac1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a5
78bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:23Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.172400 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:23Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.180728 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-6trvs"] Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.181259 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:09:23 crc kubenswrapper[4793]: E0217 20:09:23.181331 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6trvs" podUID="0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.186478 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48cdc9966494a4fb1df14123aaf12571b1d71f9d0349963dd1b3719966abd20c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-
17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2071f67c2f1c343d9ad571fe62df1e820675df683aa583fa2563de751f2897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:23Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.202512 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.202543 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.202550 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.202562 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.202570 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:23Z","lastTransitionTime":"2026-02-17T20:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.205761 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kpl4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61219b87-834d-490d-bf8e-1657a4081739\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6b9b20c5ff10a449eac5b6396cbcab582a5bbf692d7c9ac880e3cdfa3a63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43cff5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43cff5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57bb0575dd8e0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b57bb0575dd8e0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a556ed90653e74034af578772bca81976205e4bba300489eb17d80cdd84989f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a556ed90653e74034af578772bca81976205e4bba300489eb17d80cdd84989f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kpl4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:23Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.216580 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:23Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.226358 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:23Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.235204 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cl2v7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b672111-22f9-4dd8-b116-385907278ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39feb0d026694f42ac0934ebdf1749f25d62c1fec2ae68e9af9189fb4cbfe99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2j42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7694ca2ced8356ef8dc44687cd124ebd50f8c
2916a8c27619b2ab970bcb31d5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2j42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cl2v7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:23Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.251668 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c8d46b-bddf-41fd-bfe9-829889b911c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3405184556f0b16f2610e42360345bb094d79fe3d713f9dec7c6d28a5124851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d8c33489e77add820819b72d67bd6eca73a11e3fd4a99ae4c5c0136f6257c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a42458e37611ca7ec204f13fd4703a5b1e01189a162f0180be137d614a0379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982055260eb2f38b638b6ee268993806f9a0bf999f21b065659a64144394bc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16cf925bf79279e3553a50dd773c3e6a88f3e8e86837c457c43f1b82c6fd291b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:23Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.261722 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9208cc24-8e8d-4027-af06-291b3b46ee59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f3722f78f83475af38958c1c6a4b0593a4c51be7d8dcbd0d32d3b068dee61b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T20:08:59Z\\\",\\\"message\\\":\\\"W0217 20:08:48.786864 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 20:08:48.787257 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771358928 cert, and key in /tmp/serving-cert-1841715996/serving-signer.crt, /tmp/serving-cert-1841715996/serving-signer.key\\\\nI0217 20:08:49.003324 1 observer_polling.go:159] Starting file observer\\\\nW0217 20:08:49.008336 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 20:08:49.008526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 20:08:49.009879 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1841715996/tls.crt::/tmp/serving-cert-1841715996/tls.key\\\\\\\"\\\\nF0217 20:08:59.398239 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based 
request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o:/
/93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:23Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.271720 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a159b0be901199a25ed9008e15370b1a4bf1858a4b20bfb4b6b7736f1f3d42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:23Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.280726 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9pkqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db67d891-29db-4ee4-a70c-624cb9af6677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d09e75a7c6bcf350e2d5833b1a9b6679042aa5d6699c1194c32320c5f0c8d208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv9g8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9pkqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:23Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.294863 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q9nz\" (UniqueName: \"kubernetes.io/projected/0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd-kube-api-access-2q9nz\") pod \"network-metrics-daemon-6trvs\" (UID: \"0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd\") " pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.294934 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd-metrics-certs\") pod \"network-metrics-daemon-6trvs\" (UID: \"0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd\") " pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.296797 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3403f83a9b35ceeb1a09a45b62e25e74a10d89d309f00819598790ba6c84a0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e4375a429735c20aa6093586974f36afdd51b82d65270a693d14aefe0b0ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a66eb3dc7a2ab1a3e5e720718416aba285d92566aa472515bc983f0794f48314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61506b0fdf8d54ab45b280e5cf461ab69bc233211c90bc60b339eef643ebb058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b08b2c72ecc14b8e31ad5d7fe12851c01bf38ff98fe406dc26218d892ee9f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09885fdf835300247d4a89f7e2ded3ab350786be47130106f4cc679a5477c451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f456a9579b920edfa769a7b86efdbffa8b58addb6a24d9e1c5b4ccb5ecfd5421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f456a9579b920edfa769a7b86efdbffa8b58addb6a24d9e1c5b4ccb5ecfd5421\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T20:09:20Z\\\",\\\"message\\\":\\\"\\\\nI0217 20:09:20.684710 6224 services_controller.go:360] Finished syncing service metrics on namespace openshift-network-operator for network=default : 343.649µs\\\\nI0217 20:09:20.684720 6224 services_controller.go:356] Processing sync for service openshift-operator-lifecycle-manager/packageserver-service for network=default\\\\nI0217 
20:09:20.684638 6224 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n2fmv_openshift-ovn-kubernetes(4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3161d31a16ce66a75b2928487822f0c1e627deb1564e7f792e96420ebde1ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6af
bcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n2fmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:23Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.305044 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.305075 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.305085 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.305099 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.305110 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:23Z","lastTransitionTime":"2026-02-17T20:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.308076 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z9xgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3073cc3b-c430-444d-b751-5e1416feafa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51236e50a5c628e38556b1fc91434f66c7a2e8885a331b358825bcea2c3d9ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z9xgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:23Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.318112 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cl2v7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b672111-22f9-4dd8-b116-385907278ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39feb0d026694f42ac0934ebdf1749f25d62c1fec2ae68e9af9189fb4cbfe99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2j42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7694ca2ced8356ef8dc44687cd124ebd50f8c
2916a8c27619b2ab970bcb31d5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2j42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cl2v7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:23Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.326786 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6trvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q9nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q9nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6trvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:23Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:23 crc 
kubenswrapper[4793]: I0217 20:09:23.340101 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:23Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.352225 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:23Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.364476 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a159b0be901199a25ed9008e15370b1a4bf1858a4b20bfb4b6b7736f1f3d42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:23Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.373099 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9pkqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db67d891-29db-4ee4-a70c-624cb9af6677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d09e75a7c6bcf350e2d5833b1a9b6679042aa5d6699c1194c32320c5f0c8d208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv9g8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9pkqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:23Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.389137 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3403f83a9b35ceeb1a09a45b62e25e74a10d89d309f00819598790ba6c84a0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e4375a429735c20aa6093586974f36afdd51b82d65270a693d14aefe0b0ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ov
n-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a66eb3dc7a2ab1a3e5e720718416aba285d92566aa472515bc983f0794f48314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61506b0fdf8d54ab45b280e5cf461ab69bc233211c90bc60b339eef643ebb058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b08b2c72ecc14b8e31ad5d7fe12851c01bf38ff98fe406dc26218d892ee9f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09885fdf835300247d4a89f7e2ded3ab350786be47130106f4cc679a5477c451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f456a9579b920edfa769a7b86efdbffa8b58addb6a24d9e1c5b4ccb5ecfd5421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f456a9579b920edfa769a7b86efdbffa8b58addb6a24d9e1c5b4ccb5ecfd5421\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T20:09:20Z\\\",\\\"message\\\":\\\"\\\\nI0217 20:09:20.684710 6224 services_controller.go:360] Finished syncing service metrics on namespace openshift-network-operator for network=default : 343.649µs\\\\nI0217 20:09:20.684720 6224 services_controller.go:356] Processing sync for service openshift-operator-lifecycle-manager/packageserver-service for 
network=default\\\\nI0217 20:09:20.684638 6224 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n2fmv_openshift-ovn-kubernetes(4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3161d31a16ce66a75b2928487822f0c1e627deb1564e7f792e96420ebde1ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6af
bcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n2fmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:23Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.395618 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2q9nz\" (UniqueName: \"kubernetes.io/projected/0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd-kube-api-access-2q9nz\") pod \"network-metrics-daemon-6trvs\" (UID: \"0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd\") " pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.395751 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd-metrics-certs\") pod \"network-metrics-daemon-6trvs\" (UID: \"0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd\") " pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:09:23 crc kubenswrapper[4793]: E0217 20:09:23.395855 4793 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object 
"openshift-multus"/"metrics-daemon-secret" not registered Feb 17 20:09:23 crc kubenswrapper[4793]: E0217 20:09:23.395904 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd-metrics-certs podName:0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd nodeName:}" failed. No retries permitted until 2026-02-17 20:09:23.895892222 +0000 UTC m=+39.187590533 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd-metrics-certs") pod "network-metrics-daemon-6trvs" (UID: "0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.400142 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z9xgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3073cc3b-c430-444d-b751-5e1416feafa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51236e50a5c628e38556b1fc91434f66c7a2e8885a331b358825bcea2c3d9ce7\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z9xgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:23Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.408313 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.408343 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.408354 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:23 crc 
kubenswrapper[4793]: I0217 20:09:23.408369 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.408409 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:23Z","lastTransitionTime":"2026-02-17T20:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.415459 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2q9nz\" (UniqueName: \"kubernetes.io/projected/0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd-kube-api-access-2q9nz\") pod \"network-metrics-daemon-6trvs\" (UID: \"0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd\") " pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.424309 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c8d46b-bddf-41fd-bfe9-829889b911c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3405184556f0b16f2610e42360345bb094d79fe3d713f9dec7c6d28a5124851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d8c33489e77add820819b72d67bd6eca73a11e3fd4a99ae4c5c0136f6257c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a42458e37611ca7ec204f13fd4703a5b1e01189a162f0180be137d614a0379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982055260eb2f38b638b6ee268993806f9a0bf999f21b065659a64144394bc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16cf925bf79279e3553a50dd773c3e6a88f3e8e86837c457c43f1b82c6fd291b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:23Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.442268 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9208cc24-8e8d-4027-af06-291b3b46ee59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f3722f78f83475af38958c1c6a4b0593a4c51be7d8dcbd0d32d3b068dee61b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T20:08:59Z\\\",\\\"message\\\":\\\"W0217 20:08:48.786864 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 20:08:48.787257 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771358928 cert, and key in /tmp/serving-cert-1841715996/serving-signer.crt, /tmp/serving-cert-1841715996/serving-signer.key\\\\nI0217 20:08:49.003324 1 observer_polling.go:159] Starting file observer\\\\nW0217 20:08:49.008336 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 20:08:49.008526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 20:08:49.009879 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1841715996/tls.crt::/tmp/serving-cert-1841715996/tls.key\\\\\\\"\\\\nF0217 20:08:59.398239 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based 
request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o:/
/93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:23Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.456671 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ztwxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c5b745e67f1da59c6b1df97c69bfcbfdc00f4b51a6788751079465a5ce6c177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ndkg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ztwxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:23Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.468008 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf61cc2856b360d3341089d4472c27f8dcf6c7c55831eea511dcf5a931b7a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-17T20:09:23Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.477749 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a786034-a3c6-4693-965a-3bd39bce6caa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c1911a561802d8e92f91e2ca7448754e32dcb5f18cae0906dc070e1d752be38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\"
:\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c883db680c5e015621e98e5b3c06351d9dd6d33ea28b502241344af8df0fd1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:23Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.488124 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:23Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.490513 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 07:10:22.472467573 +0000 UTC Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.498529 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48cdc9966494a4fb1df14123aaf12571b1d71f9d0349963dd1b3719966abd20c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2071f67c2f1c343d9ad571fe62df1e820675df683aa583fa2563de751f2897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:23Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.510098 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.510128 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.510145 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.510160 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.510172 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:23Z","lastTransitionTime":"2026-02-17T20:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.511046 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kpl4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61219b87-834d-490d-bf8e-1657a4081739\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6b9b20c5ff10a449eac5b6396cbcab582a5bbf692d7c9ac880e3cdfa3a63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43cff5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43cff5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57bb0575dd8e0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b57bb0575dd8e0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a556ed90653e74034af578772bca81976205e4bba300489eb17d80cdd84989f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a556ed90653e74034af578772bca81976205e4bba300489eb17d80cdd84989f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kpl4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:23Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.521842 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cad772-a024-49d5-8204-560587b056eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6bf4bf0fcbe5648621dada82af560bd7eac65ce800c8974c59d777f3dc6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57a18cf839c267df5d9a6d343fbc38dbe195fad6d9fff89663d0958a0492a335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe1dc649f7ca667a9fa57e0f96921b449396310fdec0d23dd39d5695ff3c0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cee203033ea02d7ee023522404b8632f64c130e254a84ee96c37222561e4ac1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:23Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.538274 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.538283 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.538404 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:09:23 crc kubenswrapper[4793]: E0217 20:09:23.538382 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 20:09:23 crc kubenswrapper[4793]: E0217 20:09:23.538551 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 20:09:23 crc kubenswrapper[4793]: E0217 20:09:23.538606 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.612931 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.613033 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.613057 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.613082 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.613099 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:23Z","lastTransitionTime":"2026-02-17T20:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.715656 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.715786 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.715809 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.715837 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.715859 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:23Z","lastTransitionTime":"2026-02-17T20:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.819317 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.819386 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.819410 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.819443 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.819468 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:23Z","lastTransitionTime":"2026-02-17T20:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.902813 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd-metrics-certs\") pod \"network-metrics-daemon-6trvs\" (UID: \"0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd\") " pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:09:23 crc kubenswrapper[4793]: E0217 20:09:23.903039 4793 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 20:09:23 crc kubenswrapper[4793]: E0217 20:09:23.903155 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd-metrics-certs podName:0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd nodeName:}" failed. No retries permitted until 2026-02-17 20:09:24.903127638 +0000 UTC m=+40.194826039 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd-metrics-certs") pod "network-metrics-daemon-6trvs" (UID: "0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.921598 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.921725 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.921790 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.921816 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:23 crc kubenswrapper[4793]: I0217 20:09:23.921831 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:23Z","lastTransitionTime":"2026-02-17T20:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:24 crc kubenswrapper[4793]: I0217 20:09:24.025420 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:24 crc kubenswrapper[4793]: I0217 20:09:24.025500 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:24 crc kubenswrapper[4793]: I0217 20:09:24.025518 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:24 crc kubenswrapper[4793]: I0217 20:09:24.025544 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:24 crc kubenswrapper[4793]: I0217 20:09:24.025564 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:24Z","lastTransitionTime":"2026-02-17T20:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:24 crc kubenswrapper[4793]: I0217 20:09:24.131525 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:24 crc kubenswrapper[4793]: I0217 20:09:24.132166 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:24 crc kubenswrapper[4793]: I0217 20:09:24.132196 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:24 crc kubenswrapper[4793]: I0217 20:09:24.132227 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:24 crc kubenswrapper[4793]: I0217 20:09:24.132250 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:24Z","lastTransitionTime":"2026-02-17T20:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:24 crc kubenswrapper[4793]: I0217 20:09:24.234730 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:24 crc kubenswrapper[4793]: I0217 20:09:24.235010 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:24 crc kubenswrapper[4793]: I0217 20:09:24.235116 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:24 crc kubenswrapper[4793]: I0217 20:09:24.235206 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:24 crc kubenswrapper[4793]: I0217 20:09:24.235302 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:24Z","lastTransitionTime":"2026-02-17T20:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:24 crc kubenswrapper[4793]: I0217 20:09:24.338648 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:24 crc kubenswrapper[4793]: I0217 20:09:24.338755 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:24 crc kubenswrapper[4793]: I0217 20:09:24.338783 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:24 crc kubenswrapper[4793]: I0217 20:09:24.338814 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:24 crc kubenswrapper[4793]: I0217 20:09:24.338838 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:24Z","lastTransitionTime":"2026-02-17T20:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:24 crc kubenswrapper[4793]: I0217 20:09:24.442262 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:24 crc kubenswrapper[4793]: I0217 20:09:24.442335 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:24 crc kubenswrapper[4793]: I0217 20:09:24.442358 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:24 crc kubenswrapper[4793]: I0217 20:09:24.442387 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:24 crc kubenswrapper[4793]: I0217 20:09:24.442410 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:24Z","lastTransitionTime":"2026-02-17T20:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:24 crc kubenswrapper[4793]: I0217 20:09:24.490951 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 21:25:49.684490651 +0000 UTC Feb 17 20:09:24 crc kubenswrapper[4793]: I0217 20:09:24.537949 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:09:24 crc kubenswrapper[4793]: E0217 20:09:24.538118 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6trvs" podUID="0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd" Feb 17 20:09:24 crc kubenswrapper[4793]: I0217 20:09:24.545260 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:24 crc kubenswrapper[4793]: I0217 20:09:24.545297 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:24 crc kubenswrapper[4793]: I0217 20:09:24.545309 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:24 crc kubenswrapper[4793]: I0217 20:09:24.545324 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:24 crc kubenswrapper[4793]: I0217 20:09:24.545336 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:24Z","lastTransitionTime":"2026-02-17T20:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:24 crc kubenswrapper[4793]: I0217 20:09:24.647539 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:24 crc kubenswrapper[4793]: I0217 20:09:24.647584 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:24 crc kubenswrapper[4793]: I0217 20:09:24.647600 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:24 crc kubenswrapper[4793]: I0217 20:09:24.647621 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:24 crc kubenswrapper[4793]: I0217 20:09:24.647635 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:24Z","lastTransitionTime":"2026-02-17T20:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:24 crc kubenswrapper[4793]: I0217 20:09:24.749556 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:24 crc kubenswrapper[4793]: I0217 20:09:24.749594 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:24 crc kubenswrapper[4793]: I0217 20:09:24.749607 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:24 crc kubenswrapper[4793]: I0217 20:09:24.749625 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:24 crc kubenswrapper[4793]: I0217 20:09:24.749636 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:24Z","lastTransitionTime":"2026-02-17T20:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:24 crc kubenswrapper[4793]: I0217 20:09:24.851242 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:24 crc kubenswrapper[4793]: I0217 20:09:24.851270 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:24 crc kubenswrapper[4793]: I0217 20:09:24.851278 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:24 crc kubenswrapper[4793]: I0217 20:09:24.851289 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:24 crc kubenswrapper[4793]: I0217 20:09:24.851297 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:24Z","lastTransitionTime":"2026-02-17T20:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:24 crc kubenswrapper[4793]: I0217 20:09:24.914814 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd-metrics-certs\") pod \"network-metrics-daemon-6trvs\" (UID: \"0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd\") " pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:09:24 crc kubenswrapper[4793]: E0217 20:09:24.914973 4793 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 20:09:24 crc kubenswrapper[4793]: E0217 20:09:24.915071 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd-metrics-certs podName:0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd nodeName:}" failed. No retries permitted until 2026-02-17 20:09:26.915045267 +0000 UTC m=+42.206743618 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd-metrics-certs") pod "network-metrics-daemon-6trvs" (UID: "0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 20:09:24 crc kubenswrapper[4793]: I0217 20:09:24.953741 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:24 crc kubenswrapper[4793]: I0217 20:09:24.953770 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:24 crc kubenswrapper[4793]: I0217 20:09:24.953805 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:24 crc kubenswrapper[4793]: I0217 20:09:24.953819 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:24 crc kubenswrapper[4793]: I0217 20:09:24.953827 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:24Z","lastTransitionTime":"2026-02-17T20:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.056982 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.057046 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.057067 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.057130 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.057150 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:25Z","lastTransitionTime":"2026-02-17T20:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.160643 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.160750 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.160778 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.160806 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.160827 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:25Z","lastTransitionTime":"2026-02-17T20:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.264823 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.264892 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.264907 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.264930 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.264948 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:25Z","lastTransitionTime":"2026-02-17T20:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.368978 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.369062 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.369085 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.369111 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.369129 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:25Z","lastTransitionTime":"2026-02-17T20:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.473022 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.473067 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.473079 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.473095 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.473108 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:25Z","lastTransitionTime":"2026-02-17T20:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.491485 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 13:12:47.581473442 +0000 UTC Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.538154 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.538210 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.538245 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:09:25 crc kubenswrapper[4793]: E0217 20:09:25.538292 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 20:09:25 crc kubenswrapper[4793]: E0217 20:09:25.538433 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 20:09:25 crc kubenswrapper[4793]: E0217 20:09:25.538551 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.554628 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf61cc2856b360d3341089d4472c27f8dcf6c7c55831eea511dcf5a931b7a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:25Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.570892 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a786034-a3c6-4693-965a-3bd39bce6caa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c1911a561802d8e92f91e2ca7448754e32dcb5f18cae0906dc070e1d752be38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c883db680c5e015621e98e5b3c06351d9dd6d33ea28b502241344af8df0fd1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T20:09:25Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.575115 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.575193 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.575211 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.575235 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.575251 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:25Z","lastTransitionTime":"2026-02-17T20:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.584149 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ztwxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c5b745e67f1da59c6b1df97c69bfcbfdc00f4b51a6788751079465a5ce6c177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ndkg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ztwxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:25Z 
is after 2025-08-24T17:21:41Z" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.601410 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cad772-a024-49d5-8204-560587b056eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6bf4bf0fcbe5648621dada82af560bd7eac65ce800c8974c59d777f3dc6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57a18cf839c267df5
d9a6d343fbc38dbe195fad6d9fff89663d0958a0492a335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe1dc649f7ca667a9fa57e0f96921b449396310fdec0d23dd39d5695ff3c0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cee203033ea02d7ee023522404b8632f64c130e254a84ee96c37222561e4ac1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a5
78bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:25Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.613903 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:25Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.626774 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48cdc9966494a4fb1df14123aaf12571b1d71f9d0349963dd1b3719966abd20c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2071f67c2f1c343d9ad571fe62df1e820675df683aa583fa2563de751f2897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:25Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.645258 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kpl4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61219b87-834d-490d-bf8e-1657a4081739\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6b9b20c5ff10a449eac5b6396cbcab582a5bbf692d7c9ac880e3cdfa3a63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43cff
5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43cff5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57bb0575dd8e0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b57bb0575dd8e0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:14Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a556ed90653e74034af578772bca81976205e4bba300489eb17d80cdd84989f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a556ed90653e74034af578772bca81976205e4bba300489eb17d80cdd84989f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kpl4r\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:25Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.659420 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:25Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.674361 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:25Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.679853 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.680045 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.680071 4793 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.680113 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.680136 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:25Z","lastTransitionTime":"2026-02-17T20:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.688672 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cl2v7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b672111-22f9-4dd8-b116-385907278ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39feb0d02
6694f42ac0934ebdf1749f25d62c1fec2ae68e9af9189fb4cbfe99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2j42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7694ca2ced8356ef8dc44687cd124ebd50f8c2916a8c27619b2ab970bcb31d5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2j42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cl2v7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:25Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.700402 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6trvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q9nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q9nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6trvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:25Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:25 crc 
kubenswrapper[4793]: I0217 20:09:25.713922 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z9xgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3073cc3b-c430-444d-b751-5e1416feafa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51236e50a5c628e38556b1fc91434f66c7a2e8885a331b358825bcea2c3d9ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g
jvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z9xgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:25Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.738301 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c8d46b-bddf-41fd-bfe9-829889b911c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3405184556f0b16f2610e42360345bb094d79fe3d713f9dec7c6d28a5124851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e4
9117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d8c33489e77add820819b72d67bd6eca73a11e3fd4a99ae4c5c0136f6257c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a42458e37611ca7ec204f13fd4703a5b1e01189a162f0180be137d614a0379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982055260eb2f38b638b6ee268993806f9a0bf999f21b065659a64144394bc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16cf925bf79279e3553a50dd773c3e6a88f3e8e86837c457c43f1b82c6fd291b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\
\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e
01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:25Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.759013 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9208cc24-8e8d-4027-af06-291b3b46ee59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f3722f78f83475af38958c1c6a4b0593a4c51be7d8dcbd0d32d3b068dee61b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T20:08:59Z\\\"
,\\\"message\\\":\\\"W0217 20:08:48.786864 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 20:08:48.787257 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771358928 cert, and key in /tmp/serving-cert-1841715996/serving-signer.crt, /tmp/serving-cert-1841715996/serving-signer.key\\\\nI0217 20:08:49.003324 1 observer_polling.go:159] Starting file observer\\\\nW0217 20:08:49.008336 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 20:08:49.008526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 20:08:49.009879 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1841715996/tls.crt::/tmp/serving-cert-1841715996/tls.key\\\\\\\"\\\\nF0217 20:08:59.398239 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:25Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.774715 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a159b0be901199a25ed9008e15370b1a4bf1858a4b20bfb4b6b7736f1f3d42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:25Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.783537 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.783618 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.783645 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.783678 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.783813 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:25Z","lastTransitionTime":"2026-02-17T20:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.786850 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9pkqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db67d891-29db-4ee4-a70c-624cb9af6677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d09e75a7c6bcf350e2d5833b1a9b6679042aa5d6699c1194c32320c5f0c8d208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv9g8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9pkqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:25Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.803131 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3403f83a9b35ceeb1a09a45b62e25e74a10d89d309f00819598790ba6c84a0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e4375a429735c20aa6093586974f36afdd51b82d65270a693d14aefe0b0ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a66eb3dc7a2ab1a3e5e720718416aba285d92566aa472515bc983f0794f48314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61506b0fdf8d54ab45b280e5cf461ab69bc233211c90bc60b339eef643ebb058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b08b2c72ecc14b8e31ad5d7fe12851c01bf38ff98fe406dc26218d892ee9f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09885fdf835300247d4a89f7e2ded3ab350786be47130106f4cc679a5477c451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f456a9579b920edfa769a7b86efdbffa8b58addb6a24d9e1c5b4ccb5ecfd5421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f456a9579b920edfa769a7b86efdbffa8b58addb6a24d9e1c5b4ccb5ecfd5421\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T20:09:20Z\\\",\\\"message\\\":\\\"\\\\nI0217 20:09:20.684710 6224 services_controller.go:360] Finished syncing service metrics on namespace openshift-network-operator for network=default : 343.649µs\\\\nI0217 20:09:20.684720 6224 services_controller.go:356] Processing sync for service openshift-operator-lifecycle-manager/packageserver-service for network=default\\\\nI0217 
20:09:20.684638 6224 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n2fmv_openshift-ovn-kubernetes(4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3161d31a16ce66a75b2928487822f0c1e627deb1564e7f792e96420ebde1ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6af
bcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n2fmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:25Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.885959 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.886000 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.886014 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.886031 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.886043 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:25Z","lastTransitionTime":"2026-02-17T20:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.989845 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.989884 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.989895 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.989909 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:25 crc kubenswrapper[4793]: I0217 20:09:25.989918 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:25Z","lastTransitionTime":"2026-02-17T20:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:26 crc kubenswrapper[4793]: I0217 20:09:26.092152 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:26 crc kubenswrapper[4793]: I0217 20:09:26.092191 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:26 crc kubenswrapper[4793]: I0217 20:09:26.092199 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:26 crc kubenswrapper[4793]: I0217 20:09:26.092259 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:26 crc kubenswrapper[4793]: I0217 20:09:26.092272 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:26Z","lastTransitionTime":"2026-02-17T20:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:26 crc kubenswrapper[4793]: I0217 20:09:26.196246 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:26 crc kubenswrapper[4793]: I0217 20:09:26.196311 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:26 crc kubenswrapper[4793]: I0217 20:09:26.196329 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:26 crc kubenswrapper[4793]: I0217 20:09:26.196353 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:26 crc kubenswrapper[4793]: I0217 20:09:26.196370 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:26Z","lastTransitionTime":"2026-02-17T20:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:26 crc kubenswrapper[4793]: I0217 20:09:26.299363 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:26 crc kubenswrapper[4793]: I0217 20:09:26.299429 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:26 crc kubenswrapper[4793]: I0217 20:09:26.299451 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:26 crc kubenswrapper[4793]: I0217 20:09:26.299483 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:26 crc kubenswrapper[4793]: I0217 20:09:26.299504 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:26Z","lastTransitionTime":"2026-02-17T20:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:26 crc kubenswrapper[4793]: I0217 20:09:26.402183 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:26 crc kubenswrapper[4793]: I0217 20:09:26.402267 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:26 crc kubenswrapper[4793]: I0217 20:09:26.402289 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:26 crc kubenswrapper[4793]: I0217 20:09:26.402317 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:26 crc kubenswrapper[4793]: I0217 20:09:26.402339 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:26Z","lastTransitionTime":"2026-02-17T20:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:26 crc kubenswrapper[4793]: I0217 20:09:26.491590 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 06:50:19.077177339 +0000 UTC Feb 17 20:09:26 crc kubenswrapper[4793]: I0217 20:09:26.505716 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:26 crc kubenswrapper[4793]: I0217 20:09:26.505776 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:26 crc kubenswrapper[4793]: I0217 20:09:26.505798 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:26 crc kubenswrapper[4793]: I0217 20:09:26.505827 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:26 crc kubenswrapper[4793]: I0217 20:09:26.505848 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:26Z","lastTransitionTime":"2026-02-17T20:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:26 crc kubenswrapper[4793]: I0217 20:09:26.538768 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:09:26 crc kubenswrapper[4793]: E0217 20:09:26.538972 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6trvs" podUID="0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd" Feb 17 20:09:26 crc kubenswrapper[4793]: I0217 20:09:26.608875 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:26 crc kubenswrapper[4793]: I0217 20:09:26.608934 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:26 crc kubenswrapper[4793]: I0217 20:09:26.608951 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:26 crc kubenswrapper[4793]: I0217 20:09:26.608973 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:26 crc kubenswrapper[4793]: I0217 20:09:26.608990 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:26Z","lastTransitionTime":"2026-02-17T20:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:26 crc kubenswrapper[4793]: I0217 20:09:26.713021 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:26 crc kubenswrapper[4793]: I0217 20:09:26.713120 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:26 crc kubenswrapper[4793]: I0217 20:09:26.713141 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:26 crc kubenswrapper[4793]: I0217 20:09:26.713219 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:26 crc kubenswrapper[4793]: I0217 20:09:26.713249 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:26Z","lastTransitionTime":"2026-02-17T20:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:26 crc kubenswrapper[4793]: I0217 20:09:26.817104 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:26 crc kubenswrapper[4793]: I0217 20:09:26.817194 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:26 crc kubenswrapper[4793]: I0217 20:09:26.817212 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:26 crc kubenswrapper[4793]: I0217 20:09:26.817236 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:26 crc kubenswrapper[4793]: I0217 20:09:26.817253 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:26Z","lastTransitionTime":"2026-02-17T20:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:26 crc kubenswrapper[4793]: I0217 20:09:26.920774 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:26 crc kubenswrapper[4793]: I0217 20:09:26.920851 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:26 crc kubenswrapper[4793]: I0217 20:09:26.920871 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:26 crc kubenswrapper[4793]: I0217 20:09:26.920894 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:26 crc kubenswrapper[4793]: I0217 20:09:26.920911 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:26Z","lastTransitionTime":"2026-02-17T20:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:26 crc kubenswrapper[4793]: I0217 20:09:26.937832 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd-metrics-certs\") pod \"network-metrics-daemon-6trvs\" (UID: \"0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd\") " pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:09:26 crc kubenswrapper[4793]: E0217 20:09:26.937988 4793 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 20:09:26 crc kubenswrapper[4793]: E0217 20:09:26.938078 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd-metrics-certs podName:0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd nodeName:}" failed. No retries permitted until 2026-02-17 20:09:30.938056663 +0000 UTC m=+46.229755014 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd-metrics-certs") pod "network-metrics-daemon-6trvs" (UID: "0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 20:09:27 crc kubenswrapper[4793]: I0217 20:09:27.023638 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:27 crc kubenswrapper[4793]: I0217 20:09:27.023734 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:27 crc kubenswrapper[4793]: I0217 20:09:27.023763 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:27 crc kubenswrapper[4793]: I0217 20:09:27.023791 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:27 crc kubenswrapper[4793]: I0217 20:09:27.023814 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:27Z","lastTransitionTime":"2026-02-17T20:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:27 crc kubenswrapper[4793]: I0217 20:09:27.127492 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:27 crc kubenswrapper[4793]: I0217 20:09:27.127553 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:27 crc kubenswrapper[4793]: I0217 20:09:27.127576 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:27 crc kubenswrapper[4793]: I0217 20:09:27.127605 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:27 crc kubenswrapper[4793]: I0217 20:09:27.127627 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:27Z","lastTransitionTime":"2026-02-17T20:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:27 crc kubenswrapper[4793]: I0217 20:09:27.231405 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:27 crc kubenswrapper[4793]: I0217 20:09:27.231503 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:27 crc kubenswrapper[4793]: I0217 20:09:27.231533 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:27 crc kubenswrapper[4793]: I0217 20:09:27.231570 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:27 crc kubenswrapper[4793]: I0217 20:09:27.231594 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:27Z","lastTransitionTime":"2026-02-17T20:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:27 crc kubenswrapper[4793]: I0217 20:09:27.335030 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:27 crc kubenswrapper[4793]: I0217 20:09:27.335075 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:27 crc kubenswrapper[4793]: I0217 20:09:27.335086 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:27 crc kubenswrapper[4793]: I0217 20:09:27.335103 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:27 crc kubenswrapper[4793]: I0217 20:09:27.335114 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:27Z","lastTransitionTime":"2026-02-17T20:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:27 crc kubenswrapper[4793]: I0217 20:09:27.439006 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:27 crc kubenswrapper[4793]: I0217 20:09:27.439080 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:27 crc kubenswrapper[4793]: I0217 20:09:27.439099 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:27 crc kubenswrapper[4793]: I0217 20:09:27.439126 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:27 crc kubenswrapper[4793]: I0217 20:09:27.439144 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:27Z","lastTransitionTime":"2026-02-17T20:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:27 crc kubenswrapper[4793]: I0217 20:09:27.492911 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 01:18:13.335029721 +0000 UTC Feb 17 20:09:27 crc kubenswrapper[4793]: I0217 20:09:27.538566 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:09:27 crc kubenswrapper[4793]: I0217 20:09:27.538630 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:09:27 crc kubenswrapper[4793]: I0217 20:09:27.538673 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:09:27 crc kubenswrapper[4793]: E0217 20:09:27.538809 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 20:09:27 crc kubenswrapper[4793]: E0217 20:09:27.538947 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 20:09:27 crc kubenswrapper[4793]: E0217 20:09:27.539159 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 20:09:27 crc kubenswrapper[4793]: I0217 20:09:27.542603 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:27 crc kubenswrapper[4793]: I0217 20:09:27.542653 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:27 crc kubenswrapper[4793]: I0217 20:09:27.542676 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:27 crc kubenswrapper[4793]: I0217 20:09:27.542751 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:27 crc kubenswrapper[4793]: I0217 20:09:27.542775 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:27Z","lastTransitionTime":"2026-02-17T20:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:27 crc kubenswrapper[4793]: I0217 20:09:27.646294 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:27 crc kubenswrapper[4793]: I0217 20:09:27.646504 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:27 crc kubenswrapper[4793]: I0217 20:09:27.646576 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:27 crc kubenswrapper[4793]: I0217 20:09:27.646606 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:27 crc kubenswrapper[4793]: I0217 20:09:27.646626 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:27Z","lastTransitionTime":"2026-02-17T20:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:27 crc kubenswrapper[4793]: I0217 20:09:27.750430 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:27 crc kubenswrapper[4793]: I0217 20:09:27.750930 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:27 crc kubenswrapper[4793]: I0217 20:09:27.751165 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:27 crc kubenswrapper[4793]: I0217 20:09:27.751545 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:27 crc kubenswrapper[4793]: I0217 20:09:27.751883 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:27Z","lastTransitionTime":"2026-02-17T20:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:27 crc kubenswrapper[4793]: I0217 20:09:27.855896 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:27 crc kubenswrapper[4793]: I0217 20:09:27.855998 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:27 crc kubenswrapper[4793]: I0217 20:09:27.856022 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:27 crc kubenswrapper[4793]: I0217 20:09:27.856046 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:27 crc kubenswrapper[4793]: I0217 20:09:27.856067 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:27Z","lastTransitionTime":"2026-02-17T20:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:27 crc kubenswrapper[4793]: I0217 20:09:27.959649 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:27 crc kubenswrapper[4793]: I0217 20:09:27.960356 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:27 crc kubenswrapper[4793]: I0217 20:09:27.960629 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:27 crc kubenswrapper[4793]: I0217 20:09:27.960966 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:27 crc kubenswrapper[4793]: I0217 20:09:27.961149 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:27Z","lastTransitionTime":"2026-02-17T20:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:28 crc kubenswrapper[4793]: I0217 20:09:28.064063 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:28 crc kubenswrapper[4793]: I0217 20:09:28.064440 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:28 crc kubenswrapper[4793]: I0217 20:09:28.064578 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:28 crc kubenswrapper[4793]: I0217 20:09:28.064743 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:28 crc kubenswrapper[4793]: I0217 20:09:28.064869 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:28Z","lastTransitionTime":"2026-02-17T20:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:28 crc kubenswrapper[4793]: I0217 20:09:28.167120 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:28 crc kubenswrapper[4793]: I0217 20:09:28.167389 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:28 crc kubenswrapper[4793]: I0217 20:09:28.167477 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:28 crc kubenswrapper[4793]: I0217 20:09:28.167558 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:28 crc kubenswrapper[4793]: I0217 20:09:28.167639 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:28Z","lastTransitionTime":"2026-02-17T20:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:28 crc kubenswrapper[4793]: I0217 20:09:28.271102 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:28 crc kubenswrapper[4793]: I0217 20:09:28.271131 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:28 crc kubenswrapper[4793]: I0217 20:09:28.271140 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:28 crc kubenswrapper[4793]: I0217 20:09:28.271152 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:28 crc kubenswrapper[4793]: I0217 20:09:28.271179 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:28Z","lastTransitionTime":"2026-02-17T20:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:28 crc kubenswrapper[4793]: I0217 20:09:28.373057 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:28 crc kubenswrapper[4793]: I0217 20:09:28.373109 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:28 crc kubenswrapper[4793]: I0217 20:09:28.373129 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:28 crc kubenswrapper[4793]: I0217 20:09:28.373152 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:28 crc kubenswrapper[4793]: I0217 20:09:28.373167 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:28Z","lastTransitionTime":"2026-02-17T20:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:28 crc kubenswrapper[4793]: I0217 20:09:28.476258 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:28 crc kubenswrapper[4793]: I0217 20:09:28.476317 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:28 crc kubenswrapper[4793]: I0217 20:09:28.476342 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:28 crc kubenswrapper[4793]: I0217 20:09:28.476372 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:28 crc kubenswrapper[4793]: I0217 20:09:28.476396 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:28Z","lastTransitionTime":"2026-02-17T20:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:28 crc kubenswrapper[4793]: I0217 20:09:28.493517 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 01:38:17.304934368 +0000 UTC Feb 17 20:09:28 crc kubenswrapper[4793]: I0217 20:09:28.537882 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:09:28 crc kubenswrapper[4793]: E0217 20:09:28.538333 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6trvs" podUID="0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd" Feb 17 20:09:28 crc kubenswrapper[4793]: I0217 20:09:28.579510 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:28 crc kubenswrapper[4793]: I0217 20:09:28.579874 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:28 crc kubenswrapper[4793]: I0217 20:09:28.579923 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:28 crc kubenswrapper[4793]: I0217 20:09:28.579949 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:28 crc kubenswrapper[4793]: I0217 20:09:28.579968 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:28Z","lastTransitionTime":"2026-02-17T20:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:28 crc kubenswrapper[4793]: I0217 20:09:28.683009 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:28 crc kubenswrapper[4793]: I0217 20:09:28.683078 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:28 crc kubenswrapper[4793]: I0217 20:09:28.683099 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:28 crc kubenswrapper[4793]: I0217 20:09:28.683128 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:28 crc kubenswrapper[4793]: I0217 20:09:28.683145 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:28Z","lastTransitionTime":"2026-02-17T20:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:28 crc kubenswrapper[4793]: I0217 20:09:28.785964 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:28 crc kubenswrapper[4793]: I0217 20:09:28.786358 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:28 crc kubenswrapper[4793]: I0217 20:09:28.786775 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:28 crc kubenswrapper[4793]: I0217 20:09:28.787160 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:28 crc kubenswrapper[4793]: I0217 20:09:28.787374 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:28Z","lastTransitionTime":"2026-02-17T20:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:28 crc kubenswrapper[4793]: I0217 20:09:28.890923 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:28 crc kubenswrapper[4793]: I0217 20:09:28.890996 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:28 crc kubenswrapper[4793]: I0217 20:09:28.891019 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:28 crc kubenswrapper[4793]: I0217 20:09:28.891048 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:28 crc kubenswrapper[4793]: I0217 20:09:28.891071 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:28Z","lastTransitionTime":"2026-02-17T20:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:28 crc kubenswrapper[4793]: I0217 20:09:28.993660 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:28 crc kubenswrapper[4793]: I0217 20:09:28.993749 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:28 crc kubenswrapper[4793]: I0217 20:09:28.993774 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:28 crc kubenswrapper[4793]: I0217 20:09:28.993805 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:28 crc kubenswrapper[4793]: I0217 20:09:28.993829 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:28Z","lastTransitionTime":"2026-02-17T20:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:29 crc kubenswrapper[4793]: I0217 20:09:29.097029 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:29 crc kubenswrapper[4793]: I0217 20:09:29.097385 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:29 crc kubenswrapper[4793]: I0217 20:09:29.097643 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:29 crc kubenswrapper[4793]: I0217 20:09:29.097932 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:29 crc kubenswrapper[4793]: I0217 20:09:29.098164 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:29Z","lastTransitionTime":"2026-02-17T20:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:29 crc kubenswrapper[4793]: I0217 20:09:29.202023 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:29 crc kubenswrapper[4793]: I0217 20:09:29.202359 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:29 crc kubenswrapper[4793]: I0217 20:09:29.202417 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:29 crc kubenswrapper[4793]: I0217 20:09:29.202448 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:29 crc kubenswrapper[4793]: I0217 20:09:29.202470 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:29Z","lastTransitionTime":"2026-02-17T20:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:29 crc kubenswrapper[4793]: I0217 20:09:29.305985 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:29 crc kubenswrapper[4793]: I0217 20:09:29.306040 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:29 crc kubenswrapper[4793]: I0217 20:09:29.306059 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:29 crc kubenswrapper[4793]: I0217 20:09:29.306081 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:29 crc kubenswrapper[4793]: I0217 20:09:29.306098 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:29Z","lastTransitionTime":"2026-02-17T20:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:29 crc kubenswrapper[4793]: I0217 20:09:29.409038 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:29 crc kubenswrapper[4793]: I0217 20:09:29.409104 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:29 crc kubenswrapper[4793]: I0217 20:09:29.409117 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:29 crc kubenswrapper[4793]: I0217 20:09:29.409144 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:29 crc kubenswrapper[4793]: I0217 20:09:29.409159 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:29Z","lastTransitionTime":"2026-02-17T20:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:29 crc kubenswrapper[4793]: I0217 20:09:29.494620 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 23:10:00.103567544 +0000 UTC Feb 17 20:09:29 crc kubenswrapper[4793]: I0217 20:09:29.512301 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:29 crc kubenswrapper[4793]: I0217 20:09:29.512376 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:29 crc kubenswrapper[4793]: I0217 20:09:29.512392 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:29 crc kubenswrapper[4793]: I0217 20:09:29.512414 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:29 crc kubenswrapper[4793]: I0217 20:09:29.512427 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:29Z","lastTransitionTime":"2026-02-17T20:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:29 crc kubenswrapper[4793]: I0217 20:09:29.538809 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:09:29 crc kubenswrapper[4793]: I0217 20:09:29.538851 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:09:29 crc kubenswrapper[4793]: I0217 20:09:29.538859 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:09:29 crc kubenswrapper[4793]: E0217 20:09:29.539047 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 20:09:29 crc kubenswrapper[4793]: E0217 20:09:29.539176 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 20:09:29 crc kubenswrapper[4793]: E0217 20:09:29.539333 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 20:09:29 crc kubenswrapper[4793]: I0217 20:09:29.615103 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:29 crc kubenswrapper[4793]: I0217 20:09:29.615159 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:29 crc kubenswrapper[4793]: I0217 20:09:29.615176 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:29 crc kubenswrapper[4793]: I0217 20:09:29.615199 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:29 crc kubenswrapper[4793]: I0217 20:09:29.615215 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:29Z","lastTransitionTime":"2026-02-17T20:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:29 crc kubenswrapper[4793]: I0217 20:09:29.718303 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:29 crc kubenswrapper[4793]: I0217 20:09:29.718357 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:29 crc kubenswrapper[4793]: I0217 20:09:29.718370 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:29 crc kubenswrapper[4793]: I0217 20:09:29.718389 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:29 crc kubenswrapper[4793]: I0217 20:09:29.718401 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:29Z","lastTransitionTime":"2026-02-17T20:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:29 crc kubenswrapper[4793]: I0217 20:09:29.820231 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:29 crc kubenswrapper[4793]: I0217 20:09:29.820270 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:29 crc kubenswrapper[4793]: I0217 20:09:29.820279 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:29 crc kubenswrapper[4793]: I0217 20:09:29.820293 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:29 crc kubenswrapper[4793]: I0217 20:09:29.820302 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:29Z","lastTransitionTime":"2026-02-17T20:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:29 crc kubenswrapper[4793]: I0217 20:09:29.923309 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:29 crc kubenswrapper[4793]: I0217 20:09:29.923352 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:29 crc kubenswrapper[4793]: I0217 20:09:29.923363 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:29 crc kubenswrapper[4793]: I0217 20:09:29.923379 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:29 crc kubenswrapper[4793]: I0217 20:09:29.923390 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:29Z","lastTransitionTime":"2026-02-17T20:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:30 crc kubenswrapper[4793]: I0217 20:09:30.025266 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:30 crc kubenswrapper[4793]: I0217 20:09:30.025336 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:30 crc kubenswrapper[4793]: I0217 20:09:30.025354 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:30 crc kubenswrapper[4793]: I0217 20:09:30.025379 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:30 crc kubenswrapper[4793]: I0217 20:09:30.025397 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:30Z","lastTransitionTime":"2026-02-17T20:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:30 crc kubenswrapper[4793]: I0217 20:09:30.128447 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:30 crc kubenswrapper[4793]: I0217 20:09:30.128510 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:30 crc kubenswrapper[4793]: I0217 20:09:30.128528 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:30 crc kubenswrapper[4793]: I0217 20:09:30.128551 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:30 crc kubenswrapper[4793]: I0217 20:09:30.128568 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:30Z","lastTransitionTime":"2026-02-17T20:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:30 crc kubenswrapper[4793]: I0217 20:09:30.230735 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:30 crc kubenswrapper[4793]: I0217 20:09:30.230771 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:30 crc kubenswrapper[4793]: I0217 20:09:30.230782 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:30 crc kubenswrapper[4793]: I0217 20:09:30.230798 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:30 crc kubenswrapper[4793]: I0217 20:09:30.230809 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:30Z","lastTransitionTime":"2026-02-17T20:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:30 crc kubenswrapper[4793]: I0217 20:09:30.333530 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:30 crc kubenswrapper[4793]: I0217 20:09:30.333594 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:30 crc kubenswrapper[4793]: I0217 20:09:30.333616 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:30 crc kubenswrapper[4793]: I0217 20:09:30.333646 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:30 crc kubenswrapper[4793]: I0217 20:09:30.333669 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:30Z","lastTransitionTime":"2026-02-17T20:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:30 crc kubenswrapper[4793]: I0217 20:09:30.437091 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:30 crc kubenswrapper[4793]: I0217 20:09:30.437145 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:30 crc kubenswrapper[4793]: I0217 20:09:30.437153 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:30 crc kubenswrapper[4793]: I0217 20:09:30.437169 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:30 crc kubenswrapper[4793]: I0217 20:09:30.437180 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:30Z","lastTransitionTime":"2026-02-17T20:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:30 crc kubenswrapper[4793]: I0217 20:09:30.496366 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 02:03:56.350484024 +0000 UTC Feb 17 20:09:30 crc kubenswrapper[4793]: I0217 20:09:30.537803 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:09:30 crc kubenswrapper[4793]: E0217 20:09:30.537934 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6trvs" podUID="0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd" Feb 17 20:09:30 crc kubenswrapper[4793]: I0217 20:09:30.540003 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:30 crc kubenswrapper[4793]: I0217 20:09:30.540030 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:30 crc kubenswrapper[4793]: I0217 20:09:30.540040 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:30 crc kubenswrapper[4793]: I0217 20:09:30.540051 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:30 crc kubenswrapper[4793]: I0217 20:09:30.540061 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:30Z","lastTransitionTime":"2026-02-17T20:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:30 crc kubenswrapper[4793]: I0217 20:09:30.643421 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:30 crc kubenswrapper[4793]: I0217 20:09:30.643521 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:30 crc kubenswrapper[4793]: I0217 20:09:30.643553 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:30 crc kubenswrapper[4793]: I0217 20:09:30.643591 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:30 crc kubenswrapper[4793]: I0217 20:09:30.643614 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:30Z","lastTransitionTime":"2026-02-17T20:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:30 crc kubenswrapper[4793]: I0217 20:09:30.752164 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:30 crc kubenswrapper[4793]: I0217 20:09:30.752212 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:30 crc kubenswrapper[4793]: I0217 20:09:30.752229 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:30 crc kubenswrapper[4793]: I0217 20:09:30.752252 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:30 crc kubenswrapper[4793]: I0217 20:09:30.752269 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:30Z","lastTransitionTime":"2026-02-17T20:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:30 crc kubenswrapper[4793]: I0217 20:09:30.854756 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:30 crc kubenswrapper[4793]: I0217 20:09:30.854793 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:30 crc kubenswrapper[4793]: I0217 20:09:30.854804 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:30 crc kubenswrapper[4793]: I0217 20:09:30.854819 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:30 crc kubenswrapper[4793]: I0217 20:09:30.854832 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:30Z","lastTransitionTime":"2026-02-17T20:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:30 crc kubenswrapper[4793]: I0217 20:09:30.957724 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:30 crc kubenswrapper[4793]: I0217 20:09:30.957772 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:30 crc kubenswrapper[4793]: I0217 20:09:30.957784 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:30 crc kubenswrapper[4793]: I0217 20:09:30.957801 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:30 crc kubenswrapper[4793]: I0217 20:09:30.957812 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:30Z","lastTransitionTime":"2026-02-17T20:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:30 crc kubenswrapper[4793]: I0217 20:09:30.983601 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd-metrics-certs\") pod \"network-metrics-daemon-6trvs\" (UID: \"0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd\") " pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:09:30 crc kubenswrapper[4793]: E0217 20:09:30.983917 4793 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 20:09:30 crc kubenswrapper[4793]: E0217 20:09:30.984035 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd-metrics-certs podName:0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd nodeName:}" failed. No retries permitted until 2026-02-17 20:09:38.983998314 +0000 UTC m=+54.275696655 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd-metrics-certs") pod "network-metrics-daemon-6trvs" (UID: "0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.061268 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.061344 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.061371 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.061404 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.061428 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:31Z","lastTransitionTime":"2026-02-17T20:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.165133 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.165206 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.165228 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.165252 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.165269 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:31Z","lastTransitionTime":"2026-02-17T20:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.268088 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.268144 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.268161 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.268184 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.268202 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:31Z","lastTransitionTime":"2026-02-17T20:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.370843 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.370897 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.370906 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.370920 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.370930 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:31Z","lastTransitionTime":"2026-02-17T20:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.474065 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.474133 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.474150 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.474173 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.474190 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:31Z","lastTransitionTime":"2026-02-17T20:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.497132 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 01:31:43.914843221 +0000 UTC Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.537672 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.537794 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.537802 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:09:31 crc kubenswrapper[4793]: E0217 20:09:31.537912 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 20:09:31 crc kubenswrapper[4793]: E0217 20:09:31.538026 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 20:09:31 crc kubenswrapper[4793]: E0217 20:09:31.538146 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.551000 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.551051 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.551083 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.551108 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.551128 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:31Z","lastTransitionTime":"2026-02-17T20:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:31 crc kubenswrapper[4793]: E0217 20:09:31.571766 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9899edd8-40d9-4d30-b914-dcde4645fb8b\\\",\\\"systemUUID\\\":\\\"6761f953-4396-4ffc-8ccb-0bad99a4cc8e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:31Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.577303 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.577365 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.577388 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.577416 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.577438 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:31Z","lastTransitionTime":"2026-02-17T20:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:31 crc kubenswrapper[4793]: E0217 20:09:31.599335 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9899edd8-40d9-4d30-b914-dcde4645fb8b\\\",\\\"systemUUID\\\":\\\"6761f953-4396-4ffc-8ccb-0bad99a4cc8e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:31Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.605051 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.605105 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.605121 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.605145 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.605172 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:31Z","lastTransitionTime":"2026-02-17T20:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:31 crc kubenswrapper[4793]: E0217 20:09:31.625568 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9899edd8-40d9-4d30-b914-dcde4645fb8b\\\",\\\"systemUUID\\\":\\\"6761f953-4396-4ffc-8ccb-0bad99a4cc8e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:31Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.632018 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.632082 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.632104 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.632134 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.632157 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:31Z","lastTransitionTime":"2026-02-17T20:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:31 crc kubenswrapper[4793]: E0217 20:09:31.650443 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9899edd8-40d9-4d30-b914-dcde4645fb8b\\\",\\\"systemUUID\\\":\\\"6761f953-4396-4ffc-8ccb-0bad99a4cc8e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:31Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.654126 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.654170 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.654182 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.654203 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.654216 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:31Z","lastTransitionTime":"2026-02-17T20:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:31 crc kubenswrapper[4793]: E0217 20:09:31.671724 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9899edd8-40d9-4d30-b914-dcde4645fb8b\\\",\\\"systemUUID\\\":\\\"6761f953-4396-4ffc-8ccb-0bad99a4cc8e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:31Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:31 crc kubenswrapper[4793]: E0217 20:09:31.671841 4793 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.674162 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.674194 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.674203 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.674216 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.674226 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:31Z","lastTransitionTime":"2026-02-17T20:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.777404 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.777440 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.777450 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.777464 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.777474 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:31Z","lastTransitionTime":"2026-02-17T20:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.879417 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.879469 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.879486 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.879508 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.879522 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:31Z","lastTransitionTime":"2026-02-17T20:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.982035 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.982070 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.982082 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.982098 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:31 crc kubenswrapper[4793]: I0217 20:09:31.982110 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:31Z","lastTransitionTime":"2026-02-17T20:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:32 crc kubenswrapper[4793]: I0217 20:09:32.085153 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:32 crc kubenswrapper[4793]: I0217 20:09:32.085210 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:32 crc kubenswrapper[4793]: I0217 20:09:32.085227 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:32 crc kubenswrapper[4793]: I0217 20:09:32.085251 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:32 crc kubenswrapper[4793]: I0217 20:09:32.085268 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:32Z","lastTransitionTime":"2026-02-17T20:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:32 crc kubenswrapper[4793]: I0217 20:09:32.188356 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:32 crc kubenswrapper[4793]: I0217 20:09:32.188391 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:32 crc kubenswrapper[4793]: I0217 20:09:32.188402 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:32 crc kubenswrapper[4793]: I0217 20:09:32.188419 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:32 crc kubenswrapper[4793]: I0217 20:09:32.188432 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:32Z","lastTransitionTime":"2026-02-17T20:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:32 crc kubenswrapper[4793]: I0217 20:09:32.291237 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:32 crc kubenswrapper[4793]: I0217 20:09:32.291552 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:32 crc kubenswrapper[4793]: I0217 20:09:32.291775 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:32 crc kubenswrapper[4793]: I0217 20:09:32.292023 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:32 crc kubenswrapper[4793]: I0217 20:09:32.292163 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:32Z","lastTransitionTime":"2026-02-17T20:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:32 crc kubenswrapper[4793]: I0217 20:09:32.395018 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:32 crc kubenswrapper[4793]: I0217 20:09:32.395093 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:32 crc kubenswrapper[4793]: I0217 20:09:32.395111 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:32 crc kubenswrapper[4793]: I0217 20:09:32.395136 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:32 crc kubenswrapper[4793]: I0217 20:09:32.395152 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:32Z","lastTransitionTime":"2026-02-17T20:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:32 crc kubenswrapper[4793]: I0217 20:09:32.497274 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 17:26:15.993119443 +0000 UTC Feb 17 20:09:32 crc kubenswrapper[4793]: I0217 20:09:32.498743 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:32 crc kubenswrapper[4793]: I0217 20:09:32.498787 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:32 crc kubenswrapper[4793]: I0217 20:09:32.498805 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:32 crc kubenswrapper[4793]: I0217 20:09:32.498830 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:32 crc kubenswrapper[4793]: I0217 20:09:32.498848 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:32Z","lastTransitionTime":"2026-02-17T20:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:32 crc kubenswrapper[4793]: I0217 20:09:32.538624 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:09:32 crc kubenswrapper[4793]: E0217 20:09:32.538881 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6trvs" podUID="0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd" Feb 17 20:09:32 crc kubenswrapper[4793]: I0217 20:09:32.602016 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:32 crc kubenswrapper[4793]: I0217 20:09:32.602068 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:32 crc kubenswrapper[4793]: I0217 20:09:32.602082 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:32 crc kubenswrapper[4793]: I0217 20:09:32.602102 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:32 crc kubenswrapper[4793]: I0217 20:09:32.602116 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:32Z","lastTransitionTime":"2026-02-17T20:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:32 crc kubenswrapper[4793]: I0217 20:09:32.704463 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:32 crc kubenswrapper[4793]: I0217 20:09:32.704513 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:32 crc kubenswrapper[4793]: I0217 20:09:32.704535 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:32 crc kubenswrapper[4793]: I0217 20:09:32.704562 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:32 crc kubenswrapper[4793]: I0217 20:09:32.704582 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:32Z","lastTransitionTime":"2026-02-17T20:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:32 crc kubenswrapper[4793]: I0217 20:09:32.807335 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:32 crc kubenswrapper[4793]: I0217 20:09:32.807408 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:32 crc kubenswrapper[4793]: I0217 20:09:32.807432 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:32 crc kubenswrapper[4793]: I0217 20:09:32.807460 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:32 crc kubenswrapper[4793]: I0217 20:09:32.807477 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:32Z","lastTransitionTime":"2026-02-17T20:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:32 crc kubenswrapper[4793]: I0217 20:09:32.910276 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:32 crc kubenswrapper[4793]: I0217 20:09:32.910623 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:32 crc kubenswrapper[4793]: I0217 20:09:32.910857 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:32 crc kubenswrapper[4793]: I0217 20:09:32.911165 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:32 crc kubenswrapper[4793]: I0217 20:09:32.911416 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:32Z","lastTransitionTime":"2026-02-17T20:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:33 crc kubenswrapper[4793]: I0217 20:09:33.014231 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:33 crc kubenswrapper[4793]: I0217 20:09:33.014310 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:33 crc kubenswrapper[4793]: I0217 20:09:33.014333 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:33 crc kubenswrapper[4793]: I0217 20:09:33.014367 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:33 crc kubenswrapper[4793]: I0217 20:09:33.014388 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:33Z","lastTransitionTime":"2026-02-17T20:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:33 crc kubenswrapper[4793]: I0217 20:09:33.117456 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:33 crc kubenswrapper[4793]: I0217 20:09:33.117519 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:33 crc kubenswrapper[4793]: I0217 20:09:33.117531 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:33 crc kubenswrapper[4793]: I0217 20:09:33.117550 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:33 crc kubenswrapper[4793]: I0217 20:09:33.117562 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:33Z","lastTransitionTime":"2026-02-17T20:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:33 crc kubenswrapper[4793]: I0217 20:09:33.221053 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:33 crc kubenswrapper[4793]: I0217 20:09:33.221451 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:33 crc kubenswrapper[4793]: I0217 20:09:33.222474 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:33 crc kubenswrapper[4793]: I0217 20:09:33.222650 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:33 crc kubenswrapper[4793]: I0217 20:09:33.222815 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:33Z","lastTransitionTime":"2026-02-17T20:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:33 crc kubenswrapper[4793]: I0217 20:09:33.325667 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:33 crc kubenswrapper[4793]: I0217 20:09:33.325938 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:33 crc kubenswrapper[4793]: I0217 20:09:33.325959 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:33 crc kubenswrapper[4793]: I0217 20:09:33.325984 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:33 crc kubenswrapper[4793]: I0217 20:09:33.326002 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:33Z","lastTransitionTime":"2026-02-17T20:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:33 crc kubenswrapper[4793]: I0217 20:09:33.429349 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:33 crc kubenswrapper[4793]: I0217 20:09:33.429407 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:33 crc kubenswrapper[4793]: I0217 20:09:33.429425 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:33 crc kubenswrapper[4793]: I0217 20:09:33.429448 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:33 crc kubenswrapper[4793]: I0217 20:09:33.429464 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:33Z","lastTransitionTime":"2026-02-17T20:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:33 crc kubenswrapper[4793]: I0217 20:09:33.497671 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 17:33:28.821331222 +0000 UTC Feb 17 20:09:33 crc kubenswrapper[4793]: I0217 20:09:33.532214 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:33 crc kubenswrapper[4793]: I0217 20:09:33.532381 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:33 crc kubenswrapper[4793]: I0217 20:09:33.532468 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:33 crc kubenswrapper[4793]: I0217 20:09:33.532546 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:33 crc kubenswrapper[4793]: I0217 20:09:33.532638 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:33Z","lastTransitionTime":"2026-02-17T20:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:33 crc kubenswrapper[4793]: I0217 20:09:33.538540 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:09:33 crc kubenswrapper[4793]: I0217 20:09:33.538560 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:09:33 crc kubenswrapper[4793]: I0217 20:09:33.538861 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:09:33 crc kubenswrapper[4793]: E0217 20:09:33.538682 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 20:09:33 crc kubenswrapper[4793]: E0217 20:09:33.539240 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 20:09:33 crc kubenswrapper[4793]: E0217 20:09:33.539410 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 20:09:33 crc kubenswrapper[4793]: I0217 20:09:33.636051 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:33 crc kubenswrapper[4793]: I0217 20:09:33.636207 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:33 crc kubenswrapper[4793]: I0217 20:09:33.636235 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:33 crc kubenswrapper[4793]: I0217 20:09:33.636265 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:33 crc kubenswrapper[4793]: I0217 20:09:33.636287 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:33Z","lastTransitionTime":"2026-02-17T20:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:33 crc kubenswrapper[4793]: I0217 20:09:33.739897 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:33 crc kubenswrapper[4793]: I0217 20:09:33.739977 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:33 crc kubenswrapper[4793]: I0217 20:09:33.739996 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:33 crc kubenswrapper[4793]: I0217 20:09:33.740021 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:33 crc kubenswrapper[4793]: I0217 20:09:33.740039 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:33Z","lastTransitionTime":"2026-02-17T20:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:33 crc kubenswrapper[4793]: I0217 20:09:33.843003 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:33 crc kubenswrapper[4793]: I0217 20:09:33.843043 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:33 crc kubenswrapper[4793]: I0217 20:09:33.843058 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:33 crc kubenswrapper[4793]: I0217 20:09:33.843078 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:33 crc kubenswrapper[4793]: I0217 20:09:33.843093 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:33Z","lastTransitionTime":"2026-02-17T20:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:33 crc kubenswrapper[4793]: I0217 20:09:33.946128 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:33 crc kubenswrapper[4793]: I0217 20:09:33.946196 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:33 crc kubenswrapper[4793]: I0217 20:09:33.946213 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:33 crc kubenswrapper[4793]: I0217 20:09:33.946238 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:33 crc kubenswrapper[4793]: I0217 20:09:33.946256 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:33Z","lastTransitionTime":"2026-02-17T20:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:34 crc kubenswrapper[4793]: I0217 20:09:34.049610 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:34 crc kubenswrapper[4793]: I0217 20:09:34.049660 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:34 crc kubenswrapper[4793]: I0217 20:09:34.049677 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:34 crc kubenswrapper[4793]: I0217 20:09:34.049730 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:34 crc kubenswrapper[4793]: I0217 20:09:34.049748 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:34Z","lastTransitionTime":"2026-02-17T20:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:34 crc kubenswrapper[4793]: I0217 20:09:34.153059 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:34 crc kubenswrapper[4793]: I0217 20:09:34.153120 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:34 crc kubenswrapper[4793]: I0217 20:09:34.153139 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:34 crc kubenswrapper[4793]: I0217 20:09:34.153167 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:34 crc kubenswrapper[4793]: I0217 20:09:34.153192 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:34Z","lastTransitionTime":"2026-02-17T20:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:34 crc kubenswrapper[4793]: I0217 20:09:34.257382 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:34 crc kubenswrapper[4793]: I0217 20:09:34.257519 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:34 crc kubenswrapper[4793]: I0217 20:09:34.257543 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:34 crc kubenswrapper[4793]: I0217 20:09:34.257568 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:34 crc kubenswrapper[4793]: I0217 20:09:34.257585 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:34Z","lastTransitionTime":"2026-02-17T20:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:34 crc kubenswrapper[4793]: I0217 20:09:34.361169 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:34 crc kubenswrapper[4793]: I0217 20:09:34.361201 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:34 crc kubenswrapper[4793]: I0217 20:09:34.361210 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:34 crc kubenswrapper[4793]: I0217 20:09:34.361222 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:34 crc kubenswrapper[4793]: I0217 20:09:34.361230 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:34Z","lastTransitionTime":"2026-02-17T20:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:34 crc kubenswrapper[4793]: I0217 20:09:34.464744 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:34 crc kubenswrapper[4793]: I0217 20:09:34.465159 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:34 crc kubenswrapper[4793]: I0217 20:09:34.465329 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:34 crc kubenswrapper[4793]: I0217 20:09:34.465483 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:34 crc kubenswrapper[4793]: I0217 20:09:34.465605 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:34Z","lastTransitionTime":"2026-02-17T20:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:34 crc kubenswrapper[4793]: I0217 20:09:34.498594 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 21:07:08.038950712 +0000 UTC Feb 17 20:09:34 crc kubenswrapper[4793]: I0217 20:09:34.538600 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:09:34 crc kubenswrapper[4793]: E0217 20:09:34.538975 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6trvs" podUID="0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd" Feb 17 20:09:34 crc kubenswrapper[4793]: I0217 20:09:34.568641 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:34 crc kubenswrapper[4793]: I0217 20:09:34.568755 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:34 crc kubenswrapper[4793]: I0217 20:09:34.568782 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:34 crc kubenswrapper[4793]: I0217 20:09:34.568815 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:34 crc kubenswrapper[4793]: I0217 20:09:34.568838 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:34Z","lastTransitionTime":"2026-02-17T20:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:34 crc kubenswrapper[4793]: I0217 20:09:34.671578 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:34 crc kubenswrapper[4793]: I0217 20:09:34.671643 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:34 crc kubenswrapper[4793]: I0217 20:09:34.671668 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:34 crc kubenswrapper[4793]: I0217 20:09:34.671730 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:34 crc kubenswrapper[4793]: I0217 20:09:34.671754 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:34Z","lastTransitionTime":"2026-02-17T20:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:34 crc kubenswrapper[4793]: I0217 20:09:34.774166 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:34 crc kubenswrapper[4793]: I0217 20:09:34.774512 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:34 crc kubenswrapper[4793]: I0217 20:09:34.774809 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:34 crc kubenswrapper[4793]: I0217 20:09:34.775042 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:34 crc kubenswrapper[4793]: I0217 20:09:34.775249 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:34Z","lastTransitionTime":"2026-02-17T20:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:34 crc kubenswrapper[4793]: I0217 20:09:34.878188 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:34 crc kubenswrapper[4793]: I0217 20:09:34.878238 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:34 crc kubenswrapper[4793]: I0217 20:09:34.878250 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:34 crc kubenswrapper[4793]: I0217 20:09:34.878267 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:34 crc kubenswrapper[4793]: I0217 20:09:34.878278 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:34Z","lastTransitionTime":"2026-02-17T20:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:34 crc kubenswrapper[4793]: I0217 20:09:34.981939 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:34 crc kubenswrapper[4793]: I0217 20:09:34.981969 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:34 crc kubenswrapper[4793]: I0217 20:09:34.981978 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:34 crc kubenswrapper[4793]: I0217 20:09:34.981992 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:34 crc kubenswrapper[4793]: I0217 20:09:34.982001 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:34Z","lastTransitionTime":"2026-02-17T20:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.024140 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.037914 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.038016 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z9xgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3073cc3b-c430-444d-b751-5e1416feafa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51236e50a5c628e38556b1fc91434f66c7a2e8885a331b358825bcea2c3d9ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nod
e-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z9xgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:35Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.076194 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c8d46b-bddf-41fd-bfe9-829889b911c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3405184556f0b16f2610e42360345bb094d79fe3d713f9dec7c6d28a5124851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d8c33489e77add820819b72d67bd6eca73a11e3fd4a99ae4c5c0136f6257c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a42458e37611ca7ec204f13fd4703a5b1e01189a162f0180be137d614a0379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982055260eb2f38b638b6ee268993806f9a0bf999f21b065659a64144394bc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16cf925bf79279e3553a50dd773c3e6a88f3e8e86837c457c43f1b82c6fd291b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:35Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.084575 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.084606 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.084615 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.084628 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.084637 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:35Z","lastTransitionTime":"2026-02-17T20:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.095400 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9208cc24-8e8d-4027-af06-291b3b46ee59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f3722f78f83475af38958c1c6a4b0593a4c51be7d8dcbd0d32d3b068dee61b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T20:08:59Z\\\",\\\"message\\\":\\\"W0217 20:08:48.786864 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 20:08:48.787257 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771358928 cert, and key in /tmp/serving-cert-1841715996/serving-signer.crt, /tmp/serving-cert-1841715996/serving-signer.key\\\\nI0217 20:08:49.003324 1 observer_polling.go:159] Starting file observer\\\\nW0217 20:08:49.008336 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 20:08:49.008526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 20:08:49.009879 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1841715996/tls.crt::/tmp/serving-cert-1841715996/tls.key\\\\\\\"\\\\nF0217 20:08:59.398239 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:35Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.118740 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a159b0be901199a25ed9008e15370b1a4bf1858a4b20bfb4b6b7736f1f3d42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:35Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.131415 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9pkqd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db67d891-29db-4ee4-a70c-624cb9af6677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d09e75a7c6bcf350e2d5833b1a9b6679042aa5d6699c1194c32320c5f0c8d208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv9g8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9pkqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:35Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.153983 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3403f83a9b35ceeb1a09a45b62e25e74a10d89d309f00819598790ba6c84a0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e4375a429735c20aa6093586974f36afdd51b82d65270a693d14aefe0b0ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a66eb3dc7a2ab1a3e5e720718416aba285d92566aa472515bc983f0794f48314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61506b0fdf8d54ab45b280e5cf461ab69bc233211c90bc60b339eef643ebb058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b08b2c72ecc14b8e31ad5d7fe12851c01bf38ff98fe406dc26218d892ee9f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09885fdf835300247d4a89f7e2ded3ab350786be47130106f4cc679a5477c451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f456a9579b920edfa769a7b86efdbffa8b58addb6a24d9e1c5b4ccb5ecfd5421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f456a9579b920edfa769a7b86efdbffa8b58addb6a24d9e1c5b4ccb5ecfd5421\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T20:09:20Z\\\",\\\"message\\\":\\\"\\\\nI0217 20:09:20.684710 6224 services_controller.go:360] Finished syncing service metrics on namespace openshift-network-operator for network=default : 343.649µs\\\\nI0217 20:09:20.684720 6224 services_controller.go:356] Processing sync for service openshift-operator-lifecycle-manager/packageserver-service for network=default\\\\nI0217 
20:09:20.684638 6224 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n2fmv_openshift-ovn-kubernetes(4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3161d31a16ce66a75b2928487822f0c1e627deb1564e7f792e96420ebde1ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6af
bcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n2fmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:35Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.167641 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf61cc2856b360d3341089d4472c27f8dcf6c7c55831eea511dcf5a931b7a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T20:09:35Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.178973 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a786034-a3c6-4693-965a-3bd39bce6caa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c1911a561802d8e92f91e2ca7448754e32dcb5f18cae0906dc070e1d752be38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c883db680c5e015621e98e5b3c06351d9dd6d33ea28b502241344af8df0fd1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:35Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.187462 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 
20:09:35.187531 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.187543 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.187556 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.187566 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:35Z","lastTransitionTime":"2026-02-17T20:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.191189 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ztwxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c5b745e67f1da59c6b1df97c69bfcbfdc00f4b51a6788751079465a5ce6c177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ndkg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ztwxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:35Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.203201 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cad772-a024-49d5-8204-560587b056eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6bf4bf0fcbe5648621dada82af560bd7eac65ce800c8974c59d777f3dc6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57a18cf839c267df5d9a6d343fbc38dbe195fad6d9fff89663d0958a0492a335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe1dc649f7ca667a9fa57e0f96921b449396310fdec0d23dd39d5695ff3c0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cee203033ea02d7ee023522404b8632f64c130e254a84ee96c37222561e4ac1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:35Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.218754 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:35Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.232817 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48cdc9966494a4fb1df14123aaf12571b1d71f9d0349963dd1b3719966abd20c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2071f67c2f1c343d9ad571fe62df1e820675df683aa583fa2563de751f2897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:35Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.254657 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kpl4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61219b87-834d-490d-bf8e-1657a4081739\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6b9b20c5ff10a449eac5b6396cbcab582a5bbf692d7c9ac880e3cdfa3a63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43cff
5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43cff5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57bb0575dd8e0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b57bb0575dd8e0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:14Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a556ed90653e74034af578772bca81976205e4bba300489eb17d80cdd84989f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a556ed90653e74034af578772bca81976205e4bba300489eb17d80cdd84989f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kpl4r\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:35Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.266772 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:35Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.279439 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:35Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.290568 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.290599 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.290607 4793 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.290620 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.290646 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:35Z","lastTransitionTime":"2026-02-17T20:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.292577 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cl2v7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b672111-22f9-4dd8-b116-385907278ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39feb0d02
6694f42ac0934ebdf1749f25d62c1fec2ae68e9af9189fb4cbfe99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2j42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7694ca2ced8356ef8dc44687cd124ebd50f8c2916a8c27619b2ab970bcb31d5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2j42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cl2v7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:35Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.306678 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6trvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q9nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q9nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6trvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:35Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:35 crc 
kubenswrapper[4793]: I0217 20:09:35.393737 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.393779 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.393788 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.393805 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.393817 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:35Z","lastTransitionTime":"2026-02-17T20:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.497130 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.497217 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.497234 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.497283 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.497300 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:35Z","lastTransitionTime":"2026-02-17T20:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.499258 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 11:52:26.427683083 +0000 UTC Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.538115 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.538149 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.538194 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:09:35 crc kubenswrapper[4793]: E0217 20:09:35.538349 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 20:09:35 crc kubenswrapper[4793]: E0217 20:09:35.538503 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 20:09:35 crc kubenswrapper[4793]: E0217 20:09:35.538643 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.554796 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ztwxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c5b745e67f1da59c6b1df97c69bfcbfdc00f4b51a6788751079465a5ce6c177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release
\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ndkg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ztwxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:35Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.569350 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf61cc2856b360d3341089d4472c27f8dcf6c7c55831eea511dcf5a931b7a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:35Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.586383 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a786034-a3c6-4693-965a-3bd39bce6caa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c1911a561802d8e92f91e2ca7448754e32dcb5f18cae0906dc070e1d752be38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c883db680c5e015621e98e5b3c06351d9dd6d33ea28b502241344af8df0fd1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2026-02-17T20:09:35Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.599353 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.599413 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.599431 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.599455 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.599473 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:35Z","lastTransitionTime":"2026-02-17T20:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.606446 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:35Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.620259 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48cdc9966494a4fb1df14123aaf12571b1d71f9d0349963dd1b3719966abd20c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2071f67c2f1c343d9ad571fe62df1e820675df683aa583fa2563de751f2897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:35Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.640456 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kpl4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61219b87-834d-490d-bf8e-1657a4081739\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6b9b20c5ff10a449eac5b6396cbcab582a5bbf692d7c9ac880e3cdfa3a63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43cff
5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43cff5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57bb0575dd8e0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b57bb0575dd8e0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:14Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a556ed90653e74034af578772bca81976205e4bba300489eb17d80cdd84989f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a556ed90653e74034af578772bca81976205e4bba300489eb17d80cdd84989f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kpl4r\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:35Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.654057 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cad772-a024-49d5-8204-560587b056eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6bf4bf0fcbe5648621dada82af560bd7eac65ce800c8974c59d777f3dc6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57a18cf839c267df5d9a6d343fbc38dbe195fad6d9fff89663d0958a0492a335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe1dc649f7ca667a9fa57e0f96921b449396310fdec0d23dd39d5695ff3c0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cee203033ea02d7ee023522404b8632f64c130e254a84ee96c37222561e4ac1\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:35Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.667221 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cl2v7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b672111-22f9-4dd8-b116-385907278ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39feb0d026694f42ac0934ebdf1749f25d62c1fec2ae68e9af9189fb4cbfe99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2j42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7694ca2ced8356ef8dc44687cd124ebd50f8c
2916a8c27619b2ab970bcb31d5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2j42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cl2v7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:35Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.678125 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6trvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q9nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q9nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6trvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:35Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:35 crc 
kubenswrapper[4793]: I0217 20:09:35.693864 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:35Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.701922 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.701948 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.701956 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.701985 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.701996 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:35Z","lastTransitionTime":"2026-02-17T20:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.710184 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:35Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.724024 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6af04421-9450-4d4d-b153-947396a3632b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8527c428573648481388a1d99e88a9c73658862866d5f6e36691a0b36868c7d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b96eb4ff437a9dba392060ab762c6d87450a24f0dcc50ef6f90dfa54aa66e22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958dffa9d0fbd97a0ee01a79d9623a298e9ce287b43e1347b4edeb0cfe175ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e25a10a183950aa779c83aae5f4fe88a4e27d2ff7bad883e1e222400a16b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://24e25a10a183950aa779c83aae5f4fe88a4e27d2ff7bad883e1e222400a16b44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:35Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.730484 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.732210 4793 scope.go:117] "RemoveContainer" containerID="f456a9579b920edfa769a7b86efdbffa8b58addb6a24d9e1c5b4ccb5ecfd5421" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.745589 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a159b0be901199a25ed9008e15370b1a4bf1858a4b20bfb4b6b7736f1f3d42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:35Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.766188 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9pkqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db67d891-29db-4ee4-a70c-624cb9af6677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d09e75a7c6bcf350e2d5833b1a9b6679042aa5d6699c1194c32320c5f0c8d208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv9g8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9pkqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:35Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.792292 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3403f83a9b35ceeb1a09a45b62e25e74a10d89d309f00819598790ba6c84a0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e4375a429735c20aa6093586974f36afdd51b82d65270a693d14aefe0b0ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ov
n-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a66eb3dc7a2ab1a3e5e720718416aba285d92566aa472515bc983f0794f48314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61506b0fdf8d54ab45b280e5cf461ab69bc233211c90bc60b339eef643ebb058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b08b2c72ecc14b8e31ad5d7fe12851c01bf38ff98fe406dc26218d892ee9f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09885fdf835300247d4a89f7e2ded3ab350786be47130106f4cc679a5477c451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f456a9579b920edfa769a7b86efdbffa8b58addb6a24d9e1c5b4ccb5ecfd5421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f456a9579b920edfa769a7b86efdbffa8b58addb6a24d9e1c5b4ccb5ecfd5421\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T20:09:20Z\\\",\\\"message\\\":\\\"\\\\nI0217 20:09:20.684710 6224 services_controller.go:360] Finished syncing service metrics on namespace openshift-network-operator for network=default : 343.649µs\\\\nI0217 20:09:20.684720 6224 services_controller.go:356] Processing sync for service openshift-operator-lifecycle-manager/packageserver-service for 
network=default\\\\nI0217 20:09:20.684638 6224 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n2fmv_openshift-ovn-kubernetes(4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3161d31a16ce66a75b2928487822f0c1e627deb1564e7f792e96420ebde1ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6af
bcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n2fmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:35Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.804181 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z9xgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3073cc3b-c430-444d-b751-5e1416feafa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51236e50a5c628e38556b1fc91434f66c7a2e8885a331b358825bcea2c3d9ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z9xgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:35Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.805012 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.805059 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.805075 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.805098 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.805115 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:35Z","lastTransitionTime":"2026-02-17T20:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.831985 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c8d46b-bddf-41fd-bfe9-829889b911c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3405184556f0b16f2610e42360345bb094d79fe3d713f9dec7c6d28a5124851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d8c33489e77add820819b72d67bd6eca73a11e3fd4a99ae4c5c0136f6257c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a42458e37611ca7ec204f13fd4703a5b1e01189a162f0180be137d614a0379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982055260eb2f38b638b6ee268993806f9a0bf999f21b065659a64144394bc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16cf925bf79279e3553a50dd773c3e6a88f3e8e86837c457c43f1b82c6fd291b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:35Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.859485 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9208cc24-8e8d-4027-af06-291b3b46ee59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f3722f78f83475af38958c1c6a4b0593a4c51be7d8dcbd0d32d3b068dee61b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T20:08:59Z\\\"
,\\\"message\\\":\\\"W0217 20:08:48.786864 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 20:08:48.787257 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771358928 cert, and key in /tmp/serving-cert-1841715996/serving-signer.crt, /tmp/serving-cert-1841715996/serving-signer.key\\\\nI0217 20:08:49.003324 1 observer_polling.go:159] Starting file observer\\\\nW0217 20:08:49.008336 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 20:08:49.008526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 20:08:49.009879 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1841715996/tls.crt::/tmp/serving-cert-1841715996/tls.key\\\\\\\"\\\\nF0217 20:08:59.398239 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:35Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.907678 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.907740 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.907752 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.907768 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:35 crc kubenswrapper[4793]: I0217 20:09:35.907780 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:35Z","lastTransitionTime":"2026-02-17T20:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.010056 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.010124 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.010146 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.010171 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.010230 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:36Z","lastTransitionTime":"2026-02-17T20:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.112944 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.112986 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.113002 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.113023 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.113035 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:36Z","lastTransitionTime":"2026-02-17T20:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.216111 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.216154 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.216168 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.216188 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.216203 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:36Z","lastTransitionTime":"2026-02-17T20:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.318298 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.318331 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.318344 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.318362 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.318375 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:36Z","lastTransitionTime":"2026-02-17T20:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.421406 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.421449 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.421465 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.421511 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.421523 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:36Z","lastTransitionTime":"2026-02-17T20:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.499516 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 02:40:04.523781794 +0000 UTC Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.523750 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.523793 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.523806 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.523823 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.523835 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:36Z","lastTransitionTime":"2026-02-17T20:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.538174 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:09:36 crc kubenswrapper[4793]: E0217 20:09:36.538321 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6trvs" podUID="0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd" Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.626470 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.626520 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.626536 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.626557 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.626572 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:36Z","lastTransitionTime":"2026-02-17T20:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.728917 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.728961 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.728974 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.728992 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.729004 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:36Z","lastTransitionTime":"2026-02-17T20:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.831465 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.831503 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.831514 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.831531 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.831542 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:36Z","lastTransitionTime":"2026-02-17T20:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.899305 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n2fmv_4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2/ovnkube-controller/1.log" Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.903070 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" event={"ID":"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2","Type":"ContainerStarted","Data":"743c88867ec1e5ac6d851ff30fd6847d64369d2862ffe7406e9dcd0248d97d4a"} Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.904069 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.926290 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cad772-a024-49d5-8204-560587b056eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6bf4bf0fcbe5648621dada82af560bd7eac65ce800c8974c59d777f3dc6cf3\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57a18cf839c267df5d9a6d343fbc38dbe195fad6d9fff89663d0958a0492a335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe1dc649f7ca667a9fa57e0f96921b449396310fdec0d23dd39d5695ff3c0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert
-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cee203033ea02d7ee023522404b8632f64c130e254a84ee96c37222561e4ac1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:36Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.935161 4793 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.935225 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.935269 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.935304 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.935328 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:36Z","lastTransitionTime":"2026-02-17T20:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.946541 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:36Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.967287 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48cdc9966494a4fb1df14123aaf12571b1d71f9d0349963dd1b3719966abd20c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2071f67c2f1c343d9ad571fe62df1e820675df683aa583fa2563de751f2897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:36Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:36 crc kubenswrapper[4793]: I0217 20:09:36.989806 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kpl4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61219b87-834d-490d-bf8e-1657a4081739\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6b9b20c5ff10a449eac5b6396cbcab582a5bbf692d7c9ac880e3cdfa3a63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43cff
5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43cff5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57bb0575dd8e0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b57bb0575dd8e0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:14Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a556ed90653e74034af578772bca81976205e4bba300489eb17d80cdd84989f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a556ed90653e74034af578772bca81976205e4bba300489eb17d80cdd84989f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kpl4r\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:36Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.008243 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:37Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.025858 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cl2v7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b672111-22f9-4dd8-b116-385907278ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39feb0d026694f42ac0934ebdf1749f25d62c1fec2ae68e9af9189fb4cbfe99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2j42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7694ca2ced8356ef8dc44687cd124ebd50f8c
2916a8c27619b2ab970bcb31d5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2j42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cl2v7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:37Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.037874 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.037939 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.037961 4793 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.037985 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.038003 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:37Z","lastTransitionTime":"2026-02-17T20:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.040798 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6trvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q9nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q9nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6trvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:37Z is after 2025-08-24T17:21:41Z" Feb 
17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.061190 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:37Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.081177 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9208cc24-8e8d-4027-af06-291b3b46ee59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f3722f78f83475af38958c1c6a4b0593a4c51be7d8dcbd0d32d3b068dee61b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T20:08:59Z\\\"
,\\\"message\\\":\\\"W0217 20:08:48.786864 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 20:08:48.787257 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771358928 cert, and key in /tmp/serving-cert-1841715996/serving-signer.crt, /tmp/serving-cert-1841715996/serving-signer.key\\\\nI0217 20:08:49.003324 1 observer_polling.go:159] Starting file observer\\\\nW0217 20:08:49.008336 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 20:08:49.008526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 20:08:49.009879 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1841715996/tls.crt::/tmp/serving-cert-1841715996/tls.key\\\\\\\"\\\\nF0217 20:08:59.398239 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:37Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.098158 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6af04421-9450-4d4d-b153-947396a3632b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8527c428573648481388a1d99e88a9c73658862866d5f6e36691a0b36868c7d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472024
3b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b96eb4ff437a9dba392060ab762c6d87450a24f0dcc50ef6f90dfa54aa66e22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958dffa9d0fbd97a0ee01a79d9623a298e9ce287b43e1347b4edeb0cfe175ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02
-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e25a10a183950aa779c83aae5f4fe88a4e27d2ff7bad883e1e222400a16b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24e25a10a183950aa779c83aae5f4fe88a4e27d2ff7bad883e1e222400a16b44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:37Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.113773 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a159b0be901199a25ed9008e15370b1a4bf1858a4b20bfb4b6b7736f1f3d42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:37Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.131637 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9pkqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db67d891-29db-4ee4-a70c-624cb9af6677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d09e75a7c6bcf350e2d5833b1a9b6679042aa5d6699c1194c32320c5f0c8d208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv9g8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9pkqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:37Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.140752 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.140821 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.140848 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.140881 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.140906 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:37Z","lastTransitionTime":"2026-02-17T20:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.161802 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3403f83a9b35ceeb1a09a45b62e25e74a10d89d309f00819598790ba6c84a0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e4375a429735c20aa6093586974f36afdd51b82d65270a693d14aefe0b0ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a66eb3dc7a2ab1a3e5e720718416aba285d92566aa472515bc983f0794f48314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61506b0fdf8d54ab45b280e5cf461ab69bc233211c90bc60b339eef643ebb058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b08b2c72ecc14b8e31ad5d7fe12851c01bf38ff98fe406dc26218d892ee9f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09885fdf835300247d4a89f7e2ded3ab350786be47130106f4cc679a5477c451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://743c88867ec1e5ac6d851ff30fd6847d64369d2862ffe7406e9dcd0248d97d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f456a9579b920edfa769a7b86efdbffa8b58addb6a24d9e1c5b4ccb5ecfd5421\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T20:09:20Z\\\",\\\"message\\\":\\\"\\\\nI0217 20:09:20.684710 6224 services_controller.go:360] Finished syncing service metrics on namespace openshift-network-operator for network=default : 343.649µs\\\\nI0217 20:09:20.684720 6224 services_controller.go:356] Processing sync for service openshift-operator-lifecycle-manager/packageserver-service for network=default\\\\nI0217 
20:09:20.684638 6224 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{c02bd945-d57b-49ff-9cd3-202ed3574b\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/ru
n/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3161d31a16ce66a75b2928487822f0c1e627deb1564e7f792e96420ebde1ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":
\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n2fmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:37Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.178232 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z9xgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3073cc3b-c430-444d-b751-5e1416feafa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51236e50a5c628e38556b1fc91434f66c7a2e8885a331b358825bcea2c3d9ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z9xgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:37Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.212605 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c8d46b-bddf-41fd-bfe9-829889b911c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3405184556f0b16f2610e42360345bb094d79fe3d713f9dec7c6d28a5124851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d8c33489e77add820819b72d67bd6eca73a11e3fd4a99ae4c5c0136f6257c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a42458e37611ca7ec204f13fd4703a5b1e01189a162f0180be137d614a0379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20
:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982055260eb2f38b638b6ee268993806f9a0bf999f21b065659a64144394bc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16cf925bf79279e3553a50dd773c3e6a88f3e8e86837c457c43f1b82c6fd291b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:37Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.232826 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a786034-a3c6-4693-965a-3bd39bce6caa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c1911a561802d8e92f91e2ca7448754e32dcb5f18cae0906dc070e1d752be38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c883db680c5e015621e98e5b3c06351d9dd6d33e
a28b502241344af8df0fd1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:37Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.243934 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.244026 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.244055 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:37 crc 
kubenswrapper[4793]: I0217 20:09:37.244090 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.244115 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:37Z","lastTransitionTime":"2026-02-17T20:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.254804 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ztwxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c5b745e67f1da59c6b1df97c69bfcbfdc00f4b51a6788751079465a5ce6c177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ndkg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ztwxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:37Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.255395 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.255531 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.255592 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:09:37 crc kubenswrapper[4793]: E0217 20:09:37.255646 4793 
configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 20:09:37 crc kubenswrapper[4793]: E0217 20:09:37.255666 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 20:10:09.255625728 +0000 UTC m=+84.547324079 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 20:09:37 crc kubenswrapper[4793]: E0217 20:09:37.255753 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 20:10:09.25572662 +0000 UTC m=+84.547424971 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 20:09:37 crc kubenswrapper[4793]: E0217 20:09:37.255772 4793 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 20:09:37 crc kubenswrapper[4793]: E0217 20:09:37.255845 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 20:10:09.255820772 +0000 UTC m=+84.547519113 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.273631 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf61cc2856b360d3341089d4472c27f8dcf6c7c55831eea511dcf5a931b7a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T20:09:37Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.347059 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.347131 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.347147 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.347170 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.347184 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:37Z","lastTransitionTime":"2026-02-17T20:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.357117 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.357182 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:09:37 crc kubenswrapper[4793]: E0217 20:09:37.357386 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 20:09:37 crc kubenswrapper[4793]: E0217 20:09:37.357438 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 20:09:37 crc kubenswrapper[4793]: E0217 20:09:37.357460 4793 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 20:09:37 crc kubenswrapper[4793]: E0217 20:09:37.357401 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 
20:09:37 crc kubenswrapper[4793]: E0217 20:09:37.357565 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 20:09:37 crc kubenswrapper[4793]: E0217 20:09:37.357587 4793 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 20:09:37 crc kubenswrapper[4793]: E0217 20:09:37.357536 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 20:10:09.35750759 +0000 UTC m=+84.649205941 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 20:09:37 crc kubenswrapper[4793]: E0217 20:09:37.357720 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 20:10:09.357666934 +0000 UTC m=+84.649365285 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.450507 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.450545 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.450556 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.450570 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.450581 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:37Z","lastTransitionTime":"2026-02-17T20:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.500218 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 21:56:19.221724255 +0000 UTC Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.538768 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:09:37 crc kubenswrapper[4793]: E0217 20:09:37.538943 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.539040 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.538787 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:09:37 crc kubenswrapper[4793]: E0217 20:09:37.539267 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 20:09:37 crc kubenswrapper[4793]: E0217 20:09:37.539304 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.552811 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.552881 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.552915 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.552939 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.552958 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:37Z","lastTransitionTime":"2026-02-17T20:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.655944 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.656010 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.656039 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.656068 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.656089 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:37Z","lastTransitionTime":"2026-02-17T20:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.758705 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.758766 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.758778 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.758798 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.758812 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:37Z","lastTransitionTime":"2026-02-17T20:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.862039 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.862112 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.862134 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.862159 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.862176 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:37Z","lastTransitionTime":"2026-02-17T20:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.909351 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n2fmv_4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2/ovnkube-controller/2.log" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.910449 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n2fmv_4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2/ovnkube-controller/1.log" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.914886 4793 generic.go:334] "Generic (PLEG): container finished" podID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerID="743c88867ec1e5ac6d851ff30fd6847d64369d2862ffe7406e9dcd0248d97d4a" exitCode=1 Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.914941 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" event={"ID":"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2","Type":"ContainerDied","Data":"743c88867ec1e5ac6d851ff30fd6847d64369d2862ffe7406e9dcd0248d97d4a"} Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.915027 4793 scope.go:117] "RemoveContainer" containerID="f456a9579b920edfa769a7b86efdbffa8b58addb6a24d9e1c5b4ccb5ecfd5421" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.916138 4793 scope.go:117] "RemoveContainer" containerID="743c88867ec1e5ac6d851ff30fd6847d64369d2862ffe7406e9dcd0248d97d4a" Feb 17 20:09:37 crc kubenswrapper[4793]: E0217 20:09:37.916416 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-n2fmv_openshift-ovn-kubernetes(4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" podUID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.930942 4793 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48cdc9966494a4fb1df14123aaf12571b1d71f9d0349963dd1b3719966abd20c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2071f67c2f1c343d9ad571fe62df1e820675df683aa583fa2563de751f2897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:37Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.955778 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kpl4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61219b87-834d-490d-bf8e-1657a4081739\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6b9b20c5ff10a449eac5b6396cbcab582a5bbf692d7c9ac880e3cdfa3a63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43cff
5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43cff5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57bb0575dd8e0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b57bb0575dd8e0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:14Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a556ed90653e74034af578772bca81976205e4bba300489eb17d80cdd84989f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a556ed90653e74034af578772bca81976205e4bba300489eb17d80cdd84989f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kpl4r\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:37Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.967135 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.967377 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.967545 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.967641 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.968040 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:37Z","lastTransitionTime":"2026-02-17T20:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.977245 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cad772-a024-49d5-8204-560587b056eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6bf4bf0fcbe5648621dada82af560bd7eac65ce800c8974c59d777f3dc6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57a18cf839c
267df5d9a6d343fbc38dbe195fad6d9fff89663d0958a0492a335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe1dc649f7ca667a9fa57e0f96921b449396310fdec0d23dd39d5695ff3c0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cee203033ea02d7ee023522404b8632f64c130e254a84ee96c37222561e4ac1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:37Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:37 crc kubenswrapper[4793]: I0217 20:09:37.997638 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:37Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.012187 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6trvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q9nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q9nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6trvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:38Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:38 crc 
kubenswrapper[4793]: I0217 20:09:38.030849 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:38Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.044862 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:38Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.055981 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cl2v7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b672111-22f9-4dd8-b116-385907278ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39feb0d026694f42ac0934ebdf1749f25d62c1fec2ae68e9af9189fb4cbfe99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2j42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7694ca2ced8356ef8dc44687cd124ebd50f8c
2916a8c27619b2ab970bcb31d5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2j42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cl2v7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:38Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.078528 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.078590 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.078609 4793 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.078627 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.078640 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:38Z","lastTransitionTime":"2026-02-17T20:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.103392 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a159b0be901199a25ed9008e15370b1a4bf1858a4b20bfb4b6b7736f1f3d42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf8
6d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:38Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.118956 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9pkqd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db67d891-29db-4ee4-a70c-624cb9af6677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d09e75a7c6bcf350e2d5833b1a9b6679042aa5d6699c1194c32320c5f0c8d208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv9g8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9pkqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:38Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.143367 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3403f83a9b35ceeb1a09a45b62e25e74a10d89d309f00819598790ba6c84a0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e4375a429735c20aa6093586974f36afdd51b82d65270a693d14aefe0b0ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a66eb3dc7a2ab1a3e5e720718416aba285d92566aa472515bc983f0794f48314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61506b0fdf8d54ab45b280e5cf461ab69bc233211c90bc60b339eef643ebb058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b08b2c72ecc14b8e31ad5d7fe12851c01bf38ff98fe406dc26218d892ee9f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09885fdf835300247d4a89f7e2ded3ab350786be47130106f4cc679a5477c451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://743c88867ec1e5ac6d851ff30fd6847d64369d2862ffe7406e9dcd0248d97d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f456a9579b920edfa769a7b86efdbffa8b58addb6a24d9e1c5b4ccb5ecfd5421\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T20:09:20Z\\\",\\\"message\\\":\\\"\\\\nI0217 20:09:20.684710 6224 services_controller.go:360] Finished syncing service metrics on namespace openshift-network-operator for network=default : 343.649µs\\\\nI0217 20:09:20.684720 6224 services_controller.go:356] Processing sync for service openshift-operator-lifecycle-manager/packageserver-service for network=default\\\\nI0217 
20:09:20.684638 6224 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743c88867ec1e5ac6d851ff30fd6847d64369d2862ffe7406e9dcd0248d97d4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T20:09:36Z\\\",\\\"message\\\":\\\"e it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-02-17T20:09:36Z is after 2025-08-24T17:21:41Z]\\\\nI0217 20:09:36.727000 6427 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-jnwtf\\\\nI0217 20:09:36.727006 6427 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-jnwtf\\\\nI0217 20:09:36.727011 6427 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-jnwtf in node crc\\\\nI0217 20:09:36.727015 6427 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-jnwtf after 0 failed attempt(s)\\\\nI0217 20:09:36.727020 6427 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-jnwtf\\\\nI0217 20:09:36.727027 6427 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-z9xgq\\\\nI0217 20:09:36.727030 6427 base_network\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-
bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3161d31a16ce66a75b2928487822f0c1e627deb1564e7f792e96420ebde1ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\
\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n2fmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:38Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:38 crc 
kubenswrapper[4793]: I0217 20:09:38.154786 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z9xgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3073cc3b-c430-444d-b751-5e1416feafa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51236e50a5c628e38556b1fc91434f66c7a2e8885a331b358825bcea2c3d9ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g
jvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z9xgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:38Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.172070 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c8d46b-bddf-41fd-bfe9-829889b911c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3405184556f0b16f2610e42360345bb094d79fe3d713f9dec7c6d28a5124851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e4
9117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d8c33489e77add820819b72d67bd6eca73a11e3fd4a99ae4c5c0136f6257c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a42458e37611ca7ec204f13fd4703a5b1e01189a162f0180be137d614a0379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982055260eb2f38b638b6ee268993806f9a0bf999f21b065659a64144394bc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16cf925bf79279e3553a50dd773c3e6a88f3e8e86837c457c43f1b82c6fd291b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\
\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e
01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:38Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.181643 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.181681 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.181719 4793 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.181737 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.181750 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:38Z","lastTransitionTime":"2026-02-17T20:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.185715 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9208cc24-8e8d-4027-af06-291b3b46ee59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b8
9c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"sta
te\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f3722f78f83475af38958c1c6a4b0593a4c51be7d8dcbd0d32d3b068dee61b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T20:08:59Z\\\",\\\"message\\\":\\\"W0217 20:08:48.786864 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 20:08:48.787257 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771358928 cert, and key in /tmp/serving-cert-1841715996/serving-signer.crt, /tmp/serving-cert-1841715996/serving-signer.key\\\\nI0217 20:08:49.003324 1 observer_polling.go:159] Starting file observer\\\\nW0217 20:08:49.008336 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 20:08:49.008526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 20:08:49.009879 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1841715996/tls.crt::/tmp/serving-cert-1841715996/tls.key\\\\\\\"\\\\nF0217 20:08:59.398239 1 cmd.go:182] error 
initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:38Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.197946 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6af04421-9450-4d4d-b153-947396a3632b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8527c428573648481388a1d99e88a9c73658862866d5f6e36691a0b36868c7d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b96eb4ff437a9dba392060ab762c6d87450a24f0dcc50ef6f90dfa54aa66e22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958dffa9d0fbd97a0ee01a79d9623a298e9ce287b43e1347b4edeb0cfe175ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e25a10a183950aa779c83aae5f4fe88a4e27d2ff7bad883e1e222400a16b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://24e25a10a183950aa779c83aae5f4fe88a4e27d2ff7bad883e1e222400a16b44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:38Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.209980 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf61cc2856b360d3341089d4472c27f8dcf6c7c55831eea511dcf5a931b7a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T20:09:38Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.221215 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a786034-a3c6-4693-965a-3bd39bce6caa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c1911a561802d8e92f91e2ca7448754e32dcb5f18cae0906dc070e1d752be38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c883db680c5e015621e98e5b3c06351d9dd6d33ea28b502241344af8df0fd1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:38Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.231814 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ztwxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c5b745e67f1da59c6b1df97c69bfcbfdc00f4b51a6788751079465a5ce6c177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ndkg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ztwxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:38Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.284008 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:38 crc 
kubenswrapper[4793]: I0217 20:09:38.284054 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.284064 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.284080 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.284096 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:38Z","lastTransitionTime":"2026-02-17T20:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.386844 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.386906 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.386929 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.386957 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.386980 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:38Z","lastTransitionTime":"2026-02-17T20:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.489734 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.489796 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.489817 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.489847 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.489869 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:38Z","lastTransitionTime":"2026-02-17T20:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.500616 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 21:33:22.646306464 +0000 UTC Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.538633 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:09:38 crc kubenswrapper[4793]: E0217 20:09:38.538896 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6trvs" podUID="0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd" Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.592984 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.593070 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.593097 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.593168 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.593191 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:38Z","lastTransitionTime":"2026-02-17T20:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.695471 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.695542 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.695565 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.695594 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.695616 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:38Z","lastTransitionTime":"2026-02-17T20:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.799122 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.799166 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.799175 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.799191 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.799200 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:38Z","lastTransitionTime":"2026-02-17T20:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.902419 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.902498 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.902523 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.902553 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.902572 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:38Z","lastTransitionTime":"2026-02-17T20:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.921398 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n2fmv_4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2/ovnkube-controller/2.log" Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.925551 4793 scope.go:117] "RemoveContainer" containerID="743c88867ec1e5ac6d851ff30fd6847d64369d2862ffe7406e9dcd0248d97d4a" Feb 17 20:09:38 crc kubenswrapper[4793]: E0217 20:09:38.925856 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-n2fmv_openshift-ovn-kubernetes(4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" podUID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.945439 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9208cc24-8e8d-4027-af06-291b3b46ee59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f3722f78f83475af38958c1c6a4b0593a4c51be7d8dcbd0d32d3b068dee61b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T20:08:59Z\\\"
,\\\"message\\\":\\\"W0217 20:08:48.786864 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 20:08:48.787257 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771358928 cert, and key in /tmp/serving-cert-1841715996/serving-signer.crt, /tmp/serving-cert-1841715996/serving-signer.key\\\\nI0217 20:08:49.003324 1 observer_polling.go:159] Starting file observer\\\\nW0217 20:08:49.008336 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 20:08:49.008526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 20:08:49.009879 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1841715996/tls.crt::/tmp/serving-cert-1841715996/tls.key\\\\\\\"\\\\nF0217 20:08:59.398239 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:38Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.963809 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6af04421-9450-4d4d-b153-947396a3632b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8527c428573648481388a1d99e88a9c73658862866d5f6e36691a0b36868c7d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472024
3b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b96eb4ff437a9dba392060ab762c6d87450a24f0dcc50ef6f90dfa54aa66e22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958dffa9d0fbd97a0ee01a79d9623a298e9ce287b43e1347b4edeb0cfe175ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02
-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e25a10a183950aa779c83aae5f4fe88a4e27d2ff7bad883e1e222400a16b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24e25a10a183950aa779c83aae5f4fe88a4e27d2ff7bad883e1e222400a16b44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:38Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.981074 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a159b0be901199a25ed9008e15370b1a4bf1858a4b20bfb4b6b7736f1f3d42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:38Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:38 crc kubenswrapper[4793]: I0217 20:09:38.992311 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9pkqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db67d891-29db-4ee4-a70c-624cb9af6677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d09e75a7c6bcf350e2d5833b1a9b6679042aa5d6699c1194c32320c5f0c8d208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv9g8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9pkqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:38Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.006341 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.006393 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.006407 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.006423 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.006433 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:39Z","lastTransitionTime":"2026-02-17T20:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.023945 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3403f83a9b35ceeb1a09a45b62e25e74a10d89d309f00819598790ba6c84a0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e4375a429735c20aa6093586974f36afdd51b82d65270a693d14aefe0b0ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a66eb3dc7a2ab1a3e5e720718416aba285d92566aa472515bc983f0794f48314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61506b0fdf8d54ab45b280e5cf461ab69bc233211c90bc60b339eef643ebb058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b08b2c72ecc14b8e31ad5d7fe12851c01bf38ff98fe406dc26218d892ee9f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09885fdf835300247d4a89f7e2ded3ab350786be47130106f4cc679a5477c451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://743c88867ec1e5ac6d851ff30fd6847d64369d2862ffe7406e9dcd0248d97d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743c88867ec1e5ac6d851ff30fd6847d64369d2862ffe7406e9dcd0248d97d4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T20:09:36Z\\\",\\\"message\\\":\\\"e it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:36Z is after 2025-08-24T17:21:41Z]\\\\nI0217 20:09:36.727000 6427 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-jnwtf\\\\nI0217 20:09:36.727006 6427 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-jnwtf\\\\nI0217 20:09:36.727011 6427 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-jnwtf in node crc\\\\nI0217 20:09:36.727015 6427 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-jnwtf after 0 failed attempt(s)\\\\nI0217 20:09:36.727020 6427 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-jnwtf\\\\nI0217 20:09:36.727027 6427 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-z9xgq\\\\nI0217 20:09:36.727030 6427 base_network\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n2fmv_openshift-ovn-kubernetes(4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3161d31a16ce66a75b2928487822f0c1e627deb1564e7f792e96420ebde1ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6af
bcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n2fmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:39Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.047244 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z9xgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3073cc3b-c430-444d-b751-5e1416feafa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51236e50a5c628e38556b1fc91434f66c7a2e8885a331b358825bcea2c3d9ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z9xgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:39Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.072885 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd-metrics-certs\") pod \"network-metrics-daemon-6trvs\" (UID: \"0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd\") " pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:09:39 crc kubenswrapper[4793]: E0217 20:09:39.073045 4793 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 20:09:39 crc kubenswrapper[4793]: E0217 20:09:39.073098 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd-metrics-certs podName:0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd nodeName:}" failed. No retries permitted until 2026-02-17 20:09:55.073083095 +0000 UTC m=+70.364781396 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd-metrics-certs") pod "network-metrics-daemon-6trvs" (UID: "0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.072967 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c8d46b-bddf-41fd-bfe9-829889b911c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3405184556f0b16f2610e42360345bb094d79fe3d713f9dec7c6d28a5124851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernete
s/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d8c33489e77add820819b72d67bd6eca73a11e3fd4a99ae4c5c0136f6257c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a42458e37611ca7ec204f13fd4703a5b1e01189a162f0180be137d614a0379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982055
260eb2f38b638b6ee268993806f9a0bf999f21b065659a64144394bc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16cf925bf79279e3553a50dd773c3e6a88f3e8e86837c457c43f1b82c6fd291b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6
a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\
\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:39Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.085928 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a786034-a3c6-4693-965a-3bd39bce6caa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c1911a561802d8e92f91e2ca7448754e32dcb5f18cae0906dc070e1d752be38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c883db680c5e015621e98e5b3c06351d9dd6d33e
a28b502241344af8df0fd1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:39Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.102204 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ztwxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c5b745e67f1da59c6b1df97c69bfcbfdc00f4b51a6788751079465a5ce6c177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ndkg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ztwxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:39Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.109723 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:39 crc 
kubenswrapper[4793]: I0217 20:09:39.109784 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.109810 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.109838 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.109860 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:39Z","lastTransitionTime":"2026-02-17T20:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.118654 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf61cc2856b360d3341089d4472c27f8dcf6c7c55831eea511dcf5a931b7a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T20:09:39Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.136197 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cad772-a024-49d5-8204-560587b056eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6bf4bf0fcbe5648621dada82af560bd7eac65ce800c8974c59d777f3dc6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57a18cf839c267df5d9a6d343fbc38dbe195fad6d9fff89663d0958a0492a335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe1dc649f7ca667a9fa57e0f96921b449396310fdec0d23dd39d5695ff3c0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cee203033ea02d7ee023522404b8632f64c130e254a84ee96c37222561e4ac1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:39Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.152329 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:39Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.170092 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48cdc9966494a4fb1df14123aaf12571b1d71f9d0349963dd1b3719966abd20c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2071f67c2f1c343d9ad571fe62df1e820675df683aa583fa2563de751f2897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:39Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.190403 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kpl4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61219b87-834d-490d-bf8e-1657a4081739\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6b9b20c5ff10a449eac5b6396cbcab582a5bbf692d7c9ac880e3cdfa3a63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43cff
5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43cff5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57bb0575dd8e0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b57bb0575dd8e0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:14Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a556ed90653e74034af578772bca81976205e4bba300489eb17d80cdd84989f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a556ed90653e74034af578772bca81976205e4bba300489eb17d80cdd84989f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kpl4r\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:39Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.207114 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:39Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.212803 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.212847 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.212867 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.212891 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.212909 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:39Z","lastTransitionTime":"2026-02-17T20:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.222379 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cl2v7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b672111-22f9-4dd8-b116-385907278ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39feb0d026694f42ac0934ebdf1749f25d62c1fec2ae68e9af9189fb4cbfe99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2j42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7694ca2ced8356ef8dc44687cd124ebd50f8c2916a8c27619b2ab970bcb31d5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2j42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cl2v7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:39Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.237178 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6trvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q9nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q9nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6trvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:39Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:39 crc 
kubenswrapper[4793]: I0217 20:09:39.254678 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:39Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.315338 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.315418 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.315430 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.315448 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.315460 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:39Z","lastTransitionTime":"2026-02-17T20:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.418502 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.418544 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.418555 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.418573 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.418584 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:39Z","lastTransitionTime":"2026-02-17T20:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.501519 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 04:39:47.102988301 +0000 UTC Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.521604 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.521636 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.521654 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.521709 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.521727 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:39Z","lastTransitionTime":"2026-02-17T20:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.538255 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.538308 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.538315 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:09:39 crc kubenswrapper[4793]: E0217 20:09:39.538408 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 20:09:39 crc kubenswrapper[4793]: E0217 20:09:39.538529 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 20:09:39 crc kubenswrapper[4793]: E0217 20:09:39.538558 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.623971 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.624021 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.624034 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.624051 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.624066 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:39Z","lastTransitionTime":"2026-02-17T20:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.726466 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.726505 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.726518 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.726537 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.726548 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:39Z","lastTransitionTime":"2026-02-17T20:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.828405 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.828598 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.828621 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.828644 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.828663 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:39Z","lastTransitionTime":"2026-02-17T20:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.930739 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.930820 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.930842 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.930873 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:39 crc kubenswrapper[4793]: I0217 20:09:39.930897 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:39Z","lastTransitionTime":"2026-02-17T20:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:40 crc kubenswrapper[4793]: I0217 20:09:40.033088 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:40 crc kubenswrapper[4793]: I0217 20:09:40.033426 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:40 crc kubenswrapper[4793]: I0217 20:09:40.033435 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:40 crc kubenswrapper[4793]: I0217 20:09:40.033450 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:40 crc kubenswrapper[4793]: I0217 20:09:40.033460 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:40Z","lastTransitionTime":"2026-02-17T20:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:40 crc kubenswrapper[4793]: I0217 20:09:40.135592 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:40 crc kubenswrapper[4793]: I0217 20:09:40.135638 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:40 crc kubenswrapper[4793]: I0217 20:09:40.135650 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:40 crc kubenswrapper[4793]: I0217 20:09:40.135673 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:40 crc kubenswrapper[4793]: I0217 20:09:40.135702 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:40Z","lastTransitionTime":"2026-02-17T20:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:40 crc kubenswrapper[4793]: I0217 20:09:40.238821 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:40 crc kubenswrapper[4793]: I0217 20:09:40.238886 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:40 crc kubenswrapper[4793]: I0217 20:09:40.238909 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:40 crc kubenswrapper[4793]: I0217 20:09:40.238938 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:40 crc kubenswrapper[4793]: I0217 20:09:40.238962 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:40Z","lastTransitionTime":"2026-02-17T20:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:40 crc kubenswrapper[4793]: I0217 20:09:40.342401 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:40 crc kubenswrapper[4793]: I0217 20:09:40.342760 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:40 crc kubenswrapper[4793]: I0217 20:09:40.342795 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:40 crc kubenswrapper[4793]: I0217 20:09:40.342825 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:40 crc kubenswrapper[4793]: I0217 20:09:40.342847 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:40Z","lastTransitionTime":"2026-02-17T20:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:40 crc kubenswrapper[4793]: I0217 20:09:40.445657 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:40 crc kubenswrapper[4793]: I0217 20:09:40.445716 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:40 crc kubenswrapper[4793]: I0217 20:09:40.445725 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:40 crc kubenswrapper[4793]: I0217 20:09:40.445740 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:40 crc kubenswrapper[4793]: I0217 20:09:40.445749 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:40Z","lastTransitionTime":"2026-02-17T20:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:40 crc kubenswrapper[4793]: I0217 20:09:40.502026 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 14:09:48.048625284 +0000 UTC Feb 17 20:09:40 crc kubenswrapper[4793]: I0217 20:09:40.538610 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:09:40 crc kubenswrapper[4793]: E0217 20:09:40.538889 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6trvs" podUID="0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd" Feb 17 20:09:40 crc kubenswrapper[4793]: I0217 20:09:40.548000 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:40 crc kubenswrapper[4793]: I0217 20:09:40.548034 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:40 crc kubenswrapper[4793]: I0217 20:09:40.548045 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:40 crc kubenswrapper[4793]: I0217 20:09:40.548062 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:40 crc kubenswrapper[4793]: I0217 20:09:40.548075 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:40Z","lastTransitionTime":"2026-02-17T20:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:40 crc kubenswrapper[4793]: I0217 20:09:40.650142 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:40 crc kubenswrapper[4793]: I0217 20:09:40.650386 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:40 crc kubenswrapper[4793]: I0217 20:09:40.650618 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:40 crc kubenswrapper[4793]: I0217 20:09:40.650839 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:40 crc kubenswrapper[4793]: I0217 20:09:40.650944 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:40Z","lastTransitionTime":"2026-02-17T20:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:40 crc kubenswrapper[4793]: I0217 20:09:40.753844 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:40 crc kubenswrapper[4793]: I0217 20:09:40.753897 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:40 crc kubenswrapper[4793]: I0217 20:09:40.753906 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:40 crc kubenswrapper[4793]: I0217 20:09:40.753919 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:40 crc kubenswrapper[4793]: I0217 20:09:40.753929 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:40Z","lastTransitionTime":"2026-02-17T20:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:40 crc kubenswrapper[4793]: I0217 20:09:40.856846 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:40 crc kubenswrapper[4793]: I0217 20:09:40.857200 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:40 crc kubenswrapper[4793]: I0217 20:09:40.857436 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:40 crc kubenswrapper[4793]: I0217 20:09:40.857615 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:40 crc kubenswrapper[4793]: I0217 20:09:40.857832 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:40Z","lastTransitionTime":"2026-02-17T20:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:40 crc kubenswrapper[4793]: I0217 20:09:40.960583 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:40 crc kubenswrapper[4793]: I0217 20:09:40.960633 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:40 crc kubenswrapper[4793]: I0217 20:09:40.960644 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:40 crc kubenswrapper[4793]: I0217 20:09:40.960661 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:40 crc kubenswrapper[4793]: I0217 20:09:40.960676 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:40Z","lastTransitionTime":"2026-02-17T20:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:41 crc kubenswrapper[4793]: I0217 20:09:41.063074 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:41 crc kubenswrapper[4793]: I0217 20:09:41.063118 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:41 crc kubenswrapper[4793]: I0217 20:09:41.063127 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:41 crc kubenswrapper[4793]: I0217 20:09:41.063140 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:41 crc kubenswrapper[4793]: I0217 20:09:41.063148 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:41Z","lastTransitionTime":"2026-02-17T20:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:41 crc kubenswrapper[4793]: I0217 20:09:41.165775 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:41 crc kubenswrapper[4793]: I0217 20:09:41.165847 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:41 crc kubenswrapper[4793]: I0217 20:09:41.165864 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:41 crc kubenswrapper[4793]: I0217 20:09:41.165887 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:41 crc kubenswrapper[4793]: I0217 20:09:41.165906 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:41Z","lastTransitionTime":"2026-02-17T20:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:41 crc kubenswrapper[4793]: I0217 20:09:41.269503 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:41 crc kubenswrapper[4793]: I0217 20:09:41.269568 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:41 crc kubenswrapper[4793]: I0217 20:09:41.269585 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:41 crc kubenswrapper[4793]: I0217 20:09:41.269609 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:41 crc kubenswrapper[4793]: I0217 20:09:41.269627 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:41Z","lastTransitionTime":"2026-02-17T20:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:41 crc kubenswrapper[4793]: I0217 20:09:41.372650 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:41 crc kubenswrapper[4793]: I0217 20:09:41.372750 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:41 crc kubenswrapper[4793]: I0217 20:09:41.372770 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:41 crc kubenswrapper[4793]: I0217 20:09:41.372799 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:41 crc kubenswrapper[4793]: I0217 20:09:41.372818 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:41Z","lastTransitionTime":"2026-02-17T20:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:41 crc kubenswrapper[4793]: I0217 20:09:41.476320 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:41 crc kubenswrapper[4793]: I0217 20:09:41.476380 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:41 crc kubenswrapper[4793]: I0217 20:09:41.476400 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:41 crc kubenswrapper[4793]: I0217 20:09:41.476423 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:41 crc kubenswrapper[4793]: I0217 20:09:41.476440 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:41Z","lastTransitionTime":"2026-02-17T20:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:41 crc kubenswrapper[4793]: I0217 20:09:41.503120 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 15:42:33.326770547 +0000 UTC Feb 17 20:09:41 crc kubenswrapper[4793]: I0217 20:09:41.538267 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:09:41 crc kubenswrapper[4793]: I0217 20:09:41.538309 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:09:41 crc kubenswrapper[4793]: I0217 20:09:41.538309 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:09:41 crc kubenswrapper[4793]: E0217 20:09:41.538447 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 20:09:41 crc kubenswrapper[4793]: E0217 20:09:41.538563 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 20:09:41 crc kubenswrapper[4793]: E0217 20:09:41.538874 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 20:09:41 crc kubenswrapper[4793]: I0217 20:09:41.580021 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:41 crc kubenswrapper[4793]: I0217 20:09:41.580370 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:41 crc kubenswrapper[4793]: I0217 20:09:41.580561 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:41 crc kubenswrapper[4793]: I0217 20:09:41.580786 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:41 crc kubenswrapper[4793]: I0217 20:09:41.580988 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:41Z","lastTransitionTime":"2026-02-17T20:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:41 crc kubenswrapper[4793]: I0217 20:09:41.684032 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:41 crc kubenswrapper[4793]: I0217 20:09:41.684449 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:41 crc kubenswrapper[4793]: I0217 20:09:41.684614 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:41 crc kubenswrapper[4793]: I0217 20:09:41.684817 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:41 crc kubenswrapper[4793]: I0217 20:09:41.684981 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:41Z","lastTransitionTime":"2026-02-17T20:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:41 crc kubenswrapper[4793]: I0217 20:09:41.788594 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:41 crc kubenswrapper[4793]: I0217 20:09:41.788658 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:41 crc kubenswrapper[4793]: I0217 20:09:41.788676 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:41 crc kubenswrapper[4793]: I0217 20:09:41.788741 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:41 crc kubenswrapper[4793]: I0217 20:09:41.788781 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:41Z","lastTransitionTime":"2026-02-17T20:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:41 crc kubenswrapper[4793]: I0217 20:09:41.891814 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:41 crc kubenswrapper[4793]: I0217 20:09:41.891875 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:41 crc kubenswrapper[4793]: I0217 20:09:41.891893 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:41 crc kubenswrapper[4793]: I0217 20:09:41.891916 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:41 crc kubenswrapper[4793]: I0217 20:09:41.891932 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:41Z","lastTransitionTime":"2026-02-17T20:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:41 crc kubenswrapper[4793]: I0217 20:09:41.995320 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:41 crc kubenswrapper[4793]: I0217 20:09:41.995385 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:41 crc kubenswrapper[4793]: I0217 20:09:41.995406 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:41 crc kubenswrapper[4793]: I0217 20:09:41.995435 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:41 crc kubenswrapper[4793]: I0217 20:09:41.995454 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:41Z","lastTransitionTime":"2026-02-17T20:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.044488 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.044544 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.044570 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.044602 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.044625 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:42Z","lastTransitionTime":"2026-02-17T20:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:42 crc kubenswrapper[4793]: E0217 20:09:42.060233 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9899edd8-40d9-4d30-b914-dcde4645fb8b\\\",\\\"systemUUID\\\":\\\"6761f953-4396-4ffc-8ccb-0bad99a4cc8e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:42Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.064359 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.064407 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.064416 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.064431 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.064442 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:42Z","lastTransitionTime":"2026-02-17T20:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:42 crc kubenswrapper[4793]: E0217 20:09:42.081958 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9899edd8-40d9-4d30-b914-dcde4645fb8b\\\",\\\"systemUUID\\\":\\\"6761f953-4396-4ffc-8ccb-0bad99a4cc8e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:42Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.086259 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.086319 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.086333 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.086359 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.086373 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:42Z","lastTransitionTime":"2026-02-17T20:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:42 crc kubenswrapper[4793]: E0217 20:09:42.103348 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9899edd8-40d9-4d30-b914-dcde4645fb8b\\\",\\\"systemUUID\\\":\\\"6761f953-4396-4ffc-8ccb-0bad99a4cc8e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:42Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.107498 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.107544 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.107561 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.107581 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.107595 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:42Z","lastTransitionTime":"2026-02-17T20:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:42 crc kubenswrapper[4793]: E0217 20:09:42.125087 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9899edd8-40d9-4d30-b914-dcde4645fb8b\\\",\\\"systemUUID\\\":\\\"6761f953-4396-4ffc-8ccb-0bad99a4cc8e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:42Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.129593 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.129642 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.129654 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.129675 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.129710 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:42Z","lastTransitionTime":"2026-02-17T20:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:42 crc kubenswrapper[4793]: E0217 20:09:42.147835 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9899edd8-40d9-4d30-b914-dcde4645fb8b\\\",\\\"systemUUID\\\":\\\"6761f953-4396-4ffc-8ccb-0bad99a4cc8e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:42Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:42 crc kubenswrapper[4793]: E0217 20:09:42.147953 4793 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.150169 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.150233 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.150252 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.150279 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.150295 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:42Z","lastTransitionTime":"2026-02-17T20:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.253746 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.253796 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.253806 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.253823 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.253833 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:42Z","lastTransitionTime":"2026-02-17T20:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.356534 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.356577 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.356586 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.356603 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.356615 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:42Z","lastTransitionTime":"2026-02-17T20:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.460096 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.460150 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.460175 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.460209 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.460233 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:42Z","lastTransitionTime":"2026-02-17T20:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.503345 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 13:55:54.475057434 +0000 UTC Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.538767 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:09:42 crc kubenswrapper[4793]: E0217 20:09:42.538967 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6trvs" podUID="0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.562637 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.562801 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.562812 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.562829 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.562841 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:42Z","lastTransitionTime":"2026-02-17T20:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.665386 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.665441 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.665459 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.665482 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.665499 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:42Z","lastTransitionTime":"2026-02-17T20:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.768333 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.768379 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.768391 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.768408 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.768420 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:42Z","lastTransitionTime":"2026-02-17T20:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.871287 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.871344 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.871367 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.871398 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.871417 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:42Z","lastTransitionTime":"2026-02-17T20:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.974115 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.974155 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.974168 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.974187 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:42 crc kubenswrapper[4793]: I0217 20:09:42.974202 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:42Z","lastTransitionTime":"2026-02-17T20:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:43 crc kubenswrapper[4793]: I0217 20:09:43.076739 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:43 crc kubenswrapper[4793]: I0217 20:09:43.076806 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:43 crc kubenswrapper[4793]: I0217 20:09:43.076823 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:43 crc kubenswrapper[4793]: I0217 20:09:43.076849 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:43 crc kubenswrapper[4793]: I0217 20:09:43.076866 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:43Z","lastTransitionTime":"2026-02-17T20:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:43 crc kubenswrapper[4793]: I0217 20:09:43.179988 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:43 crc kubenswrapper[4793]: I0217 20:09:43.180027 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:43 crc kubenswrapper[4793]: I0217 20:09:43.180041 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:43 crc kubenswrapper[4793]: I0217 20:09:43.180062 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:43 crc kubenswrapper[4793]: I0217 20:09:43.180076 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:43Z","lastTransitionTime":"2026-02-17T20:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:43 crc kubenswrapper[4793]: I0217 20:09:43.282590 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:43 crc kubenswrapper[4793]: I0217 20:09:43.282647 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:43 crc kubenswrapper[4793]: I0217 20:09:43.282664 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:43 crc kubenswrapper[4793]: I0217 20:09:43.282705 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:43 crc kubenswrapper[4793]: I0217 20:09:43.282721 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:43Z","lastTransitionTime":"2026-02-17T20:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:43 crc kubenswrapper[4793]: I0217 20:09:43.385276 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:43 crc kubenswrapper[4793]: I0217 20:09:43.385343 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:43 crc kubenswrapper[4793]: I0217 20:09:43.385367 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:43 crc kubenswrapper[4793]: I0217 20:09:43.385395 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:43 crc kubenswrapper[4793]: I0217 20:09:43.385417 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:43Z","lastTransitionTime":"2026-02-17T20:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:43 crc kubenswrapper[4793]: I0217 20:09:43.488178 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:43 crc kubenswrapper[4793]: I0217 20:09:43.488246 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:43 crc kubenswrapper[4793]: I0217 20:09:43.488263 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:43 crc kubenswrapper[4793]: I0217 20:09:43.488289 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:43 crc kubenswrapper[4793]: I0217 20:09:43.488324 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:43Z","lastTransitionTime":"2026-02-17T20:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:43 crc kubenswrapper[4793]: I0217 20:09:43.503928 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 04:45:30.09682408 +0000 UTC Feb 17 20:09:43 crc kubenswrapper[4793]: I0217 20:09:43.538549 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:09:43 crc kubenswrapper[4793]: E0217 20:09:43.538831 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 20:09:43 crc kubenswrapper[4793]: I0217 20:09:43.538578 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:09:43 crc kubenswrapper[4793]: E0217 20:09:43.539043 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 20:09:43 crc kubenswrapper[4793]: I0217 20:09:43.538550 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:09:43 crc kubenswrapper[4793]: E0217 20:09:43.539233 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 20:09:43 crc kubenswrapper[4793]: I0217 20:09:43.591360 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:43 crc kubenswrapper[4793]: I0217 20:09:43.591416 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:43 crc kubenswrapper[4793]: I0217 20:09:43.591429 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:43 crc kubenswrapper[4793]: I0217 20:09:43.591451 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:43 crc kubenswrapper[4793]: I0217 20:09:43.591464 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:43Z","lastTransitionTime":"2026-02-17T20:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:43 crc kubenswrapper[4793]: I0217 20:09:43.695133 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:43 crc kubenswrapper[4793]: I0217 20:09:43.695246 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:43 crc kubenswrapper[4793]: I0217 20:09:43.695264 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:43 crc kubenswrapper[4793]: I0217 20:09:43.695283 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:43 crc kubenswrapper[4793]: I0217 20:09:43.695680 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:43Z","lastTransitionTime":"2026-02-17T20:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:43 crc kubenswrapper[4793]: I0217 20:09:43.799023 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:43 crc kubenswrapper[4793]: I0217 20:09:43.799074 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:43 crc kubenswrapper[4793]: I0217 20:09:43.799089 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:43 crc kubenswrapper[4793]: I0217 20:09:43.799109 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:43 crc kubenswrapper[4793]: I0217 20:09:43.799124 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:43Z","lastTransitionTime":"2026-02-17T20:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:43 crc kubenswrapper[4793]: I0217 20:09:43.902215 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:43 crc kubenswrapper[4793]: I0217 20:09:43.902280 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:43 crc kubenswrapper[4793]: I0217 20:09:43.902301 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:43 crc kubenswrapper[4793]: I0217 20:09:43.902326 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:43 crc kubenswrapper[4793]: I0217 20:09:43.902346 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:43Z","lastTransitionTime":"2026-02-17T20:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:44 crc kubenswrapper[4793]: I0217 20:09:44.004643 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:44 crc kubenswrapper[4793]: I0217 20:09:44.004760 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:44 crc kubenswrapper[4793]: I0217 20:09:44.004778 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:44 crc kubenswrapper[4793]: I0217 20:09:44.004800 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:44 crc kubenswrapper[4793]: I0217 20:09:44.004820 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:44Z","lastTransitionTime":"2026-02-17T20:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:44 crc kubenswrapper[4793]: I0217 20:09:44.108242 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:44 crc kubenswrapper[4793]: I0217 20:09:44.108334 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:44 crc kubenswrapper[4793]: I0217 20:09:44.108357 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:44 crc kubenswrapper[4793]: I0217 20:09:44.108386 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:44 crc kubenswrapper[4793]: I0217 20:09:44.108409 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:44Z","lastTransitionTime":"2026-02-17T20:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:44 crc kubenswrapper[4793]: I0217 20:09:44.211514 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:44 crc kubenswrapper[4793]: I0217 20:09:44.211593 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:44 crc kubenswrapper[4793]: I0217 20:09:44.211620 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:44 crc kubenswrapper[4793]: I0217 20:09:44.211650 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:44 crc kubenswrapper[4793]: I0217 20:09:44.211669 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:44Z","lastTransitionTime":"2026-02-17T20:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:44 crc kubenswrapper[4793]: I0217 20:09:44.314002 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:44 crc kubenswrapper[4793]: I0217 20:09:44.314038 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:44 crc kubenswrapper[4793]: I0217 20:09:44.314081 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:44 crc kubenswrapper[4793]: I0217 20:09:44.314098 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:44 crc kubenswrapper[4793]: I0217 20:09:44.314105 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:44Z","lastTransitionTime":"2026-02-17T20:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:44 crc kubenswrapper[4793]: I0217 20:09:44.417343 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:44 crc kubenswrapper[4793]: I0217 20:09:44.417403 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:44 crc kubenswrapper[4793]: I0217 20:09:44.417419 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:44 crc kubenswrapper[4793]: I0217 20:09:44.417442 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:44 crc kubenswrapper[4793]: I0217 20:09:44.417459 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:44Z","lastTransitionTime":"2026-02-17T20:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:44 crc kubenswrapper[4793]: I0217 20:09:44.505368 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 09:37:10.35797752 +0000 UTC Feb 17 20:09:44 crc kubenswrapper[4793]: I0217 20:09:44.520852 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:44 crc kubenswrapper[4793]: I0217 20:09:44.520918 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:44 crc kubenswrapper[4793]: I0217 20:09:44.520935 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:44 crc kubenswrapper[4793]: I0217 20:09:44.520958 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:44 crc kubenswrapper[4793]: I0217 20:09:44.520975 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:44Z","lastTransitionTime":"2026-02-17T20:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:44 crc kubenswrapper[4793]: I0217 20:09:44.538831 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:09:44 crc kubenswrapper[4793]: E0217 20:09:44.539104 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6trvs" podUID="0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd" Feb 17 20:09:44 crc kubenswrapper[4793]: I0217 20:09:44.624063 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:44 crc kubenswrapper[4793]: I0217 20:09:44.624145 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:44 crc kubenswrapper[4793]: I0217 20:09:44.624168 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:44 crc kubenswrapper[4793]: I0217 20:09:44.624198 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:44 crc kubenswrapper[4793]: I0217 20:09:44.624223 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:44Z","lastTransitionTime":"2026-02-17T20:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:44 crc kubenswrapper[4793]: I0217 20:09:44.727140 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:44 crc kubenswrapper[4793]: I0217 20:09:44.727189 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:44 crc kubenswrapper[4793]: I0217 20:09:44.727197 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:44 crc kubenswrapper[4793]: I0217 20:09:44.727211 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:44 crc kubenswrapper[4793]: I0217 20:09:44.727223 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:44Z","lastTransitionTime":"2026-02-17T20:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:44 crc kubenswrapper[4793]: I0217 20:09:44.830288 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:44 crc kubenswrapper[4793]: I0217 20:09:44.830341 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:44 crc kubenswrapper[4793]: I0217 20:09:44.830358 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:44 crc kubenswrapper[4793]: I0217 20:09:44.830383 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:44 crc kubenswrapper[4793]: I0217 20:09:44.830401 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:44Z","lastTransitionTime":"2026-02-17T20:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:44 crc kubenswrapper[4793]: I0217 20:09:44.933201 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:44 crc kubenswrapper[4793]: I0217 20:09:44.933277 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:44 crc kubenswrapper[4793]: I0217 20:09:44.933296 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:44 crc kubenswrapper[4793]: I0217 20:09:44.933320 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:44 crc kubenswrapper[4793]: I0217 20:09:44.933338 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:44Z","lastTransitionTime":"2026-02-17T20:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.036515 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.036617 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.036638 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.036736 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.036763 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:45Z","lastTransitionTime":"2026-02-17T20:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.139394 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.139431 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.139440 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.139456 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.139464 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:45Z","lastTransitionTime":"2026-02-17T20:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.242498 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.242563 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.242581 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.242606 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.242624 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:45Z","lastTransitionTime":"2026-02-17T20:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.346451 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.346500 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.346509 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.346529 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.346540 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:45Z","lastTransitionTime":"2026-02-17T20:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.448759 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.448797 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.448810 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.448828 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.448841 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:45Z","lastTransitionTime":"2026-02-17T20:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.505832 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 10:39:11.111061082 +0000 UTC Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.537664 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.537659 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:09:45 crc kubenswrapper[4793]: E0217 20:09:45.537822 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.537864 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:09:45 crc kubenswrapper[4793]: E0217 20:09:45.537988 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 20:09:45 crc kubenswrapper[4793]: E0217 20:09:45.538130 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.551156 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.551242 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.551272 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.551306 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.551334 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:45Z","lastTransitionTime":"2026-02-17T20:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.559511 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf61cc2856b360d3341089d4472c27f8dcf6c7c55831eea511dcf5a931b7a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:45Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.575890 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a786034-a3c6-4693-965a-3bd39bce6caa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c1911a561802d8e92f91e2ca7448754e32dcb5f18cae0906dc070e1d752be38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c883db680c5e015621e98e5b3c06351d9dd6d33ea28b502241344af8df0fd1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-17T20:09:45Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.596044 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ztwxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c5b745e67f1da59c6b1df97c69bfcbfdc00f4b51a6788751079465a5ce6c177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/e
tc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ndkg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ztwxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-17T20:09:45Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.618509 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48cdc9966494a4fb1df14123aaf12571b1d71f9d0349963dd1b3719966abd20c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2071f67c2f1c343d9ad571fe62df1e820675df683aa583fa2563de7
51f2897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:45Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.641770 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kpl4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61219b87-834d-490d-bf8e-1657a4081739\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6b9b20c5ff10a449eac5b6396cbcab582a5bbf692d7c9ac880e3cdfa3a63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43cff
5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43cff5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57bb0575dd8e0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b57bb0575dd8e0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:14Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a556ed90653e74034af578772bca81976205e4bba300489eb17d80cdd84989f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a556ed90653e74034af578772bca81976205e4bba300489eb17d80cdd84989f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kpl4r\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:45Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.653792 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.653850 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.653868 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.653901 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.653922 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:45Z","lastTransitionTime":"2026-02-17T20:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.661915 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cad772-a024-49d5-8204-560587b056eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6bf4bf0fcbe5648621dada82af560bd7eac65ce800c8974c59d777f3dc6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57a18cf839c
267df5d9a6d343fbc38dbe195fad6d9fff89663d0958a0492a335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe1dc649f7ca667a9fa57e0f96921b449396310fdec0d23dd39d5695ff3c0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cee203033ea02d7ee023522404b8632f64c130e254a84ee96c37222561e4ac1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:45Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.683211 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:45Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.699962 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6trvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q9nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q9nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6trvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:45Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:45 crc 
kubenswrapper[4793]: I0217 20:09:45.722288 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:45Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.738417 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:45Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.756795 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.756863 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.756757 4793 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cl2v7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b672111-22f9-4dd8-b116-385907278ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39feb0d026694f42ac0934ebdf1749f25d62c1fec2ae68e9af9189fb4cbfe99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-
access-2j42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7694ca2ced8356ef8dc44687cd124ebd50f8c2916a8c27619b2ab970bcb31d5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2j42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cl2v7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:45Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.756882 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.757071 4793 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeNotReady" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.757101 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:45Z","lastTransitionTime":"2026-02-17T20:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.777206 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a159b0be901199a25ed9008e15370b1a4bf1858a4b20bfb4b6b7736f1f3d42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:45Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.792794 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9pkqd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db67d891-29db-4ee4-a70c-624cb9af6677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d09e75a7c6bcf350e2d5833b1a9b6679042aa5d6699c1194c32320c5f0c8d208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv9g8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9pkqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:45Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.819849 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3403f83a9b35ceeb1a09a45b62e25e74a10d89d309f00819598790ba6c84a0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e4375a429735c20aa6093586974f36afdd51b82d65270a693d14aefe0b0ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a66eb3dc7a2ab1a3e5e720718416aba285d92566aa472515bc983f0794f48314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61506b0fdf8d54ab45b280e5cf461ab69bc233211c90bc60b339eef643ebb058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b08b2c72ecc14b8e31ad5d7fe12851c01bf38ff98fe406dc26218d892ee9f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09885fdf835300247d4a89f7e2ded3ab350786be47130106f4cc679a5477c451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://743c88867ec1e5ac6d851ff30fd6847d64369d2862ffe7406e9dcd0248d97d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743c88867ec1e5ac6d851ff30fd6847d64369d2862ffe7406e9dcd0248d97d4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T20:09:36Z\\\",\\\"message\\\":\\\"e it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:36Z is after 2025-08-24T17:21:41Z]\\\\nI0217 20:09:36.727000 6427 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-jnwtf\\\\nI0217 20:09:36.727006 6427 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-jnwtf\\\\nI0217 20:09:36.727011 6427 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-jnwtf in node crc\\\\nI0217 20:09:36.727015 6427 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-jnwtf after 0 failed attempt(s)\\\\nI0217 20:09:36.727020 6427 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-jnwtf\\\\nI0217 20:09:36.727027 6427 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-z9xgq\\\\nI0217 20:09:36.727030 6427 base_network\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n2fmv_openshift-ovn-kubernetes(4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3161d31a16ce66a75b2928487822f0c1e627deb1564e7f792e96420ebde1ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6af
bcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n2fmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:45Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.830990 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z9xgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3073cc3b-c430-444d-b751-5e1416feafa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51236e50a5c628e38556b1fc91434f66c7a2e8885a331b358825bcea2c3d9ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z9xgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:45Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.859162 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.859235 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.859262 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.859292 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.859315 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:45Z","lastTransitionTime":"2026-02-17T20:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.859324 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c8d46b-bddf-41fd-bfe9-829889b911c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3405184556f0b16f2610e42360345bb094d79fe3d713f9dec7c6d28a5124851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d8c33489e77add820819b72d67bd6eca73a11e3fd4a99ae4c5c0136f6257c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a42458e37611ca7ec204f13fd4703a5b1e01189a162f0180be137d614a0379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982055260eb2f38b638b6ee268993806f9a0bf999f21b065659a64144394bc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16cf925bf79279e3553a50dd773c3e6a88f3e8e86837c457c43f1b82c6fd291b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:45Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.875769 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9208cc24-8e8d-4027-af06-291b3b46ee59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f3722f78f83475af38958c1c6a4b0593a4c51be7d8dcbd0d32d3b068dee61b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T20:08:59Z\\\"
,\\\"message\\\":\\\"W0217 20:08:48.786864 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 20:08:48.787257 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771358928 cert, and key in /tmp/serving-cert-1841715996/serving-signer.crt, /tmp/serving-cert-1841715996/serving-signer.key\\\\nI0217 20:08:49.003324 1 observer_polling.go:159] Starting file observer\\\\nW0217 20:08:49.008336 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 20:08:49.008526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 20:08:49.009879 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1841715996/tls.crt::/tmp/serving-cert-1841715996/tls.key\\\\\\\"\\\\nF0217 20:08:59.398239 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:45Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.890398 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6af04421-9450-4d4d-b153-947396a3632b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8527c428573648481388a1d99e88a9c73658862866d5f6e36691a0b36868c7d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472024
3b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b96eb4ff437a9dba392060ab762c6d87450a24f0dcc50ef6f90dfa54aa66e22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958dffa9d0fbd97a0ee01a79d9623a298e9ce287b43e1347b4edeb0cfe175ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02
-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e25a10a183950aa779c83aae5f4fe88a4e27d2ff7bad883e1e222400a16b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24e25a10a183950aa779c83aae5f4fe88a4e27d2ff7bad883e1e222400a16b44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:45Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.961855 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.961908 4793 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.961920 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.961940 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:45 crc kubenswrapper[4793]: I0217 20:09:45.961953 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:45Z","lastTransitionTime":"2026-02-17T20:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:46 crc kubenswrapper[4793]: I0217 20:09:46.065042 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:46 crc kubenswrapper[4793]: I0217 20:09:46.065090 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:46 crc kubenswrapper[4793]: I0217 20:09:46.065105 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:46 crc kubenswrapper[4793]: I0217 20:09:46.065123 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:46 crc kubenswrapper[4793]: I0217 20:09:46.065140 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:46Z","lastTransitionTime":"2026-02-17T20:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:46 crc kubenswrapper[4793]: I0217 20:09:46.167831 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:46 crc kubenswrapper[4793]: I0217 20:09:46.167941 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:46 crc kubenswrapper[4793]: I0217 20:09:46.167966 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:46 crc kubenswrapper[4793]: I0217 20:09:46.167995 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:46 crc kubenswrapper[4793]: I0217 20:09:46.168018 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:46Z","lastTransitionTime":"2026-02-17T20:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:46 crc kubenswrapper[4793]: I0217 20:09:46.270806 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:46 crc kubenswrapper[4793]: I0217 20:09:46.270876 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:46 crc kubenswrapper[4793]: I0217 20:09:46.270892 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:46 crc kubenswrapper[4793]: I0217 20:09:46.270911 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:46 crc kubenswrapper[4793]: I0217 20:09:46.270924 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:46Z","lastTransitionTime":"2026-02-17T20:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:46 crc kubenswrapper[4793]: I0217 20:09:46.373568 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:46 crc kubenswrapper[4793]: I0217 20:09:46.373828 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:46 crc kubenswrapper[4793]: I0217 20:09:46.373903 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:46 crc kubenswrapper[4793]: I0217 20:09:46.373971 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:46 crc kubenswrapper[4793]: I0217 20:09:46.374063 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:46Z","lastTransitionTime":"2026-02-17T20:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:46 crc kubenswrapper[4793]: I0217 20:09:46.477478 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:46 crc kubenswrapper[4793]: I0217 20:09:46.477521 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:46 crc kubenswrapper[4793]: I0217 20:09:46.477532 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:46 crc kubenswrapper[4793]: I0217 20:09:46.477547 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:46 crc kubenswrapper[4793]: I0217 20:09:46.477558 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:46Z","lastTransitionTime":"2026-02-17T20:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:46 crc kubenswrapper[4793]: I0217 20:09:46.507128 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 16:26:17.391205373 +0000 UTC Feb 17 20:09:46 crc kubenswrapper[4793]: I0217 20:09:46.537717 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:09:46 crc kubenswrapper[4793]: E0217 20:09:46.537896 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6trvs" podUID="0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd" Feb 17 20:09:46 crc kubenswrapper[4793]: I0217 20:09:46.580434 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:46 crc kubenswrapper[4793]: I0217 20:09:46.580523 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:46 crc kubenswrapper[4793]: I0217 20:09:46.580537 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:46 crc kubenswrapper[4793]: I0217 20:09:46.580559 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:46 crc kubenswrapper[4793]: I0217 20:09:46.580578 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:46Z","lastTransitionTime":"2026-02-17T20:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:46 crc kubenswrapper[4793]: I0217 20:09:46.683853 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:46 crc kubenswrapper[4793]: I0217 20:09:46.683915 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:46 crc kubenswrapper[4793]: I0217 20:09:46.683932 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:46 crc kubenswrapper[4793]: I0217 20:09:46.683954 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:46 crc kubenswrapper[4793]: I0217 20:09:46.683973 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:46Z","lastTransitionTime":"2026-02-17T20:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:46 crc kubenswrapper[4793]: I0217 20:09:46.788372 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:46 crc kubenswrapper[4793]: I0217 20:09:46.788459 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:46 crc kubenswrapper[4793]: I0217 20:09:46.788483 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:46 crc kubenswrapper[4793]: I0217 20:09:46.788517 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:46 crc kubenswrapper[4793]: I0217 20:09:46.788539 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:46Z","lastTransitionTime":"2026-02-17T20:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:46 crc kubenswrapper[4793]: I0217 20:09:46.891403 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:46 crc kubenswrapper[4793]: I0217 20:09:46.891498 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:46 crc kubenswrapper[4793]: I0217 20:09:46.891517 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:46 crc kubenswrapper[4793]: I0217 20:09:46.891539 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:46 crc kubenswrapper[4793]: I0217 20:09:46.891558 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:46Z","lastTransitionTime":"2026-02-17T20:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:46 crc kubenswrapper[4793]: I0217 20:09:46.994011 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:46 crc kubenswrapper[4793]: I0217 20:09:46.994069 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:46 crc kubenswrapper[4793]: I0217 20:09:46.994087 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:46 crc kubenswrapper[4793]: I0217 20:09:46.994111 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:46 crc kubenswrapper[4793]: I0217 20:09:46.994132 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:46Z","lastTransitionTime":"2026-02-17T20:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:47 crc kubenswrapper[4793]: I0217 20:09:47.097446 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:47 crc kubenswrapper[4793]: I0217 20:09:47.097518 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:47 crc kubenswrapper[4793]: I0217 20:09:47.097539 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:47 crc kubenswrapper[4793]: I0217 20:09:47.097568 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:47 crc kubenswrapper[4793]: I0217 20:09:47.097586 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:47Z","lastTransitionTime":"2026-02-17T20:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:47 crc kubenswrapper[4793]: I0217 20:09:47.200668 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:47 crc kubenswrapper[4793]: I0217 20:09:47.200792 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:47 crc kubenswrapper[4793]: I0217 20:09:47.200848 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:47 crc kubenswrapper[4793]: I0217 20:09:47.200876 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:47 crc kubenswrapper[4793]: I0217 20:09:47.200893 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:47Z","lastTransitionTime":"2026-02-17T20:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:47 crc kubenswrapper[4793]: I0217 20:09:47.303219 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:47 crc kubenswrapper[4793]: I0217 20:09:47.303301 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:47 crc kubenswrapper[4793]: I0217 20:09:47.303326 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:47 crc kubenswrapper[4793]: I0217 20:09:47.303357 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:47 crc kubenswrapper[4793]: I0217 20:09:47.303381 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:47Z","lastTransitionTime":"2026-02-17T20:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:47 crc kubenswrapper[4793]: I0217 20:09:47.406266 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:47 crc kubenswrapper[4793]: I0217 20:09:47.406326 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:47 crc kubenswrapper[4793]: I0217 20:09:47.406349 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:47 crc kubenswrapper[4793]: I0217 20:09:47.406380 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:47 crc kubenswrapper[4793]: I0217 20:09:47.406400 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:47Z","lastTransitionTime":"2026-02-17T20:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:47 crc kubenswrapper[4793]: I0217 20:09:47.507335 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 18:58:48.017371479 +0000 UTC Feb 17 20:09:47 crc kubenswrapper[4793]: I0217 20:09:47.508867 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:47 crc kubenswrapper[4793]: I0217 20:09:47.508964 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:47 crc kubenswrapper[4793]: I0217 20:09:47.508990 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:47 crc kubenswrapper[4793]: I0217 20:09:47.509020 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:47 crc kubenswrapper[4793]: I0217 20:09:47.509042 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:47Z","lastTransitionTime":"2026-02-17T20:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:47 crc kubenswrapper[4793]: I0217 20:09:47.538417 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:09:47 crc kubenswrapper[4793]: I0217 20:09:47.538493 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:09:47 crc kubenswrapper[4793]: E0217 20:09:47.538568 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 20:09:47 crc kubenswrapper[4793]: E0217 20:09:47.538654 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 20:09:47 crc kubenswrapper[4793]: I0217 20:09:47.538996 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:09:47 crc kubenswrapper[4793]: E0217 20:09:47.539206 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 20:09:47 crc kubenswrapper[4793]: I0217 20:09:47.612118 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:47 crc kubenswrapper[4793]: I0217 20:09:47.612464 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:47 crc kubenswrapper[4793]: I0217 20:09:47.612975 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:47 crc kubenswrapper[4793]: I0217 20:09:47.613125 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:47 crc kubenswrapper[4793]: I0217 20:09:47.613313 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:47Z","lastTransitionTime":"2026-02-17T20:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:47 crc kubenswrapper[4793]: I0217 20:09:47.717369 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:47 crc kubenswrapper[4793]: I0217 20:09:47.717816 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:47 crc kubenswrapper[4793]: I0217 20:09:47.717977 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:47 crc kubenswrapper[4793]: I0217 20:09:47.718124 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:47 crc kubenswrapper[4793]: I0217 20:09:47.718257 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:47Z","lastTransitionTime":"2026-02-17T20:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:47 crc kubenswrapper[4793]: I0217 20:09:47.828671 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:47 crc kubenswrapper[4793]: I0217 20:09:47.828897 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:47 crc kubenswrapper[4793]: I0217 20:09:47.828976 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:47 crc kubenswrapper[4793]: I0217 20:09:47.829069 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:47 crc kubenswrapper[4793]: I0217 20:09:47.829131 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:47Z","lastTransitionTime":"2026-02-17T20:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:47 crc kubenswrapper[4793]: I0217 20:09:47.930902 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:47 crc kubenswrapper[4793]: I0217 20:09:47.930932 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:47 crc kubenswrapper[4793]: I0217 20:09:47.930941 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:47 crc kubenswrapper[4793]: I0217 20:09:47.930955 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:47 crc kubenswrapper[4793]: I0217 20:09:47.930963 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:47Z","lastTransitionTime":"2026-02-17T20:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:48 crc kubenswrapper[4793]: I0217 20:09:48.032931 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:48 crc kubenswrapper[4793]: I0217 20:09:48.032972 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:48 crc kubenswrapper[4793]: I0217 20:09:48.032983 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:48 crc kubenswrapper[4793]: I0217 20:09:48.032997 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:48 crc kubenswrapper[4793]: I0217 20:09:48.033008 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:48Z","lastTransitionTime":"2026-02-17T20:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:48 crc kubenswrapper[4793]: I0217 20:09:48.135220 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:48 crc kubenswrapper[4793]: I0217 20:09:48.135571 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:48 crc kubenswrapper[4793]: I0217 20:09:48.135808 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:48 crc kubenswrapper[4793]: I0217 20:09:48.136314 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:48 crc kubenswrapper[4793]: I0217 20:09:48.136655 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:48Z","lastTransitionTime":"2026-02-17T20:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:48 crc kubenswrapper[4793]: I0217 20:09:48.239276 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:48 crc kubenswrapper[4793]: I0217 20:09:48.239638 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:48 crc kubenswrapper[4793]: I0217 20:09:48.239938 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:48 crc kubenswrapper[4793]: I0217 20:09:48.240161 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:48 crc kubenswrapper[4793]: I0217 20:09:48.240309 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:48Z","lastTransitionTime":"2026-02-17T20:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:48 crc kubenswrapper[4793]: I0217 20:09:48.359058 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:48 crc kubenswrapper[4793]: I0217 20:09:48.359151 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:48 crc kubenswrapper[4793]: I0217 20:09:48.359435 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:48 crc kubenswrapper[4793]: I0217 20:09:48.359476 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:48 crc kubenswrapper[4793]: I0217 20:09:48.359490 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:48Z","lastTransitionTime":"2026-02-17T20:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:48 crc kubenswrapper[4793]: I0217 20:09:48.461635 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:48 crc kubenswrapper[4793]: I0217 20:09:48.461706 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:48 crc kubenswrapper[4793]: I0217 20:09:48.461718 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:48 crc kubenswrapper[4793]: I0217 20:09:48.461732 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:48 crc kubenswrapper[4793]: I0217 20:09:48.461740 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:48Z","lastTransitionTime":"2026-02-17T20:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:48 crc kubenswrapper[4793]: I0217 20:09:48.508457 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 23:46:52.639639133 +0000 UTC Feb 17 20:09:48 crc kubenswrapper[4793]: I0217 20:09:48.537908 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:09:48 crc kubenswrapper[4793]: E0217 20:09:48.538088 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6trvs" podUID="0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd" Feb 17 20:09:48 crc kubenswrapper[4793]: I0217 20:09:48.563851 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:48 crc kubenswrapper[4793]: I0217 20:09:48.563901 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:48 crc kubenswrapper[4793]: I0217 20:09:48.563913 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:48 crc kubenswrapper[4793]: I0217 20:09:48.563932 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:48 crc kubenswrapper[4793]: I0217 20:09:48.563946 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:48Z","lastTransitionTime":"2026-02-17T20:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:48 crc kubenswrapper[4793]: I0217 20:09:48.667829 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:48 crc kubenswrapper[4793]: I0217 20:09:48.667885 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:48 crc kubenswrapper[4793]: I0217 20:09:48.667902 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:48 crc kubenswrapper[4793]: I0217 20:09:48.667924 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:48 crc kubenswrapper[4793]: I0217 20:09:48.667941 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:48Z","lastTransitionTime":"2026-02-17T20:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:48 crc kubenswrapper[4793]: I0217 20:09:48.770880 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:48 crc kubenswrapper[4793]: I0217 20:09:48.771356 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:48 crc kubenswrapper[4793]: I0217 20:09:48.771394 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:48 crc kubenswrapper[4793]: I0217 20:09:48.771430 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:48 crc kubenswrapper[4793]: I0217 20:09:48.771452 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:48Z","lastTransitionTime":"2026-02-17T20:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:48 crc kubenswrapper[4793]: I0217 20:09:48.873350 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:48 crc kubenswrapper[4793]: I0217 20:09:48.873731 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:48 crc kubenswrapper[4793]: I0217 20:09:48.873927 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:48 crc kubenswrapper[4793]: I0217 20:09:48.874304 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:48 crc kubenswrapper[4793]: I0217 20:09:48.874615 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:48Z","lastTransitionTime":"2026-02-17T20:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:48 crc kubenswrapper[4793]: I0217 20:09:48.976965 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:48 crc kubenswrapper[4793]: I0217 20:09:48.976998 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:48 crc kubenswrapper[4793]: I0217 20:09:48.977009 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:48 crc kubenswrapper[4793]: I0217 20:09:48.977025 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:48 crc kubenswrapper[4793]: I0217 20:09:48.977037 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:48Z","lastTransitionTime":"2026-02-17T20:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:49 crc kubenswrapper[4793]: I0217 20:09:49.079945 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:49 crc kubenswrapper[4793]: I0217 20:09:49.079987 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:49 crc kubenswrapper[4793]: I0217 20:09:49.080003 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:49 crc kubenswrapper[4793]: I0217 20:09:49.080023 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:49 crc kubenswrapper[4793]: I0217 20:09:49.080039 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:49Z","lastTransitionTime":"2026-02-17T20:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:49 crc kubenswrapper[4793]: I0217 20:09:49.183142 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:49 crc kubenswrapper[4793]: I0217 20:09:49.183174 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:49 crc kubenswrapper[4793]: I0217 20:09:49.183185 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:49 crc kubenswrapper[4793]: I0217 20:09:49.183200 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:49 crc kubenswrapper[4793]: I0217 20:09:49.183210 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:49Z","lastTransitionTime":"2026-02-17T20:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:49 crc kubenswrapper[4793]: I0217 20:09:49.285792 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:49 crc kubenswrapper[4793]: I0217 20:09:49.285822 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:49 crc kubenswrapper[4793]: I0217 20:09:49.285834 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:49 crc kubenswrapper[4793]: I0217 20:09:49.285848 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:49 crc kubenswrapper[4793]: I0217 20:09:49.285859 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:49Z","lastTransitionTime":"2026-02-17T20:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:49 crc kubenswrapper[4793]: I0217 20:09:49.388351 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:49 crc kubenswrapper[4793]: I0217 20:09:49.388780 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:49 crc kubenswrapper[4793]: I0217 20:09:49.389068 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:49 crc kubenswrapper[4793]: I0217 20:09:49.389580 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:49 crc kubenswrapper[4793]: I0217 20:09:49.390079 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:49Z","lastTransitionTime":"2026-02-17T20:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:49 crc kubenswrapper[4793]: I0217 20:09:49.493265 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:49 crc kubenswrapper[4793]: I0217 20:09:49.493357 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:49 crc kubenswrapper[4793]: I0217 20:09:49.493372 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:49 crc kubenswrapper[4793]: I0217 20:09:49.493402 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:49 crc kubenswrapper[4793]: I0217 20:09:49.493469 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:49Z","lastTransitionTime":"2026-02-17T20:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:49 crc kubenswrapper[4793]: I0217 20:09:49.509186 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 16:31:14.731949351 +0000 UTC Feb 17 20:09:49 crc kubenswrapper[4793]: I0217 20:09:49.538672 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:09:49 crc kubenswrapper[4793]: I0217 20:09:49.538761 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:09:49 crc kubenswrapper[4793]: I0217 20:09:49.538823 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:09:49 crc kubenswrapper[4793]: E0217 20:09:49.538969 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 20:09:49 crc kubenswrapper[4793]: E0217 20:09:49.539052 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 20:09:49 crc kubenswrapper[4793]: E0217 20:09:49.539148 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 20:09:49 crc kubenswrapper[4793]: I0217 20:09:49.595402 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:49 crc kubenswrapper[4793]: I0217 20:09:49.595435 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:49 crc kubenswrapper[4793]: I0217 20:09:49.595446 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:49 crc kubenswrapper[4793]: I0217 20:09:49.595461 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:49 crc kubenswrapper[4793]: I0217 20:09:49.595472 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:49Z","lastTransitionTime":"2026-02-17T20:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:49 crc kubenswrapper[4793]: I0217 20:09:49.697636 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:49 crc kubenswrapper[4793]: I0217 20:09:49.697670 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:49 crc kubenswrapper[4793]: I0217 20:09:49.697679 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:49 crc kubenswrapper[4793]: I0217 20:09:49.697709 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:49 crc kubenswrapper[4793]: I0217 20:09:49.697719 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:49Z","lastTransitionTime":"2026-02-17T20:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:49 crc kubenswrapper[4793]: I0217 20:09:49.800186 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:49 crc kubenswrapper[4793]: I0217 20:09:49.800232 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:49 crc kubenswrapper[4793]: I0217 20:09:49.800247 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:49 crc kubenswrapper[4793]: I0217 20:09:49.800268 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:49 crc kubenswrapper[4793]: I0217 20:09:49.800284 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:49Z","lastTransitionTime":"2026-02-17T20:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:49 crc kubenswrapper[4793]: I0217 20:09:49.902937 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:49 crc kubenswrapper[4793]: I0217 20:09:49.902998 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:49 crc kubenswrapper[4793]: I0217 20:09:49.903010 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:49 crc kubenswrapper[4793]: I0217 20:09:49.903029 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:49 crc kubenswrapper[4793]: I0217 20:09:49.903046 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:49Z","lastTransitionTime":"2026-02-17T20:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:50 crc kubenswrapper[4793]: I0217 20:09:50.006338 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:50 crc kubenswrapper[4793]: I0217 20:09:50.006382 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:50 crc kubenswrapper[4793]: I0217 20:09:50.006399 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:50 crc kubenswrapper[4793]: I0217 20:09:50.006423 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:50 crc kubenswrapper[4793]: I0217 20:09:50.006440 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:50Z","lastTransitionTime":"2026-02-17T20:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:50 crc kubenswrapper[4793]: I0217 20:09:50.109116 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:50 crc kubenswrapper[4793]: I0217 20:09:50.109158 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:50 crc kubenswrapper[4793]: I0217 20:09:50.109170 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:50 crc kubenswrapper[4793]: I0217 20:09:50.109188 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:50 crc kubenswrapper[4793]: I0217 20:09:50.109201 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:50Z","lastTransitionTime":"2026-02-17T20:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:50 crc kubenswrapper[4793]: I0217 20:09:50.212092 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:50 crc kubenswrapper[4793]: I0217 20:09:50.212130 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:50 crc kubenswrapper[4793]: I0217 20:09:50.212147 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:50 crc kubenswrapper[4793]: I0217 20:09:50.212169 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:50 crc kubenswrapper[4793]: I0217 20:09:50.212186 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:50Z","lastTransitionTime":"2026-02-17T20:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:50 crc kubenswrapper[4793]: I0217 20:09:50.315312 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:50 crc kubenswrapper[4793]: I0217 20:09:50.315355 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:50 crc kubenswrapper[4793]: I0217 20:09:50.315363 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:50 crc kubenswrapper[4793]: I0217 20:09:50.315382 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:50 crc kubenswrapper[4793]: I0217 20:09:50.315391 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:50Z","lastTransitionTime":"2026-02-17T20:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:50 crc kubenswrapper[4793]: I0217 20:09:50.418744 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:50 crc kubenswrapper[4793]: I0217 20:09:50.418789 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:50 crc kubenswrapper[4793]: I0217 20:09:50.418807 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:50 crc kubenswrapper[4793]: I0217 20:09:50.418829 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:50 crc kubenswrapper[4793]: I0217 20:09:50.418846 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:50Z","lastTransitionTime":"2026-02-17T20:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:50 crc kubenswrapper[4793]: I0217 20:09:50.510885 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 12:49:29.311564735 +0000 UTC Feb 17 20:09:50 crc kubenswrapper[4793]: I0217 20:09:50.521446 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:50 crc kubenswrapper[4793]: I0217 20:09:50.521518 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:50 crc kubenswrapper[4793]: I0217 20:09:50.521538 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:50 crc kubenswrapper[4793]: I0217 20:09:50.521567 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:50 crc kubenswrapper[4793]: I0217 20:09:50.521586 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:50Z","lastTransitionTime":"2026-02-17T20:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:50 crc kubenswrapper[4793]: I0217 20:09:50.538043 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:09:50 crc kubenswrapper[4793]: E0217 20:09:50.538234 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6trvs" podUID="0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd" Feb 17 20:09:50 crc kubenswrapper[4793]: I0217 20:09:50.624497 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:50 crc kubenswrapper[4793]: I0217 20:09:50.624578 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:50 crc kubenswrapper[4793]: I0217 20:09:50.624595 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:50 crc kubenswrapper[4793]: I0217 20:09:50.624630 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:50 crc kubenswrapper[4793]: I0217 20:09:50.624646 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:50Z","lastTransitionTime":"2026-02-17T20:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:50 crc kubenswrapper[4793]: I0217 20:09:50.727567 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:50 crc kubenswrapper[4793]: I0217 20:09:50.727620 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:50 crc kubenswrapper[4793]: I0217 20:09:50.727637 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:50 crc kubenswrapper[4793]: I0217 20:09:50.727661 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:50 crc kubenswrapper[4793]: I0217 20:09:50.727677 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:50Z","lastTransitionTime":"2026-02-17T20:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:50 crc kubenswrapper[4793]: I0217 20:09:50.829954 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:50 crc kubenswrapper[4793]: I0217 20:09:50.830327 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:50 crc kubenswrapper[4793]: I0217 20:09:50.830338 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:50 crc kubenswrapper[4793]: I0217 20:09:50.830354 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:50 crc kubenswrapper[4793]: I0217 20:09:50.830366 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:50Z","lastTransitionTime":"2026-02-17T20:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:50 crc kubenswrapper[4793]: I0217 20:09:50.932634 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:50 crc kubenswrapper[4793]: I0217 20:09:50.932984 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:50 crc kubenswrapper[4793]: I0217 20:09:50.933125 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:50 crc kubenswrapper[4793]: I0217 20:09:50.933383 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:50 crc kubenswrapper[4793]: I0217 20:09:50.934351 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:50Z","lastTransitionTime":"2026-02-17T20:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:51 crc kubenswrapper[4793]: I0217 20:09:51.036385 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:51 crc kubenswrapper[4793]: I0217 20:09:51.036422 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:51 crc kubenswrapper[4793]: I0217 20:09:51.036432 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:51 crc kubenswrapper[4793]: I0217 20:09:51.036449 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:51 crc kubenswrapper[4793]: I0217 20:09:51.036460 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:51Z","lastTransitionTime":"2026-02-17T20:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:51 crc kubenswrapper[4793]: I0217 20:09:51.139048 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:51 crc kubenswrapper[4793]: I0217 20:09:51.139279 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:51 crc kubenswrapper[4793]: I0217 20:09:51.139412 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:51 crc kubenswrapper[4793]: I0217 20:09:51.139551 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:51 crc kubenswrapper[4793]: I0217 20:09:51.139724 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:51Z","lastTransitionTime":"2026-02-17T20:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:51 crc kubenswrapper[4793]: I0217 20:09:51.242905 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:51 crc kubenswrapper[4793]: I0217 20:09:51.242979 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:51 crc kubenswrapper[4793]: I0217 20:09:51.242999 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:51 crc kubenswrapper[4793]: I0217 20:09:51.243028 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:51 crc kubenswrapper[4793]: I0217 20:09:51.243047 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:51Z","lastTransitionTime":"2026-02-17T20:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:51 crc kubenswrapper[4793]: I0217 20:09:51.345947 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:51 crc kubenswrapper[4793]: I0217 20:09:51.346322 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:51 crc kubenswrapper[4793]: I0217 20:09:51.346554 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:51 crc kubenswrapper[4793]: I0217 20:09:51.346787 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:51 crc kubenswrapper[4793]: I0217 20:09:51.346948 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:51Z","lastTransitionTime":"2026-02-17T20:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:51 crc kubenswrapper[4793]: I0217 20:09:51.449445 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:51 crc kubenswrapper[4793]: I0217 20:09:51.449669 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:51 crc kubenswrapper[4793]: I0217 20:09:51.449840 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:51 crc kubenswrapper[4793]: I0217 20:09:51.449950 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:51 crc kubenswrapper[4793]: I0217 20:09:51.450044 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:51Z","lastTransitionTime":"2026-02-17T20:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:51 crc kubenswrapper[4793]: I0217 20:09:51.511995 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 02:10:07.594693075 +0000 UTC Feb 17 20:09:51 crc kubenswrapper[4793]: I0217 20:09:51.538282 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:09:51 crc kubenswrapper[4793]: I0217 20:09:51.538337 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:09:51 crc kubenswrapper[4793]: I0217 20:09:51.538282 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:09:51 crc kubenswrapper[4793]: E0217 20:09:51.538416 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 20:09:51 crc kubenswrapper[4793]: E0217 20:09:51.538529 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 20:09:51 crc kubenswrapper[4793]: E0217 20:09:51.538641 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 20:09:51 crc kubenswrapper[4793]: I0217 20:09:51.552752 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:51 crc kubenswrapper[4793]: I0217 20:09:51.552786 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:51 crc kubenswrapper[4793]: I0217 20:09:51.552796 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:51 crc kubenswrapper[4793]: I0217 20:09:51.552809 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:51 crc kubenswrapper[4793]: I0217 20:09:51.552819 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:51Z","lastTransitionTime":"2026-02-17T20:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:51 crc kubenswrapper[4793]: I0217 20:09:51.655043 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:51 crc kubenswrapper[4793]: I0217 20:09:51.655100 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:51 crc kubenswrapper[4793]: I0217 20:09:51.655121 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:51 crc kubenswrapper[4793]: I0217 20:09:51.655147 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:51 crc kubenswrapper[4793]: I0217 20:09:51.655166 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:51Z","lastTransitionTime":"2026-02-17T20:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:51 crc kubenswrapper[4793]: I0217 20:09:51.757709 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:51 crc kubenswrapper[4793]: I0217 20:09:51.757739 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:51 crc kubenswrapper[4793]: I0217 20:09:51.757749 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:51 crc kubenswrapper[4793]: I0217 20:09:51.757761 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:51 crc kubenswrapper[4793]: I0217 20:09:51.757770 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:51Z","lastTransitionTime":"2026-02-17T20:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:51 crc kubenswrapper[4793]: I0217 20:09:51.860282 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:51 crc kubenswrapper[4793]: I0217 20:09:51.860339 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:51 crc kubenswrapper[4793]: I0217 20:09:51.860357 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:51 crc kubenswrapper[4793]: I0217 20:09:51.860380 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:51 crc kubenswrapper[4793]: I0217 20:09:51.860397 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:51Z","lastTransitionTime":"2026-02-17T20:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:51 crc kubenswrapper[4793]: I0217 20:09:51.963439 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:51 crc kubenswrapper[4793]: I0217 20:09:51.963503 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:51 crc kubenswrapper[4793]: I0217 20:09:51.963515 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:51 crc kubenswrapper[4793]: I0217 20:09:51.963531 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:51 crc kubenswrapper[4793]: I0217 20:09:51.963541 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:51Z","lastTransitionTime":"2026-02-17T20:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.066047 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.066085 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.066095 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.066111 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.066122 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:52Z","lastTransitionTime":"2026-02-17T20:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.168627 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.168666 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.168677 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.168710 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.168723 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:52Z","lastTransitionTime":"2026-02-17T20:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.270733 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.270759 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.270766 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.270798 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.270807 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:52Z","lastTransitionTime":"2026-02-17T20:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.372833 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.372894 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.372919 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.372948 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.372969 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:52Z","lastTransitionTime":"2026-02-17T20:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.475564 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.475610 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.475622 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.475640 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.475653 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:52Z","lastTransitionTime":"2026-02-17T20:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.492945 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.493073 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.493200 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.493245 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.493310 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:52Z","lastTransitionTime":"2026-02-17T20:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.512393 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 21:25:46.427393327 +0000 UTC Feb 17 20:09:52 crc kubenswrapper[4793]: E0217 20:09:52.513431 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9899edd8-40d9-4d30-b914-dcde4645fb8b\\\",
\\\"systemUUID\\\":\\\"6761f953-4396-4ffc-8ccb-0bad99a4cc8e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:52Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.517186 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.517237 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.517247 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.517265 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.517277 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:52Z","lastTransitionTime":"2026-02-17T20:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:52 crc kubenswrapper[4793]: E0217 20:09:52.529254 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9899edd8-40d9-4d30-b914-dcde4645fb8b\\\",\\\"systemUUID\\\":\\\"6761f953-4396-4ffc-8ccb-0bad99a4cc8e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:52Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.533046 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.533118 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.533135 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.533156 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.533173 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:52Z","lastTransitionTime":"2026-02-17T20:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.538078 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:09:52 crc kubenswrapper[4793]: E0217 20:09:52.538247 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6trvs" podUID="0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd" Feb 17 20:09:52 crc kubenswrapper[4793]: E0217 20:09:52.546801 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9899edd8-40d9-4d30-b914-dcde4645fb8b\\\",\\\"systemUUID\\\":\\\"6761f953-4396-4ffc-8ccb-0bad99a4cc8e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:52Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.550722 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.550773 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.550791 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.550814 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.550831 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:52Z","lastTransitionTime":"2026-02-17T20:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:52 crc kubenswrapper[4793]: E0217 20:09:52.568967 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9899edd8-40d9-4d30-b914-dcde4645fb8b\\\",\\\"systemUUID\\\":\\\"6761f953-4396-4ffc-8ccb-0bad99a4cc8e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:52Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.573166 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.573202 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.573214 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.573233 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.573244 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:52Z","lastTransitionTime":"2026-02-17T20:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:52 crc kubenswrapper[4793]: E0217 20:09:52.588956 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:09:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9899edd8-40d9-4d30-b914-dcde4645fb8b\\\",\\\"systemUUID\\\":\\\"6761f953-4396-4ffc-8ccb-0bad99a4cc8e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:52Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:52 crc kubenswrapper[4793]: E0217 20:09:52.589302 4793 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.590642 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.590668 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.590679 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.590718 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.590730 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:52Z","lastTransitionTime":"2026-02-17T20:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.692751 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.692813 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.692827 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.692841 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.692852 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:52Z","lastTransitionTime":"2026-02-17T20:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.795325 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.795353 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.795362 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.795375 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.795385 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:52Z","lastTransitionTime":"2026-02-17T20:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.897653 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.897679 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.897702 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.897714 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.897724 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:52Z","lastTransitionTime":"2026-02-17T20:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.999352 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.999550 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.999635 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.999716 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:52 crc kubenswrapper[4793]: I0217 20:09:52.999780 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:52Z","lastTransitionTime":"2026-02-17T20:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:53 crc kubenswrapper[4793]: I0217 20:09:53.102863 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:53 crc kubenswrapper[4793]: I0217 20:09:53.102924 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:53 crc kubenswrapper[4793]: I0217 20:09:53.102943 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:53 crc kubenswrapper[4793]: I0217 20:09:53.102972 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:53 crc kubenswrapper[4793]: I0217 20:09:53.102989 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:53Z","lastTransitionTime":"2026-02-17T20:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:53 crc kubenswrapper[4793]: I0217 20:09:53.205333 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:53 crc kubenswrapper[4793]: I0217 20:09:53.205393 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:53 crc kubenswrapper[4793]: I0217 20:09:53.205407 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:53 crc kubenswrapper[4793]: I0217 20:09:53.205423 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:53 crc kubenswrapper[4793]: I0217 20:09:53.205436 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:53Z","lastTransitionTime":"2026-02-17T20:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:53 crc kubenswrapper[4793]: I0217 20:09:53.307419 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:53 crc kubenswrapper[4793]: I0217 20:09:53.307455 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:53 crc kubenswrapper[4793]: I0217 20:09:53.307466 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:53 crc kubenswrapper[4793]: I0217 20:09:53.307483 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:53 crc kubenswrapper[4793]: I0217 20:09:53.307494 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:53Z","lastTransitionTime":"2026-02-17T20:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:53 crc kubenswrapper[4793]: I0217 20:09:53.409835 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:53 crc kubenswrapper[4793]: I0217 20:09:53.409876 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:53 crc kubenswrapper[4793]: I0217 20:09:53.409890 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:53 crc kubenswrapper[4793]: I0217 20:09:53.409906 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:53 crc kubenswrapper[4793]: I0217 20:09:53.409917 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:53Z","lastTransitionTime":"2026-02-17T20:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:53 crc kubenswrapper[4793]: I0217 20:09:53.512088 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:53 crc kubenswrapper[4793]: I0217 20:09:53.512131 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:53 crc kubenswrapper[4793]: I0217 20:09:53.512139 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:53 crc kubenswrapper[4793]: I0217 20:09:53.512172 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:53 crc kubenswrapper[4793]: I0217 20:09:53.512182 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:53Z","lastTransitionTime":"2026-02-17T20:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:53 crc kubenswrapper[4793]: I0217 20:09:53.512500 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 13:28:25.896569614 +0000 UTC Feb 17 20:09:53 crc kubenswrapper[4793]: I0217 20:09:53.538453 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:09:53 crc kubenswrapper[4793]: I0217 20:09:53.538465 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:09:53 crc kubenswrapper[4793]: I0217 20:09:53.538495 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:09:53 crc kubenswrapper[4793]: E0217 20:09:53.538984 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 20:09:53 crc kubenswrapper[4793]: E0217 20:09:53.539133 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 20:09:53 crc kubenswrapper[4793]: E0217 20:09:53.539209 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 20:09:53 crc kubenswrapper[4793]: I0217 20:09:53.539591 4793 scope.go:117] "RemoveContainer" containerID="743c88867ec1e5ac6d851ff30fd6847d64369d2862ffe7406e9dcd0248d97d4a" Feb 17 20:09:53 crc kubenswrapper[4793]: E0217 20:09:53.539870 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-n2fmv_openshift-ovn-kubernetes(4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" podUID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" Feb 17 20:09:53 crc kubenswrapper[4793]: I0217 20:09:53.614738 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:53 crc kubenswrapper[4793]: I0217 20:09:53.614770 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:53 crc kubenswrapper[4793]: I0217 20:09:53.614779 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:53 crc kubenswrapper[4793]: I0217 20:09:53.614791 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:53 crc kubenswrapper[4793]: I0217 20:09:53.614799 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:53Z","lastTransitionTime":"2026-02-17T20:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:53 crc kubenswrapper[4793]: I0217 20:09:53.717643 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:53 crc kubenswrapper[4793]: I0217 20:09:53.717677 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:53 crc kubenswrapper[4793]: I0217 20:09:53.717709 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:53 crc kubenswrapper[4793]: I0217 20:09:53.717725 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:53 crc kubenswrapper[4793]: I0217 20:09:53.717736 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:53Z","lastTransitionTime":"2026-02-17T20:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:53 crc kubenswrapper[4793]: I0217 20:09:53.819663 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:53 crc kubenswrapper[4793]: I0217 20:09:53.819721 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:53 crc kubenswrapper[4793]: I0217 20:09:53.819729 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:53 crc kubenswrapper[4793]: I0217 20:09:53.819743 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:53 crc kubenswrapper[4793]: I0217 20:09:53.819755 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:53Z","lastTransitionTime":"2026-02-17T20:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:53 crc kubenswrapper[4793]: I0217 20:09:53.923246 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:53 crc kubenswrapper[4793]: I0217 20:09:53.923292 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:53 crc kubenswrapper[4793]: I0217 20:09:53.923304 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:53 crc kubenswrapper[4793]: I0217 20:09:53.923323 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:53 crc kubenswrapper[4793]: I0217 20:09:53.923337 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:53Z","lastTransitionTime":"2026-02-17T20:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:54 crc kubenswrapper[4793]: I0217 20:09:54.024968 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:54 crc kubenswrapper[4793]: I0217 20:09:54.025011 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:54 crc kubenswrapper[4793]: I0217 20:09:54.025023 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:54 crc kubenswrapper[4793]: I0217 20:09:54.025037 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:54 crc kubenswrapper[4793]: I0217 20:09:54.025047 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:54Z","lastTransitionTime":"2026-02-17T20:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:54 crc kubenswrapper[4793]: I0217 20:09:54.126929 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:54 crc kubenswrapper[4793]: I0217 20:09:54.126960 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:54 crc kubenswrapper[4793]: I0217 20:09:54.126967 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:54 crc kubenswrapper[4793]: I0217 20:09:54.126979 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:54 crc kubenswrapper[4793]: I0217 20:09:54.126988 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:54Z","lastTransitionTime":"2026-02-17T20:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:54 crc kubenswrapper[4793]: I0217 20:09:54.229225 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:54 crc kubenswrapper[4793]: I0217 20:09:54.229260 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:54 crc kubenswrapper[4793]: I0217 20:09:54.229281 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:54 crc kubenswrapper[4793]: I0217 20:09:54.229293 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:54 crc kubenswrapper[4793]: I0217 20:09:54.229301 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:54Z","lastTransitionTime":"2026-02-17T20:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:54 crc kubenswrapper[4793]: I0217 20:09:54.331748 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:54 crc kubenswrapper[4793]: I0217 20:09:54.331814 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:54 crc kubenswrapper[4793]: I0217 20:09:54.331832 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:54 crc kubenswrapper[4793]: I0217 20:09:54.331859 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:54 crc kubenswrapper[4793]: I0217 20:09:54.331877 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:54Z","lastTransitionTime":"2026-02-17T20:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:54 crc kubenswrapper[4793]: I0217 20:09:54.433642 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:54 crc kubenswrapper[4793]: I0217 20:09:54.433676 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:54 crc kubenswrapper[4793]: I0217 20:09:54.433715 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:54 crc kubenswrapper[4793]: I0217 20:09:54.433732 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:54 crc kubenswrapper[4793]: I0217 20:09:54.433741 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:54Z","lastTransitionTime":"2026-02-17T20:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:54 crc kubenswrapper[4793]: I0217 20:09:54.513270 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 10:23:37.935647303 +0000 UTC Feb 17 20:09:54 crc kubenswrapper[4793]: I0217 20:09:54.535975 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:54 crc kubenswrapper[4793]: I0217 20:09:54.536005 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:54 crc kubenswrapper[4793]: I0217 20:09:54.536015 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:54 crc kubenswrapper[4793]: I0217 20:09:54.536029 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:54 crc kubenswrapper[4793]: I0217 20:09:54.536038 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:54Z","lastTransitionTime":"2026-02-17T20:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:54 crc kubenswrapper[4793]: I0217 20:09:54.538216 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:09:54 crc kubenswrapper[4793]: E0217 20:09:54.538319 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6trvs" podUID="0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd" Feb 17 20:09:54 crc kubenswrapper[4793]: I0217 20:09:54.638168 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:54 crc kubenswrapper[4793]: I0217 20:09:54.638232 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:54 crc kubenswrapper[4793]: I0217 20:09:54.638249 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:54 crc kubenswrapper[4793]: I0217 20:09:54.638271 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:54 crc kubenswrapper[4793]: I0217 20:09:54.638287 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:54Z","lastTransitionTime":"2026-02-17T20:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:54 crc kubenswrapper[4793]: I0217 20:09:54.740790 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:54 crc kubenswrapper[4793]: I0217 20:09:54.740863 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:54 crc kubenswrapper[4793]: I0217 20:09:54.740887 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:54 crc kubenswrapper[4793]: I0217 20:09:54.740911 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:54 crc kubenswrapper[4793]: I0217 20:09:54.740934 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:54Z","lastTransitionTime":"2026-02-17T20:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:54 crc kubenswrapper[4793]: I0217 20:09:54.843753 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:54 crc kubenswrapper[4793]: I0217 20:09:54.843812 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:54 crc kubenswrapper[4793]: I0217 20:09:54.843850 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:54 crc kubenswrapper[4793]: I0217 20:09:54.843879 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:54 crc kubenswrapper[4793]: I0217 20:09:54.843900 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:54Z","lastTransitionTime":"2026-02-17T20:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:54 crc kubenswrapper[4793]: I0217 20:09:54.947616 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:54 crc kubenswrapper[4793]: I0217 20:09:54.947643 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:54 crc kubenswrapper[4793]: I0217 20:09:54.947651 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:54 crc kubenswrapper[4793]: I0217 20:09:54.947663 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:54 crc kubenswrapper[4793]: I0217 20:09:54.947672 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:54Z","lastTransitionTime":"2026-02-17T20:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.050606 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.050647 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.050660 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.050676 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.050703 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:55Z","lastTransitionTime":"2026-02-17T20:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.122850 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd-metrics-certs\") pod \"network-metrics-daemon-6trvs\" (UID: \"0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd\") " pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:09:55 crc kubenswrapper[4793]: E0217 20:09:55.123049 4793 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 20:09:55 crc kubenswrapper[4793]: E0217 20:09:55.123127 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd-metrics-certs podName:0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd nodeName:}" failed. No retries permitted until 2026-02-17 20:10:27.123106825 +0000 UTC m=+102.414805196 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd-metrics-certs") pod "network-metrics-daemon-6trvs" (UID: "0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.153218 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.153246 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.153254 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.153267 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.153276 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:55Z","lastTransitionTime":"2026-02-17T20:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.256002 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.256035 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.256044 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.256055 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.256064 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:55Z","lastTransitionTime":"2026-02-17T20:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.358956 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.359086 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.359105 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.359127 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.359144 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:55Z","lastTransitionTime":"2026-02-17T20:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.462127 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.462170 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.462182 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.462201 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.462212 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:55Z","lastTransitionTime":"2026-02-17T20:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.513552 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 16:38:07.694852918 +0000 UTC Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.538548 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.538606 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.538545 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:09:55 crc kubenswrapper[4793]: E0217 20:09:55.538680 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 20:09:55 crc kubenswrapper[4793]: E0217 20:09:55.538845 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 20:09:55 crc kubenswrapper[4793]: E0217 20:09:55.538974 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.552875 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a786034-a3c6-4693-965a-3bd39bce6caa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c1911a561802d8e92f91e2ca7448754e32dcb5f18cae0906dc070e1d752be38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":
\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c883db680c5e015621e98e5b3c06351d9dd6d33ea28b502241344af8df0fd1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:55Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.565315 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:55 crc kubenswrapper[4793]: 
I0217 20:09:55.565362 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.565379 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.565401 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.565420 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:55Z","lastTransitionTime":"2026-02-17T20:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.567677 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ztwxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c5b745e67f1da59c6b1df97c69bfcbfdc00f4b51a6788751079465a5ce6c177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ndkg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ztwxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:55Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.583025 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf61cc2856b360d3341089d4472c27f8dcf6c7c55831eea511dcf5a931b7a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-17T20:09:55Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.599991 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cad772-a024-49d5-8204-560587b056eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6bf4bf0fcbe5648621dada82af560bd7eac65ce800c8974c59d777f3dc6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57a18cf839c267df5d9a6d343fbc38dbe195fad6d9fff89663d0958a0492a335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe1dc649f7ca667a9fa57e0f96921b449396310fdec0d23dd39d5695ff3c0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cee203033ea02d7ee023522404b8632f64c130e254a84ee96c37222561e4ac1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:55Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.615634 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:55Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.629944 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48cdc9966494a4fb1df14123aaf12571b1d71f9d0349963dd1b3719966abd20c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2071f67c2f1c343d9ad571fe62df1e820675df683aa583fa2563de751f2897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:55Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.643613 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kpl4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61219b87-834d-490d-bf8e-1657a4081739\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6b9b20c5ff10a449eac5b6396cbcab582a5bbf692d7c9ac880e3cdfa3a63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43cff
5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43cff5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57bb0575dd8e0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b57bb0575dd8e0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:14Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a556ed90653e74034af578772bca81976205e4bba300489eb17d80cdd84989f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a556ed90653e74034af578772bca81976205e4bba300489eb17d80cdd84989f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kpl4r\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:55Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.655557 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:55Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.665712 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cl2v7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b672111-22f9-4dd8-b116-385907278ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39feb0d026694f42ac0934ebdf1749f25d62c1fec2ae68e9af9189fb4cbfe99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2j42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7694ca2ced8356ef8dc44687cd124ebd50f8c
2916a8c27619b2ab970bcb31d5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2j42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cl2v7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:55Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.672432 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.672573 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.672648 4793 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.672748 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.672830 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:55Z","lastTransitionTime":"2026-02-17T20:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.676884 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6trvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q9nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q9nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6trvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:55Z is after 2025-08-24T17:21:41Z" Feb 
17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.687547 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:55Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.701269 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9208cc24-8e8d-4027-af06-291b3b46ee59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f3722f78f83475af38958c1c6a4b0593a4c51be7d8dcbd0d32d3b068dee61b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T20:08:59Z\\\"
,\\\"message\\\":\\\"W0217 20:08:48.786864 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 20:08:48.787257 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771358928 cert, and key in /tmp/serving-cert-1841715996/serving-signer.crt, /tmp/serving-cert-1841715996/serving-signer.key\\\\nI0217 20:08:49.003324 1 observer_polling.go:159] Starting file observer\\\\nW0217 20:08:49.008336 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 20:08:49.008526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 20:08:49.009879 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1841715996/tls.crt::/tmp/serving-cert-1841715996/tls.key\\\\\\\"\\\\nF0217 20:08:59.398239 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:55Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.710357 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6af04421-9450-4d4d-b153-947396a3632b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8527c428573648481388a1d99e88a9c73658862866d5f6e36691a0b36868c7d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472024
3b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b96eb4ff437a9dba392060ab762c6d87450a24f0dcc50ef6f90dfa54aa66e22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958dffa9d0fbd97a0ee01a79d9623a298e9ce287b43e1347b4edeb0cfe175ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02
-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e25a10a183950aa779c83aae5f4fe88a4e27d2ff7bad883e1e222400a16b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24e25a10a183950aa779c83aae5f4fe88a4e27d2ff7bad883e1e222400a16b44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:55Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.721322 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a159b0be901199a25ed9008e15370b1a4bf1858a4b20bfb4b6b7736f1f3d42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:55Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.731287 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9pkqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db67d891-29db-4ee4-a70c-624cb9af6677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d09e75a7c6bcf350e2d5833b1a9b6679042aa5d6699c1194c32320c5f0c8d208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv9g8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9pkqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:55Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.752472 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3403f83a9b35ceeb1a09a45b62e25e74a10d89d309f00819598790ba6c84a0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e4375a429735c20aa6093586974f36afdd51b82d65270a693d14aefe0b0ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ov
n-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a66eb3dc7a2ab1a3e5e720718416aba285d92566aa472515bc983f0794f48314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61506b0fdf8d54ab45b280e5cf461ab69bc233211c90bc60b339eef643ebb058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b08b2c72ecc14b8e31ad5d7fe12851c01bf38ff98fe406dc26218d892ee9f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09885fdf835300247d4a89f7e2ded3ab350786be47130106f4cc679a5477c451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://743c88867ec1e5ac6d851ff30fd6847d64369d2862ffe7406e9dcd0248d97d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743c88867ec1e5ac6d851ff30fd6847d64369d2862ffe7406e9dcd0248d97d4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T20:09:36Z\\\",\\\"message\\\":\\\"e it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:36Z is after 2025-08-24T17:21:41Z]\\\\nI0217 20:09:36.727000 6427 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-jnwtf\\\\nI0217 20:09:36.727006 6427 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-jnwtf\\\\nI0217 20:09:36.727011 6427 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-jnwtf in node crc\\\\nI0217 20:09:36.727015 6427 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-jnwtf after 0 failed attempt(s)\\\\nI0217 20:09:36.727020 6427 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-jnwtf\\\\nI0217 20:09:36.727027 6427 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-z9xgq\\\\nI0217 20:09:36.727030 6427 base_network\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n2fmv_openshift-ovn-kubernetes(4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3161d31a16ce66a75b2928487822f0c1e627deb1564e7f792e96420ebde1ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6af
bcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n2fmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:55Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.761396 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z9xgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3073cc3b-c430-444d-b751-5e1416feafa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51236e50a5c628e38556b1fc91434f66c7a2e8885a331b358825bcea2c3d9ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z9xgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:55Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.777640 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.777707 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.777722 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.777740 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.777754 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:55Z","lastTransitionTime":"2026-02-17T20:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.781336 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c8d46b-bddf-41fd-bfe9-829889b911c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3405184556f0b16f2610e42360345bb094d79fe3d713f9dec7c6d28a5124851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d8c33489e77add820819b72d67bd6eca73a11e3fd4a99ae4c5c0136f6257c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a42458e37611ca7ec204f13fd4703a5b1e01189a162f0180be137d614a0379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982055260eb2f38b638b6ee268993806f9a0bf999f21b065659a64144394bc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16cf925bf79279e3553a50dd773c3e6a88f3e8e86837c457c43f1b82c6fd291b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:55Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.880141 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.880188 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.880196 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.880215 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.880226 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:55Z","lastTransitionTime":"2026-02-17T20:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.982116 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.982148 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.982157 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.982169 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:55 crc kubenswrapper[4793]: I0217 20:09:55.982178 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:55Z","lastTransitionTime":"2026-02-17T20:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:56 crc kubenswrapper[4793]: I0217 20:09:56.084596 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:56 crc kubenswrapper[4793]: I0217 20:09:56.084663 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:56 crc kubenswrapper[4793]: I0217 20:09:56.084674 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:56 crc kubenswrapper[4793]: I0217 20:09:56.084719 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:56 crc kubenswrapper[4793]: I0217 20:09:56.084739 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:56Z","lastTransitionTime":"2026-02-17T20:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:56 crc kubenswrapper[4793]: I0217 20:09:56.186555 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:56 crc kubenswrapper[4793]: I0217 20:09:56.186596 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:56 crc kubenswrapper[4793]: I0217 20:09:56.186605 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:56 crc kubenswrapper[4793]: I0217 20:09:56.186620 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:56 crc kubenswrapper[4793]: I0217 20:09:56.186630 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:56Z","lastTransitionTime":"2026-02-17T20:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:56 crc kubenswrapper[4793]: I0217 20:09:56.289200 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:56 crc kubenswrapper[4793]: I0217 20:09:56.289234 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:56 crc kubenswrapper[4793]: I0217 20:09:56.289243 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:56 crc kubenswrapper[4793]: I0217 20:09:56.289257 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:56 crc kubenswrapper[4793]: I0217 20:09:56.289267 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:56Z","lastTransitionTime":"2026-02-17T20:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:56 crc kubenswrapper[4793]: I0217 20:09:56.391512 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:56 crc kubenswrapper[4793]: I0217 20:09:56.391548 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:56 crc kubenswrapper[4793]: I0217 20:09:56.391559 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:56 crc kubenswrapper[4793]: I0217 20:09:56.391576 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:56 crc kubenswrapper[4793]: I0217 20:09:56.391591 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:56Z","lastTransitionTime":"2026-02-17T20:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:56 crc kubenswrapper[4793]: I0217 20:09:56.495395 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:56 crc kubenswrapper[4793]: I0217 20:09:56.495461 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:56 crc kubenswrapper[4793]: I0217 20:09:56.495479 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:56 crc kubenswrapper[4793]: I0217 20:09:56.495515 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:56 crc kubenswrapper[4793]: I0217 20:09:56.495533 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:56Z","lastTransitionTime":"2026-02-17T20:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:56 crc kubenswrapper[4793]: I0217 20:09:56.513810 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 07:58:18.914359362 +0000 UTC Feb 17 20:09:56 crc kubenswrapper[4793]: I0217 20:09:56.538516 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:09:56 crc kubenswrapper[4793]: E0217 20:09:56.538663 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6trvs" podUID="0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd" Feb 17 20:09:56 crc kubenswrapper[4793]: I0217 20:09:56.598405 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:56 crc kubenswrapper[4793]: I0217 20:09:56.598464 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:56 crc kubenswrapper[4793]: I0217 20:09:56.598481 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:56 crc kubenswrapper[4793]: I0217 20:09:56.598508 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:56 crc kubenswrapper[4793]: I0217 20:09:56.598525 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:56Z","lastTransitionTime":"2026-02-17T20:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:56 crc kubenswrapper[4793]: I0217 20:09:56.700890 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:56 crc kubenswrapper[4793]: I0217 20:09:56.700925 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:56 crc kubenswrapper[4793]: I0217 20:09:56.700935 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:56 crc kubenswrapper[4793]: I0217 20:09:56.700950 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:56 crc kubenswrapper[4793]: I0217 20:09:56.700961 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:56Z","lastTransitionTime":"2026-02-17T20:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:56 crc kubenswrapper[4793]: I0217 20:09:56.803364 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:56 crc kubenswrapper[4793]: I0217 20:09:56.803412 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:56 crc kubenswrapper[4793]: I0217 20:09:56.803421 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:56 crc kubenswrapper[4793]: I0217 20:09:56.803433 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:56 crc kubenswrapper[4793]: I0217 20:09:56.803442 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:56Z","lastTransitionTime":"2026-02-17T20:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:56 crc kubenswrapper[4793]: I0217 20:09:56.905622 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:56 crc kubenswrapper[4793]: I0217 20:09:56.905651 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:56 crc kubenswrapper[4793]: I0217 20:09:56.905659 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:56 crc kubenswrapper[4793]: I0217 20:09:56.905673 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:56 crc kubenswrapper[4793]: I0217 20:09:56.905697 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:56Z","lastTransitionTime":"2026-02-17T20:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:56 crc kubenswrapper[4793]: I0217 20:09:56.984786 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ztwxl_b2b13cca-b775-4fc5-8ad8-41bfd70c857c/kube-multus/0.log" Feb 17 20:09:56 crc kubenswrapper[4793]: I0217 20:09:56.984834 4793 generic.go:334] "Generic (PLEG): container finished" podID="b2b13cca-b775-4fc5-8ad8-41bfd70c857c" containerID="0c5b745e67f1da59c6b1df97c69bfcbfdc00f4b51a6788751079465a5ce6c177" exitCode=1 Feb 17 20:09:56 crc kubenswrapper[4793]: I0217 20:09:56.984866 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ztwxl" event={"ID":"b2b13cca-b775-4fc5-8ad8-41bfd70c857c","Type":"ContainerDied","Data":"0c5b745e67f1da59c6b1df97c69bfcbfdc00f4b51a6788751079465a5ce6c177"} Feb 17 20:09:56 crc kubenswrapper[4793]: I0217 20:09:56.985210 4793 scope.go:117] "RemoveContainer" containerID="0c5b745e67f1da59c6b1df97c69bfcbfdc00f4b51a6788751079465a5ce6c177" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.000911 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:56Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.007265 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.007289 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.007297 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 
20:09:57.007310 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.007322 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:57Z","lastTransitionTime":"2026-02-17T20:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.018962 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48cdc9966494a4fb1df14123aaf12571b1d71f9d0349963dd1b3719966abd20c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2071f67c2f1c343d9ad571fe62df1e820675df683aa583fa2563de751f2897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:57Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.040310 4793 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kpl4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61219b87-834d-490d-bf8e-1657a4081739\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6b9b20c5ff10a449eac5b6396cbcab582a5bbf692d7c9ac880e3cdfa3a63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.16
8.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"exit
Code\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43cff5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43cff5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57bb0575dd8e0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b57bb0575dd8e
0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a556ed90653e74034af578772bca81976205e4bba300489eb17d80cdd84989f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a556ed90653e74034af578772bca81976205e4bba300489eb17d80cdd84989f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kpl4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:57Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.059858 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cad772-a024-49d5-8204-560587b056eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6bf4bf0fcbe5648621dada82af560bd7eac65ce800c8974c59d777f3dc6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57a18cf839c267df5d9a6d343fbc38dbe195fad6d9fff89663d0958a0492a335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe1dc649f7ca667a9fa57e0f96921b449396310fdec0d23dd39d5695ff3c0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cee203033ea02d7ee023522404b8632f64c130e254a84ee96c37222561e4ac1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:57Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.077678 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cl2v7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b672111-22f9-4dd8-b116-385907278ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39feb0d026694f42ac0934ebdf1749f25d62c1fec2ae68e9af9189fb4cbfe99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2j42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7694ca2ced8356ef8dc44687cd124ebd50f8c
2916a8c27619b2ab970bcb31d5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2j42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cl2v7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:57Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.092182 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6trvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q9nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q9nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6trvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:57Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:57 crc 
kubenswrapper[4793]: I0217 20:09:57.109683 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.109777 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.109797 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.109824 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.109846 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:57Z","lastTransitionTime":"2026-02-17T20:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.111099 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:57Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.125596 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:57Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.142513 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6af04421-9450-4d4d-b153-947396a3632b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8527c428573648481388a1d99e88a9c73658862866d5f6e36691a0b36868c7d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b96eb4ff437a9dba392060ab762c6d87450a24f0dcc50ef6f90dfa54aa66e22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958dffa9d0fbd97a0ee01a79d9623a298e9ce287b43e1347b4edeb0cfe175ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e25a10a183950aa779c83aae5f4fe88a4e27d2ff7bad883e1e222400a16b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://24e25a10a183950aa779c83aae5f4fe88a4e27d2ff7bad883e1e222400a16b44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:57Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.163529 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a159b0be901199a25ed9008e15370b1a4bf1858a4b20bfb4b6b7736f1f3d42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:57Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.175552 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9pkqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db67d891-29db-4ee4-a70c-624cb9af6677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d09e75a7c6bcf350e2d5833b1a9b6679042aa5d6699c1194c32320c5f0c8d208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv9g8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9pkqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:57Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.196121 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3403f83a9b35ceeb1a09a45b62e25e74a10d89d309f00819598790ba6c84a0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e4375a429735c20aa6093586974f36afdd51b82d65270a693d14aefe0b0ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ov
n-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a66eb3dc7a2ab1a3e5e720718416aba285d92566aa472515bc983f0794f48314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61506b0fdf8d54ab45b280e5cf461ab69bc233211c90bc60b339eef643ebb058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b08b2c72ecc14b8e31ad5d7fe12851c01bf38ff98fe406dc26218d892ee9f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09885fdf835300247d4a89f7e2ded3ab350786be47130106f4cc679a5477c451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://743c88867ec1e5ac6d851ff30fd6847d64369d2862ffe7406e9dcd0248d97d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743c88867ec1e5ac6d851ff30fd6847d64369d2862ffe7406e9dcd0248d97d4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T20:09:36Z\\\",\\\"message\\\":\\\"e it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:36Z is after 2025-08-24T17:21:41Z]\\\\nI0217 20:09:36.727000 6427 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-jnwtf\\\\nI0217 20:09:36.727006 6427 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-jnwtf\\\\nI0217 20:09:36.727011 6427 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-jnwtf in node crc\\\\nI0217 20:09:36.727015 6427 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-jnwtf after 0 failed attempt(s)\\\\nI0217 20:09:36.727020 6427 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-jnwtf\\\\nI0217 20:09:36.727027 6427 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-z9xgq\\\\nI0217 20:09:36.727030 6427 base_network\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n2fmv_openshift-ovn-kubernetes(4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3161d31a16ce66a75b2928487822f0c1e627deb1564e7f792e96420ebde1ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6af
bcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n2fmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:57Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.206271 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z9xgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3073cc3b-c430-444d-b751-5e1416feafa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51236e50a5c628e38556b1fc91434f66c7a2e8885a331b358825bcea2c3d9ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z9xgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:57Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.211899 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.211932 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.211941 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.211956 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.211964 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:57Z","lastTransitionTime":"2026-02-17T20:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.228008 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c8d46b-bddf-41fd-bfe9-829889b911c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3405184556f0b16f2610e42360345bb094d79fe3d713f9dec7c6d28a5124851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d8c33489e77add820819b72d67bd6eca73a11e3fd4a99ae4c5c0136f6257c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a42458e37611ca7ec204f13fd4703a5b1e01189a162f0180be137d614a0379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982055260eb2f38b638b6ee268993806f9a0bf999f21b065659a64144394bc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16cf925bf79279e3553a50dd773c3e6a88f3e8e86837c457c43f1b82c6fd291b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:57Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.247089 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9208cc24-8e8d-4027-af06-291b3b46ee59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f3722f78f83475af38958c1c6a4b0593a4c51be7d8dcbd0d32d3b068dee61b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T20:08:59Z\\\"
,\\\"message\\\":\\\"W0217 20:08:48.786864 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 20:08:48.787257 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771358928 cert, and key in /tmp/serving-cert-1841715996/serving-signer.crt, /tmp/serving-cert-1841715996/serving-signer.key\\\\nI0217 20:08:49.003324 1 observer_polling.go:159] Starting file observer\\\\nW0217 20:08:49.008336 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 20:08:49.008526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 20:08:49.009879 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1841715996/tls.crt::/tmp/serving-cert-1841715996/tls.key\\\\\\\"\\\\nF0217 20:08:59.398239 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:57Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.265747 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ztwxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c5b745e67f1da59c6b1df97c69bfcbfdc00f4b51a6788751079465a5ce6c177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c5b745e67f1da59c6b1df97c69bfcbfdc00f4b51a6788751079465a5ce6c177\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T20:09:56Z\\\",\\\"message\\\":\\\"2026-02-17T20:09:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a776b044-a795-4290-99bd-d80c514046aa\\\\n2026-02-17T20:09:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a776b044-a795-4290-99bd-d80c514046aa to /host/opt/cni/bin/\\\\n2026-02-17T20:09:11Z [verbose] multus-daemon started\\\\n2026-02-17T20:09:11Z [verbose] Readiness Indicator file check\\\\n2026-02-17T20:09:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ndkg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ztwxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:57Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.283859 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf61cc2856b360d3341089d4472c27f8dcf6c7c55831eea511dcf5a931b7a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mou
ntPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:57Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.301737 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a786034-a3c6-4693-965a-3bd39bce6caa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c1911a561802d8e92f91e2ca7448754e32dcb5f18cae0906dc070e1d752be38\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c883db680c5e015621e98e5b3c06351d9dd6d33ea28b502241344af8df0fd1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-jnwtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:57Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.315014 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.315087 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.315097 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.315111 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.315120 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:57Z","lastTransitionTime":"2026-02-17T20:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.418114 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.418150 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.418158 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.418171 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.418179 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:57Z","lastTransitionTime":"2026-02-17T20:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.514818 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 07:29:02.619884336 +0000 UTC Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.521072 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.521119 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.521135 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.521157 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.521170 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:57Z","lastTransitionTime":"2026-02-17T20:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.538450 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.538521 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.538460 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:09:57 crc kubenswrapper[4793]: E0217 20:09:57.538643 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 20:09:57 crc kubenswrapper[4793]: E0217 20:09:57.538580 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 20:09:57 crc kubenswrapper[4793]: E0217 20:09:57.538785 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.623363 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.623395 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.623403 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.623415 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.623424 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:57Z","lastTransitionTime":"2026-02-17T20:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.725857 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.725926 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.725945 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.725975 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.725992 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:57Z","lastTransitionTime":"2026-02-17T20:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.828833 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.828871 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.828883 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.828897 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.828911 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:57Z","lastTransitionTime":"2026-02-17T20:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.931428 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.931488 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.931511 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.931543 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.931561 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:57Z","lastTransitionTime":"2026-02-17T20:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.989816 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ztwxl_b2b13cca-b775-4fc5-8ad8-41bfd70c857c/kube-multus/0.log" Feb 17 20:09:57 crc kubenswrapper[4793]: I0217 20:09:57.989875 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ztwxl" event={"ID":"b2b13cca-b775-4fc5-8ad8-41bfd70c857c","Type":"ContainerStarted","Data":"beba5ad7f813f286d6f37a7d6983827115e97d4dbcc5c17c23d9f62a2789faae"} Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.005226 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf61cc2856b360d3341089d4472c27f8dcf6c7c55831eea511dcf5a931b7a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt
\\\":\\\"2026-02-17T20:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:58Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.019516 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a786034-a3c6-4693-965a-3bd39bce6caa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c1911a561802d8e92f91e2ca7448754e32dcb5f18cae0906dc070e1d752be38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c883db680c5e015621e98e5b3c06351d9dd6d33e
a28b502241344af8df0fd1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:58Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.032803 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ztwxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beba5ad7f813f286d6f37a7d6983827115e97d4dbcc5c17c23d9f62a2789faae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c5b745e67f1da59c6b1df97c69bfcbfdc00f4b51a6788751079465a5ce6c177\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T20:09:56Z\\\",\\\"message\\\":\\\"2026-02-17T20:09:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a776b044-a795-4290-99bd-d80c514046aa\\\\n2026-02-17T20:09:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a776b044-a795-4290-99bd-d80c514046aa to /host/opt/cni/bin/\\\\n2026-02-17T20:09:11Z [verbose] multus-daemon started\\\\n2026-02-17T20:09:11Z [verbose] 
Readiness Indicator file check\\\\n2026-02-17T20:09:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ndkg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ztwxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:58Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.034065 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.034098 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.034111 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.034128 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.034139 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:58Z","lastTransitionTime":"2026-02-17T20:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.053107 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kpl4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61219b87-834d-490d-bf8e-1657a4081739\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6b9b20c5ff10a449eac5b6396cbcab582a5bbf692d7c9ac880e3cdfa3a63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43cff5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43cff5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57bb0575dd8e0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b57bb0575dd8e0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a556ed90653e74034af578772bca81976205e4bba300489eb17d80cdd84989f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a556ed90653e74034af578772bca81976205e4bba300489eb17d80cdd84989f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kpl4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:58Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.065101 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cad772-a024-49d5-8204-560587b056eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6bf4bf0fcbe5648621dada82af560bd7eac65ce800c8974c59d777f3dc6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c02
6b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57a18cf839c267df5d9a6d343fbc38dbe195fad6d9fff89663d0958a0492a335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe1dc649f7ca667a9fa57e0f96921b449396310fdec0d23dd39d5695ff3c0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\
"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cee203033ea02d7ee023522404b8632f64c130e254a84ee96c37222561e4ac1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:58Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.078177 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:58Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.092069 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48cdc9966494a4fb1df14123aaf12571b1d71f9d0349963dd1b3719966abd20c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2071f67c2f1c343d9ad571fe62df1e820675df683aa583fa2563de751f2897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:58Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.102441 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:58Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.115028 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:58Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.126258 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cl2v7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b672111-22f9-4dd8-b116-385907278ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39feb0d026694f42ac0934ebdf1749f25d62c1fec2ae68e9af9189fb4cbfe99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2j42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7694ca2ced8356ef8dc44687cd124ebd50f8c
2916a8c27619b2ab970bcb31d5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2j42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cl2v7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:58Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.136111 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6trvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q9nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q9nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6trvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:58Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:58 crc 
kubenswrapper[4793]: I0217 20:09:58.136511 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.136610 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.136676 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.136780 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.136842 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:58Z","lastTransitionTime":"2026-02-17T20:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.144761 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9pkqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db67d891-29db-4ee4-a70c-624cb9af6677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d09e75a7c6bcf350e2d5833b1a9b6679042aa5d6699c1194c32320c5f0c8d208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv9g8\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9pkqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:58Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.160328 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3403f83a9b35ceeb1a09a45b62e25e74a10d89d309f00819598790ba6c84a0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e4375a429735c20aa6093586974f36afdd51b82d65270a693d14aefe0b0ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a66eb3dc7a2ab1a3e5e720718416aba285d92566aa472515bc983f0794f48314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61506b0fdf8d54ab45b280e5cf461ab69bc233211c90bc60b339eef643ebb058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b08b2c72ecc14b8e31ad5d7fe12851c01bf38ff98fe406dc26218d892ee9f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09885fdf835300247d4a89f7e2ded3ab350786be47130106f4cc679a5477c451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://743c88867ec1e5ac6d851ff30fd6847d64369d2862ffe7406e9dcd0248d97d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743c88867ec1e5ac6d851ff30fd6847d64369d2862ffe7406e9dcd0248d97d4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T20:09:36Z\\\",\\\"message\\\":\\\"e it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:36Z is after 2025-08-24T17:21:41Z]\\\\nI0217 20:09:36.727000 6427 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-jnwtf\\\\nI0217 20:09:36.727006 6427 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-jnwtf\\\\nI0217 20:09:36.727011 6427 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-jnwtf in node crc\\\\nI0217 20:09:36.727015 6427 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-jnwtf after 0 failed attempt(s)\\\\nI0217 20:09:36.727020 6427 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-jnwtf\\\\nI0217 20:09:36.727027 6427 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-z9xgq\\\\nI0217 20:09:36.727030 6427 base_network\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n2fmv_openshift-ovn-kubernetes(4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3161d31a16ce66a75b2928487822f0c1e627deb1564e7f792e96420ebde1ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6af
bcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n2fmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:58Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.168841 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z9xgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3073cc3b-c430-444d-b751-5e1416feafa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51236e50a5c628e38556b1fc91434f66c7a2e8885a331b358825bcea2c3d9ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z9xgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:58Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.188445 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c8d46b-bddf-41fd-bfe9-829889b911c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3405184556f0b16f2610e42360345bb094d79fe3d713f9dec7c6d28a5124851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d8c33489e77add820819b72d67bd6eca73a11e3fd4a99ae4c5c0136f6257c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a42458e37611ca7ec204f13fd4703a5b1e01189a162f0180be137d614a0379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20
:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982055260eb2f38b638b6ee268993806f9a0bf999f21b065659a64144394bc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16cf925bf79279e3553a50dd773c3e6a88f3e8e86837c457c43f1b82c6fd291b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:58Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.200476 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9208cc24-8e8d-4027-af06-291b3b46ee59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f3722f78f83475af38958c1c6a4b0593a4c51be7d8dcbd0d32d3b068dee61b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T20:08:59Z\\\"
,\\\"message\\\":\\\"W0217 20:08:48.786864 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 20:08:48.787257 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771358928 cert, and key in /tmp/serving-cert-1841715996/serving-signer.crt, /tmp/serving-cert-1841715996/serving-signer.key\\\\nI0217 20:08:49.003324 1 observer_polling.go:159] Starting file observer\\\\nW0217 20:08:49.008336 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 20:08:49.008526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 20:08:49.009879 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1841715996/tls.crt::/tmp/serving-cert-1841715996/tls.key\\\\\\\"\\\\nF0217 20:08:59.398239 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:58Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.209375 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6af04421-9450-4d4d-b153-947396a3632b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8527c428573648481388a1d99e88a9c73658862866d5f6e36691a0b36868c7d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472024
3b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b96eb4ff437a9dba392060ab762c6d87450a24f0dcc50ef6f90dfa54aa66e22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958dffa9d0fbd97a0ee01a79d9623a298e9ce287b43e1347b4edeb0cfe175ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02
-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e25a10a183950aa779c83aae5f4fe88a4e27d2ff7bad883e1e222400a16b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24e25a10a183950aa779c83aae5f4fe88a4e27d2ff7bad883e1e222400a16b44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:58Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.220818 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a159b0be901199a25ed9008e15370b1a4bf1858a4b20bfb4b6b7736f1f3d42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:58Z is after 2025-08-24T17:21:41Z" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.239175 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.239201 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.239209 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.239237 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.239247 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:58Z","lastTransitionTime":"2026-02-17T20:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.341935 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.341968 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.341976 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.341989 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.341997 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:58Z","lastTransitionTime":"2026-02-17T20:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.444182 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.444210 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.444218 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.444230 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.444238 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:58Z","lastTransitionTime":"2026-02-17T20:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.515364 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 16:02:50.971076546 +0000 UTC Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.538182 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:09:58 crc kubenswrapper[4793]: E0217 20:09:58.538340 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6trvs" podUID="0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.546750 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.546813 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.546830 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.546855 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.546873 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:58Z","lastTransitionTime":"2026-02-17T20:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.648994 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.649047 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.649063 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.649085 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.649102 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:58Z","lastTransitionTime":"2026-02-17T20:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.751229 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.751271 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.751285 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.751302 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.751313 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:58Z","lastTransitionTime":"2026-02-17T20:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.853627 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.853668 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.853677 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.853704 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.853715 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:58Z","lastTransitionTime":"2026-02-17T20:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.956390 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.956450 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.956466 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.956488 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:58 crc kubenswrapper[4793]: I0217 20:09:58.956504 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:58Z","lastTransitionTime":"2026-02-17T20:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:59 crc kubenswrapper[4793]: I0217 20:09:59.058585 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:59 crc kubenswrapper[4793]: I0217 20:09:59.058659 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:59 crc kubenswrapper[4793]: I0217 20:09:59.058680 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:59 crc kubenswrapper[4793]: I0217 20:09:59.058729 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:59 crc kubenswrapper[4793]: I0217 20:09:59.058748 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:59Z","lastTransitionTime":"2026-02-17T20:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:59 crc kubenswrapper[4793]: I0217 20:09:59.160961 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:59 crc kubenswrapper[4793]: I0217 20:09:59.161020 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:59 crc kubenswrapper[4793]: I0217 20:09:59.161035 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:59 crc kubenswrapper[4793]: I0217 20:09:59.161059 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:59 crc kubenswrapper[4793]: I0217 20:09:59.161074 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:59Z","lastTransitionTime":"2026-02-17T20:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:59 crc kubenswrapper[4793]: I0217 20:09:59.263355 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:59 crc kubenswrapper[4793]: I0217 20:09:59.263414 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:59 crc kubenswrapper[4793]: I0217 20:09:59.263426 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:59 crc kubenswrapper[4793]: I0217 20:09:59.263444 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:59 crc kubenswrapper[4793]: I0217 20:09:59.263456 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:59Z","lastTransitionTime":"2026-02-17T20:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:59 crc kubenswrapper[4793]: I0217 20:09:59.366630 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:59 crc kubenswrapper[4793]: I0217 20:09:59.366681 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:59 crc kubenswrapper[4793]: I0217 20:09:59.366717 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:59 crc kubenswrapper[4793]: I0217 20:09:59.366735 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:59 crc kubenswrapper[4793]: I0217 20:09:59.366746 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:59Z","lastTransitionTime":"2026-02-17T20:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:59 crc kubenswrapper[4793]: I0217 20:09:59.468934 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:59 crc kubenswrapper[4793]: I0217 20:09:59.469069 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:59 crc kubenswrapper[4793]: I0217 20:09:59.469093 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:59 crc kubenswrapper[4793]: I0217 20:09:59.469125 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:59 crc kubenswrapper[4793]: I0217 20:09:59.469147 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:59Z","lastTransitionTime":"2026-02-17T20:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:09:59 crc kubenswrapper[4793]: I0217 20:09:59.515543 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 01:26:38.843374523 +0000 UTC Feb 17 20:09:59 crc kubenswrapper[4793]: I0217 20:09:59.537971 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:09:59 crc kubenswrapper[4793]: I0217 20:09:59.538022 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:09:59 crc kubenswrapper[4793]: I0217 20:09:59.538001 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:09:59 crc kubenswrapper[4793]: E0217 20:09:59.538162 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 20:09:59 crc kubenswrapper[4793]: E0217 20:09:59.538273 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 20:09:59 crc kubenswrapper[4793]: E0217 20:09:59.538387 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 20:09:59 crc kubenswrapper[4793]: I0217 20:09:59.571308 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:59 crc kubenswrapper[4793]: I0217 20:09:59.571379 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:59 crc kubenswrapper[4793]: I0217 20:09:59.571402 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:59 crc kubenswrapper[4793]: I0217 20:09:59.571433 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:59 crc kubenswrapper[4793]: I0217 20:09:59.571456 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:59Z","lastTransitionTime":"2026-02-17T20:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:59 crc kubenswrapper[4793]: I0217 20:09:59.674328 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:59 crc kubenswrapper[4793]: I0217 20:09:59.674392 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:59 crc kubenswrapper[4793]: I0217 20:09:59.674415 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:59 crc kubenswrapper[4793]: I0217 20:09:59.674446 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:59 crc kubenswrapper[4793]: I0217 20:09:59.674467 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:59Z","lastTransitionTime":"2026-02-17T20:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:59 crc kubenswrapper[4793]: I0217 20:09:59.776416 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:59 crc kubenswrapper[4793]: I0217 20:09:59.776462 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:59 crc kubenswrapper[4793]: I0217 20:09:59.776477 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:59 crc kubenswrapper[4793]: I0217 20:09:59.776531 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:59 crc kubenswrapper[4793]: I0217 20:09:59.776549 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:59Z","lastTransitionTime":"2026-02-17T20:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:59 crc kubenswrapper[4793]: I0217 20:09:59.878707 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:59 crc kubenswrapper[4793]: I0217 20:09:59.878746 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:59 crc kubenswrapper[4793]: I0217 20:09:59.878757 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:59 crc kubenswrapper[4793]: I0217 20:09:59.878771 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:59 crc kubenswrapper[4793]: I0217 20:09:59.878785 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:59Z","lastTransitionTime":"2026-02-17T20:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:09:59 crc kubenswrapper[4793]: I0217 20:09:59.981398 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:09:59 crc kubenswrapper[4793]: I0217 20:09:59.981439 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:09:59 crc kubenswrapper[4793]: I0217 20:09:59.981450 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:09:59 crc kubenswrapper[4793]: I0217 20:09:59.981463 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:09:59 crc kubenswrapper[4793]: I0217 20:09:59.981477 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:09:59Z","lastTransitionTime":"2026-02-17T20:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:00 crc kubenswrapper[4793]: I0217 20:10:00.084567 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:00 crc kubenswrapper[4793]: I0217 20:10:00.084650 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:00 crc kubenswrapper[4793]: I0217 20:10:00.084672 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:00 crc kubenswrapper[4793]: I0217 20:10:00.084736 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:00 crc kubenswrapper[4793]: I0217 20:10:00.084755 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:00Z","lastTransitionTime":"2026-02-17T20:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:00 crc kubenswrapper[4793]: I0217 20:10:00.187098 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:00 crc kubenswrapper[4793]: I0217 20:10:00.187157 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:00 crc kubenswrapper[4793]: I0217 20:10:00.187180 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:00 crc kubenswrapper[4793]: I0217 20:10:00.187213 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:00 crc kubenswrapper[4793]: I0217 20:10:00.187237 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:00Z","lastTransitionTime":"2026-02-17T20:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:00 crc kubenswrapper[4793]: I0217 20:10:00.289967 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:00 crc kubenswrapper[4793]: I0217 20:10:00.290049 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:00 crc kubenswrapper[4793]: I0217 20:10:00.290115 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:00 crc kubenswrapper[4793]: I0217 20:10:00.290147 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:00 crc kubenswrapper[4793]: I0217 20:10:00.290169 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:00Z","lastTransitionTime":"2026-02-17T20:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:00 crc kubenswrapper[4793]: I0217 20:10:00.393888 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:00 crc kubenswrapper[4793]: I0217 20:10:00.393961 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:00 crc kubenswrapper[4793]: I0217 20:10:00.393982 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:00 crc kubenswrapper[4793]: I0217 20:10:00.394011 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:00 crc kubenswrapper[4793]: I0217 20:10:00.394034 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:00Z","lastTransitionTime":"2026-02-17T20:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:00 crc kubenswrapper[4793]: I0217 20:10:00.497584 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:00 crc kubenswrapper[4793]: I0217 20:10:00.497634 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:00 crc kubenswrapper[4793]: I0217 20:10:00.497648 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:00 crc kubenswrapper[4793]: I0217 20:10:00.497666 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:00 crc kubenswrapper[4793]: I0217 20:10:00.497680 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:00Z","lastTransitionTime":"2026-02-17T20:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:10:00 crc kubenswrapper[4793]: I0217 20:10:00.516420 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 15:12:19.094427797 +0000 UTC Feb 17 20:10:00 crc kubenswrapper[4793]: I0217 20:10:00.538135 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:10:00 crc kubenswrapper[4793]: E0217 20:10:00.538385 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6trvs" podUID="0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd" Feb 17 20:10:00 crc kubenswrapper[4793]: I0217 20:10:00.555066 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 17 20:10:00 crc kubenswrapper[4793]: I0217 20:10:00.600414 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:00 crc kubenswrapper[4793]: I0217 20:10:00.600462 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:00 crc kubenswrapper[4793]: I0217 20:10:00.600471 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:00 crc kubenswrapper[4793]: I0217 20:10:00.600487 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:00 crc kubenswrapper[4793]: I0217 20:10:00.600500 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:00Z","lastTransitionTime":"2026-02-17T20:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:00 crc kubenswrapper[4793]: I0217 20:10:00.711565 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:00 crc kubenswrapper[4793]: I0217 20:10:00.711613 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:00 crc kubenswrapper[4793]: I0217 20:10:00.711630 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:00 crc kubenswrapper[4793]: I0217 20:10:00.711653 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:00 crc kubenswrapper[4793]: I0217 20:10:00.711672 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:00Z","lastTransitionTime":"2026-02-17T20:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:00 crc kubenswrapper[4793]: I0217 20:10:00.814962 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:00 crc kubenswrapper[4793]: I0217 20:10:00.815019 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:00 crc kubenswrapper[4793]: I0217 20:10:00.815037 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:00 crc kubenswrapper[4793]: I0217 20:10:00.815060 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:00 crc kubenswrapper[4793]: I0217 20:10:00.815076 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:00Z","lastTransitionTime":"2026-02-17T20:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:00 crc kubenswrapper[4793]: I0217 20:10:00.918901 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:00 crc kubenswrapper[4793]: I0217 20:10:00.918960 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:00 crc kubenswrapper[4793]: I0217 20:10:00.918977 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:00 crc kubenswrapper[4793]: I0217 20:10:00.919000 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:00 crc kubenswrapper[4793]: I0217 20:10:00.919016 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:00Z","lastTransitionTime":"2026-02-17T20:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:01 crc kubenswrapper[4793]: I0217 20:10:01.022068 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:01 crc kubenswrapper[4793]: I0217 20:10:01.022212 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:01 crc kubenswrapper[4793]: I0217 20:10:01.022233 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:01 crc kubenswrapper[4793]: I0217 20:10:01.022261 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:01 crc kubenswrapper[4793]: I0217 20:10:01.022279 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:01Z","lastTransitionTime":"2026-02-17T20:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:01 crc kubenswrapper[4793]: I0217 20:10:01.125860 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:01 crc kubenswrapper[4793]: I0217 20:10:01.125972 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:01 crc kubenswrapper[4793]: I0217 20:10:01.125988 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:01 crc kubenswrapper[4793]: I0217 20:10:01.126006 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:01 crc kubenswrapper[4793]: I0217 20:10:01.126020 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:01Z","lastTransitionTime":"2026-02-17T20:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:01 crc kubenswrapper[4793]: I0217 20:10:01.228234 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:01 crc kubenswrapper[4793]: I0217 20:10:01.228297 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:01 crc kubenswrapper[4793]: I0217 20:10:01.228316 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:01 crc kubenswrapper[4793]: I0217 20:10:01.228342 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:01 crc kubenswrapper[4793]: I0217 20:10:01.228360 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:01Z","lastTransitionTime":"2026-02-17T20:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:01 crc kubenswrapper[4793]: I0217 20:10:01.331724 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:01 crc kubenswrapper[4793]: I0217 20:10:01.331773 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:01 crc kubenswrapper[4793]: I0217 20:10:01.331790 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:01 crc kubenswrapper[4793]: I0217 20:10:01.331809 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:01 crc kubenswrapper[4793]: I0217 20:10:01.331825 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:01Z","lastTransitionTime":"2026-02-17T20:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:01 crc kubenswrapper[4793]: I0217 20:10:01.436154 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:01 crc kubenswrapper[4793]: I0217 20:10:01.436231 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:01 crc kubenswrapper[4793]: I0217 20:10:01.436250 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:01 crc kubenswrapper[4793]: I0217 20:10:01.436273 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:01 crc kubenswrapper[4793]: I0217 20:10:01.436291 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:01Z","lastTransitionTime":"2026-02-17T20:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:10:01 crc kubenswrapper[4793]: I0217 20:10:01.517115 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 20:28:48.680918339 +0000 UTC Feb 17 20:10:01 crc kubenswrapper[4793]: I0217 20:10:01.537781 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:10:01 crc kubenswrapper[4793]: I0217 20:10:01.537791 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:10:01 crc kubenswrapper[4793]: E0217 20:10:01.538020 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 20:10:01 crc kubenswrapper[4793]: I0217 20:10:01.538167 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:10:01 crc kubenswrapper[4793]: E0217 20:10:01.538204 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 20:10:01 crc kubenswrapper[4793]: E0217 20:10:01.538433 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 20:10:01 crc kubenswrapper[4793]: I0217 20:10:01.539963 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:01 crc kubenswrapper[4793]: I0217 20:10:01.540032 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:01 crc kubenswrapper[4793]: I0217 20:10:01.540049 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:01 crc kubenswrapper[4793]: I0217 20:10:01.540496 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:01 crc kubenswrapper[4793]: I0217 20:10:01.540556 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:01Z","lastTransitionTime":"2026-02-17T20:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:01 crc kubenswrapper[4793]: I0217 20:10:01.644066 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:01 crc kubenswrapper[4793]: I0217 20:10:01.644130 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:01 crc kubenswrapper[4793]: I0217 20:10:01.644152 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:01 crc kubenswrapper[4793]: I0217 20:10:01.644179 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:01 crc kubenswrapper[4793]: I0217 20:10:01.644203 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:01Z","lastTransitionTime":"2026-02-17T20:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:01 crc kubenswrapper[4793]: I0217 20:10:01.747323 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:01 crc kubenswrapper[4793]: I0217 20:10:01.747370 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:01 crc kubenswrapper[4793]: I0217 20:10:01.747392 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:01 crc kubenswrapper[4793]: I0217 20:10:01.747422 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:01 crc kubenswrapper[4793]: I0217 20:10:01.747442 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:01Z","lastTransitionTime":"2026-02-17T20:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:01 crc kubenswrapper[4793]: I0217 20:10:01.850497 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:01 crc kubenswrapper[4793]: I0217 20:10:01.850546 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:01 crc kubenswrapper[4793]: I0217 20:10:01.850558 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:01 crc kubenswrapper[4793]: I0217 20:10:01.850576 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:01 crc kubenswrapper[4793]: I0217 20:10:01.850592 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:01Z","lastTransitionTime":"2026-02-17T20:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:01 crc kubenswrapper[4793]: I0217 20:10:01.953006 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:01 crc kubenswrapper[4793]: I0217 20:10:01.953073 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:01 crc kubenswrapper[4793]: I0217 20:10:01.953113 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:01 crc kubenswrapper[4793]: I0217 20:10:01.953145 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:01 crc kubenswrapper[4793]: I0217 20:10:01.953167 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:01Z","lastTransitionTime":"2026-02-17T20:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.055941 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.055984 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.055999 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.056022 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.056040 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:02Z","lastTransitionTime":"2026-02-17T20:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.158757 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.159178 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.159199 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.159224 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.159243 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:02Z","lastTransitionTime":"2026-02-17T20:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.263074 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.263138 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.263155 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.263179 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.263196 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:02Z","lastTransitionTime":"2026-02-17T20:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.366770 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.366830 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.366849 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.366872 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.366890 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:02Z","lastTransitionTime":"2026-02-17T20:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.471636 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.471748 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.471769 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.471794 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.471813 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:02Z","lastTransitionTime":"2026-02-17T20:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.517893 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 07:38:28.167593287 +0000 UTC Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.538590 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:10:02 crc kubenswrapper[4793]: E0217 20:10:02.538797 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6trvs" podUID="0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.575186 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.575240 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.575263 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.575294 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.575318 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:02Z","lastTransitionTime":"2026-02-17T20:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.678126 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.678189 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.678214 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.678246 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.678265 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:02Z","lastTransitionTime":"2026-02-17T20:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.781350 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.781439 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.781465 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.781497 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.781518 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:02Z","lastTransitionTime":"2026-02-17T20:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.884401 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.884488 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.884515 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.884545 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.884563 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:02Z","lastTransitionTime":"2026-02-17T20:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.898329 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.898392 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.898411 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.898435 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.898454 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:02Z","lastTransitionTime":"2026-02-17T20:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:02 crc kubenswrapper[4793]: E0217 20:10:02.919231 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9899edd8-40d9-4d30-b914-dcde4645fb8b\\\",\\\"systemUUID\\\":\\\"6761f953-4396-4ffc-8ccb-0bad99a4cc8e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:02Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.923826 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.923898 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.923917 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.923944 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.923962 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:02Z","lastTransitionTime":"2026-02-17T20:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:02 crc kubenswrapper[4793]: E0217 20:10:02.941767 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9899edd8-40d9-4d30-b914-dcde4645fb8b\\\",\\\"systemUUID\\\":\\\"6761f953-4396-4ffc-8ccb-0bad99a4cc8e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:02Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.947549 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.947633 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.947651 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.947680 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.947732 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:02Z","lastTransitionTime":"2026-02-17T20:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:02 crc kubenswrapper[4793]: E0217 20:10:02.968392 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9899edd8-40d9-4d30-b914-dcde4645fb8b\\\",\\\"systemUUID\\\":\\\"6761f953-4396-4ffc-8ccb-0bad99a4cc8e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:02Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.973277 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.973327 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.973341 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.973362 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.973379 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:02Z","lastTransitionTime":"2026-02-17T20:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:02 crc kubenswrapper[4793]: E0217 20:10:02.992491 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9899edd8-40d9-4d30-b914-dcde4645fb8b\\\",\\\"systemUUID\\\":\\\"6761f953-4396-4ffc-8ccb-0bad99a4cc8e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:02Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.998818 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.998866 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.998877 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.998900 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:02 crc kubenswrapper[4793]: I0217 20:10:02.998914 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:02Z","lastTransitionTime":"2026-02-17T20:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:03 crc kubenswrapper[4793]: E0217 20:10:03.022297 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9899edd8-40d9-4d30-b914-dcde4645fb8b\\\",\\\"systemUUID\\\":\\\"6761f953-4396-4ffc-8ccb-0bad99a4cc8e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:03Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:03 crc kubenswrapper[4793]: E0217 20:10:03.022502 4793 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 20:10:03 crc kubenswrapper[4793]: I0217 20:10:03.024602 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:03 crc kubenswrapper[4793]: I0217 20:10:03.024642 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:03 crc kubenswrapper[4793]: I0217 20:10:03.024656 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:03 crc kubenswrapper[4793]: I0217 20:10:03.024681 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:03 crc kubenswrapper[4793]: I0217 20:10:03.024720 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:03Z","lastTransitionTime":"2026-02-17T20:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:03 crc kubenswrapper[4793]: I0217 20:10:03.128415 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:03 crc kubenswrapper[4793]: I0217 20:10:03.128508 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:03 crc kubenswrapper[4793]: I0217 20:10:03.128533 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:03 crc kubenswrapper[4793]: I0217 20:10:03.128570 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:03 crc kubenswrapper[4793]: I0217 20:10:03.128593 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:03Z","lastTransitionTime":"2026-02-17T20:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:03 crc kubenswrapper[4793]: I0217 20:10:03.232082 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:03 crc kubenswrapper[4793]: I0217 20:10:03.232169 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:03 crc kubenswrapper[4793]: I0217 20:10:03.232194 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:03 crc kubenswrapper[4793]: I0217 20:10:03.232230 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:03 crc kubenswrapper[4793]: I0217 20:10:03.232508 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:03Z","lastTransitionTime":"2026-02-17T20:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:03 crc kubenswrapper[4793]: I0217 20:10:03.337506 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:03 crc kubenswrapper[4793]: I0217 20:10:03.337588 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:03 crc kubenswrapper[4793]: I0217 20:10:03.337616 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:03 crc kubenswrapper[4793]: I0217 20:10:03.337649 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:03 crc kubenswrapper[4793]: I0217 20:10:03.337671 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:03Z","lastTransitionTime":"2026-02-17T20:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:03 crc kubenswrapper[4793]: I0217 20:10:03.440635 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:03 crc kubenswrapper[4793]: I0217 20:10:03.440678 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:03 crc kubenswrapper[4793]: I0217 20:10:03.440750 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:03 crc kubenswrapper[4793]: I0217 20:10:03.440806 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:03 crc kubenswrapper[4793]: I0217 20:10:03.440818 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:03Z","lastTransitionTime":"2026-02-17T20:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:10:03 crc kubenswrapper[4793]: I0217 20:10:03.518539 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 05:51:22.357830595 +0000 UTC Feb 17 20:10:03 crc kubenswrapper[4793]: I0217 20:10:03.537979 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:10:03 crc kubenswrapper[4793]: I0217 20:10:03.538103 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:10:03 crc kubenswrapper[4793]: E0217 20:10:03.538190 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 20:10:03 crc kubenswrapper[4793]: I0217 20:10:03.538179 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:10:03 crc kubenswrapper[4793]: E0217 20:10:03.538522 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 20:10:03 crc kubenswrapper[4793]: E0217 20:10:03.538444 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 20:10:03 crc kubenswrapper[4793]: I0217 20:10:03.544864 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:03 crc kubenswrapper[4793]: I0217 20:10:03.544936 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:03 crc kubenswrapper[4793]: I0217 20:10:03.544960 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:03 crc kubenswrapper[4793]: I0217 20:10:03.544985 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:03 crc kubenswrapper[4793]: I0217 20:10:03.545002 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:03Z","lastTransitionTime":"2026-02-17T20:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:03 crc kubenswrapper[4793]: I0217 20:10:03.648250 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:03 crc kubenswrapper[4793]: I0217 20:10:03.648326 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:03 crc kubenswrapper[4793]: I0217 20:10:03.648344 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:03 crc kubenswrapper[4793]: I0217 20:10:03.648369 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:03 crc kubenswrapper[4793]: I0217 20:10:03.648388 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:03Z","lastTransitionTime":"2026-02-17T20:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:03 crc kubenswrapper[4793]: I0217 20:10:03.751614 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:03 crc kubenswrapper[4793]: I0217 20:10:03.751662 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:03 crc kubenswrapper[4793]: I0217 20:10:03.751677 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:03 crc kubenswrapper[4793]: I0217 20:10:03.751711 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:03 crc kubenswrapper[4793]: I0217 20:10:03.751726 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:03Z","lastTransitionTime":"2026-02-17T20:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:03 crc kubenswrapper[4793]: I0217 20:10:03.854900 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:03 crc kubenswrapper[4793]: I0217 20:10:03.854969 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:03 crc kubenswrapper[4793]: I0217 20:10:03.854986 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:03 crc kubenswrapper[4793]: I0217 20:10:03.855012 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:03 crc kubenswrapper[4793]: I0217 20:10:03.855031 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:03Z","lastTransitionTime":"2026-02-17T20:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:03 crc kubenswrapper[4793]: I0217 20:10:03.957904 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:03 crc kubenswrapper[4793]: I0217 20:10:03.957958 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:03 crc kubenswrapper[4793]: I0217 20:10:03.957970 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:03 crc kubenswrapper[4793]: I0217 20:10:03.957987 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:03 crc kubenswrapper[4793]: I0217 20:10:03.957998 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:03Z","lastTransitionTime":"2026-02-17T20:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:04 crc kubenswrapper[4793]: I0217 20:10:04.060862 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:04 crc kubenswrapper[4793]: I0217 20:10:04.060946 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:04 crc kubenswrapper[4793]: I0217 20:10:04.060961 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:04 crc kubenswrapper[4793]: I0217 20:10:04.060983 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:04 crc kubenswrapper[4793]: I0217 20:10:04.060997 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:04Z","lastTransitionTime":"2026-02-17T20:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:04 crc kubenswrapper[4793]: I0217 20:10:04.163394 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:04 crc kubenswrapper[4793]: I0217 20:10:04.163440 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:04 crc kubenswrapper[4793]: I0217 20:10:04.163450 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:04 crc kubenswrapper[4793]: I0217 20:10:04.163467 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:04 crc kubenswrapper[4793]: I0217 20:10:04.163478 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:04Z","lastTransitionTime":"2026-02-17T20:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:04 crc kubenswrapper[4793]: I0217 20:10:04.266435 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:04 crc kubenswrapper[4793]: I0217 20:10:04.266480 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:04 crc kubenswrapper[4793]: I0217 20:10:04.266493 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:04 crc kubenswrapper[4793]: I0217 20:10:04.266511 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:04 crc kubenswrapper[4793]: I0217 20:10:04.266524 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:04Z","lastTransitionTime":"2026-02-17T20:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:04 crc kubenswrapper[4793]: I0217 20:10:04.369127 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:04 crc kubenswrapper[4793]: I0217 20:10:04.369174 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:04 crc kubenswrapper[4793]: I0217 20:10:04.369186 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:04 crc kubenswrapper[4793]: I0217 20:10:04.369203 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:04 crc kubenswrapper[4793]: I0217 20:10:04.369214 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:04Z","lastTransitionTime":"2026-02-17T20:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:04 crc kubenswrapper[4793]: I0217 20:10:04.472885 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:04 crc kubenswrapper[4793]: I0217 20:10:04.472934 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:04 crc kubenswrapper[4793]: I0217 20:10:04.472945 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:04 crc kubenswrapper[4793]: I0217 20:10:04.472961 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:04 crc kubenswrapper[4793]: I0217 20:10:04.472974 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:04Z","lastTransitionTime":"2026-02-17T20:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:10:04 crc kubenswrapper[4793]: I0217 20:10:04.518963 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 00:06:27.932505165 +0000 UTC Feb 17 20:10:04 crc kubenswrapper[4793]: I0217 20:10:04.538583 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:10:04 crc kubenswrapper[4793]: E0217 20:10:04.538799 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6trvs" podUID="0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd" Feb 17 20:10:04 crc kubenswrapper[4793]: I0217 20:10:04.575992 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:04 crc kubenswrapper[4793]: I0217 20:10:04.576034 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:04 crc kubenswrapper[4793]: I0217 20:10:04.576043 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:04 crc kubenswrapper[4793]: I0217 20:10:04.576060 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:04 crc kubenswrapper[4793]: I0217 20:10:04.576071 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:04Z","lastTransitionTime":"2026-02-17T20:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:04 crc kubenswrapper[4793]: I0217 20:10:04.678397 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:04 crc kubenswrapper[4793]: I0217 20:10:04.678447 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:04 crc kubenswrapper[4793]: I0217 20:10:04.678459 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:04 crc kubenswrapper[4793]: I0217 20:10:04.678477 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:04 crc kubenswrapper[4793]: I0217 20:10:04.678490 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:04Z","lastTransitionTime":"2026-02-17T20:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:04 crc kubenswrapper[4793]: I0217 20:10:04.781451 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:04 crc kubenswrapper[4793]: I0217 20:10:04.781522 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:04 crc kubenswrapper[4793]: I0217 20:10:04.781547 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:04 crc kubenswrapper[4793]: I0217 20:10:04.781582 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:04 crc kubenswrapper[4793]: I0217 20:10:04.781609 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:04Z","lastTransitionTime":"2026-02-17T20:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:04 crc kubenswrapper[4793]: I0217 20:10:04.883924 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:04 crc kubenswrapper[4793]: I0217 20:10:04.883972 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:04 crc kubenswrapper[4793]: I0217 20:10:04.883990 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:04 crc kubenswrapper[4793]: I0217 20:10:04.884011 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:04 crc kubenswrapper[4793]: I0217 20:10:04.884027 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:04Z","lastTransitionTime":"2026-02-17T20:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:04 crc kubenswrapper[4793]: I0217 20:10:04.987014 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:04 crc kubenswrapper[4793]: I0217 20:10:04.987074 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:04 crc kubenswrapper[4793]: I0217 20:10:04.987127 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:04 crc kubenswrapper[4793]: I0217 20:10:04.987154 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:04 crc kubenswrapper[4793]: I0217 20:10:04.987170 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:04Z","lastTransitionTime":"2026-02-17T20:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.093808 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.093909 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.093935 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.093974 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.094009 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:05Z","lastTransitionTime":"2026-02-17T20:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.197796 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.197872 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.197896 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.197926 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.197946 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:05Z","lastTransitionTime":"2026-02-17T20:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.300533 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.300597 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.300623 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.300647 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.300665 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:05Z","lastTransitionTime":"2026-02-17T20:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.404129 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.404192 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.404210 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.404235 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.404251 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:05Z","lastTransitionTime":"2026-02-17T20:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.507417 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.507475 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.507493 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.507516 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.507536 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:05Z","lastTransitionTime":"2026-02-17T20:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.519391 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 16:48:31.985478099 +0000 UTC Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.538094 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.538142 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:10:05 crc kubenswrapper[4793]: E0217 20:10:05.538306 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.538369 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:10:05 crc kubenswrapper[4793]: E0217 20:10:05.538503 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 20:10:05 crc kubenswrapper[4793]: E0217 20:10:05.538659 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.558637 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf61cc2856b360d3341089d4472c27f8dcf6c7c55831eea511dcf5a931b7a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:05Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.576288 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a786034-a3c6-4693-965a-3bd39bce6caa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c1911a561802d8e92f91e2ca7448754e32dcb5f18cae0906dc070e1d752be38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c883db680c5e015621e98e5b3c06351d9dd6d33ea28b502241344af8df0fd1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T20:10:05Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.595772 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ztwxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beba5ad7f813f286d6f37a7d6983827115e97d4dbcc5c17c23d9f62a2789faae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c5b745e67f1da59c6b1df97c69bfcbfdc00f4b51a6788751079465a5ce6c177\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T20:09:56Z\\\",\\\"message\\\":\\\"2026-02-17T20:09:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_a776b044-a795-4290-99bd-d80c514046aa\\\\n2026-02-17T20:09:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a776b044-a795-4290-99bd-d80c514046aa to /host/opt/cni/bin/\\\\n2026-02-17T20:09:11Z [verbose] multus-daemon started\\\\n2026-02-17T20:09:11Z [verbose] Readiness Indicator file check\\\\n2026-02-17T20:09:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ndkg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ztwxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:05Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.611401 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.611491 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.611508 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.611535 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.611553 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:05Z","lastTransitionTime":"2026-02-17T20:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.612556 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6b025f5-9ffb-43fb-b86f-ae90ffca8eb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0392a7b5708589df63cc28b06b4fa8d5e853138742986663160183821ad654d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b29ead32fd49e8340218337140e825fcfa624a00b777951da99b492b5567d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b29ead32fd49e8340218337140e825fcfa624a00b777951da99b492b5567d30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:05Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.634338 4793 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cad772-a024-49d5-8204-560587b056eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6bf4bf0fcbe5648621dada82af560bd7eac65ce800c8974c59d777f3dc6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57a18cf839c267df5d9a6d343fbc38dbe195fad6d9fff89663d0958a0492a335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe1dc649f7ca667a9fa57e0f96921b449396310fdec0d23dd39d5695ff3c0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cee203033ea02d7ee023522404b8632f64c130e254a84ee96c37222561e4ac1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:05Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.653604 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:05Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.672923 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48cdc9966494a4fb1df14123aaf12571b1d71f9d0349963dd1b3719966abd20c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2071f67c2f1c343d9ad571fe62df1e820675df683aa583fa2563de751f2897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:05Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.696796 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kpl4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61219b87-834d-490d-bf8e-1657a4081739\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6b9b20c5ff10a449eac5b6396cbcab582a5bbf692d7c9ac880e3cdfa3a63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43cff
5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43cff5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57bb0575dd8e0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b57bb0575dd8e0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:14Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a556ed90653e74034af578772bca81976205e4bba300489eb17d80cdd84989f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a556ed90653e74034af578772bca81976205e4bba300489eb17d80cdd84989f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kpl4r\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:05Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.713787 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.713839 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.713857 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.713882 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.713901 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:05Z","lastTransitionTime":"2026-02-17T20:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.716358 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:05Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.735979 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:05Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.754763 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cl2v7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b672111-22f9-4dd8-b116-385907278ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39feb0d026694f42ac0934ebdf1749f25d62c1fec2ae68e9af9189fb4cbfe99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2j42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7694ca2ced8356ef8dc44687cd124ebd50f8c
2916a8c27619b2ab970bcb31d5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2j42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cl2v7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:05Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.770244 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6trvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q9nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q9nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6trvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:05Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:05 crc 
kubenswrapper[4793]: I0217 20:10:05.803684 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3403f83a9b35ceeb1a09a45b62e25e74a10d89d309f00819598790ba6c84a0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e4375a429735c20aa6093586974f36afdd51b82d65270a693d14aefe0b0ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a66eb3dc7a2ab1a3e5e720718416aba285d92566aa472515bc983f0794f48314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61506b0fdf8d54ab45b280e5cf461ab69bc233211c90bc60b339eef643ebb058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b08b2c72ecc14b8e31ad5d7fe12851c01bf38ff98fe406dc26218d892ee9f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09885fdf835300247d4a89f7e2ded3ab350786be47130106f4cc679a5477c451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://743c88867ec1e5ac6d851ff30fd6847d64369d2862ffe7406e9dcd0248d97d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743c88867ec1e5ac6d851ff30fd6847d64369d2862ffe7406e9dcd0248d97d4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T20:09:36Z\\\",\\\"message\\\":\\\"e it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:36Z is after 2025-08-24T17:21:41Z]\\\\nI0217 20:09:36.727000 6427 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-jnwtf\\\\nI0217 20:09:36.727006 6427 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-jnwtf\\\\nI0217 20:09:36.727011 6427 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-jnwtf in node crc\\\\nI0217 20:09:36.727015 6427 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-jnwtf after 0 failed attempt(s)\\\\nI0217 20:09:36.727020 6427 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-jnwtf\\\\nI0217 20:09:36.727027 6427 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-z9xgq\\\\nI0217 20:09:36.727030 6427 base_network\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n2fmv_openshift-ovn-kubernetes(4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3161d31a16ce66a75b2928487822f0c1e627deb1564e7f792e96420ebde1ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6af
bcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n2fmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:05Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.816590 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.816631 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.816642 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.816659 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.816671 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:05Z","lastTransitionTime":"2026-02-17T20:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.822147 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z9xgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3073cc3b-c430-444d-b751-5e1416feafa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51236e50a5c628e38556b1fc91434f66c7a2e8885a331b358825bcea2c3d9ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z9xgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:05Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.854821 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c8d46b-bddf-41fd-bfe9-829889b911c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3405184556f0b16f2610e42360345bb094d79fe3d713f9dec7c6d28a5124851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d8c33489e77add820819b72d67bd6eca73a11e3fd4a99ae4c5c0136f6257c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a42458e37611ca7ec204f13fd4703a5b1e01189a162f0180be137d614a0379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982055260eb2f38b638b6ee268993806f9a0bf999f21b065659a64144394bc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16cf925bf79279e3553a50dd773c3e6a88f3e8e86837c457c43f1b82c6fd291b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:05Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.880413 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9208cc24-8e8d-4027-af06-291b3b46ee59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f3722f78f83475af38958c1c6a4b0593a4c51be7d8dcbd0d32d3b068dee61b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T20:08:59Z\\\",\\\"message\\\":\\\"W0217 20:08:48.786864 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 20:08:48.787257 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771358928 cert, and key in /tmp/serving-cert-1841715996/serving-signer.crt, /tmp/serving-cert-1841715996/serving-signer.key\\\\nI0217 20:08:49.003324 1 observer_polling.go:159] Starting file observer\\\\nW0217 20:08:49.008336 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 20:08:49.008526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 20:08:49.009879 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1841715996/tls.crt::/tmp/serving-cert-1841715996/tls.key\\\\\\\"\\\\nF0217 20:08:59.398239 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based 
request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o:/
/93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:05Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.894288 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6af04421-9450-4d4d-b153-947396a3632b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8527c428573648481388a1d99e88a9c73658862866d5f6e36691a0b36868c7d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b96eb4ff437a9dba392060ab762c6d87450a24f0dcc50ef6f90dfa54aa66e22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958dffa9d0fbd97a0ee01a79d9623a298e9ce287b43e1347b4edeb0cfe175ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e25a10a183950aa779c83aae5f4fe88a4e27d2ff7bad883e1e222400a16b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://24e25a10a183950aa779c83aae5f4fe88a4e27d2ff7bad883e1e222400a16b44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:05Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.911429 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a159b0be901199a25ed9008e15370b1a4bf1858a4b20bfb4b6b7736f1f3d42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:05Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.918664 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.918714 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.918726 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.918744 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.918755 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:05Z","lastTransitionTime":"2026-02-17T20:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:05 crc kubenswrapper[4793]: I0217 20:10:05.923904 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9pkqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db67d891-29db-4ee4-a70c-624cb9af6677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d09e75a7c6bcf350e2d5833b1a9b6679042aa5d6699c1194c32320c5f0c8d208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv9g8\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9pkqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:05Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:06 crc kubenswrapper[4793]: I0217 20:10:06.020892 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:06 crc kubenswrapper[4793]: I0217 20:10:06.020931 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:06 crc kubenswrapper[4793]: I0217 20:10:06.020944 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:06 crc kubenswrapper[4793]: I0217 20:10:06.020972 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:06 crc kubenswrapper[4793]: I0217 20:10:06.020985 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:06Z","lastTransitionTime":"2026-02-17T20:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:06 crc kubenswrapper[4793]: I0217 20:10:06.124109 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:06 crc kubenswrapper[4793]: I0217 20:10:06.124160 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:06 crc kubenswrapper[4793]: I0217 20:10:06.124198 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:06 crc kubenswrapper[4793]: I0217 20:10:06.124221 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:06 crc kubenswrapper[4793]: I0217 20:10:06.124238 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:06Z","lastTransitionTime":"2026-02-17T20:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:06 crc kubenswrapper[4793]: I0217 20:10:06.226637 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:06 crc kubenswrapper[4793]: I0217 20:10:06.226684 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:06 crc kubenswrapper[4793]: I0217 20:10:06.226727 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:06 crc kubenswrapper[4793]: I0217 20:10:06.226746 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:06 crc kubenswrapper[4793]: I0217 20:10:06.226761 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:06Z","lastTransitionTime":"2026-02-17T20:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:06 crc kubenswrapper[4793]: I0217 20:10:06.330226 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:06 crc kubenswrapper[4793]: I0217 20:10:06.330286 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:06 crc kubenswrapper[4793]: I0217 20:10:06.330323 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:06 crc kubenswrapper[4793]: I0217 20:10:06.330358 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:06 crc kubenswrapper[4793]: I0217 20:10:06.330383 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:06Z","lastTransitionTime":"2026-02-17T20:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:06 crc kubenswrapper[4793]: I0217 20:10:06.433175 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:06 crc kubenswrapper[4793]: I0217 20:10:06.433229 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:06 crc kubenswrapper[4793]: I0217 20:10:06.433247 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:06 crc kubenswrapper[4793]: I0217 20:10:06.433272 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:06 crc kubenswrapper[4793]: I0217 20:10:06.433290 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:06Z","lastTransitionTime":"2026-02-17T20:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:06 crc kubenswrapper[4793]: I0217 20:10:06.520232 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 02:24:43.725886036 +0000 UTC Feb 17 20:10:06 crc kubenswrapper[4793]: I0217 20:10:06.536560 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:06 crc kubenswrapper[4793]: I0217 20:10:06.536609 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:06 crc kubenswrapper[4793]: I0217 20:10:06.536627 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:06 crc kubenswrapper[4793]: I0217 20:10:06.536650 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:06 crc kubenswrapper[4793]: I0217 20:10:06.536669 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:06Z","lastTransitionTime":"2026-02-17T20:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:10:06 crc kubenswrapper[4793]: I0217 20:10:06.538422 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:10:06 crc kubenswrapper[4793]: E0217 20:10:06.538657 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6trvs" podUID="0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd" Feb 17 20:10:06 crc kubenswrapper[4793]: I0217 20:10:06.539620 4793 scope.go:117] "RemoveContainer" containerID="743c88867ec1e5ac6d851ff30fd6847d64369d2862ffe7406e9dcd0248d97d4a" Feb 17 20:10:06 crc kubenswrapper[4793]: I0217 20:10:06.638862 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:06 crc kubenswrapper[4793]: I0217 20:10:06.638905 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:06 crc kubenswrapper[4793]: I0217 20:10:06.638919 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:06 crc kubenswrapper[4793]: I0217 20:10:06.638937 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:06 crc kubenswrapper[4793]: I0217 20:10:06.638950 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:06Z","lastTransitionTime":"2026-02-17T20:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:06 crc kubenswrapper[4793]: I0217 20:10:06.740936 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:06 crc kubenswrapper[4793]: I0217 20:10:06.740988 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:06 crc kubenswrapper[4793]: I0217 20:10:06.741005 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:06 crc kubenswrapper[4793]: I0217 20:10:06.741026 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:06 crc kubenswrapper[4793]: I0217 20:10:06.741041 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:06Z","lastTransitionTime":"2026-02-17T20:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:06 crc kubenswrapper[4793]: I0217 20:10:06.843386 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:06 crc kubenswrapper[4793]: I0217 20:10:06.843441 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:06 crc kubenswrapper[4793]: I0217 20:10:06.843451 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:06 crc kubenswrapper[4793]: I0217 20:10:06.843469 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:06 crc kubenswrapper[4793]: I0217 20:10:06.843480 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:06Z","lastTransitionTime":"2026-02-17T20:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:06 crc kubenswrapper[4793]: I0217 20:10:06.990186 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:06 crc kubenswrapper[4793]: I0217 20:10:06.990219 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:06 crc kubenswrapper[4793]: I0217 20:10:06.990231 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:06 crc kubenswrapper[4793]: I0217 20:10:06.990246 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:06 crc kubenswrapper[4793]: I0217 20:10:06.990256 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:06Z","lastTransitionTime":"2026-02-17T20:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.019511 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n2fmv_4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2/ovnkube-controller/2.log" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.022257 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" event={"ID":"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2","Type":"ContainerStarted","Data":"51026d5eb5b88801a1918c9ceb143b71de65a9e53b06f2f35ca80f9bc1b32ddd"} Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.023108 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.035006 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9pkqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db67d891-29db-4ee4-a70c-624cb9af6677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d09e75a7c6bcf350e2d5833b1a9b6679042a
a5d6699c1194c32320c5f0c8d208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv9g8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9pkqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:07Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.051564 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3403f83a9b35ceeb1a09a45b62e25e74a10d89d309f00819598790ba6c84a0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e4375a429735c20aa6093586974f36afdd51b82d65270a693d14aefe0b0ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a66eb3dc7a2ab1a3e5e720718416aba285d92566aa472515bc983f0794f48314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61506b0fdf8d54ab45b280e5cf461ab69bc233211c90bc60b339eef643ebb058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b08b2c72ecc14b8e31ad5d7fe12851c01bf38ff98fe406dc26218d892ee9f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09885fdf835300247d4a89f7e2ded3ab350786be47130106f4cc679a5477c451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51026d5eb5b88801a1918c9ceb143b71de65a9e53b06f2f35ca80f9bc1b32ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743c88867ec1e5ac6d851ff30fd6847d64369d2862ffe7406e9dcd0248d97d4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T20:09:36Z\\\",\\\"message\\\":\\\"e it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:36Z is after 2025-08-24T17:21:41Z]\\\\nI0217 20:09:36.727000 6427 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-jnwtf\\\\nI0217 20:09:36.727006 6427 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-jnwtf\\\\nI0217 20:09:36.727011 6427 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-jnwtf in node crc\\\\nI0217 20:09:36.727015 6427 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-jnwtf after 0 failed attempt(s)\\\\nI0217 20:09:36.727020 6427 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-jnwtf\\\\nI0217 20:09:36.727027 6427 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-z9xgq\\\\nI0217 20:09:36.727030 6427 
base_network\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\
\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3161d31a16ce66a75b2928487822f0c1e627deb1564e7f792e96420ebde1ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\
\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n2fmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:07Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.062479 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z9xgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3073cc3b-c430-444d-b751-5e1416feafa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51236e50a5c628e38556b1fc91434f66c7a2e8885a331b358825bcea2c3d9ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z9xgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:07Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.078942 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c8d46b-bddf-41fd-bfe9-829889b911c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3405184556f0b16f2610e42360345bb094d79fe3d713f9dec7c6d28a5124851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d8c33489e77add820819b72d67bd6eca73a11e3fd4a99ae4c5c0136f6257c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a42458e37611ca7ec204f13fd4703a5b1e01189a162f0180be137d614a0379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20
:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982055260eb2f38b638b6ee268993806f9a0bf999f21b065659a64144394bc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16cf925bf79279e3553a50dd773c3e6a88f3e8e86837c457c43f1b82c6fd291b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:07Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.092142 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.092237 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.092277 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.092300 4793 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.092317 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:07Z","lastTransitionTime":"2026-02-17T20:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.096368 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9208cc24-8e8d-4027-af06-291b3b46ee59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791
fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\
":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f3722f78f83475af38958c1c6a4b0593a4c51be7d8dcbd0d32d3b068dee61b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T20:08:59Z\\\",\\\"message\\\":\\\"W0217 20:08:48.786864 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 20:08:48.787257 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771358928 cert, and key in /tmp/serving-cert-1841715996/serving-signer.crt, /tmp/serving-cert-1841715996/serving-signer.key\\\\nI0217 20:08:49.003324 1 observer_polling.go:159] Starting file observer\\\\nW0217 20:08:49.008336 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 20:08:49.008526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 20:08:49.009879 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1841715996/tls.crt::/tmp/serving-cert-1841715996/tls.key\\\\\\\"\\\\nF0217 20:08:59.398239 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9
c7256df1d868a9c98b53d0ac21e022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:07Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.111342 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6af04421-9450-4d4d-b153-947396a3632b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8527c428573648481388a1d99e88a9c73658862866d5f6e36691a0b36868c7d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b96eb4ff437a9dba392060ab762c6d87450a24f0dcc50ef6f90dfa54aa66e22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958dffa9d0fbd97a0ee01a79d9623a298e9ce287b43e1347b4edeb0cfe175ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e25a10a183950aa779c83aae5f4fe88a4e27d2ff7bad883e1e222400a16b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://24e25a10a183950aa779c83aae5f4fe88a4e27d2ff7bad883e1e222400a16b44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:07Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.126483 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a159b0be901199a25ed9008e15370b1a4bf1858a4b20bfb4b6b7736f1f3d42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:07Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.142730 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf61cc2856b360d3341089d4472c27f8dcf6c7c55831eea511dcf5a931b7a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:07Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.155116 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a786034-a3c6-4693-965a-3bd39bce6caa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c1911a561802d8e92f91e2ca7448754e32dcb5f18cae0906dc070e1d752be38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c883db680c5e015621e98e5b3c06351d9dd6d33ea28b502241344af8df0fd1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2026-02-17T20:10:07Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.172887 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ztwxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beba5ad7f813f286d6f37a7d6983827115e97d4dbcc5c17c23d9f62a2789faae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c5b745e67f1da59c6b1df97c69bfcbfdc00f4b51a6788751079465a5ce6c177\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T20:09:56Z\\\",\\\"message\\\":\\\"2026-02-17T20:09:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_a776b044-a795-4290-99bd-d80c514046aa\\\\n2026-02-17T20:09:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a776b044-a795-4290-99bd-d80c514046aa to /host/opt/cni/bin/\\\\n2026-02-17T20:09:11Z [verbose] multus-daemon started\\\\n2026-02-17T20:09:11Z [verbose] Readiness Indicator file check\\\\n2026-02-17T20:09:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ndkg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ztwxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:07Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.188039 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kpl4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61219b87-834d-490d-bf8e-1657a4081739\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6b9b20c5ff10a449eac5b6396cbcab582a5bbf692d7c9ac880e3cdfa3a63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43cff
5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43cff5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57bb0575dd8e0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b57bb0575dd8e0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:14Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a556ed90653e74034af578772bca81976205e4bba300489eb17d80cdd84989f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a556ed90653e74034af578772bca81976205e4bba300489eb17d80cdd84989f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kpl4r\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:07Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.197936 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.198036 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.198074 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.198102 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.198123 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:07Z","lastTransitionTime":"2026-02-17T20:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.198649 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6b025f5-9ffb-43fb-b86f-ae90ffca8eb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0392a7b5708589df63cc28b06b4fa8d5e853138742986663160183821ad654d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b29ead32fd49e8340218337140e825fcfa624a00b777951da99b492b5567d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b29ead32fd49e8340218337140e825fcfa624a00b777951da99b492b5567d30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:07Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.215139 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cad772-a024-49d5-8204-560587b056eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6bf4bf0fcbe5648621dada82af560bd7eac65ce800c8974c59d777f3dc6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57a18cf839c267df5d9a6d343fbc38dbe195fad6d9fff89663d0958a0492a335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe1dc649f7ca667a9fa57e0f96921b449396310fdec0d23dd39d5695ff3c0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cee203033ea02d7ee023522404b8632f64c130e254a84ee96c37222561e4ac1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:07Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.228791 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:07Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.242740 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48cdc9966494a4fb1df14123aaf12571b1d71f9d0349963dd1b3719966abd20c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2071f67c2f1c343d9ad571fe62df1e820675df683aa583fa2563de751f2897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:07Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.256534 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:07Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.270079 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:07Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.282943 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cl2v7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b672111-22f9-4dd8-b116-385907278ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39feb0d026694f42ac0934ebdf1749f25d62c1fec2ae68e9af9189fb4cbfe99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2j42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7694ca2ced8356ef8dc44687cd124ebd50f8c
2916a8c27619b2ab970bcb31d5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2j42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cl2v7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:07Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.300508 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6trvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q9nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q9nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6trvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:07Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:07 crc 
kubenswrapper[4793]: I0217 20:10:07.300590 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.300647 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.300661 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.300677 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.300703 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:07Z","lastTransitionTime":"2026-02-17T20:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.403056 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.403154 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.403176 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.403207 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.403227 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:07Z","lastTransitionTime":"2026-02-17T20:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.506788 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.506852 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.506868 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.506892 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.506910 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:07Z","lastTransitionTime":"2026-02-17T20:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.521179 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 06:12:27.702000098 +0000 UTC Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.538213 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.538341 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.538396 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:10:07 crc kubenswrapper[4793]: E0217 20:10:07.538453 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 20:10:07 crc kubenswrapper[4793]: E0217 20:10:07.538558 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 20:10:07 crc kubenswrapper[4793]: E0217 20:10:07.538678 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.620125 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.620188 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.620250 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.620284 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.620302 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:07Z","lastTransitionTime":"2026-02-17T20:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.723975 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.724045 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.724062 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.724086 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.724103 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:07Z","lastTransitionTime":"2026-02-17T20:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.827943 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.828025 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.828046 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.828068 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.828085 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:07Z","lastTransitionTime":"2026-02-17T20:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.931106 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.931171 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.931193 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.931224 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:07 crc kubenswrapper[4793]: I0217 20:10:07.931245 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:07Z","lastTransitionTime":"2026-02-17T20:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.027975 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n2fmv_4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2/ovnkube-controller/3.log" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.029167 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n2fmv_4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2/ovnkube-controller/2.log" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.032914 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.032948 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.032962 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.032979 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.032993 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:08Z","lastTransitionTime":"2026-02-17T20:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.033281 4793 generic.go:334] "Generic (PLEG): container finished" podID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerID="51026d5eb5b88801a1918c9ceb143b71de65a9e53b06f2f35ca80f9bc1b32ddd" exitCode=1 Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.033347 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" event={"ID":"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2","Type":"ContainerDied","Data":"51026d5eb5b88801a1918c9ceb143b71de65a9e53b06f2f35ca80f9bc1b32ddd"} Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.033416 4793 scope.go:117] "RemoveContainer" containerID="743c88867ec1e5ac6d851ff30fd6847d64369d2862ffe7406e9dcd0248d97d4a" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.034354 4793 scope.go:117] "RemoveContainer" containerID="51026d5eb5b88801a1918c9ceb143b71de65a9e53b06f2f35ca80f9bc1b32ddd" Feb 17 20:10:08 crc kubenswrapper[4793]: E0217 20:10:08.034735 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-n2fmv_openshift-ovn-kubernetes(4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" podUID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.059141 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kpl4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61219b87-834d-490d-bf8e-1657a4081739\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6b9b20c5ff10a449eac5b6396cbcab582a5bbf692d7c9ac880e3cdfa3a63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43cff
5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43cff5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57bb0575dd8e0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b57bb0575dd8e0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:14Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a556ed90653e74034af578772bca81976205e4bba300489eb17d80cdd84989f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a556ed90653e74034af578772bca81976205e4bba300489eb17d80cdd84989f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kpl4r\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:08Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.074333 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6b025f5-9ffb-43fb-b86f-ae90ffca8eb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0392a7b5708589df63cc28b06b4fa8d5e853138742986663160183821ad654d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b29ead32fd49e8340218337140e825fcfa624a00b777951da99b492b5567d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b29ead32fd49e8340218337140e825fcfa624a00b777951da99b492b5567d30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:08Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.091059 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cad772-a024-49d5-8204-560587b056eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6bf4bf0fcbe5648621dada82af560bd7eac65ce800c8974c59d777f3dc6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57a18cf839c267df5d9a6d343fbc38dbe195fad6d9fff89663d0958a0492a335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe1dc649f7ca667a9fa57e0f96921b449396310fdec0d23dd39d5695ff3c0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cee203033ea02d7ee023522404b8632f64c130e254a84ee96c37222561e4ac1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:08Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.110461 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:08Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.127738 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48cdc9966494a4fb1df14123aaf12571b1d71f9d0349963dd1b3719966abd20c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2071f67c2f1c343d9ad571fe62df1e820675df683aa583fa2563de751f2897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:08Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.135648 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.135736 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.135760 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.135829 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.135856 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:08Z","lastTransitionTime":"2026-02-17T20:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.146551 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:08Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.163646 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:08Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.179884 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cl2v7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b672111-22f9-4dd8-b116-385907278ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39feb0d026694f42ac0934ebdf1749f25d62c1fec2ae68e9af9189fb4cbfe99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2j42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7694ca2ced8356ef8dc44687cd124ebd50f8c
2916a8c27619b2ab970bcb31d5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2j42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cl2v7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:08Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.195225 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6trvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q9nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q9nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6trvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:08Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:08 crc 
kubenswrapper[4793]: I0217 20:10:08.208377 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9pkqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db67d891-29db-4ee4-a70c-624cb9af6677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d09e75a7c6bcf350e2d5833b1a9b6679042aa5d6699c1194c32320c5f0c8d208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv9g8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9pkqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:08Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.235896 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3403f83a9b35ceeb1a09a45b62e25e74a10d89d309f00819598790ba6c84a0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e4375a429735c20aa6093586974f36afdd51b82d65270a693d14aefe0b0ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a66eb3dc7a2ab1a3e5e720718416aba285d92566aa472515bc983f0794f48314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61506b0fdf8d54ab45b280e5cf461ab69bc233211c90bc60b339eef643ebb058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b08b2c72ecc14b8e31ad5d7fe12851c01bf38ff98fe406dc26218d892ee9f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09885fdf835300247d4a89f7e2ded3ab350786be47130106f4cc679a5477c451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51026d5eb5b88801a1918c9ceb143b71de65a9e53b06f2f35ca80f9bc1b32ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743c88867ec1e5ac6d851ff30fd6847d64369d2862ffe7406e9dcd0248d97d4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T20:09:36Z\\\",\\\"message\\\":\\\"e it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:09:36Z is after 2025-08-24T17:21:41Z]\\\\nI0217 20:09:36.727000 6427 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-jnwtf\\\\nI0217 20:09:36.727006 6427 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-jnwtf\\\\nI0217 20:09:36.727011 6427 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-jnwtf in node crc\\\\nI0217 20:09:36.727015 6427 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-jnwtf after 0 failed attempt(s)\\\\nI0217 20:09:36.727020 6427 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-jnwtf\\\\nI0217 20:09:36.727027 6427 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-z9xgq\\\\nI0217 20:09:36.727030 6427 base_network\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51026d5eb5b88801a1918c9ceb143b71de65a9e53b06f2f35ca80f9bc1b32ddd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T20:10:07Z\\\",\\\"message\\\":\\\"or/kube-rbac-proxy-crio-crc\\\\nI0217 20:10:07.537304 6870 ovnkube.go:599] Stopped ovnkube\\\\nI0217 20:10:07.537297 6870 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 20:10:07.537417 6870 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0217 20:10:07.537361 6870 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0217 20:10:07.537512 6870 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:10:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3161d31a16ce66a75b2928487822f0c1e627deb1564e7f792e96420ebde1ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath
\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n2fmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:08Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:08 
crc kubenswrapper[4793]: I0217 20:10:08.238369 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.238443 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.238456 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.238492 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.238508 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:08Z","lastTransitionTime":"2026-02-17T20:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.250562 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z9xgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3073cc3b-c430-444d-b751-5e1416feafa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51236e50a5c628e38556b1fc91434f66c7a2e8885a331b358825bcea2c3d9ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z9xgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:08Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.271767 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c8d46b-bddf-41fd-bfe9-829889b911c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3405184556f0b16f2610e42360345bb094d79fe3d713f9dec7c6d28a5124851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d8c33489e77add820819b72d67bd6eca73a11e3fd4a99ae4c5c0136f6257c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a42458e37611ca7ec204f13fd4703a5b1e01189a162f0180be137d614a0379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982055260eb2f38b638b6ee268993806f9a0bf999f21b065659a64144394bc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16cf925bf79279e3553a50dd773c3e6a88f3e8e86837c457c43f1b82c6fd291b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c96
b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:08Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.292861 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9208cc24-8e8d-4027-af06-291b3b46ee59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f3722f78f83475af38958c1c6a4b0593a4c51be7d8dcbd0d32d3b068dee61b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T20:08:59Z\\\"
,\\\"message\\\":\\\"W0217 20:08:48.786864 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 20:08:48.787257 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771358928 cert, and key in /tmp/serving-cert-1841715996/serving-signer.crt, /tmp/serving-cert-1841715996/serving-signer.key\\\\nI0217 20:08:49.003324 1 observer_polling.go:159] Starting file observer\\\\nW0217 20:08:49.008336 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 20:08:49.008526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 20:08:49.009879 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1841715996/tls.crt::/tmp/serving-cert-1841715996/tls.key\\\\\\\"\\\\nF0217 20:08:59.398239 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:08Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.307598 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6af04421-9450-4d4d-b153-947396a3632b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8527c428573648481388a1d99e88a9c73658862866d5f6e36691a0b36868c7d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472024
3b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b96eb4ff437a9dba392060ab762c6d87450a24f0dcc50ef6f90dfa54aa66e22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958dffa9d0fbd97a0ee01a79d9623a298e9ce287b43e1347b4edeb0cfe175ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02
-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e25a10a183950aa779c83aae5f4fe88a4e27d2ff7bad883e1e222400a16b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24e25a10a183950aa779c83aae5f4fe88a4e27d2ff7bad883e1e222400a16b44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:08Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.323343 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a159b0be901199a25ed9008e15370b1a4bf1858a4b20bfb4b6b7736f1f3d42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:08Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.339866 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf61cc2856b360d3341089d4472c27f8dcf6c7c55831eea511dcf5a931b7a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:08Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.342871 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.342899 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.342909 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.342924 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.342936 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:08Z","lastTransitionTime":"2026-02-17T20:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.355760 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a786034-a3c6-4693-965a-3bd39bce6caa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c1911a561802d8e92f91e2ca7448754e32dcb5f18cae0906dc070e1d752be38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c883db680c5e015621e98e5b3c06351d9dd6d33ea28b502241344af8df0fd1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:08Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.367615 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ztwxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beba5ad7f813f286d6f37a7d6983827115e97d4dbcc5c17c23d9f62a2789faae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c5b745e67f1da59c6b1df97c69bfcbfdc00f4b51a6788751079465a5ce6c177\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T20:09:56Z\\\",\\\"message\\\":\\\"2026-02-17T20:09:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a776b044-a795-4290-99bd-d80c514046aa\\\\n2026-02-17T20:09:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a776b044-a795-4290-99bd-d80c514046aa to /host/opt/cni/bin/\\\\n2026-02-17T20:09:11Z [verbose] multus-daemon started\\\\n2026-02-17T20:09:11Z [verbose] 
Readiness Indicator file check\\\\n2026-02-17T20:09:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ndkg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ztwxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:08Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.446204 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.446234 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.446243 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.446257 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.446267 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:08Z","lastTransitionTime":"2026-02-17T20:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.522284 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 07:58:00.670134374 +0000 UTC Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.538038 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:10:08 crc kubenswrapper[4793]: E0217 20:10:08.538250 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6trvs" podUID="0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.548785 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.548832 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.548849 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.548871 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.548889 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:08Z","lastTransitionTime":"2026-02-17T20:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.651044 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.651072 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.651081 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.651094 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.651101 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:08Z","lastTransitionTime":"2026-02-17T20:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.752757 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.752781 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.752788 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.752800 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.752808 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:08Z","lastTransitionTime":"2026-02-17T20:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.855574 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.855627 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.855643 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.855665 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.855707 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:08Z","lastTransitionTime":"2026-02-17T20:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.958345 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.958376 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.958385 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.958397 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:08 crc kubenswrapper[4793]: I0217 20:10:08.958406 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:08Z","lastTransitionTime":"2026-02-17T20:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.040444 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n2fmv_4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2/ovnkube-controller/3.log" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.046044 4793 scope.go:117] "RemoveContainer" containerID="51026d5eb5b88801a1918c9ceb143b71de65a9e53b06f2f35ca80f9bc1b32ddd" Feb 17 20:10:09 crc kubenswrapper[4793]: E0217 20:10:09.046344 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-n2fmv_openshift-ovn-kubernetes(4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" podUID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.061739 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.061795 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.061813 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.061837 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.061854 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:09Z","lastTransitionTime":"2026-02-17T20:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.065039 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf61cc2856b360d3341089d4472c27f8dcf6c7c55831eea511dcf5a931b7a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" 
for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.084122 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a786034-a3c6-4693-965a-3bd39bce6caa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c1911a561802d8e92f91e2ca7448754e32dcb5f18cae0906dc070e1d752be38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rb
ac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c883db680c5e015621e98e5b3c06351d9dd6d33ea28b502241344af8df0fd1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-17T20:10:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.105855 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ztwxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beba5ad7f813f286d6f37a7d6983827115e97d4dbcc5c17c23d9f62a2789faae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c5b745e67f1da59c6b1df97c69bfcbfdc00f4b51a6788751079465a5ce6c177\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T20:09:56Z\\\",\\\"message\\\":\\\"2026-02-17T20:09:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_a776b044-a795-4290-99bd-d80c514046aa\\\\n2026-02-17T20:09:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a776b044-a795-4290-99bd-d80c514046aa to /host/opt/cni/bin/\\\\n2026-02-17T20:09:11Z [verbose] multus-daemon started\\\\n2026-02-17T20:09:11Z [verbose] Readiness Indicator file check\\\\n2026-02-17T20:09:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ndkg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ztwxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.130827 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kpl4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61219b87-834d-490d-bf8e-1657a4081739\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6b9b20c5ff10a449eac5b6396cbcab582a5bbf692d7c9ac880e3cdfa3a63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43cff
5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43cff5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57bb0575dd8e0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b57bb0575dd8e0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:14Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a556ed90653e74034af578772bca81976205e4bba300489eb17d80cdd84989f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a556ed90653e74034af578772bca81976205e4bba300489eb17d80cdd84989f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kpl4r\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.145617 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6b025f5-9ffb-43fb-b86f-ae90ffca8eb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0392a7b5708589df63cc28b06b4fa8d5e853138742986663160183821ad654d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b29ead32fd49e8340218337140e825fcfa624a00b777951da99b492b5567d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b29ead32fd49e8340218337140e825fcfa624a00b777951da99b492b5567d30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.165048 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cad772-a024-49d5-8204-560587b056eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6bf4bf0fcbe5648621dada82af560bd7eac65ce800c8974c59d777f3dc6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57a18cf839c267df5d9a6d343fbc38dbe195fad6d9fff89663d0958a0492a335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe1dc649f7ca667a9fa57e0f96921b449396310fdec0d23dd39d5695ff3c0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cee203033ea02d7ee023522404b8632f64c130e254a84ee96c37222561e4ac1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.165337 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.165426 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.165449 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.165479 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.165502 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:09Z","lastTransitionTime":"2026-02-17T20:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.192946 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.235437 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48cdc9966494a4fb1df14123aaf12571b1d71f9d0349963dd1b3719966abd20c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2071f67c2f1c343d9ad571fe62df1e820675df683aa583fa2563de751f2897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.248911 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.261250 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.267307 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.267373 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.267384 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.267399 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.267408 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:09Z","lastTransitionTime":"2026-02-17T20:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.273369 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cl2v7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b672111-22f9-4dd8-b116-385907278ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39feb0d026694f42ac0934ebdf1749f25d62c1fec2ae68e9af9189fb4cbfe99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2j42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7694ca2ced8356ef8dc44687cd124ebd50f8c2916a8c27619b2ab970bcb31d5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2j42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cl2v7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.284563 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6trvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q9nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q9nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6trvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:09 crc 
kubenswrapper[4793]: I0217 20:10:09.296064 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9pkqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db67d891-29db-4ee4-a70c-624cb9af6677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d09e75a7c6bcf350e2d5833b1a9b6679042aa5d6699c1194c32320c5f0c8d208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv9g8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9pkqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.317592 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3403f83a9b35ceeb1a09a45b62e25e74a10d89d309f00819598790ba6c84a0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e4375a429735c20aa6093586974f36afdd51b82d65270a693d14aefe0b0ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a66eb3dc7a2ab1a3e5e720718416aba285d92566aa472515bc983f0794f48314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61506b0fdf8d54ab45b280e5cf461ab69bc233211c90bc60b339eef643ebb058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b08b2c72ecc14b8e31ad5d7fe12851c01bf38ff98fe406dc26218d892ee9f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09885fdf835300247d4a89f7e2ded3ab350786be47130106f4cc679a5477c451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51026d5eb5b88801a1918c9ceb143b71de65a9e53b06f2f35ca80f9bc1b32ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51026d5eb5b88801a1918c9ceb143b71de65a9e53b06f2f35ca80f9bc1b32ddd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T20:10:07Z\\\",\\\"message\\\":\\\"or/kube-rbac-proxy-crio-crc\\\\nI0217 20:10:07.537304 6870 ovnkube.go:599] Stopped ovnkube\\\\nI0217 20:10:07.537297 6870 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 
requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 20:10:07.537417 6870 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0217 20:10:07.537361 6870 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0217 20:10:07.537512 6870 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:10:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n2fmv_openshift-ovn-kubernetes(4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3161d31a16ce66a75b2928487822f0c1e627deb1564e7f792e96420ebde1ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6af
bcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n2fmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.320502 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.320576 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.320599 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:10:09 crc kubenswrapper[4793]: E0217 20:10:09.320727 4793 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 20:10:09 crc kubenswrapper[4793]: E0217 20:10:09.320769 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 20:11:13.320757948 +0000 UTC m=+148.612456259 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 20:10:09 crc kubenswrapper[4793]: E0217 20:10:09.320810 4793 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 20:10:09 crc kubenswrapper[4793]: E0217 20:10:09.320833 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 20:11:13.32082581 +0000 UTC m=+148.612524121 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 20:10:09 crc kubenswrapper[4793]: E0217 20:10:09.320921 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 20:11:13.320889232 +0000 UTC m=+148.612587573 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.330179 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z9xgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3073cc3b-c430-444d-b751-5e1416feafa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51236e50a5c628e38556b1fc91434f66c7a2e8885a331b358825bcea2c3d9ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z9xgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.350540 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c8d46b-bddf-41fd-bfe9-829889b911c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3405184556f0b16f2610e42360345bb094d79fe3d713f9dec7c6d28a5124851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d8c33489e77add820819b72d67bd6eca73a11e3fd4a99ae4c5c0136f6257c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a42458e37611ca7ec204f13fd4703a5b1e01189a162f0180be137d614a0379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20
:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982055260eb2f38b638b6ee268993806f9a0bf999f21b065659a64144394bc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16cf925bf79279e3553a50dd773c3e6a88f3e8e86837c457c43f1b82c6fd291b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.369964 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.370038 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.370054 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.370079 4793 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.370092 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:09Z","lastTransitionTime":"2026-02-17T20:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.370525 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9208cc24-8e8d-4027-af06-291b3b46ee59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791
fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\
":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f3722f78f83475af38958c1c6a4b0593a4c51be7d8dcbd0d32d3b068dee61b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T20:08:59Z\\\",\\\"message\\\":\\\"W0217 20:08:48.786864 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 20:08:48.787257 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771358928 cert, and key in /tmp/serving-cert-1841715996/serving-signer.crt, /tmp/serving-cert-1841715996/serving-signer.key\\\\nI0217 20:08:49.003324 1 observer_polling.go:159] Starting file observer\\\\nW0217 20:08:49.008336 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 20:08:49.008526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 20:08:49.009879 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1841715996/tls.crt::/tmp/serving-cert-1841715996/tls.key\\\\\\\"\\\\nF0217 20:08:59.398239 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9
c7256df1d868a9c98b53d0ac21e022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.391072 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6af04421-9450-4d4d-b153-947396a3632b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8527c428573648481388a1d99e88a9c73658862866d5f6e36691a0b36868c7d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b96eb4ff437a9dba392060ab762c6d87450a24f0dcc50ef6f90dfa54aa66e22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958dffa9d0fbd97a0ee01a79d9623a298e9ce287b43e1347b4edeb0cfe175ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e25a10a183950aa779c83aae5f4fe88a4e27d2ff7bad883e1e222400a16b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://24e25a10a183950aa779c83aae5f4fe88a4e27d2ff7bad883e1e222400a16b44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.414870 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a159b0be901199a25ed9008e15370b1a4bf1858a4b20bfb4b6b7736f1f3d42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:09Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.421322 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.421393 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:10:09 crc kubenswrapper[4793]: E0217 20:10:09.421625 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 20:10:09 crc kubenswrapper[4793]: E0217 20:10:09.421669 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 20:10:09 crc kubenswrapper[4793]: E0217 20:10:09.421708 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 20:10:09 crc kubenswrapper[4793]: E0217 20:10:09.421788 4793 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 20:10:09 crc kubenswrapper[4793]: E0217 20:10:09.421806 4793 
projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 20:10:09 crc kubenswrapper[4793]: E0217 20:10:09.421727 4793 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 20:10:09 crc kubenswrapper[4793]: E0217 20:10:09.421888 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 20:11:13.421863865 +0000 UTC m=+148.713562166 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 20:10:09 crc kubenswrapper[4793]: E0217 20:10:09.422039 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 20:11:13.421994828 +0000 UTC m=+148.713693309 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.473279 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.473363 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.473387 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.473416 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.473480 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:09Z","lastTransitionTime":"2026-02-17T20:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.522964 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 13:41:25.391809997 +0000 UTC Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.538322 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.538332 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.538352 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:10:09 crc kubenswrapper[4793]: E0217 20:10:09.538493 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 20:10:09 crc kubenswrapper[4793]: E0217 20:10:09.538828 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 20:10:09 crc kubenswrapper[4793]: E0217 20:10:09.539217 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.576278 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.576608 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.576790 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.576950 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.577080 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:09Z","lastTransitionTime":"2026-02-17T20:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.680460 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.680870 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.681116 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.681328 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.681575 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:09Z","lastTransitionTime":"2026-02-17T20:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.784267 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.784354 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.784367 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.784384 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.784398 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:09Z","lastTransitionTime":"2026-02-17T20:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.887195 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.887256 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.887273 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.887299 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.887320 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:09Z","lastTransitionTime":"2026-02-17T20:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.990796 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.990865 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.990887 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.990944 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:09 crc kubenswrapper[4793]: I0217 20:10:09.990966 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:09Z","lastTransitionTime":"2026-02-17T20:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:10 crc kubenswrapper[4793]: I0217 20:10:10.093725 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:10 crc kubenswrapper[4793]: I0217 20:10:10.093834 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:10 crc kubenswrapper[4793]: I0217 20:10:10.093857 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:10 crc kubenswrapper[4793]: I0217 20:10:10.093893 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:10 crc kubenswrapper[4793]: I0217 20:10:10.093917 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:10Z","lastTransitionTime":"2026-02-17T20:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:10 crc kubenswrapper[4793]: I0217 20:10:10.197553 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:10 crc kubenswrapper[4793]: I0217 20:10:10.197616 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:10 crc kubenswrapper[4793]: I0217 20:10:10.197637 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:10 crc kubenswrapper[4793]: I0217 20:10:10.197671 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:10 crc kubenswrapper[4793]: I0217 20:10:10.197731 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:10Z","lastTransitionTime":"2026-02-17T20:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:10 crc kubenswrapper[4793]: I0217 20:10:10.300601 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:10 crc kubenswrapper[4793]: I0217 20:10:10.300738 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:10 crc kubenswrapper[4793]: I0217 20:10:10.300769 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:10 crc kubenswrapper[4793]: I0217 20:10:10.300797 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:10 crc kubenswrapper[4793]: I0217 20:10:10.300832 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:10Z","lastTransitionTime":"2026-02-17T20:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:10 crc kubenswrapper[4793]: I0217 20:10:10.403931 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:10 crc kubenswrapper[4793]: I0217 20:10:10.404003 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:10 crc kubenswrapper[4793]: I0217 20:10:10.404020 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:10 crc kubenswrapper[4793]: I0217 20:10:10.404044 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:10 crc kubenswrapper[4793]: I0217 20:10:10.404061 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:10Z","lastTransitionTime":"2026-02-17T20:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:10 crc kubenswrapper[4793]: I0217 20:10:10.508791 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:10 crc kubenswrapper[4793]: I0217 20:10:10.508888 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:10 crc kubenswrapper[4793]: I0217 20:10:10.508923 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:10 crc kubenswrapper[4793]: I0217 20:10:10.508948 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:10 crc kubenswrapper[4793]: I0217 20:10:10.508967 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:10Z","lastTransitionTime":"2026-02-17T20:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:10:10 crc kubenswrapper[4793]: I0217 20:10:10.523246 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 20:44:11.251697794 +0000 UTC Feb 17 20:10:10 crc kubenswrapper[4793]: I0217 20:10:10.537936 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:10:10 crc kubenswrapper[4793]: E0217 20:10:10.538167 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6trvs" podUID="0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd" Feb 17 20:10:10 crc kubenswrapper[4793]: I0217 20:10:10.611639 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:10 crc kubenswrapper[4793]: I0217 20:10:10.611734 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:10 crc kubenswrapper[4793]: I0217 20:10:10.611756 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:10 crc kubenswrapper[4793]: I0217 20:10:10.611780 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:10 crc kubenswrapper[4793]: I0217 20:10:10.611798 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:10Z","lastTransitionTime":"2026-02-17T20:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:10 crc kubenswrapper[4793]: I0217 20:10:10.714919 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:10 crc kubenswrapper[4793]: I0217 20:10:10.714984 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:10 crc kubenswrapper[4793]: I0217 20:10:10.715012 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:10 crc kubenswrapper[4793]: I0217 20:10:10.715041 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:10 crc kubenswrapper[4793]: I0217 20:10:10.715061 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:10Z","lastTransitionTime":"2026-02-17T20:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:10 crc kubenswrapper[4793]: I0217 20:10:10.818458 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:10 crc kubenswrapper[4793]: I0217 20:10:10.818515 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:10 crc kubenswrapper[4793]: I0217 20:10:10.818538 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:10 crc kubenswrapper[4793]: I0217 20:10:10.818567 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:10 crc kubenswrapper[4793]: I0217 20:10:10.818587 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:10Z","lastTransitionTime":"2026-02-17T20:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:10 crc kubenswrapper[4793]: I0217 20:10:10.922419 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:10 crc kubenswrapper[4793]: I0217 20:10:10.922473 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:10 crc kubenswrapper[4793]: I0217 20:10:10.922489 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:10 crc kubenswrapper[4793]: I0217 20:10:10.922511 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:10 crc kubenswrapper[4793]: I0217 20:10:10.922528 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:10Z","lastTransitionTime":"2026-02-17T20:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:11 crc kubenswrapper[4793]: I0217 20:10:11.025213 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:11 crc kubenswrapper[4793]: I0217 20:10:11.025277 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:11 crc kubenswrapper[4793]: I0217 20:10:11.025295 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:11 crc kubenswrapper[4793]: I0217 20:10:11.025317 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:11 crc kubenswrapper[4793]: I0217 20:10:11.025336 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:11Z","lastTransitionTime":"2026-02-17T20:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:11 crc kubenswrapper[4793]: I0217 20:10:11.129126 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:11 crc kubenswrapper[4793]: I0217 20:10:11.129203 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:11 crc kubenswrapper[4793]: I0217 20:10:11.129228 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:11 crc kubenswrapper[4793]: I0217 20:10:11.129261 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:11 crc kubenswrapper[4793]: I0217 20:10:11.129278 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:11Z","lastTransitionTime":"2026-02-17T20:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:11 crc kubenswrapper[4793]: I0217 20:10:11.232787 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:11 crc kubenswrapper[4793]: I0217 20:10:11.232827 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:11 crc kubenswrapper[4793]: I0217 20:10:11.232838 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:11 crc kubenswrapper[4793]: I0217 20:10:11.232855 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:11 crc kubenswrapper[4793]: I0217 20:10:11.232868 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:11Z","lastTransitionTime":"2026-02-17T20:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:11 crc kubenswrapper[4793]: I0217 20:10:11.335858 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:11 crc kubenswrapper[4793]: I0217 20:10:11.335929 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:11 crc kubenswrapper[4793]: I0217 20:10:11.335950 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:11 crc kubenswrapper[4793]: I0217 20:10:11.335977 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:11 crc kubenswrapper[4793]: I0217 20:10:11.335996 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:11Z","lastTransitionTime":"2026-02-17T20:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:11 crc kubenswrapper[4793]: I0217 20:10:11.438272 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:11 crc kubenswrapper[4793]: I0217 20:10:11.438322 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:11 crc kubenswrapper[4793]: I0217 20:10:11.438339 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:11 crc kubenswrapper[4793]: I0217 20:10:11.438363 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:11 crc kubenswrapper[4793]: I0217 20:10:11.438380 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:11Z","lastTransitionTime":"2026-02-17T20:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:10:11 crc kubenswrapper[4793]: I0217 20:10:11.524107 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 16:50:15.79565355 +0000 UTC Feb 17 20:10:11 crc kubenswrapper[4793]: I0217 20:10:11.537924 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:10:11 crc kubenswrapper[4793]: I0217 20:10:11.537951 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:10:11 crc kubenswrapper[4793]: I0217 20:10:11.538066 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:10:11 crc kubenswrapper[4793]: E0217 20:10:11.538246 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 20:10:11 crc kubenswrapper[4793]: E0217 20:10:11.538363 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 20:10:11 crc kubenswrapper[4793]: E0217 20:10:11.538481 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 20:10:11 crc kubenswrapper[4793]: I0217 20:10:11.540767 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:11 crc kubenswrapper[4793]: I0217 20:10:11.540836 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:11 crc kubenswrapper[4793]: I0217 20:10:11.540864 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:11 crc kubenswrapper[4793]: I0217 20:10:11.540891 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:11 crc kubenswrapper[4793]: I0217 20:10:11.540909 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:11Z","lastTransitionTime":"2026-02-17T20:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:11 crc kubenswrapper[4793]: I0217 20:10:11.643716 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:11 crc kubenswrapper[4793]: I0217 20:10:11.643755 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:11 crc kubenswrapper[4793]: I0217 20:10:11.643773 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:11 crc kubenswrapper[4793]: I0217 20:10:11.643796 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:11 crc kubenswrapper[4793]: I0217 20:10:11.643814 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:11Z","lastTransitionTime":"2026-02-17T20:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:11 crc kubenswrapper[4793]: I0217 20:10:11.747142 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:11 crc kubenswrapper[4793]: I0217 20:10:11.747232 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:11 crc kubenswrapper[4793]: I0217 20:10:11.747255 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:11 crc kubenswrapper[4793]: I0217 20:10:11.747281 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:11 crc kubenswrapper[4793]: I0217 20:10:11.747300 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:11Z","lastTransitionTime":"2026-02-17T20:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:11 crc kubenswrapper[4793]: I0217 20:10:11.850332 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:11 crc kubenswrapper[4793]: I0217 20:10:11.850390 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:11 crc kubenswrapper[4793]: I0217 20:10:11.850412 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:11 crc kubenswrapper[4793]: I0217 20:10:11.850437 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:11 crc kubenswrapper[4793]: I0217 20:10:11.850455 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:11Z","lastTransitionTime":"2026-02-17T20:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:11 crc kubenswrapper[4793]: I0217 20:10:11.954016 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:11 crc kubenswrapper[4793]: I0217 20:10:11.954068 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:11 crc kubenswrapper[4793]: I0217 20:10:11.954079 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:11 crc kubenswrapper[4793]: I0217 20:10:11.954097 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:11 crc kubenswrapper[4793]: I0217 20:10:11.954108 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:11Z","lastTransitionTime":"2026-02-17T20:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:12 crc kubenswrapper[4793]: I0217 20:10:12.057243 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:12 crc kubenswrapper[4793]: I0217 20:10:12.057327 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:12 crc kubenswrapper[4793]: I0217 20:10:12.057351 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:12 crc kubenswrapper[4793]: I0217 20:10:12.057386 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:12 crc kubenswrapper[4793]: I0217 20:10:12.057409 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:12Z","lastTransitionTime":"2026-02-17T20:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:12 crc kubenswrapper[4793]: I0217 20:10:12.160202 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:12 crc kubenswrapper[4793]: I0217 20:10:12.160266 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:12 crc kubenswrapper[4793]: I0217 20:10:12.160283 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:12 crc kubenswrapper[4793]: I0217 20:10:12.160306 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:12 crc kubenswrapper[4793]: I0217 20:10:12.160323 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:12Z","lastTransitionTime":"2026-02-17T20:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:12 crc kubenswrapper[4793]: I0217 20:10:12.263101 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:12 crc kubenswrapper[4793]: I0217 20:10:12.263141 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:12 crc kubenswrapper[4793]: I0217 20:10:12.263152 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:12 crc kubenswrapper[4793]: I0217 20:10:12.263168 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:12 crc kubenswrapper[4793]: I0217 20:10:12.263183 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:12Z","lastTransitionTime":"2026-02-17T20:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:12 crc kubenswrapper[4793]: I0217 20:10:12.366509 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:12 crc kubenswrapper[4793]: I0217 20:10:12.366563 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:12 crc kubenswrapper[4793]: I0217 20:10:12.366575 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:12 crc kubenswrapper[4793]: I0217 20:10:12.366593 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:12 crc kubenswrapper[4793]: I0217 20:10:12.366604 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:12Z","lastTransitionTime":"2026-02-17T20:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:12 crc kubenswrapper[4793]: I0217 20:10:12.469423 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:12 crc kubenswrapper[4793]: I0217 20:10:12.469543 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:12 crc kubenswrapper[4793]: I0217 20:10:12.469568 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:12 crc kubenswrapper[4793]: I0217 20:10:12.469597 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:12 crc kubenswrapper[4793]: I0217 20:10:12.469619 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:12Z","lastTransitionTime":"2026-02-17T20:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:10:12 crc kubenswrapper[4793]: I0217 20:10:12.524597 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 05:17:36.951633257 +0000 UTC Feb 17 20:10:12 crc kubenswrapper[4793]: I0217 20:10:12.538293 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:10:12 crc kubenswrapper[4793]: E0217 20:10:12.538458 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6trvs" podUID="0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd" Feb 17 20:10:12 crc kubenswrapper[4793]: I0217 20:10:12.572945 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:12 crc kubenswrapper[4793]: I0217 20:10:12.573001 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:12 crc kubenswrapper[4793]: I0217 20:10:12.573041 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:12 crc kubenswrapper[4793]: I0217 20:10:12.573074 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:12 crc kubenswrapper[4793]: I0217 20:10:12.573096 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:12Z","lastTransitionTime":"2026-02-17T20:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:12 crc kubenswrapper[4793]: I0217 20:10:12.675894 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:12 crc kubenswrapper[4793]: I0217 20:10:12.675964 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:12 crc kubenswrapper[4793]: I0217 20:10:12.675987 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:12 crc kubenswrapper[4793]: I0217 20:10:12.676016 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:12 crc kubenswrapper[4793]: I0217 20:10:12.676038 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:12Z","lastTransitionTime":"2026-02-17T20:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:12 crc kubenswrapper[4793]: I0217 20:10:12.778009 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:12 crc kubenswrapper[4793]: I0217 20:10:12.778068 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:12 crc kubenswrapper[4793]: I0217 20:10:12.778084 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:12 crc kubenswrapper[4793]: I0217 20:10:12.778100 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:12 crc kubenswrapper[4793]: I0217 20:10:12.778110 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:12Z","lastTransitionTime":"2026-02-17T20:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:12 crc kubenswrapper[4793]: I0217 20:10:12.880631 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:12 crc kubenswrapper[4793]: I0217 20:10:12.880704 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:12 crc kubenswrapper[4793]: I0217 20:10:12.880716 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:12 crc kubenswrapper[4793]: I0217 20:10:12.880729 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:12 crc kubenswrapper[4793]: I0217 20:10:12.880738 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:12Z","lastTransitionTime":"2026-02-17T20:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:12 crc kubenswrapper[4793]: I0217 20:10:12.983557 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:12 crc kubenswrapper[4793]: I0217 20:10:12.983609 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:12 crc kubenswrapper[4793]: I0217 20:10:12.983621 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:12 crc kubenswrapper[4793]: I0217 20:10:12.983639 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:12 crc kubenswrapper[4793]: I0217 20:10:12.983653 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:12Z","lastTransitionTime":"2026-02-17T20:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.086021 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.086056 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.086064 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.086078 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.086092 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:13Z","lastTransitionTime":"2026-02-17T20:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.188538 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.188652 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.188670 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.188726 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.188745 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:13Z","lastTransitionTime":"2026-02-17T20:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.291307 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.291364 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.291381 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.291403 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.291420 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:13Z","lastTransitionTime":"2026-02-17T20:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.394769 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.394843 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.394866 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.394898 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.394923 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:13Z","lastTransitionTime":"2026-02-17T20:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.412207 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.412250 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.412265 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.412281 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.412294 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:13Z","lastTransitionTime":"2026-02-17T20:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:13 crc kubenswrapper[4793]: E0217 20:10:13.427082 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9899edd8-40d9-4d30-b914-dcde4645fb8b\\\",\\\"systemUUID\\\":\\\"6761f953-4396-4ffc-8ccb-0bad99a4cc8e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:13Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.430548 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.430667 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.430721 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.430812 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.430863 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:13Z","lastTransitionTime":"2026-02-17T20:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:13 crc kubenswrapper[4793]: E0217 20:10:13.449353 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9899edd8-40d9-4d30-b914-dcde4645fb8b\\\",\\\"systemUUID\\\":\\\"6761f953-4396-4ffc-8ccb-0bad99a4cc8e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:13Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.453431 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.453470 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.453485 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.453506 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.453521 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:13Z","lastTransitionTime":"2026-02-17T20:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:13 crc kubenswrapper[4793]: E0217 20:10:13.467817 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9899edd8-40d9-4d30-b914-dcde4645fb8b\\\",\\\"systemUUID\\\":\\\"6761f953-4396-4ffc-8ccb-0bad99a4cc8e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:13Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.471250 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.471277 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.471287 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.471300 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.471311 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:13Z","lastTransitionTime":"2026-02-17T20:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:13 crc kubenswrapper[4793]: E0217 20:10:13.486064 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9899edd8-40d9-4d30-b914-dcde4645fb8b\\\",\\\"systemUUID\\\":\\\"6761f953-4396-4ffc-8ccb-0bad99a4cc8e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:13Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.489651 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.489717 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.489735 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.489757 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.489772 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:13Z","lastTransitionTime":"2026-02-17T20:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:13 crc kubenswrapper[4793]: E0217 20:10:13.503594 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9899edd8-40d9-4d30-b914-dcde4645fb8b\\\",\\\"systemUUID\\\":\\\"6761f953-4396-4ffc-8ccb-0bad99a4cc8e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:13Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:13 crc kubenswrapper[4793]: E0217 20:10:13.503810 4793 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.505293 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.505329 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.505341 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.505358 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.505369 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:13Z","lastTransitionTime":"2026-02-17T20:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.525675 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 02:13:20.916716199 +0000 UTC Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.538116 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.538191 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.538211 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:10:13 crc kubenswrapper[4793]: E0217 20:10:13.538283 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 20:10:13 crc kubenswrapper[4793]: E0217 20:10:13.538462 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 20:10:13 crc kubenswrapper[4793]: E0217 20:10:13.538526 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.608133 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.608393 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.608424 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.608452 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.608472 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:13Z","lastTransitionTime":"2026-02-17T20:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.711640 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.711772 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.711789 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.711813 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.711830 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:13Z","lastTransitionTime":"2026-02-17T20:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.814837 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.814910 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.814930 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.814954 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.814973 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:13Z","lastTransitionTime":"2026-02-17T20:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.916944 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.916983 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.916994 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.917008 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:13 crc kubenswrapper[4793]: I0217 20:10:13.917018 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:13Z","lastTransitionTime":"2026-02-17T20:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:14 crc kubenswrapper[4793]: I0217 20:10:14.019640 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:14 crc kubenswrapper[4793]: I0217 20:10:14.019732 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:14 crc kubenswrapper[4793]: I0217 20:10:14.019759 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:14 crc kubenswrapper[4793]: I0217 20:10:14.019788 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:14 crc kubenswrapper[4793]: I0217 20:10:14.019811 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:14Z","lastTransitionTime":"2026-02-17T20:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:14 crc kubenswrapper[4793]: I0217 20:10:14.122731 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:14 crc kubenswrapper[4793]: I0217 20:10:14.122797 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:14 crc kubenswrapper[4793]: I0217 20:10:14.122821 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:14 crc kubenswrapper[4793]: I0217 20:10:14.122846 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:14 crc kubenswrapper[4793]: I0217 20:10:14.122865 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:14Z","lastTransitionTime":"2026-02-17T20:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:14 crc kubenswrapper[4793]: I0217 20:10:14.226273 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:14 crc kubenswrapper[4793]: I0217 20:10:14.226324 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:14 crc kubenswrapper[4793]: I0217 20:10:14.226343 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:14 crc kubenswrapper[4793]: I0217 20:10:14.226367 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:14 crc kubenswrapper[4793]: I0217 20:10:14.226385 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:14Z","lastTransitionTime":"2026-02-17T20:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:14 crc kubenswrapper[4793]: I0217 20:10:14.329203 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:14 crc kubenswrapper[4793]: I0217 20:10:14.329290 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:14 crc kubenswrapper[4793]: I0217 20:10:14.329312 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:14 crc kubenswrapper[4793]: I0217 20:10:14.329335 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:14 crc kubenswrapper[4793]: I0217 20:10:14.329356 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:14Z","lastTransitionTime":"2026-02-17T20:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:14 crc kubenswrapper[4793]: I0217 20:10:14.432646 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:14 crc kubenswrapper[4793]: I0217 20:10:14.432738 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:14 crc kubenswrapper[4793]: I0217 20:10:14.432758 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:14 crc kubenswrapper[4793]: I0217 20:10:14.432782 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:14 crc kubenswrapper[4793]: I0217 20:10:14.432800 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:14Z","lastTransitionTime":"2026-02-17T20:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:14 crc kubenswrapper[4793]: I0217 20:10:14.526209 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 14:34:31.020462398 +0000 UTC Feb 17 20:10:14 crc kubenswrapper[4793]: I0217 20:10:14.535603 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:14 crc kubenswrapper[4793]: I0217 20:10:14.535669 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:14 crc kubenswrapper[4793]: I0217 20:10:14.535716 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:14 crc kubenswrapper[4793]: I0217 20:10:14.535744 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:14 crc kubenswrapper[4793]: I0217 20:10:14.535764 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:14Z","lastTransitionTime":"2026-02-17T20:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:10:14 crc kubenswrapper[4793]: I0217 20:10:14.537995 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:10:14 crc kubenswrapper[4793]: E0217 20:10:14.538375 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6trvs" podUID="0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd" Feb 17 20:10:14 crc kubenswrapper[4793]: I0217 20:10:14.638810 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:14 crc kubenswrapper[4793]: I0217 20:10:14.638869 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:14 crc kubenswrapper[4793]: I0217 20:10:14.638886 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:14 crc kubenswrapper[4793]: I0217 20:10:14.638912 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:14 crc kubenswrapper[4793]: I0217 20:10:14.638929 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:14Z","lastTransitionTime":"2026-02-17T20:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:14 crc kubenswrapper[4793]: I0217 20:10:14.741166 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:14 crc kubenswrapper[4793]: I0217 20:10:14.741232 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:14 crc kubenswrapper[4793]: I0217 20:10:14.741251 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:14 crc kubenswrapper[4793]: I0217 20:10:14.741275 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:14 crc kubenswrapper[4793]: I0217 20:10:14.741294 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:14Z","lastTransitionTime":"2026-02-17T20:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:14 crc kubenswrapper[4793]: I0217 20:10:14.843858 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:14 crc kubenswrapper[4793]: I0217 20:10:14.843899 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:14 crc kubenswrapper[4793]: I0217 20:10:14.843911 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:14 crc kubenswrapper[4793]: I0217 20:10:14.843928 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:14 crc kubenswrapper[4793]: I0217 20:10:14.843942 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:14Z","lastTransitionTime":"2026-02-17T20:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:14 crc kubenswrapper[4793]: I0217 20:10:14.945961 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:14 crc kubenswrapper[4793]: I0217 20:10:14.946008 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:14 crc kubenswrapper[4793]: I0217 20:10:14.946019 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:14 crc kubenswrapper[4793]: I0217 20:10:14.946039 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:14 crc kubenswrapper[4793]: I0217 20:10:14.946052 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:14Z","lastTransitionTime":"2026-02-17T20:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.048514 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.048587 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.048613 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.048643 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.048666 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:15Z","lastTransitionTime":"2026-02-17T20:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.152196 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.152260 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.152279 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.152307 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.152325 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:15Z","lastTransitionTime":"2026-02-17T20:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.255546 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.255604 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.255620 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.255644 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.255663 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:15Z","lastTransitionTime":"2026-02-17T20:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.358953 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.359017 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.359030 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.359051 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.359063 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:15Z","lastTransitionTime":"2026-02-17T20:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.461609 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.461656 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.461671 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.461751 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.461778 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:15Z","lastTransitionTime":"2026-02-17T20:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.526735 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 12:45:01.54636812 +0000 UTC Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.538098 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.538143 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:10:15 crc kubenswrapper[4793]: E0217 20:10:15.538317 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.538331 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:10:15 crc kubenswrapper[4793]: E0217 20:10:15.538416 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 20:10:15 crc kubenswrapper[4793]: E0217 20:10:15.538493 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.555275 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ztwxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2b13cca-b775-4fc5-8ad8-41bfd70c857c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beba5ad7f813f286d6f37a7d6983827115e97d4dbcc5c17c23d9f62a2789faae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c5b745e67f1da59c6b1df97c69bfcbfdc00f4b51a6788751079465a5ce6c177\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T20:09:56Z\\\",\\\"message\\\":\\\"2026-02-17T20:09:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_a776b044-a795-4290-99bd-d80c514046aa\\\\n2026-02-17T20:09:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a776b044-a795-4290-99bd-d80c514046aa to /host/opt/cni/bin/\\\\n2026-02-17T20:09:11Z [verbose] multus-daemon started\\\\n2026-02-17T20:09:11Z [verbose] Readiness Indicator file check\\\\n2026-02-17T20:09:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ndkg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ztwxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:15Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.564448 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.564544 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.564563 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.564667 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.564749 4793 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:15Z","lastTransitionTime":"2026-02-17T20:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.570207 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bf61cc2856b360d3341089d4472c27f8dcf6c7c55831eea511dcf5a931b7a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:15Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.585669 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a786034-a3c6-4693-965a-3bd39bce6caa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c1911a561802d8e92f91e2ca7448754e32dcb5f18cae0906dc070e1d752be38\\\",\\\"image\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c883db680c5e015621e98e5b3c06351d9dd6d33ea28b502241344af8df0fd1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4f9fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-jnwtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:15Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.602587 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:15Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.621149 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48cdc9966494a4fb1df14123aaf12571b1d71f9d0349963dd1b3719966abd20c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2071f67c2f1c343d9ad571fe62df1e820675df683aa583fa2563de751f2897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:15Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.638772 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kpl4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61219b87-834d-490d-bf8e-1657a4081739\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f6b9b20c5ff10a449eac5b6396cbcab582a5bbf692d7c9ac880e3cdfa3a63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0354e8af9e4fc041d4970e36622a28d09cdf3b13d0e62f3a2e26414a02b8ed11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e6d03c38b25d9455f79d037875ed2da8ab57d1e5c9a8d15e57c76f1dcc0ab6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eae6571b15e63fc75dac86120ba1e359a4763968fcd5e12159b648c13166a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43cff
5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43cff5a593c18a304d3959e77f0c737c865828600035104004bfd2cb82494937\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57bb0575dd8e0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b57bb0575dd8e0d7a4b80733b568105cd893989bb9ffc94c4fefe90a2dfe06b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:14Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a556ed90653e74034af578772bca81976205e4bba300489eb17d80cdd84989f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a556ed90653e74034af578772bca81976205e4bba300489eb17d80cdd84989f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d6vm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kpl4r\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:15Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.658734 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6b025f5-9ffb-43fb-b86f-ae90ffca8eb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0392a7b5708589df63cc28b06b4fa8d5e853138742986663160183821ad654d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b29ead32fd49e8340218337140e825fcfa624a00b777951da99b492b5567d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b29ead32fd49e8340218337140e825fcfa624a00b777951da99b492b5567d30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:15Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.667258 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.667330 4793 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.667352 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.667375 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.667389 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:15Z","lastTransitionTime":"2026-02-17T20:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.680845 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18cad772-a024-49d5-8204-560587b056eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df6bf4bf0fcbe5648621dada82af560bd7eac65ce800c8974c59d777f3dc6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57a18cf839c267df5d9a6d343fbc38dbe195fad6d9fff89663d0958a0492a335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe1dc649f7ca667a9fa57e0f96921b449396310fdec0d23dd39d5695ff3c0e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cee203033ea02d7ee023522404b8632f64c130e254a84ee96c37222561e4ac1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:15Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.700568 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cl2v7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b672111-22f9-4dd8-b116-385907278ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39feb0d026694f42ac0934ebdf1749f25d62c1fec2ae68e9af9189fb4cbfe99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2j42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7694ca2ced8356ef8dc44687cd124ebd50f8c
2916a8c27619b2ab970bcb31d5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2j42l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cl2v7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:15Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.720229 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6trvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q9nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q9nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6trvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:15Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:15 crc 
kubenswrapper[4793]: I0217 20:10:15.733609 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:15Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.746742 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:15Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.759116 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6af04421-9450-4d4d-b153-947396a3632b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8527c428573648481388a1d99e88a9c73658862866d5f6e36691a0b36868c7d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b96eb4ff437a9dba392060ab762c6d87450a24f0dcc50ef6f90dfa54aa66e22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://958dffa9d0fbd97a0ee01a79d9623a298e9ce287b43e1347b4edeb0cfe175ad9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24e25a10a183950aa779c83aae5f4fe88a4e27d2ff7bad883e1e222400a16b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://24e25a10a183950aa779c83aae5f4fe88a4e27d2ff7bad883e1e222400a16b44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:15Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.771035 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.771092 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.771115 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.771146 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.771168 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:15Z","lastTransitionTime":"2026-02-17T20:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.773596 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a159b0be901199a25ed9008e15370b1a4bf1858a4b20bfb4b6b7736f1f3d42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:15Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.787103 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9pkqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db67d891-29db-4ee4-a70c-624cb9af6677\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d09e75a7c6bcf350e2d5833b1a9b6679042aa5d6699c1194c32320c5f0c8d208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv9g8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9pkqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:15Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.809027 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3403f83a9b35ceeb1a09a45b62e25e74a10d89d309f00819598790ba6c84a0a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e4375a429735c20aa6093586974f36afdd51b82d65270a693d14aefe0b0ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a66eb3dc7a2ab1a3e5e720718416aba285d92566aa472515bc983f0794f48314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61506b0fdf8d54ab45b280e5cf461ab69bc233211c90bc60b339eef643ebb058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b08b2c72ecc14b8e31ad5d7fe12851c01bf38ff98fe406dc26218d892ee9f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09885fdf835300247d4a89f7e2ded3ab350786be47130106f4cc679a5477c451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51026d5eb5b88801a1918c9ceb143b71de65a9e53b06f2f35ca80f9bc1b32ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51026d5eb5b88801a1918c9ceb143b71de65a9e53b06f2f35ca80f9bc1b32ddd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T20:10:07Z\\\",\\\"message\\\":\\\"or/kube-rbac-proxy-crio-crc\\\\nI0217 20:10:07.537304 6870 ovnkube.go:599] Stopped ovnkube\\\\nI0217 20:10:07.537297 6870 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]} options:{GoMap:map[iface-id-ver:9d751cbb-f2e2-430d-9754-c882a5e924a5 
requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:3b 10.217.0.59]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {960d98b2-dc64-4e93-a4b6-9b19847af71e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 20:10:07.537417 6870 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0217 20:10:07.537361 6870 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0217 20:10:07.537512 6870 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:10:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n2fmv_openshift-ovn-kubernetes(4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3161d31a16ce66a75b2928487822f0c1e627deb1564e7f792e96420ebde1ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://761392c158ecd7b6af
bcad4efd90a12bf10b1279148bcaf2f51505483aff300b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4qp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n2fmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:15Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.820640 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z9xgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3073cc3b-c430-444d-b751-5e1416feafa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51236e50a5c628e38556b1fc91434f66c7a2e8885a331b358825bcea2c3d9ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:09:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z9xgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:15Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.844112 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c8d46b-bddf-41fd-bfe9-829889b911c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3405184556f0b16f2610e42360345bb094d79fe3d713f9dec7c6d28a5124851e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d8c33489e77add820819b72d67bd6eca73a11e3fd4a99ae4c5c0136f6257c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a42458e37611ca7ec204f13fd4703a5b1e01189a162f0180be137d614a0379\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20
:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982055260eb2f38b638b6ee268993806f9a0bf999f21b065659a64144394bc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16cf925bf79279e3553a50dd773c3e6a88f3e8e86837c457c43f1b82c6fd291b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf3858594c7c3d944d3b5d06f8e9203a9d6a1d83a00966f1790092bc1d562f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2a9a42faa5c711500021559023be50e1150853f00c775bedd12c850e4625e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9c96b743da9a0878f1e0f373de9adb031f3587bbfa08e97bba86bc5ccad0e01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:15Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.864839 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9208cc24-8e8d-4027-af06-291b3b46ee59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T20:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6f3722f78f83475af38958c1c6a4b0593a4c51be7d8dcbd0d32d3b068dee61b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T20:08:59Z\\\"
,\\\"message\\\":\\\"W0217 20:08:48.786864 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 20:08:48.787257 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771358928 cert, and key in /tmp/serving-cert-1841715996/serving-signer.crt, /tmp/serving-cert-1841715996/serving-signer.key\\\\nI0217 20:08:49.003324 1 observer_polling.go:159] Starting file observer\\\\nW0217 20:08:49.008336 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 20:08:49.008526 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 20:08:49.009879 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1841715996/tls.crt::/tmp/serving-cert-1841715996/tls.key\\\\\\\"\\\\nF0217 20:08:59.398239 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:08:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:08:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-17T20:08:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T20:08:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:15Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.873979 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.874023 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.874039 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.874060 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.874075 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:15Z","lastTransitionTime":"2026-02-17T20:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.977107 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.977158 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.977175 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.977196 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:15 crc kubenswrapper[4793]: I0217 20:10:15.977213 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:15Z","lastTransitionTime":"2026-02-17T20:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:16 crc kubenswrapper[4793]: I0217 20:10:16.080486 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:16 crc kubenswrapper[4793]: I0217 20:10:16.080556 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:16 crc kubenswrapper[4793]: I0217 20:10:16.080580 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:16 crc kubenswrapper[4793]: I0217 20:10:16.080609 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:16 crc kubenswrapper[4793]: I0217 20:10:16.080631 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:16Z","lastTransitionTime":"2026-02-17T20:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:16 crc kubenswrapper[4793]: I0217 20:10:16.184049 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:16 crc kubenswrapper[4793]: I0217 20:10:16.184115 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:16 crc kubenswrapper[4793]: I0217 20:10:16.184138 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:16 crc kubenswrapper[4793]: I0217 20:10:16.184166 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:16 crc kubenswrapper[4793]: I0217 20:10:16.184189 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:16Z","lastTransitionTime":"2026-02-17T20:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:16 crc kubenswrapper[4793]: I0217 20:10:16.286196 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:16 crc kubenswrapper[4793]: I0217 20:10:16.286238 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:16 crc kubenswrapper[4793]: I0217 20:10:16.286251 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:16 crc kubenswrapper[4793]: I0217 20:10:16.286268 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:16 crc kubenswrapper[4793]: I0217 20:10:16.286280 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:16Z","lastTransitionTime":"2026-02-17T20:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:16 crc kubenswrapper[4793]: I0217 20:10:16.388885 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:16 crc kubenswrapper[4793]: I0217 20:10:16.388951 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:16 crc kubenswrapper[4793]: I0217 20:10:16.388965 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:16 crc kubenswrapper[4793]: I0217 20:10:16.388981 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:16 crc kubenswrapper[4793]: I0217 20:10:16.388993 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:16Z","lastTransitionTime":"2026-02-17T20:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:16 crc kubenswrapper[4793]: I0217 20:10:16.491857 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:16 crc kubenswrapper[4793]: I0217 20:10:16.491905 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:16 crc kubenswrapper[4793]: I0217 20:10:16.491918 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:16 crc kubenswrapper[4793]: I0217 20:10:16.491936 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:16 crc kubenswrapper[4793]: I0217 20:10:16.491947 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:16Z","lastTransitionTime":"2026-02-17T20:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:10:16 crc kubenswrapper[4793]: I0217 20:10:16.527054 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 20:06:13.566627972 +0000 UTC Feb 17 20:10:16 crc kubenswrapper[4793]: I0217 20:10:16.538537 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:10:16 crc kubenswrapper[4793]: E0217 20:10:16.538746 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6trvs" podUID="0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd" Feb 17 20:10:16 crc kubenswrapper[4793]: I0217 20:10:16.594785 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:16 crc kubenswrapper[4793]: I0217 20:10:16.594835 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:16 crc kubenswrapper[4793]: I0217 20:10:16.594847 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:16 crc kubenswrapper[4793]: I0217 20:10:16.594864 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:16 crc kubenswrapper[4793]: I0217 20:10:16.594877 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:16Z","lastTransitionTime":"2026-02-17T20:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:16 crc kubenswrapper[4793]: I0217 20:10:16.698362 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:16 crc kubenswrapper[4793]: I0217 20:10:16.698416 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:16 crc kubenswrapper[4793]: I0217 20:10:16.698432 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:16 crc kubenswrapper[4793]: I0217 20:10:16.698454 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:16 crc kubenswrapper[4793]: I0217 20:10:16.698469 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:16Z","lastTransitionTime":"2026-02-17T20:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:16 crc kubenswrapper[4793]: I0217 20:10:16.802053 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:16 crc kubenswrapper[4793]: I0217 20:10:16.802087 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:16 crc kubenswrapper[4793]: I0217 20:10:16.802100 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:16 crc kubenswrapper[4793]: I0217 20:10:16.802118 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:16 crc kubenswrapper[4793]: I0217 20:10:16.802129 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:16Z","lastTransitionTime":"2026-02-17T20:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:16 crc kubenswrapper[4793]: I0217 20:10:16.903551 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:16 crc kubenswrapper[4793]: I0217 20:10:16.903583 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:16 crc kubenswrapper[4793]: I0217 20:10:16.903593 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:16 crc kubenswrapper[4793]: I0217 20:10:16.903606 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:16 crc kubenswrapper[4793]: I0217 20:10:16.903615 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:16Z","lastTransitionTime":"2026-02-17T20:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:17 crc kubenswrapper[4793]: I0217 20:10:17.006387 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:17 crc kubenswrapper[4793]: I0217 20:10:17.006456 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:17 crc kubenswrapper[4793]: I0217 20:10:17.006476 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:17 crc kubenswrapper[4793]: I0217 20:10:17.006500 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:17 crc kubenswrapper[4793]: I0217 20:10:17.006519 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:17Z","lastTransitionTime":"2026-02-17T20:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:17 crc kubenswrapper[4793]: I0217 20:10:17.109549 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:17 crc kubenswrapper[4793]: I0217 20:10:17.109594 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:17 crc kubenswrapper[4793]: I0217 20:10:17.109608 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:17 crc kubenswrapper[4793]: I0217 20:10:17.109624 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:17 crc kubenswrapper[4793]: I0217 20:10:17.109635 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:17Z","lastTransitionTime":"2026-02-17T20:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:17 crc kubenswrapper[4793]: I0217 20:10:17.212468 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:17 crc kubenswrapper[4793]: I0217 20:10:17.212531 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:17 crc kubenswrapper[4793]: I0217 20:10:17.212545 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:17 crc kubenswrapper[4793]: I0217 20:10:17.212562 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:17 crc kubenswrapper[4793]: I0217 20:10:17.212574 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:17Z","lastTransitionTime":"2026-02-17T20:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:17 crc kubenswrapper[4793]: I0217 20:10:17.316136 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:17 crc kubenswrapper[4793]: I0217 20:10:17.316213 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:17 crc kubenswrapper[4793]: I0217 20:10:17.316236 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:17 crc kubenswrapper[4793]: I0217 20:10:17.316268 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:17 crc kubenswrapper[4793]: I0217 20:10:17.316290 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:17Z","lastTransitionTime":"2026-02-17T20:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:17 crc kubenswrapper[4793]: I0217 20:10:17.419046 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:17 crc kubenswrapper[4793]: I0217 20:10:17.419104 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:17 crc kubenswrapper[4793]: I0217 20:10:17.419135 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:17 crc kubenswrapper[4793]: I0217 20:10:17.419159 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:17 crc kubenswrapper[4793]: I0217 20:10:17.419175 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:17Z","lastTransitionTime":"2026-02-17T20:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:17 crc kubenswrapper[4793]: I0217 20:10:17.522335 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:17 crc kubenswrapper[4793]: I0217 20:10:17.522394 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:17 crc kubenswrapper[4793]: I0217 20:10:17.522411 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:17 crc kubenswrapper[4793]: I0217 20:10:17.522432 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:17 crc kubenswrapper[4793]: I0217 20:10:17.522450 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:17Z","lastTransitionTime":"2026-02-17T20:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:10:17 crc kubenswrapper[4793]: I0217 20:10:17.527795 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 03:40:44.056334991 +0000 UTC Feb 17 20:10:17 crc kubenswrapper[4793]: I0217 20:10:17.538031 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:10:17 crc kubenswrapper[4793]: E0217 20:10:17.538288 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 20:10:17 crc kubenswrapper[4793]: I0217 20:10:17.538778 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:10:17 crc kubenswrapper[4793]: I0217 20:10:17.538792 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:10:17 crc kubenswrapper[4793]: E0217 20:10:17.539076 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 20:10:17 crc kubenswrapper[4793]: E0217 20:10:17.539365 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 20:10:17 crc kubenswrapper[4793]: I0217 20:10:17.625520 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:17 crc kubenswrapper[4793]: I0217 20:10:17.625588 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:17 crc kubenswrapper[4793]: I0217 20:10:17.625606 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:17 crc kubenswrapper[4793]: I0217 20:10:17.625628 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:17 crc kubenswrapper[4793]: I0217 20:10:17.625645 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:17Z","lastTransitionTime":"2026-02-17T20:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:17 crc kubenswrapper[4793]: I0217 20:10:17.729179 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:17 crc kubenswrapper[4793]: I0217 20:10:17.729249 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:17 crc kubenswrapper[4793]: I0217 20:10:17.729266 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:17 crc kubenswrapper[4793]: I0217 20:10:17.729299 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:17 crc kubenswrapper[4793]: I0217 20:10:17.729325 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:17Z","lastTransitionTime":"2026-02-17T20:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:17 crc kubenswrapper[4793]: I0217 20:10:17.832134 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:17 crc kubenswrapper[4793]: I0217 20:10:17.832242 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:17 crc kubenswrapper[4793]: I0217 20:10:17.832263 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:17 crc kubenswrapper[4793]: I0217 20:10:17.832625 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:17 crc kubenswrapper[4793]: I0217 20:10:17.832861 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:17Z","lastTransitionTime":"2026-02-17T20:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:17 crc kubenswrapper[4793]: I0217 20:10:17.935908 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:17 crc kubenswrapper[4793]: I0217 20:10:17.935958 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:17 crc kubenswrapper[4793]: I0217 20:10:17.935974 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:17 crc kubenswrapper[4793]: I0217 20:10:17.936013 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:17 crc kubenswrapper[4793]: I0217 20:10:17.936029 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:17Z","lastTransitionTime":"2026-02-17T20:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:18 crc kubenswrapper[4793]: I0217 20:10:18.039285 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:18 crc kubenswrapper[4793]: I0217 20:10:18.039326 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:18 crc kubenswrapper[4793]: I0217 20:10:18.039337 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:18 crc kubenswrapper[4793]: I0217 20:10:18.039352 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:18 crc kubenswrapper[4793]: I0217 20:10:18.039365 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:18Z","lastTransitionTime":"2026-02-17T20:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:18 crc kubenswrapper[4793]: I0217 20:10:18.142104 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:18 crc kubenswrapper[4793]: I0217 20:10:18.142242 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:18 crc kubenswrapper[4793]: I0217 20:10:18.142272 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:18 crc kubenswrapper[4793]: I0217 20:10:18.142300 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:18 crc kubenswrapper[4793]: I0217 20:10:18.142323 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:18Z","lastTransitionTime":"2026-02-17T20:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:18 crc kubenswrapper[4793]: I0217 20:10:18.245421 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:18 crc kubenswrapper[4793]: I0217 20:10:18.245501 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:18 crc kubenswrapper[4793]: I0217 20:10:18.245519 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:18 crc kubenswrapper[4793]: I0217 20:10:18.245544 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:18 crc kubenswrapper[4793]: I0217 20:10:18.245563 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:18Z","lastTransitionTime":"2026-02-17T20:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:18 crc kubenswrapper[4793]: I0217 20:10:18.349047 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:18 crc kubenswrapper[4793]: I0217 20:10:18.349109 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:18 crc kubenswrapper[4793]: I0217 20:10:18.349127 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:18 crc kubenswrapper[4793]: I0217 20:10:18.349153 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:18 crc kubenswrapper[4793]: I0217 20:10:18.349175 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:18Z","lastTransitionTime":"2026-02-17T20:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:18 crc kubenswrapper[4793]: I0217 20:10:18.452299 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:18 crc kubenswrapper[4793]: I0217 20:10:18.452364 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:18 crc kubenswrapper[4793]: I0217 20:10:18.452386 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:18 crc kubenswrapper[4793]: I0217 20:10:18.452410 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:18 crc kubenswrapper[4793]: I0217 20:10:18.452427 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:18Z","lastTransitionTime":"2026-02-17T20:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:10:18 crc kubenswrapper[4793]: I0217 20:10:18.528308 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 17:03:51.281213116 +0000 UTC Feb 17 20:10:18 crc kubenswrapper[4793]: I0217 20:10:18.538719 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:10:18 crc kubenswrapper[4793]: E0217 20:10:18.538904 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6trvs" podUID="0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd" Feb 17 20:10:18 crc kubenswrapper[4793]: I0217 20:10:18.556792 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:18 crc kubenswrapper[4793]: I0217 20:10:18.556842 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:18 crc kubenswrapper[4793]: I0217 20:10:18.556860 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:18 crc kubenswrapper[4793]: I0217 20:10:18.556882 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:18 crc kubenswrapper[4793]: I0217 20:10:18.556900 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:18Z","lastTransitionTime":"2026-02-17T20:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:18 crc kubenswrapper[4793]: I0217 20:10:18.660812 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:18 crc kubenswrapper[4793]: I0217 20:10:18.660884 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:18 crc kubenswrapper[4793]: I0217 20:10:18.660902 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:18 crc kubenswrapper[4793]: I0217 20:10:18.660929 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:18 crc kubenswrapper[4793]: I0217 20:10:18.660948 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:18Z","lastTransitionTime":"2026-02-17T20:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:18 crc kubenswrapper[4793]: I0217 20:10:18.763633 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:18 crc kubenswrapper[4793]: I0217 20:10:18.763734 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:18 crc kubenswrapper[4793]: I0217 20:10:18.763763 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:18 crc kubenswrapper[4793]: I0217 20:10:18.763794 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:18 crc kubenswrapper[4793]: I0217 20:10:18.763815 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:18Z","lastTransitionTime":"2026-02-17T20:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:18 crc kubenswrapper[4793]: I0217 20:10:18.867295 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:18 crc kubenswrapper[4793]: I0217 20:10:18.867357 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:18 crc kubenswrapper[4793]: I0217 20:10:18.867374 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:18 crc kubenswrapper[4793]: I0217 20:10:18.867397 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:18 crc kubenswrapper[4793]: I0217 20:10:18.867413 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:18Z","lastTransitionTime":"2026-02-17T20:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:18 crc kubenswrapper[4793]: I0217 20:10:18.970579 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:18 crc kubenswrapper[4793]: I0217 20:10:18.970639 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:18 crc kubenswrapper[4793]: I0217 20:10:18.970659 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:18 crc kubenswrapper[4793]: I0217 20:10:18.970683 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:18 crc kubenswrapper[4793]: I0217 20:10:18.970741 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:18Z","lastTransitionTime":"2026-02-17T20:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:19 crc kubenswrapper[4793]: I0217 20:10:19.073229 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:19 crc kubenswrapper[4793]: I0217 20:10:19.073320 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:19 crc kubenswrapper[4793]: I0217 20:10:19.073344 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:19 crc kubenswrapper[4793]: I0217 20:10:19.073373 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:19 crc kubenswrapper[4793]: I0217 20:10:19.073394 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:19Z","lastTransitionTime":"2026-02-17T20:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:19 crc kubenswrapper[4793]: I0217 20:10:19.175927 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:19 crc kubenswrapper[4793]: I0217 20:10:19.175976 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:19 crc kubenswrapper[4793]: I0217 20:10:19.175996 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:19 crc kubenswrapper[4793]: I0217 20:10:19.176031 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:19 crc kubenswrapper[4793]: I0217 20:10:19.176048 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:19Z","lastTransitionTime":"2026-02-17T20:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:19 crc kubenswrapper[4793]: I0217 20:10:19.279394 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:19 crc kubenswrapper[4793]: I0217 20:10:19.279455 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:19 crc kubenswrapper[4793]: I0217 20:10:19.279494 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:19 crc kubenswrapper[4793]: I0217 20:10:19.279525 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:19 crc kubenswrapper[4793]: I0217 20:10:19.279545 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:19Z","lastTransitionTime":"2026-02-17T20:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:19 crc kubenswrapper[4793]: I0217 20:10:19.382924 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:19 crc kubenswrapper[4793]: I0217 20:10:19.383004 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:19 crc kubenswrapper[4793]: I0217 20:10:19.383022 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:19 crc kubenswrapper[4793]: I0217 20:10:19.383046 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:19 crc kubenswrapper[4793]: I0217 20:10:19.383064 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:19Z","lastTransitionTime":"2026-02-17T20:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:19 crc kubenswrapper[4793]: I0217 20:10:19.486283 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:19 crc kubenswrapper[4793]: I0217 20:10:19.486397 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:19 crc kubenswrapper[4793]: I0217 20:10:19.486419 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:19 crc kubenswrapper[4793]: I0217 20:10:19.486446 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:19 crc kubenswrapper[4793]: I0217 20:10:19.486466 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:19Z","lastTransitionTime":"2026-02-17T20:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:10:19 crc kubenswrapper[4793]: I0217 20:10:19.529385 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 06:27:32.3799146 +0000 UTC Feb 17 20:10:19 crc kubenswrapper[4793]: I0217 20:10:19.537910 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:10:19 crc kubenswrapper[4793]: I0217 20:10:19.538042 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:10:19 crc kubenswrapper[4793]: I0217 20:10:19.538133 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:10:19 crc kubenswrapper[4793]: E0217 20:10:19.538132 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 20:10:19 crc kubenswrapper[4793]: E0217 20:10:19.538281 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 20:10:19 crc kubenswrapper[4793]: E0217 20:10:19.538408 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 20:10:19 crc kubenswrapper[4793]: I0217 20:10:19.589769 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:19 crc kubenswrapper[4793]: I0217 20:10:19.589840 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:19 crc kubenswrapper[4793]: I0217 20:10:19.589856 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:19 crc kubenswrapper[4793]: I0217 20:10:19.589880 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:19 crc kubenswrapper[4793]: I0217 20:10:19.589898 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:19Z","lastTransitionTime":"2026-02-17T20:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:19 crc kubenswrapper[4793]: I0217 20:10:19.692762 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:19 crc kubenswrapper[4793]: I0217 20:10:19.692816 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:19 crc kubenswrapper[4793]: I0217 20:10:19.692828 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:19 crc kubenswrapper[4793]: I0217 20:10:19.692848 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:19 crc kubenswrapper[4793]: I0217 20:10:19.692860 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:19Z","lastTransitionTime":"2026-02-17T20:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:19 crc kubenswrapper[4793]: I0217 20:10:19.796001 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:19 crc kubenswrapper[4793]: I0217 20:10:19.796060 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:19 crc kubenswrapper[4793]: I0217 20:10:19.796082 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:19 crc kubenswrapper[4793]: I0217 20:10:19.796111 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:19 crc kubenswrapper[4793]: I0217 20:10:19.796139 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:19Z","lastTransitionTime":"2026-02-17T20:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:19 crc kubenswrapper[4793]: I0217 20:10:19.899095 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:19 crc kubenswrapper[4793]: I0217 20:10:19.899321 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:19 crc kubenswrapper[4793]: I0217 20:10:19.899343 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:19 crc kubenswrapper[4793]: I0217 20:10:19.899367 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:19 crc kubenswrapper[4793]: I0217 20:10:19.899383 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:19Z","lastTransitionTime":"2026-02-17T20:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:20 crc kubenswrapper[4793]: I0217 20:10:20.001808 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:20 crc kubenswrapper[4793]: I0217 20:10:20.001873 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:20 crc kubenswrapper[4793]: I0217 20:10:20.001890 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:20 crc kubenswrapper[4793]: I0217 20:10:20.001914 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:20 crc kubenswrapper[4793]: I0217 20:10:20.001935 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:20Z","lastTransitionTime":"2026-02-17T20:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:20 crc kubenswrapper[4793]: I0217 20:10:20.105062 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:20 crc kubenswrapper[4793]: I0217 20:10:20.105142 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:20 crc kubenswrapper[4793]: I0217 20:10:20.105158 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:20 crc kubenswrapper[4793]: I0217 20:10:20.105184 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:20 crc kubenswrapper[4793]: I0217 20:10:20.105199 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:20Z","lastTransitionTime":"2026-02-17T20:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:20 crc kubenswrapper[4793]: I0217 20:10:20.208447 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:20 crc kubenswrapper[4793]: I0217 20:10:20.208560 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:20 crc kubenswrapper[4793]: I0217 20:10:20.208586 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:20 crc kubenswrapper[4793]: I0217 20:10:20.208616 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:20 crc kubenswrapper[4793]: I0217 20:10:20.208639 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:20Z","lastTransitionTime":"2026-02-17T20:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:20 crc kubenswrapper[4793]: I0217 20:10:20.311918 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:20 crc kubenswrapper[4793]: I0217 20:10:20.311981 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:20 crc kubenswrapper[4793]: I0217 20:10:20.311997 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:20 crc kubenswrapper[4793]: I0217 20:10:20.312020 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:20 crc kubenswrapper[4793]: I0217 20:10:20.312037 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:20Z","lastTransitionTime":"2026-02-17T20:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:20 crc kubenswrapper[4793]: I0217 20:10:20.415118 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:20 crc kubenswrapper[4793]: I0217 20:10:20.415236 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:20 crc kubenswrapper[4793]: I0217 20:10:20.415262 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:20 crc kubenswrapper[4793]: I0217 20:10:20.415291 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:20 crc kubenswrapper[4793]: I0217 20:10:20.415312 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:20Z","lastTransitionTime":"2026-02-17T20:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:20 crc kubenswrapper[4793]: I0217 20:10:20.518384 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:20 crc kubenswrapper[4793]: I0217 20:10:20.518445 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:20 crc kubenswrapper[4793]: I0217 20:10:20.518462 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:20 crc kubenswrapper[4793]: I0217 20:10:20.518488 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:20 crc kubenswrapper[4793]: I0217 20:10:20.518505 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:20Z","lastTransitionTime":"2026-02-17T20:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:10:20 crc kubenswrapper[4793]: I0217 20:10:20.530007 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 00:00:59.751972787 +0000 UTC Feb 17 20:10:20 crc kubenswrapper[4793]: I0217 20:10:20.538549 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:10:20 crc kubenswrapper[4793]: E0217 20:10:20.538810 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6trvs" podUID="0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd" Feb 17 20:10:20 crc kubenswrapper[4793]: I0217 20:10:20.622141 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:20 crc kubenswrapper[4793]: I0217 20:10:20.622208 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:20 crc kubenswrapper[4793]: I0217 20:10:20.622225 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:20 crc kubenswrapper[4793]: I0217 20:10:20.622254 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:20 crc kubenswrapper[4793]: I0217 20:10:20.622273 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:20Z","lastTransitionTime":"2026-02-17T20:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:20 crc kubenswrapper[4793]: I0217 20:10:20.725191 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:20 crc kubenswrapper[4793]: I0217 20:10:20.725250 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:20 crc kubenswrapper[4793]: I0217 20:10:20.725267 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:20 crc kubenswrapper[4793]: I0217 20:10:20.725291 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:20 crc kubenswrapper[4793]: I0217 20:10:20.725308 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:20Z","lastTransitionTime":"2026-02-17T20:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:20 crc kubenswrapper[4793]: I0217 20:10:20.828259 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:20 crc kubenswrapper[4793]: I0217 20:10:20.828318 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:20 crc kubenswrapper[4793]: I0217 20:10:20.828334 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:20 crc kubenswrapper[4793]: I0217 20:10:20.828357 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:20 crc kubenswrapper[4793]: I0217 20:10:20.828374 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:20Z","lastTransitionTime":"2026-02-17T20:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:20 crc kubenswrapper[4793]: I0217 20:10:20.930724 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:20 crc kubenswrapper[4793]: I0217 20:10:20.930786 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:20 crc kubenswrapper[4793]: I0217 20:10:20.930807 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:20 crc kubenswrapper[4793]: I0217 20:10:20.930837 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:20 crc kubenswrapper[4793]: I0217 20:10:20.930859 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:20Z","lastTransitionTime":"2026-02-17T20:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:21 crc kubenswrapper[4793]: I0217 20:10:21.034060 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:21 crc kubenswrapper[4793]: I0217 20:10:21.034119 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:21 crc kubenswrapper[4793]: I0217 20:10:21.034137 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:21 crc kubenswrapper[4793]: I0217 20:10:21.034167 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:21 crc kubenswrapper[4793]: I0217 20:10:21.034186 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:21Z","lastTransitionTime":"2026-02-17T20:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:21 crc kubenswrapper[4793]: I0217 20:10:21.136170 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:21 crc kubenswrapper[4793]: I0217 20:10:21.136227 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:21 crc kubenswrapper[4793]: I0217 20:10:21.136251 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:21 crc kubenswrapper[4793]: I0217 20:10:21.136280 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:21 crc kubenswrapper[4793]: I0217 20:10:21.136302 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:21Z","lastTransitionTime":"2026-02-17T20:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:21 crc kubenswrapper[4793]: I0217 20:10:21.239647 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:21 crc kubenswrapper[4793]: I0217 20:10:21.239722 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:21 crc kubenswrapper[4793]: I0217 20:10:21.239737 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:21 crc kubenswrapper[4793]: I0217 20:10:21.239761 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:21 crc kubenswrapper[4793]: I0217 20:10:21.239776 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:21Z","lastTransitionTime":"2026-02-17T20:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:21 crc kubenswrapper[4793]: I0217 20:10:21.342391 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:21 crc kubenswrapper[4793]: I0217 20:10:21.342440 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:21 crc kubenswrapper[4793]: I0217 20:10:21.342450 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:21 crc kubenswrapper[4793]: I0217 20:10:21.342474 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:21 crc kubenswrapper[4793]: I0217 20:10:21.342485 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:21Z","lastTransitionTime":"2026-02-17T20:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:21 crc kubenswrapper[4793]: I0217 20:10:21.444448 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:21 crc kubenswrapper[4793]: I0217 20:10:21.444823 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:21 crc kubenswrapper[4793]: I0217 20:10:21.444836 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:21 crc kubenswrapper[4793]: I0217 20:10:21.444852 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:21 crc kubenswrapper[4793]: I0217 20:10:21.444863 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:21Z","lastTransitionTime":"2026-02-17T20:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:10:21 crc kubenswrapper[4793]: I0217 20:10:21.530595 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 23:45:28.854183786 +0000 UTC Feb 17 20:10:21 crc kubenswrapper[4793]: I0217 20:10:21.537918 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:10:21 crc kubenswrapper[4793]: I0217 20:10:21.537976 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:10:21 crc kubenswrapper[4793]: I0217 20:10:21.537921 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:10:21 crc kubenswrapper[4793]: E0217 20:10:21.538122 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 20:10:21 crc kubenswrapper[4793]: E0217 20:10:21.538203 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 20:10:21 crc kubenswrapper[4793]: E0217 20:10:21.538354 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 20:10:21 crc kubenswrapper[4793]: I0217 20:10:21.546989 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:21 crc kubenswrapper[4793]: I0217 20:10:21.547025 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:21 crc kubenswrapper[4793]: I0217 20:10:21.547036 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:21 crc kubenswrapper[4793]: I0217 20:10:21.547052 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:21 crc kubenswrapper[4793]: I0217 20:10:21.547062 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:21Z","lastTransitionTime":"2026-02-17T20:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:21 crc kubenswrapper[4793]: I0217 20:10:21.649979 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:21 crc kubenswrapper[4793]: I0217 20:10:21.650017 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:21 crc kubenswrapper[4793]: I0217 20:10:21.650027 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:21 crc kubenswrapper[4793]: I0217 20:10:21.650071 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:21 crc kubenswrapper[4793]: I0217 20:10:21.650082 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:21Z","lastTransitionTime":"2026-02-17T20:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:21 crc kubenswrapper[4793]: I0217 20:10:21.753586 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:21 crc kubenswrapper[4793]: I0217 20:10:21.753637 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:21 crc kubenswrapper[4793]: I0217 20:10:21.753654 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:21 crc kubenswrapper[4793]: I0217 20:10:21.753677 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:21 crc kubenswrapper[4793]: I0217 20:10:21.753727 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:21Z","lastTransitionTime":"2026-02-17T20:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:21 crc kubenswrapper[4793]: I0217 20:10:21.856546 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:21 crc kubenswrapper[4793]: I0217 20:10:21.856608 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:21 crc kubenswrapper[4793]: I0217 20:10:21.856627 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:21 crc kubenswrapper[4793]: I0217 20:10:21.856656 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:21 crc kubenswrapper[4793]: I0217 20:10:21.856673 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:21Z","lastTransitionTime":"2026-02-17T20:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:21 crc kubenswrapper[4793]: I0217 20:10:21.959183 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:21 crc kubenswrapper[4793]: I0217 20:10:21.959234 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:21 crc kubenswrapper[4793]: I0217 20:10:21.959249 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:21 crc kubenswrapper[4793]: I0217 20:10:21.959271 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:21 crc kubenswrapper[4793]: I0217 20:10:21.959289 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:21Z","lastTransitionTime":"2026-02-17T20:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:22 crc kubenswrapper[4793]: I0217 20:10:22.062678 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:22 crc kubenswrapper[4793]: I0217 20:10:22.062768 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:22 crc kubenswrapper[4793]: I0217 20:10:22.062789 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:22 crc kubenswrapper[4793]: I0217 20:10:22.062821 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:22 crc kubenswrapper[4793]: I0217 20:10:22.062844 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:22Z","lastTransitionTime":"2026-02-17T20:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:22 crc kubenswrapper[4793]: I0217 20:10:22.165883 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:22 crc kubenswrapper[4793]: I0217 20:10:22.165978 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:22 crc kubenswrapper[4793]: I0217 20:10:22.165996 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:22 crc kubenswrapper[4793]: I0217 20:10:22.166021 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:22 crc kubenswrapper[4793]: I0217 20:10:22.166039 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:22Z","lastTransitionTime":"2026-02-17T20:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:22 crc kubenswrapper[4793]: I0217 20:10:22.269583 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:22 crc kubenswrapper[4793]: I0217 20:10:22.269657 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:22 crc kubenswrapper[4793]: I0217 20:10:22.269679 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:22 crc kubenswrapper[4793]: I0217 20:10:22.269741 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:22 crc kubenswrapper[4793]: I0217 20:10:22.269764 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:22Z","lastTransitionTime":"2026-02-17T20:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:22 crc kubenswrapper[4793]: I0217 20:10:22.373926 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:22 crc kubenswrapper[4793]: I0217 20:10:22.373990 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:22 crc kubenswrapper[4793]: I0217 20:10:22.374007 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:22 crc kubenswrapper[4793]: I0217 20:10:22.374038 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:22 crc kubenswrapper[4793]: I0217 20:10:22.374076 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:22Z","lastTransitionTime":"2026-02-17T20:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:22 crc kubenswrapper[4793]: I0217 20:10:22.477442 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:22 crc kubenswrapper[4793]: I0217 20:10:22.477532 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:22 crc kubenswrapper[4793]: I0217 20:10:22.477556 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:22 crc kubenswrapper[4793]: I0217 20:10:22.477589 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:22 crc kubenswrapper[4793]: I0217 20:10:22.477612 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:22Z","lastTransitionTime":"2026-02-17T20:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:10:22 crc kubenswrapper[4793]: I0217 20:10:22.531011 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 13:49:34.104929321 +0000 UTC Feb 17 20:10:22 crc kubenswrapper[4793]: I0217 20:10:22.538476 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:10:22 crc kubenswrapper[4793]: E0217 20:10:22.538767 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6trvs" podUID="0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd" Feb 17 20:10:22 crc kubenswrapper[4793]: I0217 20:10:22.581534 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:22 crc kubenswrapper[4793]: I0217 20:10:22.581590 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:22 crc kubenswrapper[4793]: I0217 20:10:22.581607 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:22 crc kubenswrapper[4793]: I0217 20:10:22.581626 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:22 crc kubenswrapper[4793]: I0217 20:10:22.581640 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:22Z","lastTransitionTime":"2026-02-17T20:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:22 crc kubenswrapper[4793]: I0217 20:10:22.684249 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:22 crc kubenswrapper[4793]: I0217 20:10:22.684326 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:22 crc kubenswrapper[4793]: I0217 20:10:22.684350 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:22 crc kubenswrapper[4793]: I0217 20:10:22.684417 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:22 crc kubenswrapper[4793]: I0217 20:10:22.684439 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:22Z","lastTransitionTime":"2026-02-17T20:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:22 crc kubenswrapper[4793]: I0217 20:10:22.787755 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:22 crc kubenswrapper[4793]: I0217 20:10:22.787832 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:22 crc kubenswrapper[4793]: I0217 20:10:22.787851 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:22 crc kubenswrapper[4793]: I0217 20:10:22.787885 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:22 crc kubenswrapper[4793]: I0217 20:10:22.787909 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:22Z","lastTransitionTime":"2026-02-17T20:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:22 crc kubenswrapper[4793]: I0217 20:10:22.891232 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:22 crc kubenswrapper[4793]: I0217 20:10:22.891303 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:22 crc kubenswrapper[4793]: I0217 20:10:22.891326 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:22 crc kubenswrapper[4793]: I0217 20:10:22.891363 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:22 crc kubenswrapper[4793]: I0217 20:10:22.891386 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:22Z","lastTransitionTime":"2026-02-17T20:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:22 crc kubenswrapper[4793]: I0217 20:10:22.993909 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:22 crc kubenswrapper[4793]: I0217 20:10:22.994155 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:22 crc kubenswrapper[4793]: I0217 20:10:22.994170 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:22 crc kubenswrapper[4793]: I0217 20:10:22.994191 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:22 crc kubenswrapper[4793]: I0217 20:10:22.994205 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:22Z","lastTransitionTime":"2026-02-17T20:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.100830 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.101093 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.101110 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.101135 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.101152 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:23Z","lastTransitionTime":"2026-02-17T20:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.204196 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.204255 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.204271 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.204293 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.204309 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:23Z","lastTransitionTime":"2026-02-17T20:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.307340 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.307381 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.307392 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.307408 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.307446 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:23Z","lastTransitionTime":"2026-02-17T20:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.410012 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.410088 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.410111 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.410140 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.410161 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:23Z","lastTransitionTime":"2026-02-17T20:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.512559 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.512632 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.512649 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.512673 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.512720 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:23Z","lastTransitionTime":"2026-02-17T20:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.532046 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 02:53:23.505157324 +0000 UTC Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.538485 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.538486 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.539199 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:10:23 crc kubenswrapper[4793]: E0217 20:10:23.539311 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 20:10:23 crc kubenswrapper[4793]: E0217 20:10:23.539372 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 20:10:23 crc kubenswrapper[4793]: E0217 20:10:23.539452 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.539579 4793 scope.go:117] "RemoveContainer" containerID="51026d5eb5b88801a1918c9ceb143b71de65a9e53b06f2f35ca80f9bc1b32ddd" Feb 17 20:10:23 crc kubenswrapper[4793]: E0217 20:10:23.539767 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-n2fmv_openshift-ovn-kubernetes(4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" podUID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.615220 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.615258 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.615269 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.615285 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.615297 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:23Z","lastTransitionTime":"2026-02-17T20:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.629039 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.629078 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.629089 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.629104 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.629114 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:23Z","lastTransitionTime":"2026-02-17T20:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:23 crc kubenswrapper[4793]: E0217 20:10:23.644812 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9899edd8-40d9-4d30-b914-dcde4645fb8b\\\",\\\"systemUUID\\\":\\\"6761f953-4396-4ffc-8ccb-0bad99a4cc8e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:23Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.649299 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.649362 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.649385 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.649414 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.649435 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:23Z","lastTransitionTime":"2026-02-17T20:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:23 crc kubenswrapper[4793]: E0217 20:10:23.667263 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9899edd8-40d9-4d30-b914-dcde4645fb8b\\\",\\\"systemUUID\\\":\\\"6761f953-4396-4ffc-8ccb-0bad99a4cc8e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:23Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.671812 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.671859 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.671876 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.671898 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.671914 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:23Z","lastTransitionTime":"2026-02-17T20:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:23 crc kubenswrapper[4793]: E0217 20:10:23.685923 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9899edd8-40d9-4d30-b914-dcde4645fb8b\\\",\\\"systemUUID\\\":\\\"6761f953-4396-4ffc-8ccb-0bad99a4cc8e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:23Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.690000 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.690031 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.690042 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.690057 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.690091 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:23Z","lastTransitionTime":"2026-02-17T20:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:23 crc kubenswrapper[4793]: E0217 20:10:23.705548 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9899edd8-40d9-4d30-b914-dcde4645fb8b\\\",\\\"systemUUID\\\":\\\"6761f953-4396-4ffc-8ccb-0bad99a4cc8e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:23Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.709133 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.709179 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.709195 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.709213 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.709228 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:23Z","lastTransitionTime":"2026-02-17T20:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:23 crc kubenswrapper[4793]: E0217 20:10:23.728864 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:10:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T20:10:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9899edd8-40d9-4d30-b914-dcde4645fb8b\\\",\\\"systemUUID\\\":\\\"6761f953-4396-4ffc-8ccb-0bad99a4cc8e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T20:10:23Z is after 2025-08-24T17:21:41Z" Feb 17 20:10:23 crc kubenswrapper[4793]: E0217 20:10:23.729129 4793 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.731215 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.731256 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.731273 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.731295 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.731313 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:23Z","lastTransitionTime":"2026-02-17T20:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.834536 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.834589 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.834611 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.834638 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.834660 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:23Z","lastTransitionTime":"2026-02-17T20:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.937062 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.937355 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.937458 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.937560 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:23 crc kubenswrapper[4793]: I0217 20:10:23.937654 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:23Z","lastTransitionTime":"2026-02-17T20:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:24 crc kubenswrapper[4793]: I0217 20:10:24.040512 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:24 crc kubenswrapper[4793]: I0217 20:10:24.040546 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:24 crc kubenswrapper[4793]: I0217 20:10:24.040555 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:24 crc kubenswrapper[4793]: I0217 20:10:24.040567 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:24 crc kubenswrapper[4793]: I0217 20:10:24.040576 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:24Z","lastTransitionTime":"2026-02-17T20:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:24 crc kubenswrapper[4793]: I0217 20:10:24.142868 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:24 crc kubenswrapper[4793]: I0217 20:10:24.142907 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:24 crc kubenswrapper[4793]: I0217 20:10:24.142915 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:24 crc kubenswrapper[4793]: I0217 20:10:24.142928 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:24 crc kubenswrapper[4793]: I0217 20:10:24.142938 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:24Z","lastTransitionTime":"2026-02-17T20:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:24 crc kubenswrapper[4793]: I0217 20:10:24.246295 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:24 crc kubenswrapper[4793]: I0217 20:10:24.246369 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:24 crc kubenswrapper[4793]: I0217 20:10:24.246385 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:24 crc kubenswrapper[4793]: I0217 20:10:24.246412 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:24 crc kubenswrapper[4793]: I0217 20:10:24.246430 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:24Z","lastTransitionTime":"2026-02-17T20:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:24 crc kubenswrapper[4793]: I0217 20:10:24.349027 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:24 crc kubenswrapper[4793]: I0217 20:10:24.349097 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:24 crc kubenswrapper[4793]: I0217 20:10:24.349118 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:24 crc kubenswrapper[4793]: I0217 20:10:24.349145 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:24 crc kubenswrapper[4793]: I0217 20:10:24.349167 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:24Z","lastTransitionTime":"2026-02-17T20:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:24 crc kubenswrapper[4793]: I0217 20:10:24.451789 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:24 crc kubenswrapper[4793]: I0217 20:10:24.451851 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:24 crc kubenswrapper[4793]: I0217 20:10:24.451868 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:24 crc kubenswrapper[4793]: I0217 20:10:24.451893 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:24 crc kubenswrapper[4793]: I0217 20:10:24.451910 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:24Z","lastTransitionTime":"2026-02-17T20:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:10:24 crc kubenswrapper[4793]: I0217 20:10:24.532748 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 06:26:47.954432867 +0000 UTC Feb 17 20:10:24 crc kubenswrapper[4793]: I0217 20:10:24.538051 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:10:24 crc kubenswrapper[4793]: E0217 20:10:24.539398 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6trvs" podUID="0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd" Feb 17 20:10:24 crc kubenswrapper[4793]: I0217 20:10:24.554682 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:24 crc kubenswrapper[4793]: I0217 20:10:24.554772 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:24 crc kubenswrapper[4793]: I0217 20:10:24.554790 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:24 crc kubenswrapper[4793]: I0217 20:10:24.554811 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:24 crc kubenswrapper[4793]: I0217 20:10:24.554827 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:24Z","lastTransitionTime":"2026-02-17T20:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:24 crc kubenswrapper[4793]: I0217 20:10:24.657741 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:24 crc kubenswrapper[4793]: I0217 20:10:24.657814 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:24 crc kubenswrapper[4793]: I0217 20:10:24.657858 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:24 crc kubenswrapper[4793]: I0217 20:10:24.657897 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:24 crc kubenswrapper[4793]: I0217 20:10:24.657938 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:24Z","lastTransitionTime":"2026-02-17T20:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:24 crc kubenswrapper[4793]: I0217 20:10:24.760682 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:24 crc kubenswrapper[4793]: I0217 20:10:24.760812 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:24 crc kubenswrapper[4793]: I0217 20:10:24.760833 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:24 crc kubenswrapper[4793]: I0217 20:10:24.760860 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:24 crc kubenswrapper[4793]: I0217 20:10:24.760880 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:24Z","lastTransitionTime":"2026-02-17T20:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:24 crc kubenswrapper[4793]: I0217 20:10:24.863734 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:24 crc kubenswrapper[4793]: I0217 20:10:24.863805 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:24 crc kubenswrapper[4793]: I0217 20:10:24.863817 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:24 crc kubenswrapper[4793]: I0217 20:10:24.863833 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:24 crc kubenswrapper[4793]: I0217 20:10:24.863845 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:24Z","lastTransitionTime":"2026-02-17T20:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:24 crc kubenswrapper[4793]: I0217 20:10:24.966988 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:24 crc kubenswrapper[4793]: I0217 20:10:24.967049 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:24 crc kubenswrapper[4793]: I0217 20:10:24.967068 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:24 crc kubenswrapper[4793]: I0217 20:10:24.967094 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:24 crc kubenswrapper[4793]: I0217 20:10:24.967113 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:24Z","lastTransitionTime":"2026-02-17T20:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.070398 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.070497 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.070517 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.070540 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.070557 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:25Z","lastTransitionTime":"2026-02-17T20:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.174324 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.174380 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.174402 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.174425 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.174443 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:25Z","lastTransitionTime":"2026-02-17T20:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.277296 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.277373 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.277416 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.277442 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.277461 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:25Z","lastTransitionTime":"2026-02-17T20:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.380180 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.380261 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.380287 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.380320 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.380340 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:25Z","lastTransitionTime":"2026-02-17T20:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.483425 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.483467 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.483479 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.483493 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.483503 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:25Z","lastTransitionTime":"2026-02-17T20:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.533508 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 17:36:13.027651364 +0000 UTC Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.538068 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:10:25 crc kubenswrapper[4793]: E0217 20:10:25.538503 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.538173 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.538151 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:10:25 crc kubenswrapper[4793]: E0217 20:10:25.539305 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 20:10:25 crc kubenswrapper[4793]: E0217 20:10:25.539099 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.584390 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-kpl4r" podStartSLOduration=77.584361848 podStartE2EDuration="1m17.584361848s" podCreationTimestamp="2026-02-17 20:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:10:25.58363291 +0000 UTC m=+100.875331251" watchObservedRunningTime="2026-02-17 20:10:25.584361848 +0000 UTC m=+100.876060209" Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.588482 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.588548 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.588570 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.588600 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.588622 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:25Z","lastTransitionTime":"2026-02-17T20:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.616938 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=25.616905942 podStartE2EDuration="25.616905942s" podCreationTimestamp="2026-02-17 20:10:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:10:25.599982084 +0000 UTC m=+100.891680395" watchObservedRunningTime="2026-02-17 20:10:25.616905942 +0000 UTC m=+100.908604303" Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.634289 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=80.63426809 podStartE2EDuration="1m20.63426809s" podCreationTimestamp="2026-02-17 20:09:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:10:25.61847956 +0000 UTC m=+100.910177881" watchObservedRunningTime="2026-02-17 20:10:25.63426809 +0000 UTC m=+100.925966411" Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.690975 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.691007 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.691017 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.691029 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.691039 4793 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:25Z","lastTransitionTime":"2026-02-17T20:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.713775 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cl2v7" podStartSLOduration=76.713754803 podStartE2EDuration="1m16.713754803s" podCreationTimestamp="2026-02-17 20:09:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:10:25.69340081 +0000 UTC m=+100.985099141" watchObservedRunningTime="2026-02-17 20:10:25.713754803 +0000 UTC m=+101.005453134" Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.754658 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-9pkqd" podStartSLOduration=77.754632873 podStartE2EDuration="1m17.754632873s" podCreationTimestamp="2026-02-17 20:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:10:25.725093363 +0000 UTC m=+101.016791694" watchObservedRunningTime="2026-02-17 20:10:25.754632873 +0000 UTC m=+101.046331194" Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.792988 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.793032 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.793043 4793 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.793059 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.793071 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:25Z","lastTransitionTime":"2026-02-17T20:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.797012 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=80.796990478 podStartE2EDuration="1m20.796990478s" podCreationTimestamp="2026-02-17 20:09:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:10:25.796304152 +0000 UTC m=+101.088002503" watchObservedRunningTime="2026-02-17 20:10:25.796990478 +0000 UTC m=+101.088688829" Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.798295 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-z9xgq" podStartSLOduration=77.79828546 podStartE2EDuration="1m17.79828546s" podCreationTimestamp="2026-02-17 20:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:10:25.769747916 +0000 UTC m=+101.061446247" watchObservedRunningTime="2026-02-17 20:10:25.79828546 +0000 UTC m=+101.089983811" Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.812776 4793 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=80.812758248 podStartE2EDuration="1m20.812758248s" podCreationTimestamp="2026-02-17 20:09:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:10:25.812491111 +0000 UTC m=+101.104189432" watchObservedRunningTime="2026-02-17 20:10:25.812758248 +0000 UTC m=+101.104456569" Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.826959 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=50.826933898 podStartE2EDuration="50.826933898s" podCreationTimestamp="2026-02-17 20:09:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:10:25.825026251 +0000 UTC m=+101.116724562" watchObservedRunningTime="2026-02-17 20:10:25.826933898 +0000 UTC m=+101.118632219" Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.854874 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podStartSLOduration=77.854856047 podStartE2EDuration="1m17.854856047s" podCreationTimestamp="2026-02-17 20:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:10:25.854374716 +0000 UTC m=+101.146073047" watchObservedRunningTime="2026-02-17 20:10:25.854856047 +0000 UTC m=+101.146554358" Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.872818 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-ztwxl" podStartSLOduration=77.87278696 podStartE2EDuration="1m17.87278696s" podCreationTimestamp="2026-02-17 20:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:10:25.871369915 +0000 UTC m=+101.163068236" watchObservedRunningTime="2026-02-17 20:10:25.87278696 +0000 UTC m=+101.164485311" Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.895320 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.895370 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.895384 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.895431 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.895446 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:25Z","lastTransitionTime":"2026-02-17T20:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.998438 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.998519 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.998542 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.998574 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:25 crc kubenswrapper[4793]: I0217 20:10:25.998595 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:25Z","lastTransitionTime":"2026-02-17T20:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:26 crc kubenswrapper[4793]: I0217 20:10:26.101487 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:26 crc kubenswrapper[4793]: I0217 20:10:26.101536 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:26 crc kubenswrapper[4793]: I0217 20:10:26.101552 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:26 crc kubenswrapper[4793]: I0217 20:10:26.101575 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:26 crc kubenswrapper[4793]: I0217 20:10:26.101591 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:26Z","lastTransitionTime":"2026-02-17T20:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:26 crc kubenswrapper[4793]: I0217 20:10:26.204900 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:26 crc kubenswrapper[4793]: I0217 20:10:26.205065 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:26 crc kubenswrapper[4793]: I0217 20:10:26.205094 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:26 crc kubenswrapper[4793]: I0217 20:10:26.205180 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:26 crc kubenswrapper[4793]: I0217 20:10:26.205254 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:26Z","lastTransitionTime":"2026-02-17T20:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:26 crc kubenswrapper[4793]: I0217 20:10:26.308459 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:26 crc kubenswrapper[4793]: I0217 20:10:26.308549 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:26 crc kubenswrapper[4793]: I0217 20:10:26.308584 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:26 crc kubenswrapper[4793]: I0217 20:10:26.308618 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:26 crc kubenswrapper[4793]: I0217 20:10:26.308635 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:26Z","lastTransitionTime":"2026-02-17T20:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:26 crc kubenswrapper[4793]: I0217 20:10:26.411739 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:26 crc kubenswrapper[4793]: I0217 20:10:26.411822 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:26 crc kubenswrapper[4793]: I0217 20:10:26.411845 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:26 crc kubenswrapper[4793]: I0217 20:10:26.411872 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:26 crc kubenswrapper[4793]: I0217 20:10:26.411892 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:26Z","lastTransitionTime":"2026-02-17T20:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:26 crc kubenswrapper[4793]: I0217 20:10:26.514092 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:26 crc kubenswrapper[4793]: I0217 20:10:26.514129 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:26 crc kubenswrapper[4793]: I0217 20:10:26.514138 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:26 crc kubenswrapper[4793]: I0217 20:10:26.514153 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:26 crc kubenswrapper[4793]: I0217 20:10:26.514162 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:26Z","lastTransitionTime":"2026-02-17T20:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:10:26 crc kubenswrapper[4793]: I0217 20:10:26.533956 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 01:03:34.731310298 +0000 UTC Feb 17 20:10:26 crc kubenswrapper[4793]: I0217 20:10:26.538371 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:10:26 crc kubenswrapper[4793]: E0217 20:10:26.538597 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6trvs" podUID="0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd" Feb 17 20:10:26 crc kubenswrapper[4793]: I0217 20:10:26.616642 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:26 crc kubenswrapper[4793]: I0217 20:10:26.616758 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:26 crc kubenswrapper[4793]: I0217 20:10:26.616776 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:26 crc kubenswrapper[4793]: I0217 20:10:26.616804 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:26 crc kubenswrapper[4793]: I0217 20:10:26.616824 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:26Z","lastTransitionTime":"2026-02-17T20:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:26 crc kubenswrapper[4793]: I0217 20:10:26.719948 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:26 crc kubenswrapper[4793]: I0217 20:10:26.720044 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:26 crc kubenswrapper[4793]: I0217 20:10:26.720079 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:26 crc kubenswrapper[4793]: I0217 20:10:26.720109 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:26 crc kubenswrapper[4793]: I0217 20:10:26.720132 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:26Z","lastTransitionTime":"2026-02-17T20:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:26 crc kubenswrapper[4793]: I0217 20:10:26.823184 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:26 crc kubenswrapper[4793]: I0217 20:10:26.823274 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:26 crc kubenswrapper[4793]: I0217 20:10:26.823293 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:26 crc kubenswrapper[4793]: I0217 20:10:26.823327 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:26 crc kubenswrapper[4793]: I0217 20:10:26.823346 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:26Z","lastTransitionTime":"2026-02-17T20:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:26 crc kubenswrapper[4793]: I0217 20:10:26.926995 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:26 crc kubenswrapper[4793]: I0217 20:10:26.927069 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:26 crc kubenswrapper[4793]: I0217 20:10:26.927087 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:26 crc kubenswrapper[4793]: I0217 20:10:26.927112 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:26 crc kubenswrapper[4793]: I0217 20:10:26.927131 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:26Z","lastTransitionTime":"2026-02-17T20:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:27 crc kubenswrapper[4793]: I0217 20:10:27.030129 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:27 crc kubenswrapper[4793]: I0217 20:10:27.030195 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:27 crc kubenswrapper[4793]: I0217 20:10:27.030212 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:27 crc kubenswrapper[4793]: I0217 20:10:27.030243 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:27 crc kubenswrapper[4793]: I0217 20:10:27.030261 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:27Z","lastTransitionTime":"2026-02-17T20:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:27 crc kubenswrapper[4793]: I0217 20:10:27.132445 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:27 crc kubenswrapper[4793]: I0217 20:10:27.132539 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:27 crc kubenswrapper[4793]: I0217 20:10:27.132560 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:27 crc kubenswrapper[4793]: I0217 20:10:27.132589 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:27 crc kubenswrapper[4793]: I0217 20:10:27.132608 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:27Z","lastTransitionTime":"2026-02-17T20:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:27 crc kubenswrapper[4793]: I0217 20:10:27.209871 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd-metrics-certs\") pod \"network-metrics-daemon-6trvs\" (UID: \"0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd\") " pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:10:27 crc kubenswrapper[4793]: E0217 20:10:27.210125 4793 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 20:10:27 crc kubenswrapper[4793]: E0217 20:10:27.210252 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd-metrics-certs podName:0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd nodeName:}" failed. No retries permitted until 2026-02-17 20:11:31.210216345 +0000 UTC m=+166.501914736 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd-metrics-certs") pod "network-metrics-daemon-6trvs" (UID: "0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 20:10:27 crc kubenswrapper[4793]: I0217 20:10:27.236074 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:27 crc kubenswrapper[4793]: I0217 20:10:27.236143 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:27 crc kubenswrapper[4793]: I0217 20:10:27.236165 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:27 crc kubenswrapper[4793]: I0217 20:10:27.236195 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:27 crc kubenswrapper[4793]: I0217 20:10:27.236219 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:27Z","lastTransitionTime":"2026-02-17T20:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:27 crc kubenswrapper[4793]: I0217 20:10:27.339160 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:27 crc kubenswrapper[4793]: I0217 20:10:27.339218 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:27 crc kubenswrapper[4793]: I0217 20:10:27.339241 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:27 crc kubenswrapper[4793]: I0217 20:10:27.339266 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:27 crc kubenswrapper[4793]: I0217 20:10:27.339284 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:27Z","lastTransitionTime":"2026-02-17T20:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:27 crc kubenswrapper[4793]: I0217 20:10:27.442566 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:27 crc kubenswrapper[4793]: I0217 20:10:27.442641 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:27 crc kubenswrapper[4793]: I0217 20:10:27.442665 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:27 crc kubenswrapper[4793]: I0217 20:10:27.442725 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:27 crc kubenswrapper[4793]: I0217 20:10:27.442748 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:27Z","lastTransitionTime":"2026-02-17T20:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:10:27 crc kubenswrapper[4793]: I0217 20:10:27.535049 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 21:16:54.350115601 +0000 UTC Feb 17 20:10:27 crc kubenswrapper[4793]: I0217 20:10:27.538628 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:10:27 crc kubenswrapper[4793]: I0217 20:10:27.538738 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:10:27 crc kubenswrapper[4793]: I0217 20:10:27.538821 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:10:27 crc kubenswrapper[4793]: E0217 20:10:27.539058 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 20:10:27 crc kubenswrapper[4793]: E0217 20:10:27.539175 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 20:10:27 crc kubenswrapper[4793]: E0217 20:10:27.539359 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 20:10:27 crc kubenswrapper[4793]: I0217 20:10:27.546053 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:27 crc kubenswrapper[4793]: I0217 20:10:27.546102 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:27 crc kubenswrapper[4793]: I0217 20:10:27.546121 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:27 crc kubenswrapper[4793]: I0217 20:10:27.546147 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:27 crc kubenswrapper[4793]: I0217 20:10:27.546213 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:27Z","lastTransitionTime":"2026-02-17T20:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:27 crc kubenswrapper[4793]: I0217 20:10:27.649005 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:27 crc kubenswrapper[4793]: I0217 20:10:27.649103 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:27 crc kubenswrapper[4793]: I0217 20:10:27.649121 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:27 crc kubenswrapper[4793]: I0217 20:10:27.649175 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:27 crc kubenswrapper[4793]: I0217 20:10:27.649194 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:27Z","lastTransitionTime":"2026-02-17T20:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:27 crc kubenswrapper[4793]: I0217 20:10:27.751998 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:27 crc kubenswrapper[4793]: I0217 20:10:27.752055 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:27 crc kubenswrapper[4793]: I0217 20:10:27.752104 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:27 crc kubenswrapper[4793]: I0217 20:10:27.752123 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:27 crc kubenswrapper[4793]: I0217 20:10:27.752137 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:27Z","lastTransitionTime":"2026-02-17T20:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:27 crc kubenswrapper[4793]: I0217 20:10:27.855619 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:27 crc kubenswrapper[4793]: I0217 20:10:27.855680 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:27 crc kubenswrapper[4793]: I0217 20:10:27.855729 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:27 crc kubenswrapper[4793]: I0217 20:10:27.855754 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:27 crc kubenswrapper[4793]: I0217 20:10:27.855775 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:27Z","lastTransitionTime":"2026-02-17T20:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:27 crc kubenswrapper[4793]: I0217 20:10:27.957825 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:27 crc kubenswrapper[4793]: I0217 20:10:27.957879 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:27 crc kubenswrapper[4793]: I0217 20:10:27.957890 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:27 crc kubenswrapper[4793]: I0217 20:10:27.957909 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:27 crc kubenswrapper[4793]: I0217 20:10:27.957923 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:27Z","lastTransitionTime":"2026-02-17T20:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:28 crc kubenswrapper[4793]: I0217 20:10:28.060947 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:28 crc kubenswrapper[4793]: I0217 20:10:28.061014 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:28 crc kubenswrapper[4793]: I0217 20:10:28.061034 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:28 crc kubenswrapper[4793]: I0217 20:10:28.061059 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:28 crc kubenswrapper[4793]: I0217 20:10:28.061078 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:28Z","lastTransitionTime":"2026-02-17T20:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:28 crc kubenswrapper[4793]: I0217 20:10:28.164833 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:28 crc kubenswrapper[4793]: I0217 20:10:28.164887 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:28 crc kubenswrapper[4793]: I0217 20:10:28.164901 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:28 crc kubenswrapper[4793]: I0217 20:10:28.164921 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:28 crc kubenswrapper[4793]: I0217 20:10:28.164934 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:28Z","lastTransitionTime":"2026-02-17T20:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:28 crc kubenswrapper[4793]: I0217 20:10:28.268227 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:28 crc kubenswrapper[4793]: I0217 20:10:28.268302 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:28 crc kubenswrapper[4793]: I0217 20:10:28.268322 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:28 crc kubenswrapper[4793]: I0217 20:10:28.268346 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:28 crc kubenswrapper[4793]: I0217 20:10:28.268360 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:28Z","lastTransitionTime":"2026-02-17T20:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:28 crc kubenswrapper[4793]: I0217 20:10:28.372091 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:28 crc kubenswrapper[4793]: I0217 20:10:28.372166 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:28 crc kubenswrapper[4793]: I0217 20:10:28.372190 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:28 crc kubenswrapper[4793]: I0217 20:10:28.372220 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:28 crc kubenswrapper[4793]: I0217 20:10:28.372242 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:28Z","lastTransitionTime":"2026-02-17T20:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:28 crc kubenswrapper[4793]: I0217 20:10:28.475048 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:28 crc kubenswrapper[4793]: I0217 20:10:28.475108 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:28 crc kubenswrapper[4793]: I0217 20:10:28.475130 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:28 crc kubenswrapper[4793]: I0217 20:10:28.475158 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:28 crc kubenswrapper[4793]: I0217 20:10:28.475178 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:28Z","lastTransitionTime":"2026-02-17T20:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:10:28 crc kubenswrapper[4793]: I0217 20:10:28.535504 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 14:11:44.551966905 +0000 UTC Feb 17 20:10:28 crc kubenswrapper[4793]: I0217 20:10:28.537815 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:10:28 crc kubenswrapper[4793]: E0217 20:10:28.538011 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6trvs" podUID="0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd" Feb 17 20:10:28 crc kubenswrapper[4793]: I0217 20:10:28.578903 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:28 crc kubenswrapper[4793]: I0217 20:10:28.578975 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:28 crc kubenswrapper[4793]: I0217 20:10:28.578987 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:28 crc kubenswrapper[4793]: I0217 20:10:28.579009 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:28 crc kubenswrapper[4793]: I0217 20:10:28.579040 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:28Z","lastTransitionTime":"2026-02-17T20:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:28 crc kubenswrapper[4793]: I0217 20:10:28.681957 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:28 crc kubenswrapper[4793]: I0217 20:10:28.682022 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:28 crc kubenswrapper[4793]: I0217 20:10:28.682043 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:28 crc kubenswrapper[4793]: I0217 20:10:28.682069 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:28 crc kubenswrapper[4793]: I0217 20:10:28.682091 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:28Z","lastTransitionTime":"2026-02-17T20:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:28 crc kubenswrapper[4793]: I0217 20:10:28.785221 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:28 crc kubenswrapper[4793]: I0217 20:10:28.785297 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:28 crc kubenswrapper[4793]: I0217 20:10:28.785314 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:28 crc kubenswrapper[4793]: I0217 20:10:28.785340 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:28 crc kubenswrapper[4793]: I0217 20:10:28.785358 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:28Z","lastTransitionTime":"2026-02-17T20:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:28 crc kubenswrapper[4793]: I0217 20:10:28.887752 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:28 crc kubenswrapper[4793]: I0217 20:10:28.887826 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:28 crc kubenswrapper[4793]: I0217 20:10:28.887840 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:28 crc kubenswrapper[4793]: I0217 20:10:28.887860 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:28 crc kubenswrapper[4793]: I0217 20:10:28.887875 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:28Z","lastTransitionTime":"2026-02-17T20:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:28 crc kubenswrapper[4793]: I0217 20:10:28.990484 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:28 crc kubenswrapper[4793]: I0217 20:10:28.990525 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:28 crc kubenswrapper[4793]: I0217 20:10:28.990536 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:28 crc kubenswrapper[4793]: I0217 20:10:28.990550 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:28 crc kubenswrapper[4793]: I0217 20:10:28.990561 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:28Z","lastTransitionTime":"2026-02-17T20:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:29 crc kubenswrapper[4793]: I0217 20:10:29.093724 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:29 crc kubenswrapper[4793]: I0217 20:10:29.093759 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:29 crc kubenswrapper[4793]: I0217 20:10:29.093771 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:29 crc kubenswrapper[4793]: I0217 20:10:29.093785 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:29 crc kubenswrapper[4793]: I0217 20:10:29.093793 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:29Z","lastTransitionTime":"2026-02-17T20:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:29 crc kubenswrapper[4793]: I0217 20:10:29.197358 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:29 crc kubenswrapper[4793]: I0217 20:10:29.197411 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:29 crc kubenswrapper[4793]: I0217 20:10:29.197428 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:29 crc kubenswrapper[4793]: I0217 20:10:29.197456 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:29 crc kubenswrapper[4793]: I0217 20:10:29.197473 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:29Z","lastTransitionTime":"2026-02-17T20:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:29 crc kubenswrapper[4793]: I0217 20:10:29.300347 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:29 crc kubenswrapper[4793]: I0217 20:10:29.300381 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:29 crc kubenswrapper[4793]: I0217 20:10:29.300391 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:29 crc kubenswrapper[4793]: I0217 20:10:29.300409 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:29 crc kubenswrapper[4793]: I0217 20:10:29.300422 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:29Z","lastTransitionTime":"2026-02-17T20:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:29 crc kubenswrapper[4793]: I0217 20:10:29.403408 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:29 crc kubenswrapper[4793]: I0217 20:10:29.403484 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:29 crc kubenswrapper[4793]: I0217 20:10:29.403506 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:29 crc kubenswrapper[4793]: I0217 20:10:29.403552 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:29 crc kubenswrapper[4793]: I0217 20:10:29.403577 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:29Z","lastTransitionTime":"2026-02-17T20:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:29 crc kubenswrapper[4793]: I0217 20:10:29.507031 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:29 crc kubenswrapper[4793]: I0217 20:10:29.507106 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:29 crc kubenswrapper[4793]: I0217 20:10:29.507125 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:29 crc kubenswrapper[4793]: I0217 20:10:29.507148 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:29 crc kubenswrapper[4793]: I0217 20:10:29.507166 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:29Z","lastTransitionTime":"2026-02-17T20:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:10:29 crc kubenswrapper[4793]: I0217 20:10:29.535865 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 17:57:44.969106967 +0000 UTC Feb 17 20:10:29 crc kubenswrapper[4793]: I0217 20:10:29.538182 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:10:29 crc kubenswrapper[4793]: I0217 20:10:29.538298 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:10:29 crc kubenswrapper[4793]: I0217 20:10:29.538202 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:10:29 crc kubenswrapper[4793]: E0217 20:10:29.538424 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 20:10:29 crc kubenswrapper[4793]: E0217 20:10:29.538484 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 20:10:29 crc kubenswrapper[4793]: E0217 20:10:29.538565 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 20:10:29 crc kubenswrapper[4793]: I0217 20:10:29.610802 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:29 crc kubenswrapper[4793]: I0217 20:10:29.610872 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:29 crc kubenswrapper[4793]: I0217 20:10:29.610894 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:29 crc kubenswrapper[4793]: I0217 20:10:29.610923 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:29 crc kubenswrapper[4793]: I0217 20:10:29.610946 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:29Z","lastTransitionTime":"2026-02-17T20:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:29 crc kubenswrapper[4793]: I0217 20:10:29.714007 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:29 crc kubenswrapper[4793]: I0217 20:10:29.714063 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:29 crc kubenswrapper[4793]: I0217 20:10:29.714080 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:29 crc kubenswrapper[4793]: I0217 20:10:29.714103 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:29 crc kubenswrapper[4793]: I0217 20:10:29.714121 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:29Z","lastTransitionTime":"2026-02-17T20:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:29 crc kubenswrapper[4793]: I0217 20:10:29.816516 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:29 crc kubenswrapper[4793]: I0217 20:10:29.816562 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:29 crc kubenswrapper[4793]: I0217 20:10:29.816577 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:29 crc kubenswrapper[4793]: I0217 20:10:29.816595 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:29 crc kubenswrapper[4793]: I0217 20:10:29.816606 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:29Z","lastTransitionTime":"2026-02-17T20:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:29 crc kubenswrapper[4793]: I0217 20:10:29.919342 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:29 crc kubenswrapper[4793]: I0217 20:10:29.919405 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:29 crc kubenswrapper[4793]: I0217 20:10:29.919422 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:29 crc kubenswrapper[4793]: I0217 20:10:29.919449 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:29 crc kubenswrapper[4793]: I0217 20:10:29.919466 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:29Z","lastTransitionTime":"2026-02-17T20:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:30 crc kubenswrapper[4793]: I0217 20:10:30.022254 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:30 crc kubenswrapper[4793]: I0217 20:10:30.022291 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:30 crc kubenswrapper[4793]: I0217 20:10:30.022309 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:30 crc kubenswrapper[4793]: I0217 20:10:30.022329 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:30 crc kubenswrapper[4793]: I0217 20:10:30.022343 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:30Z","lastTransitionTime":"2026-02-17T20:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:30 crc kubenswrapper[4793]: I0217 20:10:30.125010 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:30 crc kubenswrapper[4793]: I0217 20:10:30.125147 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:30 crc kubenswrapper[4793]: I0217 20:10:30.125167 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:30 crc kubenswrapper[4793]: I0217 20:10:30.125193 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:30 crc kubenswrapper[4793]: I0217 20:10:30.125210 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:30Z","lastTransitionTime":"2026-02-17T20:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:30 crc kubenswrapper[4793]: I0217 20:10:30.228026 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:30 crc kubenswrapper[4793]: I0217 20:10:30.228075 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:30 crc kubenswrapper[4793]: I0217 20:10:30.228085 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:30 crc kubenswrapper[4793]: I0217 20:10:30.228101 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:30 crc kubenswrapper[4793]: I0217 20:10:30.228112 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:30Z","lastTransitionTime":"2026-02-17T20:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:30 crc kubenswrapper[4793]: I0217 20:10:30.331327 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:30 crc kubenswrapper[4793]: I0217 20:10:30.331377 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:30 crc kubenswrapper[4793]: I0217 20:10:30.331391 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:30 crc kubenswrapper[4793]: I0217 20:10:30.331410 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:30 crc kubenswrapper[4793]: I0217 20:10:30.331426 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:30Z","lastTransitionTime":"2026-02-17T20:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:30 crc kubenswrapper[4793]: I0217 20:10:30.434405 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:30 crc kubenswrapper[4793]: I0217 20:10:30.434468 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:30 crc kubenswrapper[4793]: I0217 20:10:30.434478 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:30 crc kubenswrapper[4793]: I0217 20:10:30.434494 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:30 crc kubenswrapper[4793]: I0217 20:10:30.434507 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:30Z","lastTransitionTime":"2026-02-17T20:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:10:30 crc kubenswrapper[4793]: I0217 20:10:30.536367 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 10:11:14.643956315 +0000 UTC Feb 17 20:10:30 crc kubenswrapper[4793]: I0217 20:10:30.537704 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:10:30 crc kubenswrapper[4793]: I0217 20:10:30.537720 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:30 crc kubenswrapper[4793]: I0217 20:10:30.537758 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:30 crc kubenswrapper[4793]: I0217 20:10:30.537772 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:30 crc kubenswrapper[4793]: I0217 20:10:30.537789 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:30 crc kubenswrapper[4793]: I0217 20:10:30.537801 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:30Z","lastTransitionTime":"2026-02-17T20:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:10:30 crc kubenswrapper[4793]: E0217 20:10:30.537813 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6trvs" podUID="0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd" Feb 17 20:10:30 crc kubenswrapper[4793]: I0217 20:10:30.641311 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:30 crc kubenswrapper[4793]: I0217 20:10:30.641353 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:30 crc kubenswrapper[4793]: I0217 20:10:30.641364 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:30 crc kubenswrapper[4793]: I0217 20:10:30.641379 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:30 crc kubenswrapper[4793]: I0217 20:10:30.641391 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:30Z","lastTransitionTime":"2026-02-17T20:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:30 crc kubenswrapper[4793]: I0217 20:10:30.743844 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:30 crc kubenswrapper[4793]: I0217 20:10:30.743885 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:30 crc kubenswrapper[4793]: I0217 20:10:30.743893 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:30 crc kubenswrapper[4793]: I0217 20:10:30.743907 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:30 crc kubenswrapper[4793]: I0217 20:10:30.743917 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:30Z","lastTransitionTime":"2026-02-17T20:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:30 crc kubenswrapper[4793]: I0217 20:10:30.847345 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:30 crc kubenswrapper[4793]: I0217 20:10:30.847408 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:30 crc kubenswrapper[4793]: I0217 20:10:30.847418 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:30 crc kubenswrapper[4793]: I0217 20:10:30.847439 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:30 crc kubenswrapper[4793]: I0217 20:10:30.847456 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:30Z","lastTransitionTime":"2026-02-17T20:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:30 crc kubenswrapper[4793]: I0217 20:10:30.950247 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:30 crc kubenswrapper[4793]: I0217 20:10:30.950314 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:30 crc kubenswrapper[4793]: I0217 20:10:30.950334 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:30 crc kubenswrapper[4793]: I0217 20:10:30.950360 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:30 crc kubenswrapper[4793]: I0217 20:10:30.950376 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:30Z","lastTransitionTime":"2026-02-17T20:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:31 crc kubenswrapper[4793]: I0217 20:10:31.053349 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:31 crc kubenswrapper[4793]: I0217 20:10:31.053428 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:31 crc kubenswrapper[4793]: I0217 20:10:31.053452 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:31 crc kubenswrapper[4793]: I0217 20:10:31.053486 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:31 crc kubenswrapper[4793]: I0217 20:10:31.053514 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:31Z","lastTransitionTime":"2026-02-17T20:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:31 crc kubenswrapper[4793]: I0217 20:10:31.156861 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:31 crc kubenswrapper[4793]: I0217 20:10:31.156942 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:31 crc kubenswrapper[4793]: I0217 20:10:31.156966 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:31 crc kubenswrapper[4793]: I0217 20:10:31.156995 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:31 crc kubenswrapper[4793]: I0217 20:10:31.157016 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:31Z","lastTransitionTime":"2026-02-17T20:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:31 crc kubenswrapper[4793]: I0217 20:10:31.263660 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:31 crc kubenswrapper[4793]: I0217 20:10:31.263773 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:31 crc kubenswrapper[4793]: I0217 20:10:31.263810 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:31 crc kubenswrapper[4793]: I0217 20:10:31.263852 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:31 crc kubenswrapper[4793]: I0217 20:10:31.263878 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:31Z","lastTransitionTime":"2026-02-17T20:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:31 crc kubenswrapper[4793]: I0217 20:10:31.367336 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:31 crc kubenswrapper[4793]: I0217 20:10:31.367401 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:31 crc kubenswrapper[4793]: I0217 20:10:31.367415 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:31 crc kubenswrapper[4793]: I0217 20:10:31.367441 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:31 crc kubenswrapper[4793]: I0217 20:10:31.367459 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:31Z","lastTransitionTime":"2026-02-17T20:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:31 crc kubenswrapper[4793]: I0217 20:10:31.470500 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:31 crc kubenswrapper[4793]: I0217 20:10:31.470573 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:31 crc kubenswrapper[4793]: I0217 20:10:31.470597 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:31 crc kubenswrapper[4793]: I0217 20:10:31.470630 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:31 crc kubenswrapper[4793]: I0217 20:10:31.470655 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:31Z","lastTransitionTime":"2026-02-17T20:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:10:31 crc kubenswrapper[4793]: I0217 20:10:31.537345 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 23:26:33.763766964 +0000 UTC Feb 17 20:10:31 crc kubenswrapper[4793]: I0217 20:10:31.538785 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:10:31 crc kubenswrapper[4793]: I0217 20:10:31.538839 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:10:31 crc kubenswrapper[4793]: I0217 20:10:31.538806 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:10:31 crc kubenswrapper[4793]: E0217 20:10:31.538957 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 20:10:31 crc kubenswrapper[4793]: E0217 20:10:31.539108 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 20:10:31 crc kubenswrapper[4793]: E0217 20:10:31.539240 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 20:10:31 crc kubenswrapper[4793]: I0217 20:10:31.573343 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:31 crc kubenswrapper[4793]: I0217 20:10:31.573412 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:31 crc kubenswrapper[4793]: I0217 20:10:31.573437 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:31 crc kubenswrapper[4793]: I0217 20:10:31.573464 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:31 crc kubenswrapper[4793]: I0217 20:10:31.573485 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:31Z","lastTransitionTime":"2026-02-17T20:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:31 crc kubenswrapper[4793]: I0217 20:10:31.676522 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:31 crc kubenswrapper[4793]: I0217 20:10:31.676577 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:31 crc kubenswrapper[4793]: I0217 20:10:31.676594 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:31 crc kubenswrapper[4793]: I0217 20:10:31.676617 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:31 crc kubenswrapper[4793]: I0217 20:10:31.676633 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:31Z","lastTransitionTime":"2026-02-17T20:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:31 crc kubenswrapper[4793]: I0217 20:10:31.779959 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:31 crc kubenswrapper[4793]: I0217 20:10:31.780009 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:31 crc kubenswrapper[4793]: I0217 20:10:31.780021 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:31 crc kubenswrapper[4793]: I0217 20:10:31.780040 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:31 crc kubenswrapper[4793]: I0217 20:10:31.780052 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:31Z","lastTransitionTime":"2026-02-17T20:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:31 crc kubenswrapper[4793]: I0217 20:10:31.883049 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:31 crc kubenswrapper[4793]: I0217 20:10:31.883145 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:31 crc kubenswrapper[4793]: I0217 20:10:31.883170 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:31 crc kubenswrapper[4793]: I0217 20:10:31.883196 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:31 crc kubenswrapper[4793]: I0217 20:10:31.883216 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:31Z","lastTransitionTime":"2026-02-17T20:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:31 crc kubenswrapper[4793]: I0217 20:10:31.985419 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:31 crc kubenswrapper[4793]: I0217 20:10:31.985498 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:31 crc kubenswrapper[4793]: I0217 20:10:31.985524 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:31 crc kubenswrapper[4793]: I0217 20:10:31.985551 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:31 crc kubenswrapper[4793]: I0217 20:10:31.985568 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:31Z","lastTransitionTime":"2026-02-17T20:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:32 crc kubenswrapper[4793]: I0217 20:10:32.088418 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:32 crc kubenswrapper[4793]: I0217 20:10:32.088475 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:32 crc kubenswrapper[4793]: I0217 20:10:32.088492 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:32 crc kubenswrapper[4793]: I0217 20:10:32.088518 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:32 crc kubenswrapper[4793]: I0217 20:10:32.088534 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:32Z","lastTransitionTime":"2026-02-17T20:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:32 crc kubenswrapper[4793]: I0217 20:10:32.191670 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:32 crc kubenswrapper[4793]: I0217 20:10:32.191758 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:32 crc kubenswrapper[4793]: I0217 20:10:32.191769 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:32 crc kubenswrapper[4793]: I0217 20:10:32.191790 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:32 crc kubenswrapper[4793]: I0217 20:10:32.191803 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:32Z","lastTransitionTime":"2026-02-17T20:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:32 crc kubenswrapper[4793]: I0217 20:10:32.294994 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:32 crc kubenswrapper[4793]: I0217 20:10:32.295050 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:32 crc kubenswrapper[4793]: I0217 20:10:32.295059 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:32 crc kubenswrapper[4793]: I0217 20:10:32.295079 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:32 crc kubenswrapper[4793]: I0217 20:10:32.295089 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:32Z","lastTransitionTime":"2026-02-17T20:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:32 crc kubenswrapper[4793]: I0217 20:10:32.397409 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:32 crc kubenswrapper[4793]: I0217 20:10:32.397475 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:32 crc kubenswrapper[4793]: I0217 20:10:32.397493 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:32 crc kubenswrapper[4793]: I0217 20:10:32.397516 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:32 crc kubenswrapper[4793]: I0217 20:10:32.397533 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:32Z","lastTransitionTime":"2026-02-17T20:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 20:10:32 crc kubenswrapper[4793]: I0217 20:10:32.500863 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:32 crc kubenswrapper[4793]: I0217 20:10:32.500921 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:32 crc kubenswrapper[4793]: I0217 20:10:32.500937 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:32 crc kubenswrapper[4793]: I0217 20:10:32.500961 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:32 crc kubenswrapper[4793]: I0217 20:10:32.500980 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:32Z","lastTransitionTime":"2026-02-17T20:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 20:10:32 crc kubenswrapper[4793]: I0217 20:10:32.538593 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:10:32 crc kubenswrapper[4793]: I0217 20:10:32.538586 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 02:00:01.028221365 +0000 UTC Feb 17 20:10:32 crc kubenswrapper[4793]: E0217 20:10:32.538810 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6trvs" podUID="0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd" Feb 17 20:10:32 crc kubenswrapper[4793]: I0217 20:10:32.603641 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 20:10:32 crc kubenswrapper[4793]: I0217 20:10:32.603705 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 20:10:32 crc kubenswrapper[4793]: I0217 20:10:32.603717 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 20:10:32 crc kubenswrapper[4793]: I0217 20:10:32.603737 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 20:10:32 crc kubenswrapper[4793]: I0217 20:10:32.603749 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:32Z","lastTransitionTime":"2026-02-17T20:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 20:10:32 crc kubenswrapper[4793]: I0217 20:10:32.706967 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 20:10:32 crc kubenswrapper[4793]: I0217 20:10:32.707039 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 20:10:32 crc kubenswrapper[4793]: I0217 20:10:32.707061 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 20:10:32 crc kubenswrapper[4793]: I0217 20:10:32.707090 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 20:10:32 crc kubenswrapper[4793]: I0217 20:10:32.707113 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:32Z","lastTransitionTime":"2026-02-17T20:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 20:10:32 crc kubenswrapper[4793]: I0217 20:10:32.810583 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 20:10:32 crc kubenswrapper[4793]: I0217 20:10:32.810621 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 20:10:32 crc kubenswrapper[4793]: I0217 20:10:32.810631 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 20:10:32 crc kubenswrapper[4793]: I0217 20:10:32.810646 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 20:10:32 crc kubenswrapper[4793]: I0217 20:10:32.810657 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:32Z","lastTransitionTime":"2026-02-17T20:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 20:10:32 crc kubenswrapper[4793]: I0217 20:10:32.913260 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 20:10:32 crc kubenswrapper[4793]: I0217 20:10:32.913334 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 20:10:32 crc kubenswrapper[4793]: I0217 20:10:32.913348 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 20:10:32 crc kubenswrapper[4793]: I0217 20:10:32.913365 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 20:10:32 crc kubenswrapper[4793]: I0217 20:10:32.913398 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:32Z","lastTransitionTime":"2026-02-17T20:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.016382 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.016464 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.016488 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.016518 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.016540 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:33Z","lastTransitionTime":"2026-02-17T20:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.119637 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.119681 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.119714 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.119732 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.119742 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:33Z","lastTransitionTime":"2026-02-17T20:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.222682 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.222733 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.222744 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.222759 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.222770 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:33Z","lastTransitionTime":"2026-02-17T20:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.325542 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.325602 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.325625 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.325652 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.325675 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:33Z","lastTransitionTime":"2026-02-17T20:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.428090 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.428154 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.428187 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.428217 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.428240 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:33Z","lastTransitionTime":"2026-02-17T20:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.531637 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.531753 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.531774 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.531797 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.531814 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:33Z","lastTransitionTime":"2026-02-17T20:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.538327 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.538363 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.538519 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 20:10:33 crc kubenswrapper[4793]: E0217 20:10:33.538518 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 20:10:33 crc kubenswrapper[4793]: E0217 20:10:33.538652 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 20:10:33 crc kubenswrapper[4793]: E0217 20:10:33.538833 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.538894 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 12:54:08.18290118 +0000 UTC
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.634762 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.635723 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.635767 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.635795 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.635809 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:33Z","lastTransitionTime":"2026-02-17T20:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.739146 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.739184 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.739196 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.739215 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.739253 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:33Z","lastTransitionTime":"2026-02-17T20:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.841577 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.841628 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.841641 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.841659 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.841673 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:33Z","lastTransitionTime":"2026-02-17T20:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.943885 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.943919 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.943928 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.943942 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.943951 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:33Z","lastTransitionTime":"2026-02-17T20:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.950936 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.950957 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.950965 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.951021 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.951048 4793 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T20:10:33Z","lastTransitionTime":"2026-02-17T20:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.993630 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-hxwmv"]
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.993991 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hxwmv"
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.998119 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.999064 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.999174 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 17 20:10:33 crc kubenswrapper[4793]: I0217 20:10:33.999251 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 17 20:10:34 crc kubenswrapper[4793]: I0217 20:10:34.075662 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b87710ef-f77f-4ed2-ba1d-df8d4421d03e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-hxwmv\" (UID: \"b87710ef-f77f-4ed2-ba1d-df8d4421d03e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hxwmv"
Feb 17 20:10:34 crc kubenswrapper[4793]: I0217 20:10:34.075757 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b87710ef-f77f-4ed2-ba1d-df8d4421d03e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-hxwmv\" (UID: \"b87710ef-f77f-4ed2-ba1d-df8d4421d03e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hxwmv"
Feb 17 20:10:34 crc kubenswrapper[4793]: I0217 20:10:34.075823 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b87710ef-f77f-4ed2-ba1d-df8d4421d03e-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-hxwmv\" (UID: \"b87710ef-f77f-4ed2-ba1d-df8d4421d03e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hxwmv"
Feb 17 20:10:34 crc kubenswrapper[4793]: I0217 20:10:34.075878 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b87710ef-f77f-4ed2-ba1d-df8d4421d03e-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-hxwmv\" (UID: \"b87710ef-f77f-4ed2-ba1d-df8d4421d03e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hxwmv"
Feb 17 20:10:34 crc kubenswrapper[4793]: I0217 20:10:34.075958 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b87710ef-f77f-4ed2-ba1d-df8d4421d03e-service-ca\") pod \"cluster-version-operator-5c965bbfc6-hxwmv\" (UID: \"b87710ef-f77f-4ed2-ba1d-df8d4421d03e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hxwmv"
Feb 17 20:10:34 crc kubenswrapper[4793]: I0217 20:10:34.176932 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b87710ef-f77f-4ed2-ba1d-df8d4421d03e-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-hxwmv\" (UID: \"b87710ef-f77f-4ed2-ba1d-df8d4421d03e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hxwmv"
Feb 17 20:10:34 crc kubenswrapper[4793]: I0217 20:10:34.176971 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b87710ef-f77f-4ed2-ba1d-df8d4421d03e-service-ca\") pod \"cluster-version-operator-5c965bbfc6-hxwmv\" (UID: \"b87710ef-f77f-4ed2-ba1d-df8d4421d03e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hxwmv"
Feb 17 20:10:34 crc kubenswrapper[4793]: I0217 20:10:34.177008 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b87710ef-f77f-4ed2-ba1d-df8d4421d03e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-hxwmv\" (UID: \"b87710ef-f77f-4ed2-ba1d-df8d4421d03e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hxwmv"
Feb 17 20:10:34 crc kubenswrapper[4793]: I0217 20:10:34.177025 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b87710ef-f77f-4ed2-ba1d-df8d4421d03e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-hxwmv\" (UID: \"b87710ef-f77f-4ed2-ba1d-df8d4421d03e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hxwmv"
Feb 17 20:10:34 crc kubenswrapper[4793]: I0217 20:10:34.177055 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b87710ef-f77f-4ed2-ba1d-df8d4421d03e-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-hxwmv\" (UID: \"b87710ef-f77f-4ed2-ba1d-df8d4421d03e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hxwmv"
Feb 17 20:10:34 crc kubenswrapper[4793]: I0217 20:10:34.177215 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b87710ef-f77f-4ed2-ba1d-df8d4421d03e-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-hxwmv\" (UID: \"b87710ef-f77f-4ed2-ba1d-df8d4421d03e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hxwmv"
Feb 17 20:10:34 crc kubenswrapper[4793]: I0217 20:10:34.177291 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b87710ef-f77f-4ed2-ba1d-df8d4421d03e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-hxwmv\" (UID: \"b87710ef-f77f-4ed2-ba1d-df8d4421d03e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hxwmv"
Feb 17 20:10:34 crc kubenswrapper[4793]: I0217 20:10:34.181235 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b87710ef-f77f-4ed2-ba1d-df8d4421d03e-service-ca\") pod \"cluster-version-operator-5c965bbfc6-hxwmv\" (UID: \"b87710ef-f77f-4ed2-ba1d-df8d4421d03e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hxwmv"
Feb 17 20:10:34 crc kubenswrapper[4793]: I0217 20:10:34.183960 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b87710ef-f77f-4ed2-ba1d-df8d4421d03e-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-hxwmv\" (UID: \"b87710ef-f77f-4ed2-ba1d-df8d4421d03e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hxwmv"
Feb 17 20:10:34 crc kubenswrapper[4793]: I0217 20:10:34.203489 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b87710ef-f77f-4ed2-ba1d-df8d4421d03e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-hxwmv\" (UID: \"b87710ef-f77f-4ed2-ba1d-df8d4421d03e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hxwmv"
Feb 17 20:10:34 crc kubenswrapper[4793]: I0217 20:10:34.319499 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hxwmv"
Feb 17 20:10:34 crc kubenswrapper[4793]: I0217 20:10:34.537916 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6trvs"
Feb 17 20:10:34 crc kubenswrapper[4793]: E0217 20:10:34.538101 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6trvs" podUID="0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd"
Feb 17 20:10:34 crc kubenswrapper[4793]: I0217 20:10:34.540116 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 05:51:19.275105093 +0000 UTC
Feb 17 20:10:34 crc kubenswrapper[4793]: I0217 20:10:34.540184 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Feb 17 20:10:34 crc kubenswrapper[4793]: I0217 20:10:34.551006 4793 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Feb 17 20:10:35 crc kubenswrapper[4793]: I0217 20:10:35.140258 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hxwmv" event={"ID":"b87710ef-f77f-4ed2-ba1d-df8d4421d03e","Type":"ContainerStarted","Data":"b9226097f3d5702589527424a693e13847496bb8b22266ab503fe5edb92ecd3f"}
Feb 17 20:10:35 crc kubenswrapper[4793]: I0217 20:10:35.140334 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hxwmv" event={"ID":"b87710ef-f77f-4ed2-ba1d-df8d4421d03e","Type":"ContainerStarted","Data":"61590f775614cfb408f21074669c14b4ecbe5dce97d27b8074045d2711fc8076"}
Feb 17 20:10:35 crc kubenswrapper[4793]: I0217 20:10:35.163007 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hxwmv" podStartSLOduration=87.162975238 podStartE2EDuration="1m27.162975238s" podCreationTimestamp="2026-02-17 20:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:10:35.16023366 +0000 UTC m=+110.451932021" watchObservedRunningTime="2026-02-17 20:10:35.162975238 +0000 UTC m=+110.454673599"
Feb 17 20:10:35 crc kubenswrapper[4793]: I0217 20:10:35.539495 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 20:10:35 crc kubenswrapper[4793]: E0217 20:10:35.539829 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 20:10:35 crc kubenswrapper[4793]: I0217 20:10:35.540494 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 20:10:35 crc kubenswrapper[4793]: E0217 20:10:35.540606 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 20:10:35 crc kubenswrapper[4793]: I0217 20:10:35.541061 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 20:10:35 crc kubenswrapper[4793]: E0217 20:10:35.541179 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 20:10:36 crc kubenswrapper[4793]: I0217 20:10:36.538065 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6trvs"
Feb 17 20:10:36 crc kubenswrapper[4793]: E0217 20:10:36.538252 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6trvs" podUID="0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd"
Feb 17 20:10:37 crc kubenswrapper[4793]: I0217 20:10:37.538392 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 20:10:37 crc kubenswrapper[4793]: E0217 20:10:37.538794 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 20:10:37 crc kubenswrapper[4793]: I0217 20:10:37.538498 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 20:10:37 crc kubenswrapper[4793]: I0217 20:10:37.538456 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 20:10:37 crc kubenswrapper[4793]: E0217 20:10:37.538928 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 20:10:37 crc kubenswrapper[4793]: E0217 20:10:37.539131 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 20:10:38 crc kubenswrapper[4793]: I0217 20:10:38.538553 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6trvs"
Feb 17 20:10:38 crc kubenswrapper[4793]: E0217 20:10:38.539240 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6trvs" podUID="0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd"
Feb 17 20:10:38 crc kubenswrapper[4793]: I0217 20:10:38.539647 4793 scope.go:117] "RemoveContainer" containerID="51026d5eb5b88801a1918c9ceb143b71de65a9e53b06f2f35ca80f9bc1b32ddd"
Feb 17 20:10:38 crc kubenswrapper[4793]: E0217 20:10:38.539936 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-n2fmv_openshift-ovn-kubernetes(4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" podUID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2"
Feb 17 20:10:39 crc kubenswrapper[4793]: I0217 20:10:39.537981 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 20:10:39 crc kubenswrapper[4793]: E0217 20:10:39.538431 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 20:10:39 crc kubenswrapper[4793]: I0217 20:10:39.538125 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 20:10:39 crc kubenswrapper[4793]: E0217 20:10:39.538897 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 20:10:39 crc kubenswrapper[4793]: I0217 20:10:39.538124 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 20:10:39 crc kubenswrapper[4793]: E0217 20:10:39.539275 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 20:10:40 crc kubenswrapper[4793]: I0217 20:10:40.537856 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6trvs"
Feb 17 20:10:40 crc kubenswrapper[4793]: E0217 20:10:40.538052 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6trvs" podUID="0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd"
Feb 17 20:10:41 crc kubenswrapper[4793]: I0217 20:10:41.538432 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 20:10:41 crc kubenswrapper[4793]: I0217 20:10:41.538503 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 20:10:41 crc kubenswrapper[4793]: I0217 20:10:41.538643 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 20:10:41 crc kubenswrapper[4793]: E0217 20:10:41.538645 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 20:10:41 crc kubenswrapper[4793]: E0217 20:10:41.538829 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 20:10:41 crc kubenswrapper[4793]: E0217 20:10:41.538989 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 20:10:42 crc kubenswrapper[4793]: I0217 20:10:42.538056 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:10:42 crc kubenswrapper[4793]: E0217 20:10:42.538257 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6trvs" podUID="0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd" Feb 17 20:10:43 crc kubenswrapper[4793]: I0217 20:10:43.177006 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ztwxl_b2b13cca-b775-4fc5-8ad8-41bfd70c857c/kube-multus/1.log" Feb 17 20:10:43 crc kubenswrapper[4793]: I0217 20:10:43.178229 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ztwxl_b2b13cca-b775-4fc5-8ad8-41bfd70c857c/kube-multus/0.log" Feb 17 20:10:43 crc kubenswrapper[4793]: I0217 20:10:43.178329 4793 generic.go:334] "Generic (PLEG): container finished" podID="b2b13cca-b775-4fc5-8ad8-41bfd70c857c" containerID="beba5ad7f813f286d6f37a7d6983827115e97d4dbcc5c17c23d9f62a2789faae" exitCode=1 Feb 17 20:10:43 crc kubenswrapper[4793]: I0217 20:10:43.178380 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ztwxl" event={"ID":"b2b13cca-b775-4fc5-8ad8-41bfd70c857c","Type":"ContainerDied","Data":"beba5ad7f813f286d6f37a7d6983827115e97d4dbcc5c17c23d9f62a2789faae"} Feb 17 20:10:43 crc kubenswrapper[4793]: I0217 20:10:43.178437 4793 scope.go:117] "RemoveContainer" containerID="0c5b745e67f1da59c6b1df97c69bfcbfdc00f4b51a6788751079465a5ce6c177" Feb 17 20:10:43 crc kubenswrapper[4793]: I0217 20:10:43.178862 4793 scope.go:117] "RemoveContainer" containerID="beba5ad7f813f286d6f37a7d6983827115e97d4dbcc5c17c23d9f62a2789faae" Feb 17 20:10:43 crc kubenswrapper[4793]: E0217 20:10:43.179011 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-ztwxl_openshift-multus(b2b13cca-b775-4fc5-8ad8-41bfd70c857c)\"" pod="openshift-multus/multus-ztwxl" podUID="b2b13cca-b775-4fc5-8ad8-41bfd70c857c" Feb 17 20:10:43 crc kubenswrapper[4793]: I0217 20:10:43.538602 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:10:43 crc kubenswrapper[4793]: I0217 20:10:43.538740 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:10:43 crc kubenswrapper[4793]: E0217 20:10:43.538809 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 20:10:43 crc kubenswrapper[4793]: I0217 20:10:43.538733 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:10:43 crc kubenswrapper[4793]: E0217 20:10:43.538919 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 20:10:43 crc kubenswrapper[4793]: E0217 20:10:43.539128 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 20:10:44 crc kubenswrapper[4793]: I0217 20:10:44.185869 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ztwxl_b2b13cca-b775-4fc5-8ad8-41bfd70c857c/kube-multus/1.log" Feb 17 20:10:44 crc kubenswrapper[4793]: I0217 20:10:44.538515 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:10:44 crc kubenswrapper[4793]: E0217 20:10:44.538633 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6trvs" podUID="0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd" Feb 17 20:10:45 crc kubenswrapper[4793]: E0217 20:10:45.484064 4793 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 17 20:10:45 crc kubenswrapper[4793]: I0217 20:10:45.537980 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:10:45 crc kubenswrapper[4793]: I0217 20:10:45.538083 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:10:45 crc kubenswrapper[4793]: E0217 20:10:45.538246 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 20:10:45 crc kubenswrapper[4793]: E0217 20:10:45.538497 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 20:10:45 crc kubenswrapper[4793]: I0217 20:10:45.538518 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:10:45 crc kubenswrapper[4793]: E0217 20:10:45.538611 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 20:10:45 crc kubenswrapper[4793]: E0217 20:10:45.653675 4793 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 20:10:46 crc kubenswrapper[4793]: I0217 20:10:46.538629 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:10:46 crc kubenswrapper[4793]: E0217 20:10:46.538897 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6trvs" podUID="0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd" Feb 17 20:10:47 crc kubenswrapper[4793]: I0217 20:10:47.537796 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:10:47 crc kubenswrapper[4793]: I0217 20:10:47.537796 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:10:47 crc kubenswrapper[4793]: E0217 20:10:47.538049 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 20:10:47 crc kubenswrapper[4793]: I0217 20:10:47.537823 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:10:47 crc kubenswrapper[4793]: E0217 20:10:47.538134 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 20:10:47 crc kubenswrapper[4793]: E0217 20:10:47.538282 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 20:10:48 crc kubenswrapper[4793]: I0217 20:10:48.538082 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:10:48 crc kubenswrapper[4793]: E0217 20:10:48.538280 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6trvs" podUID="0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd" Feb 17 20:10:49 crc kubenswrapper[4793]: I0217 20:10:49.538652 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:10:49 crc kubenswrapper[4793]: I0217 20:10:49.538777 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:10:49 crc kubenswrapper[4793]: E0217 20:10:49.538828 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 20:10:49 crc kubenswrapper[4793]: I0217 20:10:49.538947 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:10:49 crc kubenswrapper[4793]: E0217 20:10:49.538941 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 20:10:49 crc kubenswrapper[4793]: E0217 20:10:49.539043 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 20:10:50 crc kubenswrapper[4793]: I0217 20:10:50.538652 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:10:50 crc kubenswrapper[4793]: E0217 20:10:50.538886 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6trvs" podUID="0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd" Feb 17 20:10:50 crc kubenswrapper[4793]: I0217 20:10:50.539977 4793 scope.go:117] "RemoveContainer" containerID="51026d5eb5b88801a1918c9ceb143b71de65a9e53b06f2f35ca80f9bc1b32ddd" Feb 17 20:10:50 crc kubenswrapper[4793]: E0217 20:10:50.655226 4793 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 17 20:10:51 crc kubenswrapper[4793]: I0217 20:10:51.209985 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n2fmv_4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2/ovnkube-controller/3.log" Feb 17 20:10:51 crc kubenswrapper[4793]: I0217 20:10:51.212671 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" event={"ID":"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2","Type":"ContainerStarted","Data":"66f4df19967e2f69adbfc187552e434026f447271484b23db724c849c4805f65"} Feb 17 20:10:51 crc kubenswrapper[4793]: I0217 20:10:51.213526 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:10:51 crc kubenswrapper[4793]: I0217 20:10:51.246156 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" podStartSLOduration=103.246132982 podStartE2EDuration="1m43.246132982s" podCreationTimestamp="2026-02-17 20:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:10:51.243832106 +0000 UTC m=+126.535530417" watchObservedRunningTime="2026-02-17 20:10:51.246132982 +0000 UTC m=+126.537831313" Feb 17 20:10:51 crc kubenswrapper[4793]: I0217 20:10:51.528739 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-6trvs"] Feb 17 20:10:51 crc kubenswrapper[4793]: I0217 20:10:51.528854 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:10:51 crc kubenswrapper[4793]: E0217 20:10:51.528937 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6trvs" podUID="0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd" Feb 17 20:10:51 crc kubenswrapper[4793]: I0217 20:10:51.537739 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:10:51 crc kubenswrapper[4793]: E0217 20:10:51.537836 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 20:10:51 crc kubenswrapper[4793]: I0217 20:10:51.537988 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:10:51 crc kubenswrapper[4793]: E0217 20:10:51.538028 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 20:10:51 crc kubenswrapper[4793]: I0217 20:10:51.538589 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:10:51 crc kubenswrapper[4793]: E0217 20:10:51.538643 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 20:10:53 crc kubenswrapper[4793]: I0217 20:10:53.537928 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:10:53 crc kubenswrapper[4793]: I0217 20:10:53.538010 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:10:53 crc kubenswrapper[4793]: E0217 20:10:53.538144 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 20:10:53 crc kubenswrapper[4793]: I0217 20:10:53.538217 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:10:53 crc kubenswrapper[4793]: E0217 20:10:53.538411 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6trvs" podUID="0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd" Feb 17 20:10:53 crc kubenswrapper[4793]: I0217 20:10:53.538228 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:10:53 crc kubenswrapper[4793]: I0217 20:10:53.538501 4793 scope.go:117] "RemoveContainer" containerID="beba5ad7f813f286d6f37a7d6983827115e97d4dbcc5c17c23d9f62a2789faae" Feb 17 20:10:53 crc kubenswrapper[4793]: E0217 20:10:53.538553 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 20:10:53 crc kubenswrapper[4793]: E0217 20:10:53.539907 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 20:10:54 crc kubenswrapper[4793]: I0217 20:10:54.226229 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ztwxl_b2b13cca-b775-4fc5-8ad8-41bfd70c857c/kube-multus/1.log" Feb 17 20:10:54 crc kubenswrapper[4793]: I0217 20:10:54.226724 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ztwxl" event={"ID":"b2b13cca-b775-4fc5-8ad8-41bfd70c857c","Type":"ContainerStarted","Data":"98e9297cdbc34f5f60848019a91aaf3e79513d8c064e8b32e68ac7bb740e80ad"} Feb 17 20:10:55 crc kubenswrapper[4793]: I0217 20:10:55.538553 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:10:55 crc kubenswrapper[4793]: I0217 20:10:55.538640 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:10:55 crc kubenswrapper[4793]: I0217 20:10:55.538830 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:10:55 crc kubenswrapper[4793]: I0217 20:10:55.538825 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:10:55 crc kubenswrapper[4793]: E0217 20:10:55.540316 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 20:10:55 crc kubenswrapper[4793]: E0217 20:10:55.540680 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6trvs" podUID="0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd" Feb 17 20:10:55 crc kubenswrapper[4793]: E0217 20:10:55.540873 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 20:10:55 crc kubenswrapper[4793]: E0217 20:10:55.541018 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 20:10:57 crc kubenswrapper[4793]: I0217 20:10:57.538732 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:10:57 crc kubenswrapper[4793]: I0217 20:10:57.538784 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:10:57 crc kubenswrapper[4793]: I0217 20:10:57.538995 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:10:57 crc kubenswrapper[4793]: I0217 20:10:57.539262 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:10:57 crc kubenswrapper[4793]: I0217 20:10:57.542613 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 17 20:10:57 crc kubenswrapper[4793]: I0217 20:10:57.542738 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 17 20:10:57 crc kubenswrapper[4793]: I0217 20:10:57.545080 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 17 20:10:57 crc kubenswrapper[4793]: I0217 20:10:57.545934 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 17 20:10:57 crc kubenswrapper[4793]: I0217 20:10:57.546033 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 17 20:10:57 crc kubenswrapper[4793]: I0217 20:10:57.547354 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.632209 4793 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.688285 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-wr4p8"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 
20:11:04.690044 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-wr4p8" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.692058 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-b9fhb"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.692792 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.693110 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-lzd99"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.693709 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-b9fhb" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.693747 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-lzd99" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.695348 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-mr259"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.696031 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-vhb75"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.696129 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.696540 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qpst4"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.696618 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mr259" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.696771 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhb75" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.697900 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2bgzw"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.698931 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-2bgzw" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.700366 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qpst4" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.702014 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-8rb9m"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.702715 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-8rb9m" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.704465 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.704917 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.705005 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.705417 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.705669 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.705973 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.706151 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.706320 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.706489 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.706667 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.706887 4793 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.707042 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hv6t4"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.707792 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hv6t4" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.708021 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-br8vj"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.708587 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-br8vj" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.709936 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.711038 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.711171 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.711257 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.711355 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.711580 4793 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.711638 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.711982 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.712260 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.712294 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.712494 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.712566 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.712675 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.712856 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.713000 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.713195 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.713333 4793 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"machine-api-operator-tls" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.713468 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.713611 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.713779 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.713921 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.714365 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.714977 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.715170 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.718084 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-knrjc"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.718759 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-knrjc" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.719331 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tpcvd"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.719886 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tpcvd" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.724028 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zmnqw"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.724674 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zmnqw" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.727761 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.727975 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.728122 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.728294 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.728434 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.728681 4793 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-session" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.728812 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.729102 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.729271 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.729540 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.729704 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-d6c8j"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.730293 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d6c8j" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.730329 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kxxq7"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.730869 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kxxq7" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.732843 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qgxk6"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.733301 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-qgxk6" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.753195 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.753230 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.755490 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-b9fhb"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.755851 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-mr259"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.755953 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.756370 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.761091 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.761263 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.765165 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.771447 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 
17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.776298 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.776475 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.776519 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.776564 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.776644 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.776664 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.776749 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.776779 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.776945 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.777031 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.777098 4793 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.777167 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.777237 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.777492 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.777655 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.777743 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.777819 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.777888 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.778019 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.778143 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.778316 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 17 20:11:04 crc 
kubenswrapper[4793]: I0217 20:11:04.778398 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.778485 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.778576 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.780842 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.782121 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-wr4p8"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.783711 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.784040 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.784579 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.784597 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.784806 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.784848 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 17 20:11:04 
crc kubenswrapper[4793]: I0217 20:11:04.784868 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.784904 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.784943 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.785048 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.785089 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.785115 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.785126 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.785403 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.785509 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.786472 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 17 20:11:04 crc 
kubenswrapper[4793]: I0217 20:11:04.787048 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.787186 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tpcvd"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.787260 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.787496 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-lzd99"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.788086 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.788311 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.788404 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qpst4"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.797045 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qgxk6"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.798285 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2bgzw"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.806956 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hv6t4"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.807403 4793 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zmnqw"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.809409 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.829485 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.832478 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.832667 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-8rb9m"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.832724 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8l7nb"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.832737 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvsqz\" (UniqueName: \"kubernetes.io/projected/c168c730-aa75-43dd-8590-857f3d711391-kube-api-access-pvsqz\") pod \"cluster-samples-operator-665b6dd947-hv6t4\" (UID: \"c168c730-aa75-43dd-8590-857f3d711391\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hv6t4" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.832800 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ed3b6732-f761-4d0a-9697-03c81282c267-encryption-config\") pod \"apiserver-7bbb656c7d-vhb75\" (UID: \"ed3b6732-f761-4d0a-9697-03c81282c267\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhb75" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.832827 4793 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-2bgzw\" (UID: \"26565b8e-93a3-4682-8c20-ee6cb2319543\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bgzw" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.832854 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d85c8\" (UniqueName: \"kubernetes.io/projected/0cdcfee9-95e5-4030-b6e1-d0dac0bc9651-kube-api-access-d85c8\") pod \"apiserver-76f77b778f-wr4p8\" (UID: \"0cdcfee9-95e5-4030-b6e1-d0dac0bc9651\") " pod="openshift-apiserver/apiserver-76f77b778f-wr4p8" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.832874 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bbcc262-c708-4bd4-81c3-7bcbb485ddad-config\") pod \"route-controller-manager-6576b87f9c-qpst4\" (UID: \"9bbcc262-c708-4bd4-81c3-7bcbb485ddad\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qpst4" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.832895 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96cce4f5-80bd-4c5a-a6cb-2037ccb543a0-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-kxxq7\" (UID: \"96cce4f5-80bd-4c5a-a6cb-2037ccb543a0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kxxq7" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.832917 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/ed3b6732-f761-4d0a-9697-03c81282c267-audit-dir\") pod \"apiserver-7bbb656c7d-vhb75\" (UID: \"ed3b6732-f761-4d0a-9697-03c81282c267\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhb75" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.832939 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpscs\" (UniqueName: \"kubernetes.io/projected/c576e65d-f7b1-46f3-976b-f4726ee1ddf3-kube-api-access-xpscs\") pod \"authentication-operator-69f744f599-b9fhb\" (UID: \"c576e65d-f7b1-46f3-976b-f4726ee1ddf3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b9fhb" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.832962 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9bbcc262-c708-4bd4-81c3-7bcbb485ddad-client-ca\") pod \"route-controller-manager-6576b87f9c-qpst4\" (UID: \"9bbcc262-c708-4bd4-81c3-7bcbb485ddad\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qpst4" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.832985 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52mjb\" (UniqueName: \"kubernetes.io/projected/26565b8e-93a3-4682-8c20-ee6cb2319543-kube-api-access-52mjb\") pod \"oauth-openshift-558db77b4-2bgzw\" (UID: \"26565b8e-93a3-4682-8c20-ee6cb2319543\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bgzw" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.833007 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/cceb72ff-1234-4c76-9387-a3b9250d727a-machine-approver-tls\") pod \"machine-approver-56656f9798-d6c8j\" (UID: \"cceb72ff-1234-4c76-9387-a3b9250d727a\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d6c8j" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.833042 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a244a2e3-02bd-40e3-ad89-36de630bf3a8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-zmnqw\" (UID: \"a244a2e3-02bd-40e3-ad89-36de630bf3a8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zmnqw" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.833084 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f13644d-b44d-450c-ac22-88e8a8c6e41d-config\") pod \"controller-manager-879f6c89f-qgxk6\" (UID: \"2f13644d-b44d-450c-ac22-88e8a8c6e41d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qgxk6" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.833105 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bbcc262-c708-4bd4-81c3-7bcbb485ddad-serving-cert\") pod \"route-controller-manager-6576b87f9c-qpst4\" (UID: \"9bbcc262-c708-4bd4-81c3-7bcbb485ddad\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qpst4" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.833126 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0537500b-7cf2-4b68-b0d9-84dff1992304-config\") pod \"console-operator-58897d9998-8rb9m\" (UID: \"0537500b-7cf2-4b68-b0d9-84dff1992304\") " pod="openshift-console-operator/console-operator-58897d9998-8rb9m" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.833150 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c-console-serving-cert\") pod \"console-f9d7485db-br8vj\" (UID: \"4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c\") " pod="openshift-console/console-f9d7485db-br8vj" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.833171 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c-trusted-ca-bundle\") pod \"console-f9d7485db-br8vj\" (UID: \"4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c\") " pod="openshift-console/console-f9d7485db-br8vj" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.833193 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-2bgzw\" (UID: \"26565b8e-93a3-4682-8c20-ee6cb2319543\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bgzw" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.833220 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krs7v\" (UniqueName: \"kubernetes.io/projected/a244a2e3-02bd-40e3-ad89-36de630bf3a8-kube-api-access-krs7v\") pod \"cluster-image-registry-operator-dc59b4c8b-zmnqw\" (UID: \"a244a2e3-02bd-40e3-ad89-36de630bf3a8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zmnqw" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.833247 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6pzs\" (UniqueName: \"kubernetes.io/projected/9bbcc262-c708-4bd4-81c3-7bcbb485ddad-kube-api-access-t6pzs\") pod \"route-controller-manager-6576b87f9c-qpst4\" (UID: 
\"9bbcc262-c708-4bd4-81c3-7bcbb485ddad\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qpst4" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.833270 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0cdcfee9-95e5-4030-b6e1-d0dac0bc9651-audit-dir\") pod \"apiserver-76f77b778f-wr4p8\" (UID: \"0cdcfee9-95e5-4030-b6e1-d0dac0bc9651\") " pod="openshift-apiserver/apiserver-76f77b778f-wr4p8" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.833308 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed3b6732-f761-4d0a-9697-03c81282c267-serving-cert\") pod \"apiserver-7bbb656c7d-vhb75\" (UID: \"ed3b6732-f761-4d0a-9697-03c81282c267\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhb75" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.833313 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.833378 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0537500b-7cf2-4b68-b0d9-84dff1992304-serving-cert\") pod \"console-operator-58897d9998-8rb9m\" (UID: \"0537500b-7cf2-4b68-b0d9-84dff1992304\") " pod="openshift-console-operator/console-operator-58897d9998-8rb9m" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.833449 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz5sr\" (UniqueName: \"kubernetes.io/projected/0ccc2b73-aefc-46a4-b1bd-bf0b09d425d8-kube-api-access-pz5sr\") pod \"downloads-7954f5f757-knrjc\" (UID: \"0ccc2b73-aefc-46a4-b1bd-bf0b09d425d8\") " pod="openshift-console/downloads-7954f5f757-knrjc" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.833480 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f7e0542-cc87-4c79-8d0f-540577bb44e4-serving-cert\") pod \"openshift-config-operator-7777fb866f-mr259\" (UID: \"7f7e0542-cc87-4c79-8d0f-540577bb44e4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mr259" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.833502 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjwws\" (UniqueName: \"kubernetes.io/projected/4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c-kube-api-access-xjwws\") pod \"console-f9d7485db-br8vj\" (UID: \"4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c\") " pod="openshift-console/console-f9d7485db-br8vj" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.833859 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" 
(UniqueName: \"kubernetes.io/configmap/26565b8e-93a3-4682-8c20-ee6cb2319543-audit-policies\") pod \"oauth-openshift-558db77b4-2bgzw\" (UID: \"26565b8e-93a3-4682-8c20-ee6cb2319543\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bgzw" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.833907 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2f13644d-b44d-450c-ac22-88e8a8c6e41d-client-ca\") pod \"controller-manager-879f6c89f-qgxk6\" (UID: \"2f13644d-b44d-450c-ac22-88e8a8c6e41d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qgxk6" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.833930 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/c168c730-aa75-43dd-8590-857f3d711391-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-hv6t4\" (UID: \"c168c730-aa75-43dd-8590-857f3d711391\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hv6t4" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.833951 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c-console-config\") pod \"console-f9d7485db-br8vj\" (UID: \"4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c\") " pod="openshift-console/console-f9d7485db-br8vj" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.833985 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-2bgzw\" (UID: \"26565b8e-93a3-4682-8c20-ee6cb2319543\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-2bgzw" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.834009 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-2bgzw\" (UID: \"26565b8e-93a3-4682-8c20-ee6cb2319543\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bgzw" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.834031 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-2bgzw\" (UID: \"26565b8e-93a3-4682-8c20-ee6cb2319543\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bgzw" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.834075 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dba5aa8f-e4aa-4bb0-b8d4-dac193fa7ede-config\") pod \"openshift-apiserver-operator-796bbdcf4f-tpcvd\" (UID: \"dba5aa8f-e4aa-4bb0-b8d4-dac193fa7ede\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tpcvd" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.834098 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/881cfa9d-a35a-4088-8a39-b4ecd52a3b37-images\") pod \"machine-api-operator-5694c8668f-lzd99\" (UID: \"881cfa9d-a35a-4088-8a39-b4ecd52a3b37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lzd99" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.834119 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ed3b6732-f761-4d0a-9697-03c81282c267-etcd-client\") pod \"apiserver-7bbb656c7d-vhb75\" (UID: \"ed3b6732-f761-4d0a-9697-03c81282c267\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhb75" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.834140 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed3b6732-f761-4d0a-9697-03c81282c267-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-vhb75\" (UID: \"ed3b6732-f761-4d0a-9697-03c81282c267\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhb75" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.834164 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/7f7e0542-cc87-4c79-8d0f-540577bb44e4-available-featuregates\") pod \"openshift-config-operator-7777fb866f-mr259\" (UID: \"7f7e0542-cc87-4c79-8d0f-540577bb44e4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mr259" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.834186 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a244a2e3-02bd-40e3-ad89-36de630bf3a8-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-zmnqw\" (UID: \"a244a2e3-02bd-40e3-ad89-36de630bf3a8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zmnqw" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.834212 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0cdcfee9-95e5-4030-b6e1-d0dac0bc9651-node-pullsecrets\") pod \"apiserver-76f77b778f-wr4p8\" (UID: \"0cdcfee9-95e5-4030-b6e1-d0dac0bc9651\") " 
pod="openshift-apiserver/apiserver-76f77b778f-wr4p8" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.834233 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cdcfee9-95e5-4030-b6e1-d0dac0bc9651-config\") pod \"apiserver-76f77b778f-wr4p8\" (UID: \"0cdcfee9-95e5-4030-b6e1-d0dac0bc9651\") " pod="openshift-apiserver/apiserver-76f77b778f-wr4p8" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.834254 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0cdcfee9-95e5-4030-b6e1-d0dac0bc9651-etcd-serving-ca\") pod \"apiserver-76f77b778f-wr4p8\" (UID: \"0cdcfee9-95e5-4030-b6e1-d0dac0bc9651\") " pod="openshift-apiserver/apiserver-76f77b778f-wr4p8" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.834276 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0537500b-7cf2-4b68-b0d9-84dff1992304-trusted-ca\") pod \"console-operator-58897d9998-8rb9m\" (UID: \"0537500b-7cf2-4b68-b0d9-84dff1992304\") " pod="openshift-console-operator/console-operator-58897d9998-8rb9m" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.834308 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn5mc\" (UniqueName: \"kubernetes.io/projected/7f7e0542-cc87-4c79-8d0f-540577bb44e4-kube-api-access-rn5mc\") pod \"openshift-config-operator-7777fb866f-mr259\" (UID: \"7f7e0542-cc87-4c79-8d0f-540577bb44e4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mr259" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.834331 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c-console-oauth-config\") pod \"console-f9d7485db-br8vj\" (UID: \"4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c\") " pod="openshift-console/console-f9d7485db-br8vj" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.834355 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d6g5\" (UniqueName: \"kubernetes.io/projected/2f13644d-b44d-450c-ac22-88e8a8c6e41d-kube-api-access-9d6g5\") pod \"controller-manager-879f6c89f-qgxk6\" (UID: \"2f13644d-b44d-450c-ac22-88e8a8c6e41d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qgxk6" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.834377 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c-oauth-serving-cert\") pod \"console-f9d7485db-br8vj\" (UID: \"4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c\") " pod="openshift-console/console-f9d7485db-br8vj" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.834398 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-2bgzw\" (UID: \"26565b8e-93a3-4682-8c20-ee6cb2319543\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bgzw" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.834423 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-2bgzw\" (UID: \"26565b8e-93a3-4682-8c20-ee6cb2319543\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-2bgzw" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.834450 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96cce4f5-80bd-4c5a-a6cb-2037ccb543a0-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-kxxq7\" (UID: \"96cce4f5-80bd-4c5a-a6cb-2037ccb543a0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kxxq7" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.834522 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0cdcfee9-95e5-4030-b6e1-d0dac0bc9651-trusted-ca-bundle\") pod \"apiserver-76f77b778f-wr4p8\" (UID: \"0cdcfee9-95e5-4030-b6e1-d0dac0bc9651\") " pod="openshift-apiserver/apiserver-76f77b778f-wr4p8" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.835595 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c-service-ca\") pod \"console-f9d7485db-br8vj\" (UID: \"4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c\") " pod="openshift-console/console-f9d7485db-br8vj" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.835634 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2f13644d-b44d-450c-ac22-88e8a8c6e41d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-qgxk6\" (UID: \"2f13644d-b44d-450c-ac22-88e8a8c6e41d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qgxk6" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.835662 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/dba5aa8f-e4aa-4bb0-b8d4-dac193fa7ede-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-tpcvd\" (UID: \"dba5aa8f-e4aa-4bb0-b8d4-dac193fa7ede\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tpcvd" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.835702 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-2bgzw\" (UID: \"26565b8e-93a3-4682-8c20-ee6cb2319543\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bgzw" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.835726 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a244a2e3-02bd-40e3-ad89-36de630bf3a8-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-zmnqw\" (UID: \"a244a2e3-02bd-40e3-ad89-36de630bf3a8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zmnqw" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.835747 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0cdcfee9-95e5-4030-b6e1-d0dac0bc9651-serving-cert\") pod \"apiserver-76f77b778f-wr4p8\" (UID: \"0cdcfee9-95e5-4030-b6e1-d0dac0bc9651\") " pod="openshift-apiserver/apiserver-76f77b778f-wr4p8" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.835767 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0cdcfee9-95e5-4030-b6e1-d0dac0bc9651-encryption-config\") pod \"apiserver-76f77b778f-wr4p8\" (UID: \"0cdcfee9-95e5-4030-b6e1-d0dac0bc9651\") " 
pod="openshift-apiserver/apiserver-76f77b778f-wr4p8" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.835823 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c576e65d-f7b1-46f3-976b-f4726ee1ddf3-serving-cert\") pod \"authentication-operator-69f744f599-b9fhb\" (UID: \"c576e65d-f7b1-46f3-976b-f4726ee1ddf3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b9fhb" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.835847 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/26565b8e-93a3-4682-8c20-ee6cb2319543-audit-dir\") pod \"oauth-openshift-558db77b4-2bgzw\" (UID: \"26565b8e-93a3-4682-8c20-ee6cb2319543\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bgzw" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.835869 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcv88\" (UniqueName: \"kubernetes.io/projected/96cce4f5-80bd-4c5a-a6cb-2037ccb543a0-kube-api-access-xcv88\") pod \"openshift-controller-manager-operator-756b6f6bc6-kxxq7\" (UID: \"96cce4f5-80bd-4c5a-a6cb-2037ccb543a0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kxxq7" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.835917 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.835932 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0cdcfee9-95e5-4030-b6e1-d0dac0bc9651-etcd-client\") pod \"apiserver-76f77b778f-wr4p8\" (UID: \"0cdcfee9-95e5-4030-b6e1-d0dac0bc9651\") " 
pod="openshift-apiserver/apiserver-76f77b778f-wr4p8" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.835963 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/881cfa9d-a35a-4088-8a39-b4ecd52a3b37-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-lzd99\" (UID: \"881cfa9d-a35a-4088-8a39-b4ecd52a3b37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lzd99" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.835983 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh9l7\" (UniqueName: \"kubernetes.io/projected/881cfa9d-a35a-4088-8a39-b4ecd52a3b37-kube-api-access-sh9l7\") pod \"machine-api-operator-5694c8668f-lzd99\" (UID: \"881cfa9d-a35a-4088-8a39-b4ecd52a3b37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lzd99" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.836020 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2jdh\" (UniqueName: \"kubernetes.io/projected/dba5aa8f-e4aa-4bb0-b8d4-dac193fa7ede-kube-api-access-p2jdh\") pod \"openshift-apiserver-operator-796bbdcf4f-tpcvd\" (UID: \"dba5aa8f-e4aa-4bb0-b8d4-dac193fa7ede\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tpcvd" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.836063 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-2bgzw\" (UID: \"26565b8e-93a3-4682-8c20-ee6cb2319543\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bgzw" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.836090 4793 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cceb72ff-1234-4c76-9387-a3b9250d727a-auth-proxy-config\") pod \"machine-approver-56656f9798-d6c8j\" (UID: \"cceb72ff-1234-4c76-9387-a3b9250d727a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d6c8j" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.836130 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ed3b6732-f761-4d0a-9697-03c81282c267-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-vhb75\" (UID: \"ed3b6732-f761-4d0a-9697-03c81282c267\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhb75" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.836161 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c576e65d-f7b1-46f3-976b-f4726ee1ddf3-service-ca-bundle\") pod \"authentication-operator-69f744f599-b9fhb\" (UID: \"c576e65d-f7b1-46f3-976b-f4726ee1ddf3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b9fhb" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.836189 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-2bgzw\" (UID: \"26565b8e-93a3-4682-8c20-ee6cb2319543\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bgzw" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.836229 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl8n6\" (UniqueName: 
\"kubernetes.io/projected/cceb72ff-1234-4c76-9387-a3b9250d727a-kube-api-access-jl8n6\") pod \"machine-approver-56656f9798-d6c8j\" (UID: \"cceb72ff-1234-4c76-9387-a3b9250d727a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d6c8j" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.836272 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c576e65d-f7b1-46f3-976b-f4726ee1ddf3-config\") pod \"authentication-operator-69f744f599-b9fhb\" (UID: \"c576e65d-f7b1-46f3-976b-f4726ee1ddf3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b9fhb" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.836300 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kqtw\" (UniqueName: \"kubernetes.io/projected/0537500b-7cf2-4b68-b0d9-84dff1992304-kube-api-access-5kqtw\") pod \"console-operator-58897d9998-8rb9m\" (UID: \"0537500b-7cf2-4b68-b0d9-84dff1992304\") " pod="openshift-console-operator/console-operator-58897d9998-8rb9m" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.836411 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-2bgzw\" (UID: \"26565b8e-93a3-4682-8c20-ee6cb2319543\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bgzw" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.836454 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c576e65d-f7b1-46f3-976b-f4726ee1ddf3-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-b9fhb\" (UID: \"c576e65d-f7b1-46f3-976b-f4726ee1ddf3\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-b9fhb" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.836501 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f13644d-b44d-450c-ac22-88e8a8c6e41d-serving-cert\") pod \"controller-manager-879f6c89f-qgxk6\" (UID: \"2f13644d-b44d-450c-ac22-88e8a8c6e41d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qgxk6" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.836529 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cceb72ff-1234-4c76-9387-a3b9250d727a-config\") pod \"machine-approver-56656f9798-d6c8j\" (UID: \"cceb72ff-1234-4c76-9387-a3b9250d727a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d6c8j" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.836563 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0cdcfee9-95e5-4030-b6e1-d0dac0bc9651-audit\") pod \"apiserver-76f77b778f-wr4p8\" (UID: \"0cdcfee9-95e5-4030-b6e1-d0dac0bc9651\") " pod="openshift-apiserver/apiserver-76f77b778f-wr4p8" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.836586 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/881cfa9d-a35a-4088-8a39-b4ecd52a3b37-config\") pod \"machine-api-operator-5694c8668f-lzd99\" (UID: \"881cfa9d-a35a-4088-8a39-b4ecd52a3b37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lzd99" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.836606 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-868cc\" (UniqueName: 
\"kubernetes.io/projected/ed3b6732-f761-4d0a-9697-03c81282c267-kube-api-access-868cc\") pod \"apiserver-7bbb656c7d-vhb75\" (UID: \"ed3b6732-f761-4d0a-9697-03c81282c267\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhb75" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.836627 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0cdcfee9-95e5-4030-b6e1-d0dac0bc9651-image-import-ca\") pod \"apiserver-76f77b778f-wr4p8\" (UID: \"0cdcfee9-95e5-4030-b6e1-d0dac0bc9651\") " pod="openshift-apiserver/apiserver-76f77b778f-wr4p8" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.836655 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ed3b6732-f761-4d0a-9697-03c81282c267-audit-policies\") pod \"apiserver-7bbb656c7d-vhb75\" (UID: \"ed3b6732-f761-4d0a-9697-03c81282c267\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhb75" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.848060 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9p2hl"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.848497 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9p2hl" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.848930 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g69jb"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.849224 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g69jb" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.849606 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-tj8df"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.849995 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tj8df" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.852609 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-87g2m"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.852670 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.853224 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-87g2m" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.854179 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-hrmm7"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.855476 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-58vqk"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.855830 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-zq7jg"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.856028 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hrmm7" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.856288 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zq7jg" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.856324 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-58vqk" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.856957 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d5mb8"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.857314 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d5mb8" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.859219 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-v4fgc"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.859654 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fmmf6"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.859964 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522640-pzkfn"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.860267 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522640-pzkfn" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.860453 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-v4fgc" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.860598 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fmmf6" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.861742 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-4wb4w"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.862412 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-4wb4w" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.862583 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m4r4g"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.863138 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m4r4g" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.863844 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-4qdjn"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.864434 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-4qdjn" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.864997 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z5p2q"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.866101 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z5p2q" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.867094 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-s6t4n"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.867946 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-hc4b6"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.868905 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-99gxb"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.869106 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-hc4b6" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.868069 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s6t4n" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.870210 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x9z7w"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.870608 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-99gxb" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.871769 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.875976 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-chhz9"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.876403 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-qcpml"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.876566 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x9z7w" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.876759 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltb7"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.876944 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-chhz9" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.877076 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9p2hl"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.877128 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-qcpml" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.877188 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-vhb75"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.877578 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltb7" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.877834 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8l7nb"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.878747 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-knrjc"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.879773 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kxxq7"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.881479 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x9z7w"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.885733 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g69jb"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.885786 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-tj8df"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.888221 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-4wb4w"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.888455 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-cspqc"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.889824 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-cspqc" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.891644 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-tktcd"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.894504 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.895443 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-hc4b6"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.895535 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-tktcd" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.896796 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-58vqk"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.898040 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d5mb8"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.899183 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-br8vj"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.900289 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-v4fgc"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.903041 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-4qdjn"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.903095 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltb7"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 
20:11:04.905422 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m4r4g"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.910788 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-chhz9"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.910834 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-s6t4n"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.910860 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-87g2m"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.913482 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-hrmm7"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.916228 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-tktcd"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.917856 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522640-pzkfn"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.919079 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-99gxb"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.922612 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fmmf6"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.922669 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-zq7jg"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.924892 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z5p2q"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.926155 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-qphb2"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.926881 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-qphb2" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.927322 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-qphb2"] Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.932079 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.937479 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0537500b-7cf2-4b68-b0d9-84dff1992304-config\") pod \"console-operator-58897d9998-8rb9m\" (UID: \"0537500b-7cf2-4b68-b0d9-84dff1992304\") " pod="openshift-console-operator/console-operator-58897d9998-8rb9m" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.937509 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f13644d-b44d-450c-ac22-88e8a8c6e41d-config\") pod \"controller-manager-879f6c89f-qgxk6\" (UID: \"2f13644d-b44d-450c-ac22-88e8a8c6e41d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qgxk6" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.937527 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bbcc262-c708-4bd4-81c3-7bcbb485ddad-serving-cert\") pod \"route-controller-manager-6576b87f9c-qpst4\" (UID: \"9bbcc262-c708-4bd4-81c3-7bcbb485ddad\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qpst4" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.937543 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-2bgzw\" (UID: \"26565b8e-93a3-4682-8c20-ee6cb2319543\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bgzw" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.937559 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krs7v\" (UniqueName: \"kubernetes.io/projected/a244a2e3-02bd-40e3-ad89-36de630bf3a8-kube-api-access-krs7v\") pod \"cluster-image-registry-operator-dc59b4c8b-zmnqw\" (UID: \"a244a2e3-02bd-40e3-ad89-36de630bf3a8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zmnqw" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.937574 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c-console-serving-cert\") pod \"console-f9d7485db-br8vj\" (UID: \"4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c\") " pod="openshift-console/console-f9d7485db-br8vj" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.937588 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c-trusted-ca-bundle\") pod \"console-f9d7485db-br8vj\" (UID: \"4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c\") " pod="openshift-console/console-f9d7485db-br8vj" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.937603 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6pzs\" (UniqueName: 
\"kubernetes.io/projected/9bbcc262-c708-4bd4-81c3-7bcbb485ddad-kube-api-access-t6pzs\") pod \"route-controller-manager-6576b87f9c-qpst4\" (UID: \"9bbcc262-c708-4bd4-81c3-7bcbb485ddad\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qpst4" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.937616 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0537500b-7cf2-4b68-b0d9-84dff1992304-serving-cert\") pod \"console-operator-58897d9998-8rb9m\" (UID: \"0537500b-7cf2-4b68-b0d9-84dff1992304\") " pod="openshift-console-operator/console-operator-58897d9998-8rb9m" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.937631 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pz5sr\" (UniqueName: \"kubernetes.io/projected/0ccc2b73-aefc-46a4-b1bd-bf0b09d425d8-kube-api-access-pz5sr\") pod \"downloads-7954f5f757-knrjc\" (UID: \"0ccc2b73-aefc-46a4-b1bd-bf0b09d425d8\") " pod="openshift-console/downloads-7954f5f757-knrjc" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.937647 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f7e0542-cc87-4c79-8d0f-540577bb44e4-serving-cert\") pod \"openshift-config-operator-7777fb866f-mr259\" (UID: \"7f7e0542-cc87-4c79-8d0f-540577bb44e4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mr259" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.937662 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0cdcfee9-95e5-4030-b6e1-d0dac0bc9651-audit-dir\") pod \"apiserver-76f77b778f-wr4p8\" (UID: \"0cdcfee9-95e5-4030-b6e1-d0dac0bc9651\") " pod="openshift-apiserver/apiserver-76f77b778f-wr4p8" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.937678 4793 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed3b6732-f761-4d0a-9697-03c81282c267-serving-cert\") pod \"apiserver-7bbb656c7d-vhb75\" (UID: \"ed3b6732-f761-4d0a-9697-03c81282c267\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhb75" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.937707 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjwws\" (UniqueName: \"kubernetes.io/projected/4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c-kube-api-access-xjwws\") pod \"console-f9d7485db-br8vj\" (UID: \"4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c\") " pod="openshift-console/console-f9d7485db-br8vj" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.937722 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/26565b8e-93a3-4682-8c20-ee6cb2319543-audit-policies\") pod \"oauth-openshift-558db77b4-2bgzw\" (UID: \"26565b8e-93a3-4682-8c20-ee6cb2319543\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bgzw" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.937736 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c-console-config\") pod \"console-f9d7485db-br8vj\" (UID: \"4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c\") " pod="openshift-console/console-f9d7485db-br8vj" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.937752 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2f13644d-b44d-450c-ac22-88e8a8c6e41d-client-ca\") pod \"controller-manager-879f6c89f-qgxk6\" (UID: \"2f13644d-b44d-450c-ac22-88e8a8c6e41d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qgxk6" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.937767 4793 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/c168c730-aa75-43dd-8590-857f3d711391-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-hv6t4\" (UID: \"c168c730-aa75-43dd-8590-857f3d711391\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hv6t4" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.937784 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ff663e88-1dd8-4094-b281-7c995a83178f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-tj8df\" (UID: \"ff663e88-1dd8-4094-b281-7c995a83178f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tj8df" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.937801 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-2bgzw\" (UID: \"26565b8e-93a3-4682-8c20-ee6cb2319543\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bgzw" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.937817 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwwfx\" (UniqueName: \"kubernetes.io/projected/934df675-8026-4579-97a6-c8e843581407-kube-api-access-wwwfx\") pod \"kube-storage-version-migrator-operator-b67b599dd-fmmf6\" (UID: \"934df675-8026-4579-97a6-c8e843581407\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fmmf6" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.937833 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/53fc14f9-23dc-4927-812f-507f9525b2f8-srv-cert\") pod \"olm-operator-6b444d44fb-g69jb\" (UID: \"53fc14f9-23dc-4927-812f-507f9525b2f8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g69jb" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.937856 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dba5aa8f-e4aa-4bb0-b8d4-dac193fa7ede-config\") pod \"openshift-apiserver-operator-796bbdcf4f-tpcvd\" (UID: \"dba5aa8f-e4aa-4bb0-b8d4-dac193fa7ede\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tpcvd" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.937872 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-2bgzw\" (UID: \"26565b8e-93a3-4682-8c20-ee6cb2319543\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bgzw" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.937887 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-2bgzw\" (UID: \"26565b8e-93a3-4682-8c20-ee6cb2319543\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bgzw" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.937902 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/881cfa9d-a35a-4088-8a39-b4ecd52a3b37-images\") pod \"machine-api-operator-5694c8668f-lzd99\" (UID: \"881cfa9d-a35a-4088-8a39-b4ecd52a3b37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lzd99" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.937918 
4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ed3b6732-f761-4d0a-9697-03c81282c267-etcd-client\") pod \"apiserver-7bbb656c7d-vhb75\" (UID: \"ed3b6732-f761-4d0a-9697-03c81282c267\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhb75" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.937932 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed3b6732-f761-4d0a-9697-03c81282c267-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-vhb75\" (UID: \"ed3b6732-f761-4d0a-9697-03c81282c267\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhb75" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.937949 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aea26078-abf2-4d33-b7ab-9b5602799fe3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-9p2hl\" (UID: \"aea26078-abf2-4d33-b7ab-9b5602799fe3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9p2hl" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.937967 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/7f7e0542-cc87-4c79-8d0f-540577bb44e4-available-featuregates\") pod \"openshift-config-operator-7777fb866f-mr259\" (UID: \"7f7e0542-cc87-4c79-8d0f-540577bb44e4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mr259" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.937984 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a244a2e3-02bd-40e3-ad89-36de630bf3a8-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-zmnqw\" (UID: 
\"a244a2e3-02bd-40e3-ad89-36de630bf3a8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zmnqw" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938000 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0cdcfee9-95e5-4030-b6e1-d0dac0bc9651-node-pullsecrets\") pod \"apiserver-76f77b778f-wr4p8\" (UID: \"0cdcfee9-95e5-4030-b6e1-d0dac0bc9651\") " pod="openshift-apiserver/apiserver-76f77b778f-wr4p8" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938013 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0537500b-7cf2-4b68-b0d9-84dff1992304-trusted-ca\") pod \"console-operator-58897d9998-8rb9m\" (UID: \"0537500b-7cf2-4b68-b0d9-84dff1992304\") " pod="openshift-console-operator/console-operator-58897d9998-8rb9m" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938034 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn5mc\" (UniqueName: \"kubernetes.io/projected/7f7e0542-cc87-4c79-8d0f-540577bb44e4-kube-api-access-rn5mc\") pod \"openshift-config-operator-7777fb866f-mr259\" (UID: \"7f7e0542-cc87-4c79-8d0f-540577bb44e4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mr259" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938049 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cdcfee9-95e5-4030-b6e1-d0dac0bc9651-config\") pod \"apiserver-76f77b778f-wr4p8\" (UID: \"0cdcfee9-95e5-4030-b6e1-d0dac0bc9651\") " pod="openshift-apiserver/apiserver-76f77b778f-wr4p8" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938064 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/0cdcfee9-95e5-4030-b6e1-d0dac0bc9651-etcd-serving-ca\") pod \"apiserver-76f77b778f-wr4p8\" (UID: \"0cdcfee9-95e5-4030-b6e1-d0dac0bc9651\") " pod="openshift-apiserver/apiserver-76f77b778f-wr4p8" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938078 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c-console-oauth-config\") pod \"console-f9d7485db-br8vj\" (UID: \"4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c\") " pod="openshift-console/console-f9d7485db-br8vj" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938092 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7f7e3c2-0748-4893-afaf-ca285648f7c0-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-58vqk\" (UID: \"d7f7e3c2-0748-4893-afaf-ca285648f7c0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-58vqk" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938107 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ff663e88-1dd8-4094-b281-7c995a83178f-metrics-tls\") pod \"ingress-operator-5b745b69d9-tj8df\" (UID: \"ff663e88-1dd8-4094-b281-7c995a83178f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tj8df" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938124 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9d6g5\" (UniqueName: \"kubernetes.io/projected/2f13644d-b44d-450c-ac22-88e8a8c6e41d-kube-api-access-9d6g5\") pod \"controller-manager-879f6c89f-qgxk6\" (UID: \"2f13644d-b44d-450c-ac22-88e8a8c6e41d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qgxk6" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938139 
4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c-oauth-serving-cert\") pod \"console-f9d7485db-br8vj\" (UID: \"4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c\") " pod="openshift-console/console-f9d7485db-br8vj" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938154 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jg99\" (UniqueName: \"kubernetes.io/projected/82552c6f-59af-4d20-97ca-82384997434e-kube-api-access-5jg99\") pod \"collect-profiles-29522640-pzkfn\" (UID: \"82552c6f-59af-4d20-97ca-82384997434e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522640-pzkfn" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938170 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96cce4f5-80bd-4c5a-a6cb-2037ccb543a0-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-kxxq7\" (UID: \"96cce4f5-80bd-4c5a-a6cb-2037ccb543a0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kxxq7" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938187 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0cdcfee9-95e5-4030-b6e1-d0dac0bc9651-trusted-ca-bundle\") pod \"apiserver-76f77b778f-wr4p8\" (UID: \"0cdcfee9-95e5-4030-b6e1-d0dac0bc9651\") " pod="openshift-apiserver/apiserver-76f77b778f-wr4p8" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938202 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-2bgzw\" (UID: 
\"26565b8e-93a3-4682-8c20-ee6cb2319543\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bgzw" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938218 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-2bgzw\" (UID: \"26565b8e-93a3-4682-8c20-ee6cb2319543\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bgzw" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938233 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c-service-ca\") pod \"console-f9d7485db-br8vj\" (UID: \"4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c\") " pod="openshift-console/console-f9d7485db-br8vj" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938248 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2f13644d-b44d-450c-ac22-88e8a8c6e41d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-qgxk6\" (UID: \"2f13644d-b44d-450c-ac22-88e8a8c6e41d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qgxk6" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938263 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82552c6f-59af-4d20-97ca-82384997434e-config-volume\") pod \"collect-profiles-29522640-pzkfn\" (UID: \"82552c6f-59af-4d20-97ca-82384997434e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522640-pzkfn" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938278 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/dba5aa8f-e4aa-4bb0-b8d4-dac193fa7ede-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-tpcvd\" (UID: \"dba5aa8f-e4aa-4bb0-b8d4-dac193fa7ede\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tpcvd" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938294 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-2bgzw\" (UID: \"26565b8e-93a3-4682-8c20-ee6cb2319543\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bgzw" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938309 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/934df675-8026-4579-97a6-c8e843581407-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-fmmf6\" (UID: \"934df675-8026-4579-97a6-c8e843581407\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fmmf6" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938326 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw7qc\" (UniqueName: \"kubernetes.io/projected/53fc14f9-23dc-4927-812f-507f9525b2f8-kube-api-access-vw7qc\") pod \"olm-operator-6b444d44fb-g69jb\" (UID: \"53fc14f9-23dc-4927-812f-507f9525b2f8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g69jb" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938341 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a244a2e3-02bd-40e3-ad89-36de630bf3a8-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-zmnqw\" (UID: 
\"a244a2e3-02bd-40e3-ad89-36de630bf3a8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zmnqw" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938354 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0cdcfee9-95e5-4030-b6e1-d0dac0bc9651-serving-cert\") pod \"apiserver-76f77b778f-wr4p8\" (UID: \"0cdcfee9-95e5-4030-b6e1-d0dac0bc9651\") " pod="openshift-apiserver/apiserver-76f77b778f-wr4p8" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938368 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0cdcfee9-95e5-4030-b6e1-d0dac0bc9651-encryption-config\") pod \"apiserver-76f77b778f-wr4p8\" (UID: \"0cdcfee9-95e5-4030-b6e1-d0dac0bc9651\") " pod="openshift-apiserver/apiserver-76f77b778f-wr4p8" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938383 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcv88\" (UniqueName: \"kubernetes.io/projected/96cce4f5-80bd-4c5a-a6cb-2037ccb543a0-kube-api-access-xcv88\") pod \"openshift-controller-manager-operator-756b6f6bc6-kxxq7\" (UID: \"96cce4f5-80bd-4c5a-a6cb-2037ccb543a0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kxxq7" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938399 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0cdcfee9-95e5-4030-b6e1-d0dac0bc9651-etcd-client\") pod \"apiserver-76f77b778f-wr4p8\" (UID: \"0cdcfee9-95e5-4030-b6e1-d0dac0bc9651\") " pod="openshift-apiserver/apiserver-76f77b778f-wr4p8" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938416 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/881cfa9d-a35a-4088-8a39-b4ecd52a3b37-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-lzd99\" (UID: \"881cfa9d-a35a-4088-8a39-b4ecd52a3b37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lzd99" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938437 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c576e65d-f7b1-46f3-976b-f4726ee1ddf3-serving-cert\") pod \"authentication-operator-69f744f599-b9fhb\" (UID: \"c576e65d-f7b1-46f3-976b-f4726ee1ddf3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b9fhb" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938459 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/26565b8e-93a3-4682-8c20-ee6cb2319543-audit-dir\") pod \"oauth-openshift-558db77b4-2bgzw\" (UID: \"26565b8e-93a3-4682-8c20-ee6cb2319543\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bgzw" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938482 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2jdh\" (UniqueName: \"kubernetes.io/projected/dba5aa8f-e4aa-4bb0-b8d4-dac193fa7ede-kube-api-access-p2jdh\") pod \"openshift-apiserver-operator-796bbdcf4f-tpcvd\" (UID: \"dba5aa8f-e4aa-4bb0-b8d4-dac193fa7ede\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tpcvd" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938508 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sh9l7\" (UniqueName: \"kubernetes.io/projected/881cfa9d-a35a-4088-8a39-b4ecd52a3b37-kube-api-access-sh9l7\") pod \"machine-api-operator-5694c8668f-lzd99\" (UID: \"881cfa9d-a35a-4088-8a39-b4ecd52a3b37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lzd99" Feb 17 20:11:04 crc 
kubenswrapper[4793]: I0217 20:11:04.938524 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-2bgzw\" (UID: \"26565b8e-93a3-4682-8c20-ee6cb2319543\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bgzw" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938540 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cceb72ff-1234-4c76-9387-a3b9250d727a-auth-proxy-config\") pod \"machine-approver-56656f9798-d6c8j\" (UID: \"cceb72ff-1234-4c76-9387-a3b9250d727a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d6c8j" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938555 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/934df675-8026-4579-97a6-c8e843581407-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-fmmf6\" (UID: \"934df675-8026-4579-97a6-c8e843581407\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fmmf6" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938570 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/82552c6f-59af-4d20-97ca-82384997434e-secret-volume\") pod \"collect-profiles-29522640-pzkfn\" (UID: \"82552c6f-59af-4d20-97ca-82384997434e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522640-pzkfn" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938586 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/ed3b6732-f761-4d0a-9697-03c81282c267-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-vhb75\" (UID: \"ed3b6732-f761-4d0a-9697-03c81282c267\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhb75" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938601 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c576e65d-f7b1-46f3-976b-f4726ee1ddf3-service-ca-bundle\") pod \"authentication-operator-69f744f599-b9fhb\" (UID: \"c576e65d-f7b1-46f3-976b-f4726ee1ddf3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b9fhb" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938616 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-2bgzw\" (UID: \"26565b8e-93a3-4682-8c20-ee6cb2319543\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bgzw" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938632 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jl8n6\" (UniqueName: \"kubernetes.io/projected/cceb72ff-1234-4c76-9387-a3b9250d727a-kube-api-access-jl8n6\") pod \"machine-approver-56656f9798-d6c8j\" (UID: \"cceb72ff-1234-4c76-9387-a3b9250d727a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d6c8j" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938647 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7f7e3c2-0748-4893-afaf-ca285648f7c0-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-58vqk\" (UID: \"d7f7e3c2-0748-4893-afaf-ca285648f7c0\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-58vqk" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938661 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aea26078-abf2-4d33-b7ab-9b5602799fe3-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-9p2hl\" (UID: \"aea26078-abf2-4d33-b7ab-9b5602799fe3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9p2hl" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938678 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c576e65d-f7b1-46f3-976b-f4726ee1ddf3-config\") pod \"authentication-operator-69f744f599-b9fhb\" (UID: \"c576e65d-f7b1-46f3-976b-f4726ee1ddf3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b9fhb" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938709 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kqtw\" (UniqueName: \"kubernetes.io/projected/0537500b-7cf2-4b68-b0d9-84dff1992304-kube-api-access-5kqtw\") pod \"console-operator-58897d9998-8rb9m\" (UID: \"0537500b-7cf2-4b68-b0d9-84dff1992304\") " pod="openshift-console-operator/console-operator-58897d9998-8rb9m" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938725 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-2bgzw\" (UID: \"26565b8e-93a3-4682-8c20-ee6cb2319543\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bgzw" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938742 4793 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7f7e3c2-0748-4893-afaf-ca285648f7c0-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-58vqk\" (UID: \"d7f7e3c2-0748-4893-afaf-ca285648f7c0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-58vqk" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938761 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c576e65d-f7b1-46f3-976b-f4726ee1ddf3-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-b9fhb\" (UID: \"c576e65d-f7b1-46f3-976b-f4726ee1ddf3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b9fhb" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938779 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pkcd\" (UniqueName: \"kubernetes.io/projected/3da29805-de08-447c-91da-61be0314e49b-kube-api-access-2pkcd\") pod \"machine-config-controller-84d6567774-zq7jg\" (UID: \"3da29805-de08-447c-91da-61be0314e49b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zq7jg" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938795 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0cdcfee9-95e5-4030-b6e1-d0dac0bc9651-audit\") pod \"apiserver-76f77b778f-wr4p8\" (UID: \"0cdcfee9-95e5-4030-b6e1-d0dac0bc9651\") " pod="openshift-apiserver/apiserver-76f77b778f-wr4p8" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938811 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/881cfa9d-a35a-4088-8a39-b4ecd52a3b37-config\") pod \"machine-api-operator-5694c8668f-lzd99\" (UID: 
\"881cfa9d-a35a-4088-8a39-b4ecd52a3b37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lzd99" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938827 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-868cc\" (UniqueName: \"kubernetes.io/projected/ed3b6732-f761-4d0a-9697-03c81282c267-kube-api-access-868cc\") pod \"apiserver-7bbb656c7d-vhb75\" (UID: \"ed3b6732-f761-4d0a-9697-03c81282c267\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhb75" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938841 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f13644d-b44d-450c-ac22-88e8a8c6e41d-serving-cert\") pod \"controller-manager-879f6c89f-qgxk6\" (UID: \"2f13644d-b44d-450c-ac22-88e8a8c6e41d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qgxk6" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938857 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cceb72ff-1234-4c76-9387-a3b9250d727a-config\") pod \"machine-approver-56656f9798-d6c8j\" (UID: \"cceb72ff-1234-4c76-9387-a3b9250d727a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d6c8j" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938871 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0cdcfee9-95e5-4030-b6e1-d0dac0bc9651-image-import-ca\") pod \"apiserver-76f77b778f-wr4p8\" (UID: \"0cdcfee9-95e5-4030-b6e1-d0dac0bc9651\") " pod="openshift-apiserver/apiserver-76f77b778f-wr4p8" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938886 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/ff663e88-1dd8-4094-b281-7c995a83178f-trusted-ca\") pod \"ingress-operator-5b745b69d9-tj8df\" (UID: \"ff663e88-1dd8-4094-b281-7c995a83178f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tj8df" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938900 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2267b\" (UniqueName: \"kubernetes.io/projected/ff663e88-1dd8-4094-b281-7c995a83178f-kube-api-access-2267b\") pod \"ingress-operator-5b745b69d9-tj8df\" (UID: \"ff663e88-1dd8-4094-b281-7c995a83178f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tj8df" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938916 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ed3b6732-f761-4d0a-9697-03c81282c267-audit-policies\") pod \"apiserver-7bbb656c7d-vhb75\" (UID: \"ed3b6732-f761-4d0a-9697-03c81282c267\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhb75" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938931 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aea26078-abf2-4d33-b7ab-9b5602799fe3-config\") pod \"kube-controller-manager-operator-78b949d7b-9p2hl\" (UID: \"aea26078-abf2-4d33-b7ab-9b5602799fe3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9p2hl" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938948 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvsqz\" (UniqueName: \"kubernetes.io/projected/c168c730-aa75-43dd-8590-857f3d711391-kube-api-access-pvsqz\") pod \"cluster-samples-operator-665b6dd947-hv6t4\" (UID: \"c168c730-aa75-43dd-8590-857f3d711391\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hv6t4" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938963 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7303fd77-01cc-4e85-94f9-26bee2290651-config\") pod \"service-ca-operator-777779d784-v4fgc\" (UID: \"7303fd77-01cc-4e85-94f9-26bee2290651\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-v4fgc" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.938994 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ed3b6732-f761-4d0a-9697-03c81282c267-encryption-config\") pod \"apiserver-7bbb656c7d-vhb75\" (UID: \"ed3b6732-f761-4d0a-9697-03c81282c267\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhb75" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.939011 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-2bgzw\" (UID: \"26565b8e-93a3-4682-8c20-ee6cb2319543\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bgzw" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.939029 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d85c8\" (UniqueName: \"kubernetes.io/projected/0cdcfee9-95e5-4030-b6e1-d0dac0bc9651-kube-api-access-d85c8\") pod \"apiserver-76f77b778f-wr4p8\" (UID: \"0cdcfee9-95e5-4030-b6e1-d0dac0bc9651\") " pod="openshift-apiserver/apiserver-76f77b778f-wr4p8" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.939043 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9bbcc262-c708-4bd4-81c3-7bcbb485ddad-config\") pod \"route-controller-manager-6576b87f9c-qpst4\" (UID: \"9bbcc262-c708-4bd4-81c3-7bcbb485ddad\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qpst4" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.939058 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7303fd77-01cc-4e85-94f9-26bee2290651-serving-cert\") pod \"service-ca-operator-777779d784-v4fgc\" (UID: \"7303fd77-01cc-4e85-94f9-26bee2290651\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-v4fgc" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.939072 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6hz6\" (UniqueName: \"kubernetes.io/projected/7303fd77-01cc-4e85-94f9-26bee2290651-kube-api-access-v6hz6\") pod \"service-ca-operator-777779d784-v4fgc\" (UID: \"7303fd77-01cc-4e85-94f9-26bee2290651\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-v4fgc" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.939086 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/53fc14f9-23dc-4927-812f-507f9525b2f8-profile-collector-cert\") pod \"olm-operator-6b444d44fb-g69jb\" (UID: \"53fc14f9-23dc-4927-812f-507f9525b2f8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g69jb" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.939103 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ed3b6732-f761-4d0a-9697-03c81282c267-audit-dir\") pod \"apiserver-7bbb656c7d-vhb75\" (UID: \"ed3b6732-f761-4d0a-9697-03c81282c267\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhb75" Feb 17 
20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.939119 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpscs\" (UniqueName: \"kubernetes.io/projected/c576e65d-f7b1-46f3-976b-f4726ee1ddf3-kube-api-access-xpscs\") pod \"authentication-operator-69f744f599-b9fhb\" (UID: \"c576e65d-f7b1-46f3-976b-f4726ee1ddf3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b9fhb" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.939142 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96cce4f5-80bd-4c5a-a6cb-2037ccb543a0-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-kxxq7\" (UID: \"96cce4f5-80bd-4c5a-a6cb-2037ccb543a0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kxxq7" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.939158 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3da29805-de08-447c-91da-61be0314e49b-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-zq7jg\" (UID: \"3da29805-de08-447c-91da-61be0314e49b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zq7jg" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.939174 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/cceb72ff-1234-4c76-9387-a3b9250d727a-machine-approver-tls\") pod \"machine-approver-56656f9798-d6c8j\" (UID: \"cceb72ff-1234-4c76-9387-a3b9250d727a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d6c8j" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.939198 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/a244a2e3-02bd-40e3-ad89-36de630bf3a8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-zmnqw\" (UID: \"a244a2e3-02bd-40e3-ad89-36de630bf3a8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zmnqw" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.939213 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9bbcc262-c708-4bd4-81c3-7bcbb485ddad-client-ca\") pod \"route-controller-manager-6576b87f9c-qpst4\" (UID: \"9bbcc262-c708-4bd4-81c3-7bcbb485ddad\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qpst4" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.939229 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52mjb\" (UniqueName: \"kubernetes.io/projected/26565b8e-93a3-4682-8c20-ee6cb2319543-kube-api-access-52mjb\") pod \"oauth-openshift-558db77b4-2bgzw\" (UID: \"26565b8e-93a3-4682-8c20-ee6cb2319543\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bgzw" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.939244 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3da29805-de08-447c-91da-61be0314e49b-proxy-tls\") pod \"machine-config-controller-84d6567774-zq7jg\" (UID: \"3da29805-de08-447c-91da-61be0314e49b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zq7jg" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.939949 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0537500b-7cf2-4b68-b0d9-84dff1992304-config\") pod \"console-operator-58897d9998-8rb9m\" (UID: \"0537500b-7cf2-4b68-b0d9-84dff1992304\") " pod="openshift-console-operator/console-operator-58897d9998-8rb9m" Feb 17 20:11:04 crc 
kubenswrapper[4793]: I0217 20:11:04.941027 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f13644d-b44d-450c-ac22-88e8a8c6e41d-config\") pod \"controller-manager-879f6c89f-qgxk6\" (UID: \"2f13644d-b44d-450c-ac22-88e8a8c6e41d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qgxk6" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.942026 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-2bgzw\" (UID: \"26565b8e-93a3-4682-8c20-ee6cb2319543\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bgzw" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.943951 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cceb72ff-1234-4c76-9387-a3b9250d727a-config\") pod \"machine-approver-56656f9798-d6c8j\" (UID: \"cceb72ff-1234-4c76-9387-a3b9250d727a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d6c8j" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.944468 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c576e65d-f7b1-46f3-976b-f4726ee1ddf3-service-ca-bundle\") pod \"authentication-operator-69f744f599-b9fhb\" (UID: \"c576e65d-f7b1-46f3-976b-f4726ee1ddf3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b9fhb" Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.945363 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/7f7e0542-cc87-4c79-8d0f-540577bb44e4-available-featuregates\") pod \"openshift-config-operator-7777fb866f-mr259\" (UID: 
\"7f7e0542-cc87-4c79-8d0f-540577bb44e4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mr259"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.945501 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0cdcfee9-95e5-4030-b6e1-d0dac0bc9651-image-import-ca\") pod \"apiserver-76f77b778f-wr4p8\" (UID: \"0cdcfee9-95e5-4030-b6e1-d0dac0bc9651\") " pod="openshift-apiserver/apiserver-76f77b778f-wr4p8"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.945760 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0cdcfee9-95e5-4030-b6e1-d0dac0bc9651-node-pullsecrets\") pod \"apiserver-76f77b778f-wr4p8\" (UID: \"0cdcfee9-95e5-4030-b6e1-d0dac0bc9651\") " pod="openshift-apiserver/apiserver-76f77b778f-wr4p8"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.945887 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/26565b8e-93a3-4682-8c20-ee6cb2319543-audit-dir\") pod \"oauth-openshift-558db77b4-2bgzw\" (UID: \"26565b8e-93a3-4682-8c20-ee6cb2319543\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bgzw"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.946119 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ed3b6732-f761-4d0a-9697-03c81282c267-audit-policies\") pod \"apiserver-7bbb656c7d-vhb75\" (UID: \"ed3b6732-f761-4d0a-9697-03c81282c267\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhb75"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.946405 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c576e65d-f7b1-46f3-976b-f4726ee1ddf3-config\") pod \"authentication-operator-69f744f599-b9fhb\" (UID: \"c576e65d-f7b1-46f3-976b-f4726ee1ddf3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b9fhb"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.946800 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0cdcfee9-95e5-4030-b6e1-d0dac0bc9651-encryption-config\") pod \"apiserver-76f77b778f-wr4p8\" (UID: \"0cdcfee9-95e5-4030-b6e1-d0dac0bc9651\") " pod="openshift-apiserver/apiserver-76f77b778f-wr4p8"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.946984 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-2bgzw\" (UID: \"26565b8e-93a3-4682-8c20-ee6cb2319543\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bgzw"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.947263 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f13644d-b44d-450c-ac22-88e8a8c6e41d-serving-cert\") pod \"controller-manager-879f6c89f-qgxk6\" (UID: \"2f13644d-b44d-450c-ac22-88e8a8c6e41d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qgxk6"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.947587 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0cdcfee9-95e5-4030-b6e1-d0dac0bc9651-trusted-ca-bundle\") pod \"apiserver-76f77b778f-wr4p8\" (UID: \"0cdcfee9-95e5-4030-b6e1-d0dac0bc9651\") " pod="openshift-apiserver/apiserver-76f77b778f-wr4p8"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.947595 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0cdcfee9-95e5-4030-b6e1-d0dac0bc9651-etcd-serving-ca\") pod \"apiserver-76f77b778f-wr4p8\" (UID: \"0cdcfee9-95e5-4030-b6e1-d0dac0bc9651\") " pod="openshift-apiserver/apiserver-76f77b778f-wr4p8"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.947906 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c-trusted-ca-bundle\") pod \"console-f9d7485db-br8vj\" (UID: \"4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c\") " pod="openshift-console/console-f9d7485db-br8vj"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.948029 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c576e65d-f7b1-46f3-976b-f4726ee1ddf3-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-b9fhb\" (UID: \"c576e65d-f7b1-46f3-976b-f4726ee1ddf3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b9fhb"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.948140 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bbcc262-c708-4bd4-81c3-7bcbb485ddad-config\") pod \"route-controller-manager-6576b87f9c-qpst4\" (UID: \"9bbcc262-c708-4bd4-81c3-7bcbb485ddad\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qpst4"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.948242 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cdcfee9-95e5-4030-b6e1-d0dac0bc9651-config\") pod \"apiserver-76f77b778f-wr4p8\" (UID: \"0cdcfee9-95e5-4030-b6e1-d0dac0bc9651\") " pod="openshift-apiserver/apiserver-76f77b778f-wr4p8"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.948375 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bbcc262-c708-4bd4-81c3-7bcbb485ddad-serving-cert\") pod \"route-controller-manager-6576b87f9c-qpst4\" (UID: \"9bbcc262-c708-4bd4-81c3-7bcbb485ddad\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qpst4"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.948446 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0cdcfee9-95e5-4030-b6e1-d0dac0bc9651-etcd-client\") pod \"apiserver-76f77b778f-wr4p8\" (UID: \"0cdcfee9-95e5-4030-b6e1-d0dac0bc9651\") " pod="openshift-apiserver/apiserver-76f77b778f-wr4p8"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.948947 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-2bgzw\" (UID: \"26565b8e-93a3-4682-8c20-ee6cb2319543\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bgzw"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.949299 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-2bgzw\" (UID: \"26565b8e-93a3-4682-8c20-ee6cb2319543\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bgzw"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.949215 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-2bgzw\" (UID: \"26565b8e-93a3-4682-8c20-ee6cb2319543\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bgzw"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.949128 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c-oauth-serving-cert\") pod \"console-f9d7485db-br8vj\" (UID: \"4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c\") " pod="openshift-console/console-f9d7485db-br8vj"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.949565 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c576e65d-f7b1-46f3-976b-f4726ee1ddf3-serving-cert\") pod \"authentication-operator-69f744f599-b9fhb\" (UID: \"c576e65d-f7b1-46f3-976b-f4726ee1ddf3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b9fhb"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.949616 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cceb72ff-1234-4c76-9387-a3b9250d727a-auth-proxy-config\") pod \"machine-approver-56656f9798-d6c8j\" (UID: \"cceb72ff-1234-4c76-9387-a3b9250d727a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d6c8j"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.949770 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c-console-serving-cert\") pod \"console-f9d7485db-br8vj\" (UID: \"4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c\") " pod="openshift-console/console-f9d7485db-br8vj"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.950260 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/26565b8e-93a3-4682-8c20-ee6cb2319543-audit-policies\") pod \"oauth-openshift-558db77b4-2bgzw\" (UID: \"26565b8e-93a3-4682-8c20-ee6cb2319543\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bgzw"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.950301 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0cdcfee9-95e5-4030-b6e1-d0dac0bc9651-audit\") pod \"apiserver-76f77b778f-wr4p8\" (UID: \"0cdcfee9-95e5-4030-b6e1-d0dac0bc9651\") " pod="openshift-apiserver/apiserver-76f77b778f-wr4p8"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.950316 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0cdcfee9-95e5-4030-b6e1-d0dac0bc9651-audit-dir\") pod \"apiserver-76f77b778f-wr4p8\" (UID: \"0cdcfee9-95e5-4030-b6e1-d0dac0bc9651\") " pod="openshift-apiserver/apiserver-76f77b778f-wr4p8"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.950703 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0537500b-7cf2-4b68-b0d9-84dff1992304-trusted-ca\") pod \"console-operator-58897d9998-8rb9m\" (UID: \"0537500b-7cf2-4b68-b0d9-84dff1992304\") " pod="openshift-console-operator/console-operator-58897d9998-8rb9m"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.951000 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/cceb72ff-1234-4c76-9387-a3b9250d727a-machine-approver-tls\") pod \"machine-approver-56656f9798-d6c8j\" (UID: \"cceb72ff-1234-4c76-9387-a3b9250d727a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d6c8j"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.951339 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c-console-oauth-config\") pod \"console-f9d7485db-br8vj\" (UID: \"4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c\") " pod="openshift-console/console-f9d7485db-br8vj"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.951384 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0537500b-7cf2-4b68-b0d9-84dff1992304-serving-cert\") pod \"console-operator-58897d9998-8rb9m\" (UID: \"0537500b-7cf2-4b68-b0d9-84dff1992304\") " pod="openshift-console-operator/console-operator-58897d9998-8rb9m"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.951572 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-2bgzw\" (UID: \"26565b8e-93a3-4682-8c20-ee6cb2319543\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bgzw"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.952151 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9bbcc262-c708-4bd4-81c3-7bcbb485ddad-client-ca\") pod \"route-controller-manager-6576b87f9c-qpst4\" (UID: \"9bbcc262-c708-4bd4-81c3-7bcbb485ddad\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qpst4"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.952627 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a244a2e3-02bd-40e3-ad89-36de630bf3a8-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-zmnqw\" (UID: \"a244a2e3-02bd-40e3-ad89-36de630bf3a8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zmnqw"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.952764 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96cce4f5-80bd-4c5a-a6cb-2037ccb543a0-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-kxxq7\" (UID: \"96cce4f5-80bd-4c5a-a6cb-2037ccb543a0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kxxq7"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.952832 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dba5aa8f-e4aa-4bb0-b8d4-dac193fa7ede-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-tpcvd\" (UID: \"dba5aa8f-e4aa-4bb0-b8d4-dac193fa7ede\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tpcvd"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.953030 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ed3b6732-f761-4d0a-9697-03c81282c267-audit-dir\") pod \"apiserver-7bbb656c7d-vhb75\" (UID: \"ed3b6732-f761-4d0a-9697-03c81282c267\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhb75"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.953081 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/881cfa9d-a35a-4088-8a39-b4ecd52a3b37-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-lzd99\" (UID: \"881cfa9d-a35a-4088-8a39-b4ecd52a3b37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lzd99"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.953145 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-2bgzw\" (UID: \"26565b8e-93a3-4682-8c20-ee6cb2319543\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bgzw"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.953151 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f7e0542-cc87-4c79-8d0f-540577bb44e4-serving-cert\") pod \"openshift-config-operator-7777fb866f-mr259\" (UID: \"7f7e0542-cc87-4c79-8d0f-540577bb44e4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mr259"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.953213 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c-service-ca\") pod \"console-f9d7485db-br8vj\" (UID: \"4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c\") " pod="openshift-console/console-f9d7485db-br8vj"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.954365 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ed3b6732-f761-4d0a-9697-03c81282c267-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-vhb75\" (UID: \"ed3b6732-f761-4d0a-9697-03c81282c267\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhb75"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.953383 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.953342 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2f13644d-b44d-450c-ac22-88e8a8c6e41d-client-ca\") pod \"controller-manager-879f6c89f-qgxk6\" (UID: \"2f13644d-b44d-450c-ac22-88e8a8c6e41d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qgxk6"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.953748 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96cce4f5-80bd-4c5a-a6cb-2037ccb543a0-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-kxxq7\" (UID: \"96cce4f5-80bd-4c5a-a6cb-2037ccb543a0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kxxq7"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.953906 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ed3b6732-f761-4d0a-9697-03c81282c267-encryption-config\") pod \"apiserver-7bbb656c7d-vhb75\" (UID: \"ed3b6732-f761-4d0a-9697-03c81282c267\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhb75"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.953975 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/881cfa9d-a35a-4088-8a39-b4ecd52a3b37-images\") pod \"machine-api-operator-5694c8668f-lzd99\" (UID: \"881cfa9d-a35a-4088-8a39-b4ecd52a3b37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lzd99"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.954347 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed3b6732-f761-4d0a-9697-03c81282c267-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-vhb75\" (UID: \"ed3b6732-f761-4d0a-9697-03c81282c267\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhb75"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.953576 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c-console-config\") pod \"console-f9d7485db-br8vj\" (UID: \"4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c\") " pod="openshift-console/console-f9d7485db-br8vj"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.954659 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dba5aa8f-e4aa-4bb0-b8d4-dac193fa7ede-config\") pod \"openshift-apiserver-operator-796bbdcf4f-tpcvd\" (UID: \"dba5aa8f-e4aa-4bb0-b8d4-dac193fa7ede\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tpcvd"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.954776 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-2bgzw\" (UID: \"26565b8e-93a3-4682-8c20-ee6cb2319543\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bgzw"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.955163 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed3b6732-f761-4d0a-9697-03c81282c267-serving-cert\") pod \"apiserver-7bbb656c7d-vhb75\" (UID: \"ed3b6732-f761-4d0a-9697-03c81282c267\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhb75"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.955626 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-2bgzw\" (UID: \"26565b8e-93a3-4682-8c20-ee6cb2319543\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bgzw"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.955730 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-2bgzw\" (UID: \"26565b8e-93a3-4682-8c20-ee6cb2319543\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bgzw"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.956060 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-2bgzw\" (UID: \"26565b8e-93a3-4682-8c20-ee6cb2319543\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bgzw"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.956275 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a244a2e3-02bd-40e3-ad89-36de630bf3a8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-zmnqw\" (UID: \"a244a2e3-02bd-40e3-ad89-36de630bf3a8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zmnqw"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.956723 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ed3b6732-f761-4d0a-9697-03c81282c267-etcd-client\") pod \"apiserver-7bbb656c7d-vhb75\" (UID: \"ed3b6732-f761-4d0a-9697-03c81282c267\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhb75"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.957146 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0cdcfee9-95e5-4030-b6e1-d0dac0bc9651-serving-cert\") pod \"apiserver-76f77b778f-wr4p8\" (UID: \"0cdcfee9-95e5-4030-b6e1-d0dac0bc9651\") " pod="openshift-apiserver/apiserver-76f77b778f-wr4p8"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.973240 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 17 20:11:04 crc kubenswrapper[4793]: I0217 20:11:04.992534 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.012307 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.016341 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2f13644d-b44d-450c-ac22-88e8a8c6e41d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-qgxk6\" (UID: \"2f13644d-b44d-450c-ac22-88e8a8c6e41d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qgxk6"
Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.016459 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/881cfa9d-a35a-4088-8a39-b4ecd52a3b37-config\") pod \"machine-api-operator-5694c8668f-lzd99\" (UID: \"881cfa9d-a35a-4088-8a39-b4ecd52a3b37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lzd99"
Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.019479 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/c168c730-aa75-43dd-8590-857f3d711391-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-hv6t4\" (UID: \"c168c730-aa75-43dd-8590-857f3d711391\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hv6t4"
Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.032701 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.039830 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/82552c6f-59af-4d20-97ca-82384997434e-secret-volume\") pod \"collect-profiles-29522640-pzkfn\" (UID: \"82552c6f-59af-4d20-97ca-82384997434e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522640-pzkfn"
Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.039884 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/934df675-8026-4579-97a6-c8e843581407-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-fmmf6\" (UID: \"934df675-8026-4579-97a6-c8e843581407\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fmmf6"
Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.039902 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7f7e3c2-0748-4893-afaf-ca285648f7c0-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-58vqk\" (UID: \"d7f7e3c2-0748-4893-afaf-ca285648f7c0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-58vqk"
Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.039921 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aea26078-abf2-4d33-b7ab-9b5602799fe3-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-9p2hl\" (UID: \"aea26078-abf2-4d33-b7ab-9b5602799fe3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9p2hl"
Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.039968 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7f7e3c2-0748-4893-afaf-ca285648f7c0-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-58vqk\" (UID: \"d7f7e3c2-0748-4893-afaf-ca285648f7c0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-58vqk"
Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.039985 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pkcd\" (UniqueName: \"kubernetes.io/projected/3da29805-de08-447c-91da-61be0314e49b-kube-api-access-2pkcd\") pod \"machine-config-controller-84d6567774-zq7jg\" (UID: \"3da29805-de08-447c-91da-61be0314e49b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zq7jg"
Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.040028 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ff663e88-1dd8-4094-b281-7c995a83178f-trusted-ca\") pod \"ingress-operator-5b745b69d9-tj8df\" (UID: \"ff663e88-1dd8-4094-b281-7c995a83178f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tj8df"
Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.040045 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2267b\" (UniqueName: \"kubernetes.io/projected/ff663e88-1dd8-4094-b281-7c995a83178f-kube-api-access-2267b\") pod \"ingress-operator-5b745b69d9-tj8df\" (UID: \"ff663e88-1dd8-4094-b281-7c995a83178f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tj8df"
Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.040061 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aea26078-abf2-4d33-b7ab-9b5602799fe3-config\") pod \"kube-controller-manager-operator-78b949d7b-9p2hl\" (UID: \"aea26078-abf2-4d33-b7ab-9b5602799fe3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9p2hl"
Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.040103 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7303fd77-01cc-4e85-94f9-26bee2290651-config\") pod \"service-ca-operator-777779d784-v4fgc\" (UID: \"7303fd77-01cc-4e85-94f9-26bee2290651\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-v4fgc"
Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.040132 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7303fd77-01cc-4e85-94f9-26bee2290651-serving-cert\") pod \"service-ca-operator-777779d784-v4fgc\" (UID: \"7303fd77-01cc-4e85-94f9-26bee2290651\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-v4fgc"
Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.040149 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6hz6\" (UniqueName: \"kubernetes.io/projected/7303fd77-01cc-4e85-94f9-26bee2290651-kube-api-access-v6hz6\") pod \"service-ca-operator-777779d784-v4fgc\" (UID: \"7303fd77-01cc-4e85-94f9-26bee2290651\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-v4fgc"
Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.040189 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3da29805-de08-447c-91da-61be0314e49b-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-zq7jg\" (UID: \"3da29805-de08-447c-91da-61be0314e49b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zq7jg"
Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.040206 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/53fc14f9-23dc-4927-812f-507f9525b2f8-profile-collector-cert\") pod \"olm-operator-6b444d44fb-g69jb\" (UID: \"53fc14f9-23dc-4927-812f-507f9525b2f8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g69jb"
Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.040261 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3da29805-de08-447c-91da-61be0314e49b-proxy-tls\") pod \"machine-config-controller-84d6567774-zq7jg\" (UID: \"3da29805-de08-447c-91da-61be0314e49b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zq7jg"
Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.040303 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ff663e88-1dd8-4094-b281-7c995a83178f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-tj8df\" (UID: \"ff663e88-1dd8-4094-b281-7c995a83178f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tj8df"
Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.040318 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwwfx\" (UniqueName: \"kubernetes.io/projected/934df675-8026-4579-97a6-c8e843581407-kube-api-access-wwwfx\") pod \"kube-storage-version-migrator-operator-b67b599dd-fmmf6\" (UID: \"934df675-8026-4579-97a6-c8e843581407\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fmmf6"
Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.040332 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/53fc14f9-23dc-4927-812f-507f9525b2f8-srv-cert\") pod \"olm-operator-6b444d44fb-g69jb\" (UID: \"53fc14f9-23dc-4927-812f-507f9525b2f8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g69jb"
Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.040375 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aea26078-abf2-4d33-b7ab-9b5602799fe3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-9p2hl\" (UID: \"aea26078-abf2-4d33-b7ab-9b5602799fe3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9p2hl"
Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.040407 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7f7e3c2-0748-4893-afaf-ca285648f7c0-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-58vqk\" (UID: \"d7f7e3c2-0748-4893-afaf-ca285648f7c0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-58vqk"
Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.040421 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ff663e88-1dd8-4094-b281-7c995a83178f-metrics-tls\") pod \"ingress-operator-5b745b69d9-tj8df\" (UID: \"ff663e88-1dd8-4094-b281-7c995a83178f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tj8df"
Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.040443 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jg99\" (UniqueName: \"kubernetes.io/projected/82552c6f-59af-4d20-97ca-82384997434e-kube-api-access-5jg99\") pod \"collect-profiles-29522640-pzkfn\" (UID: \"82552c6f-59af-4d20-97ca-82384997434e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522640-pzkfn"
Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.040460 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/934df675-8026-4579-97a6-c8e843581407-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-fmmf6\" (UID: \"934df675-8026-4579-97a6-c8e843581407\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fmmf6"
Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.040493 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vw7qc\" (UniqueName: \"kubernetes.io/projected/53fc14f9-23dc-4927-812f-507f9525b2f8-kube-api-access-vw7qc\") pod \"olm-operator-6b444d44fb-g69jb\" (UID: \"53fc14f9-23dc-4927-812f-507f9525b2f8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g69jb"
Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.040509 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82552c6f-59af-4d20-97ca-82384997434e-config-volume\") pod \"collect-profiles-29522640-pzkfn\" (UID: \"82552c6f-59af-4d20-97ca-82384997434e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522640-pzkfn"
Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.041134 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aea26078-abf2-4d33-b7ab-9b5602799fe3-config\") pod \"kube-controller-manager-operator-78b949d7b-9p2hl\" (UID: \"aea26078-abf2-4d33-b7ab-9b5602799fe3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9p2hl"
Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.041285 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3da29805-de08-447c-91da-61be0314e49b-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-zq7jg\" (UID: \"3da29805-de08-447c-91da-61be0314e49b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zq7jg"
Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.045363 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/53fc14f9-23dc-4927-812f-507f9525b2f8-srv-cert\") pod \"olm-operator-6b444d44fb-g69jb\" (UID: \"53fc14f9-23dc-4927-812f-507f9525b2f8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g69jb"
Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.046740 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aea26078-abf2-4d33-b7ab-9b5602799fe3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-9p2hl\" (UID: \"aea26078-abf2-4d33-b7ab-9b5602799fe3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9p2hl"
Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.053129 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.065304 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/82552c6f-59af-4d20-97ca-82384997434e-secret-volume\") pod \"collect-profiles-29522640-pzkfn\" (UID: \"82552c6f-59af-4d20-97ca-82384997434e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522640-pzkfn"
Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.065494 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/53fc14f9-23dc-4927-812f-507f9525b2f8-profile-collector-cert\") pod \"olm-operator-6b444d44fb-g69jb\" (UID: \"53fc14f9-23dc-4927-812f-507f9525b2f8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g69jb"
Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.072873 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.093107 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.134258 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.134769 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.141286 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ff663e88-1dd8-4094-b281-7c995a83178f-trusted-ca\") pod \"ingress-operator-5b745b69d9-tj8df\" (UID: \"ff663e88-1dd8-4094-b281-7c995a83178f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tj8df"
Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.153107 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.165428 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ff663e88-1dd8-4094-b281-7c995a83178f-metrics-tls\") pod \"ingress-operator-5b745b69d9-tj8df\" (UID: \"ff663e88-1dd8-4094-b281-7c995a83178f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tj8df"
Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.172328 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.192679 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.212568 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.233411 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.252184 4793
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.272894 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.292737 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.313761 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.332763 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.353783 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.373349 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.386155 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3da29805-de08-447c-91da-61be0314e49b-proxy-tls\") pod \"machine-config-controller-84d6567774-zq7jg\" (UID: \"3da29805-de08-447c-91da-61be0314e49b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zq7jg" Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.393128 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.412737 4793 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.426175 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7f7e3c2-0748-4893-afaf-ca285648f7c0-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-58vqk\" (UID: \"d7f7e3c2-0748-4893-afaf-ca285648f7c0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-58vqk" Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.434140 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.453257 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.463203 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7f7e3c2-0748-4893-afaf-ca285648f7c0-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-58vqk\" (UID: \"d7f7e3c2-0748-4893-afaf-ca285648f7c0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-58vqk" Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.473673 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.492519 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.513503 4793 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.523156 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82552c6f-59af-4d20-97ca-82384997434e-config-volume\") pod \"collect-profiles-29522640-pzkfn\" (UID: \"82552c6f-59af-4d20-97ca-82384997434e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522640-pzkfn" Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.533134 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.554121 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.562208 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/934df675-8026-4579-97a6-c8e843581407-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-fmmf6\" (UID: \"934df675-8026-4579-97a6-c8e843581407\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fmmf6" Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.573634 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.593675 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.612922 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.627129 4793 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7303fd77-01cc-4e85-94f9-26bee2290651-serving-cert\") pod \"service-ca-operator-777779d784-v4fgc\" (UID: \"7303fd77-01cc-4e85-94f9-26bee2290651\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-v4fgc" Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.635775 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.642763 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7303fd77-01cc-4e85-94f9-26bee2290651-config\") pod \"service-ca-operator-777779d784-v4fgc\" (UID: \"7303fd77-01cc-4e85-94f9-26bee2290651\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-v4fgc" Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.652532 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.674182 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.693108 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.707253 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/934df675-8026-4579-97a6-c8e843581407-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-fmmf6\" (UID: \"934df675-8026-4579-97a6-c8e843581407\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fmmf6" Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.713515 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.732933 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.753104 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.773873 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.793471 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.813822 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.833381 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.853330 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.870916 4793 request.go:700] Waited for 1.006257443s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-2llfx&limit=500&resourceVersion=0 Feb 17 20:11:05 crc 
kubenswrapper[4793]: I0217 20:11:05.872662 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.893633 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.913658 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.932788 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.953378 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.973387 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 17 20:11:05 crc kubenswrapper[4793]: I0217 20:11:05.992604 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 17 20:11:06 crc kubenswrapper[4793]: I0217 20:11:06.013219 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 17 20:11:06 crc kubenswrapper[4793]: I0217 20:11:06.032902 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 17 20:11:06 crc kubenswrapper[4793]: I0217 20:11:06.053260 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 17 20:11:06 crc kubenswrapper[4793]: I0217 20:11:06.073155 4793 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 17 20:11:06 crc kubenswrapper[4793]: I0217 20:11:06.092945 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 17 20:11:06 crc kubenswrapper[4793]: I0217 20:11:06.113404 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 17 20:11:06 crc kubenswrapper[4793]: I0217 20:11:06.133076 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 17 20:11:06 crc kubenswrapper[4793]: I0217 20:11:06.153599 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 17 20:11:06 crc kubenswrapper[4793]: I0217 20:11:06.188517 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 17 20:11:06 crc kubenswrapper[4793]: I0217 20:11:06.193261 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 17 20:11:06 crc kubenswrapper[4793]: I0217 20:11:06.213441 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 17 20:11:06 crc kubenswrapper[4793]: I0217 20:11:06.234989 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 17 20:11:06 crc kubenswrapper[4793]: I0217 20:11:06.252993 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 17 20:11:06 crc kubenswrapper[4793]: I0217 20:11:06.272515 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 17 20:11:06 crc 
kubenswrapper[4793]: I0217 20:11:06.293251 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 17 20:11:06 crc kubenswrapper[4793]: I0217 20:11:06.313334 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 17 20:11:06 crc kubenswrapper[4793]: I0217 20:11:06.333163 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 17 20:11:06 crc kubenswrapper[4793]: I0217 20:11:06.353786 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 17 20:11:06 crc kubenswrapper[4793]: I0217 20:11:06.373143 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 17 20:11:06 crc kubenswrapper[4793]: I0217 20:11:06.396594 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 17 20:11:06 crc kubenswrapper[4793]: I0217 20:11:06.412672 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 17 20:11:06 crc kubenswrapper[4793]: I0217 20:11:06.432620 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 17 20:11:06 crc kubenswrapper[4793]: I0217 20:11:06.453780 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 17 20:11:06 crc kubenswrapper[4793]: I0217 20:11:06.473329 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 17 20:11:06 crc kubenswrapper[4793]: I0217 20:11:06.492884 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 17 20:11:06 crc kubenswrapper[4793]: I0217 
20:11:06.512643 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 17 20:11:06 crc kubenswrapper[4793]: I0217 20:11:06.533391 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 17 20:11:06 crc kubenswrapper[4793]: I0217 20:11:06.552991 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 17 20:11:06 crc kubenswrapper[4793]: I0217 20:11:06.572644 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 17 20:11:06 crc kubenswrapper[4793]: I0217 20:11:06.593175 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 17 20:11:06 crc kubenswrapper[4793]: I0217 20:11:06.613020 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 17 20:11:06 crc kubenswrapper[4793]: I0217 20:11:06.633087 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 17 20:11:06 crc kubenswrapper[4793]: I0217 20:11:06.653336 4793 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 17 20:11:06 crc kubenswrapper[4793]: I0217 20:11:06.673360 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 17 20:11:06 crc kubenswrapper[4793]: I0217 20:11:06.693506 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 17 20:11:06 crc kubenswrapper[4793]: I0217 20:11:06.733511 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 17 
20:11:06 crc kubenswrapper[4793]: I0217 20:11:06.753614 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 17 20:11:06 crc kubenswrapper[4793]: I0217 20:11:06.772854 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 17 20:11:06 crc kubenswrapper[4793]: I0217 20:11:06.819196 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-868cc\" (UniqueName: \"kubernetes.io/projected/ed3b6732-f761-4d0a-9697-03c81282c267-kube-api-access-868cc\") pod \"apiserver-7bbb656c7d-vhb75\" (UID: \"ed3b6732-f761-4d0a-9697-03c81282c267\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhb75" Feb 17 20:11:06 crc kubenswrapper[4793]: I0217 20:11:06.841555 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcv88\" (UniqueName: \"kubernetes.io/projected/96cce4f5-80bd-4c5a-a6cb-2037ccb543a0-kube-api-access-xcv88\") pod \"openshift-controller-manager-operator-756b6f6bc6-kxxq7\" (UID: \"96cce4f5-80bd-4c5a-a6cb-2037ccb543a0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kxxq7" Feb 17 20:11:06 crc kubenswrapper[4793]: I0217 20:11:06.865061 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krs7v\" (UniqueName: \"kubernetes.io/projected/a244a2e3-02bd-40e3-ad89-36de630bf3a8-kube-api-access-krs7v\") pod \"cluster-image-registry-operator-dc59b4c8b-zmnqw\" (UID: \"a244a2e3-02bd-40e3-ad89-36de630bf3a8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zmnqw" Feb 17 20:11:06 crc kubenswrapper[4793]: I0217 20:11:06.871371 4793 request.go:700] Waited for 1.925557186s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/serviceaccounts/machine-approver-sa/token Feb 17 20:11:06 crc 
kubenswrapper[4793]: I0217 20:11:06.872333 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a244a2e3-02bd-40e3-ad89-36de630bf3a8-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-zmnqw\" (UID: \"a244a2e3-02bd-40e3-ad89-36de630bf3a8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zmnqw" Feb 17 20:11:06 crc kubenswrapper[4793]: I0217 20:11:06.891286 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jl8n6\" (UniqueName: \"kubernetes.io/projected/cceb72ff-1234-4c76-9387-a3b9250d727a-kube-api-access-jl8n6\") pod \"machine-approver-56656f9798-d6c8j\" (UID: \"cceb72ff-1234-4c76-9387-a3b9250d727a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d6c8j" Feb 17 20:11:06 crc kubenswrapper[4793]: I0217 20:11:06.909530 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhb75" Feb 17 20:11:06 crc kubenswrapper[4793]: I0217 20:11:06.917113 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2jdh\" (UniqueName: \"kubernetes.io/projected/dba5aa8f-e4aa-4bb0-b8d4-dac193fa7ede-kube-api-access-p2jdh\") pod \"openshift-apiserver-operator-796bbdcf4f-tpcvd\" (UID: \"dba5aa8f-e4aa-4bb0-b8d4-dac193fa7ede\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tpcvd" Feb 17 20:11:06 crc kubenswrapper[4793]: I0217 20:11:06.932289 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sh9l7\" (UniqueName: \"kubernetes.io/projected/881cfa9d-a35a-4088-8a39-b4ecd52a3b37-kube-api-access-sh9l7\") pod \"machine-api-operator-5694c8668f-lzd99\" (UID: \"881cfa9d-a35a-4088-8a39-b4ecd52a3b37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lzd99" Feb 17 20:11:06 crc kubenswrapper[4793]: I0217 20:11:06.951446 4793 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvsqz\" (UniqueName: \"kubernetes.io/projected/c168c730-aa75-43dd-8590-857f3d711391-kube-api-access-pvsqz\") pod \"cluster-samples-operator-665b6dd947-hv6t4\" (UID: \"c168c730-aa75-43dd-8590-857f3d711391\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hv6t4" Feb 17 20:11:06 crc kubenswrapper[4793]: I0217 20:11:06.960314 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hv6t4" Feb 17 20:11:06 crc kubenswrapper[4793]: I0217 20:11:06.973021 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kqtw\" (UniqueName: \"kubernetes.io/projected/0537500b-7cf2-4b68-b0d9-84dff1992304-kube-api-access-5kqtw\") pod \"console-operator-58897d9998-8rb9m\" (UID: \"0537500b-7cf2-4b68-b0d9-84dff1992304\") " pod="openshift-console-operator/console-operator-58897d9998-8rb9m" Feb 17 20:11:06 crc kubenswrapper[4793]: I0217 20:11:06.988289 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d85c8\" (UniqueName: \"kubernetes.io/projected/0cdcfee9-95e5-4030-b6e1-d0dac0bc9651-kube-api-access-d85c8\") pod \"apiserver-76f77b778f-wr4p8\" (UID: \"0cdcfee9-95e5-4030-b6e1-d0dac0bc9651\") " pod="openshift-apiserver/apiserver-76f77b778f-wr4p8" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.008869 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tpcvd" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.012458 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn5mc\" (UniqueName: \"kubernetes.io/projected/7f7e0542-cc87-4c79-8d0f-540577bb44e4-kube-api-access-rn5mc\") pod \"openshift-config-operator-7777fb866f-mr259\" (UID: \"7f7e0542-cc87-4c79-8d0f-540577bb44e4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mr259" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.019183 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zmnqw" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.027069 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d6c8j" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.031216 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6pzs\" (UniqueName: \"kubernetes.io/projected/9bbcc262-c708-4bd4-81c3-7bcbb485ddad-kube-api-access-t6pzs\") pod \"route-controller-manager-6576b87f9c-qpst4\" (UID: \"9bbcc262-c708-4bd4-81c3-7bcbb485ddad\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qpst4" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.037847 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kxxq7" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.055465 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d6g5\" (UniqueName: \"kubernetes.io/projected/2f13644d-b44d-450c-ac22-88e8a8c6e41d-kube-api-access-9d6g5\") pod \"controller-manager-879f6c89f-qgxk6\" (UID: \"2f13644d-b44d-450c-ac22-88e8a8c6e41d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qgxk6" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.067761 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz5sr\" (UniqueName: \"kubernetes.io/projected/0ccc2b73-aefc-46a4-b1bd-bf0b09d425d8-kube-api-access-pz5sr\") pod \"downloads-7954f5f757-knrjc\" (UID: \"0ccc2b73-aefc-46a4-b1bd-bf0b09d425d8\") " pod="openshift-console/downloads-7954f5f757-knrjc" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.093702 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52mjb\" (UniqueName: \"kubernetes.io/projected/26565b8e-93a3-4682-8c20-ee6cb2319543-kube-api-access-52mjb\") pod \"oauth-openshift-558db77b4-2bgzw\" (UID: \"26565b8e-93a3-4682-8c20-ee6cb2319543\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bgzw" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.111423 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpscs\" (UniqueName: \"kubernetes.io/projected/c576e65d-f7b1-46f3-976b-f4726ee1ddf3-kube-api-access-xpscs\") pod \"authentication-operator-69f744f599-b9fhb\" (UID: \"c576e65d-f7b1-46f3-976b-f4726ee1ddf3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b9fhb" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.117313 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-wr4p8" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.125509 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-b9fhb" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.129484 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjwws\" (UniqueName: \"kubernetes.io/projected/4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c-kube-api-access-xjwws\") pod \"console-f9d7485db-br8vj\" (UID: \"4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c\") " pod="openshift-console/console-f9d7485db-br8vj" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.148038 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aea26078-abf2-4d33-b7ab-9b5602799fe3-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-9p2hl\" (UID: \"aea26078-abf2-4d33-b7ab-9b5602799fe3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9p2hl" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.169726 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7f7e3c2-0748-4893-afaf-ca285648f7c0-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-58vqk\" (UID: \"d7f7e3c2-0748-4893-afaf-ca285648f7c0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-58vqk" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.186500 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mr259" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.195712 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pkcd\" (UniqueName: \"kubernetes.io/projected/3da29805-de08-447c-91da-61be0314e49b-kube-api-access-2pkcd\") pod \"machine-config-controller-84d6567774-zq7jg\" (UID: \"3da29805-de08-447c-91da-61be0314e49b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zq7jg" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.208045 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hv6t4"] Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.209954 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2267b\" (UniqueName: \"kubernetes.io/projected/ff663e88-1dd8-4094-b281-7c995a83178f-kube-api-access-2267b\") pod \"ingress-operator-5b745b69d9-tj8df\" (UID: \"ff663e88-1dd8-4094-b281-7c995a83178f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tj8df" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.217861 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-lzd99" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.231515 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ff663e88-1dd8-4094-b281-7c995a83178f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-tj8df\" (UID: \"ff663e88-1dd8-4094-b281-7c995a83178f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tj8df" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.234778 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-vhb75"] Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.236817 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-2bgzw" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.246766 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qpst4" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.247050 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwwfx\" (UniqueName: \"kubernetes.io/projected/934df675-8026-4579-97a6-c8e843581407-kube-api-access-wwwfx\") pod \"kube-storage-version-migrator-operator-b67b599dd-fmmf6\" (UID: \"934df675-8026-4579-97a6-c8e843581407\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fmmf6" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.253489 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-8rb9m" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.266530 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6hz6\" (UniqueName: \"kubernetes.io/projected/7303fd77-01cc-4e85-94f9-26bee2290651-kube-api-access-v6hz6\") pod \"service-ca-operator-777779d784-v4fgc\" (UID: \"7303fd77-01cc-4e85-94f9-26bee2290651\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-v4fgc" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.278329 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d6c8j" event={"ID":"cceb72ff-1234-4c76-9387-a3b9250d727a","Type":"ContainerStarted","Data":"4c8a1de93eaed9df4692fc2a4bd7fa19e2ac468716104d2be8c712c07815107f"} Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.278645 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-br8vj" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.290780 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vw7qc\" (UniqueName: \"kubernetes.io/projected/53fc14f9-23dc-4927-812f-507f9525b2f8-kube-api-access-vw7qc\") pod \"olm-operator-6b444d44fb-g69jb\" (UID: \"53fc14f9-23dc-4927-812f-507f9525b2f8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g69jb" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.291039 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-knrjc" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.309218 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jg99\" (UniqueName: \"kubernetes.io/projected/82552c6f-59af-4d20-97ca-82384997434e-kube-api-access-5jg99\") pod \"collect-profiles-29522640-pzkfn\" (UID: \"82552c6f-59af-4d20-97ca-82384997434e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522640-pzkfn" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.347968 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-qgxk6" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.371297 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kxxq7"] Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.374460 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/34a020ed-794d-4b17-a8a4-a761c7fcc50d-signing-cabundle\") pod \"service-ca-9c57cc56f-87g2m\" (UID: \"34a020ed-794d-4b17-a8a4-a761c7fcc50d\") " pod="openshift-service-ca/service-ca-9c57cc56f-87g2m" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.374489 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8shz\" (UniqueName: \"kubernetes.io/projected/34a020ed-794d-4b17-a8a4-a761c7fcc50d-kube-api-access-p8shz\") pod \"service-ca-9c57cc56f-87g2m\" (UID: \"34a020ed-794d-4b17-a8a4-a761c7fcc50d\") " pod="openshift-service-ca/service-ca-9c57cc56f-87g2m" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.374506 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt5x8\" (UniqueName: 
\"kubernetes.io/projected/bdfa45f5-3f15-4f20-823d-17b08bd674d7-kube-api-access-dt5x8\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.374524 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bdfa45f5-3f15-4f20-823d-17b08bd674d7-installation-pull-secrets\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.374547 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.374564 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bdfa45f5-3f15-4f20-823d-17b08bd674d7-trusted-ca\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.374586 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bdfa45f5-3f15-4f20-823d-17b08bd674d7-ca-trust-extracted\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb" 
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.374605 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d43c3902-fd03-4c2c-a678-988b45dd62d0-webhook-cert\") pod \"packageserver-d55dfcdfc-d5mb8\" (UID: \"d43c3902-fd03-4c2c-a678-988b45dd62d0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d5mb8" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.374626 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d43c3902-fd03-4c2c-a678-988b45dd62d0-apiservice-cert\") pod \"packageserver-d55dfcdfc-d5mb8\" (UID: \"d43c3902-fd03-4c2c-a678-988b45dd62d0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d5mb8" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.374652 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdrpt\" (UniqueName: \"kubernetes.io/projected/080011e4-f1f0-46eb-81f0-64f82dd6e8e1-kube-api-access-cdrpt\") pod \"migrator-59844c95c7-hrmm7\" (UID: \"080011e4-f1f0-46eb-81f0-64f82dd6e8e1\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hrmm7" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.374676 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bdfa45f5-3f15-4f20-823d-17b08bd674d7-registry-certificates\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.375754 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/bdfa45f5-3f15-4f20-823d-17b08bd674d7-registry-tls\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.375783 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/34a020ed-794d-4b17-a8a4-a761c7fcc50d-signing-key\") pod \"service-ca-9c57cc56f-87g2m\" (UID: \"34a020ed-794d-4b17-a8a4-a761c7fcc50d\") " pod="openshift-service-ca/service-ca-9c57cc56f-87g2m" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.375799 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpxcp\" (UniqueName: \"kubernetes.io/projected/d43c3902-fd03-4c2c-a678-988b45dd62d0-kube-api-access-zpxcp\") pod \"packageserver-d55dfcdfc-d5mb8\" (UID: \"d43c3902-fd03-4c2c-a678-988b45dd62d0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d5mb8" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.375820 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d43c3902-fd03-4c2c-a678-988b45dd62d0-tmpfs\") pod \"packageserver-d55dfcdfc-d5mb8\" (UID: \"d43c3902-fd03-4c2c-a678-988b45dd62d0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d5mb8" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.375835 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bdfa45f5-3f15-4f20-823d-17b08bd674d7-bound-sa-token\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb" Feb 17 20:11:07 crc kubenswrapper[4793]: E0217 20:11:07.376113 
4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 20:11:07.876101943 +0000 UTC m=+143.167800254 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8l7nb" (UID: "bdfa45f5-3f15-4f20-823d-17b08bd674d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.396162 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9p2hl" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.404364 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g69jb" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.409946 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tj8df" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.439758 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zq7jg" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.446419 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-58vqk" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.476464 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.476640 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/780a5995-cbd8-45ae-a25b-c1d4b2190542-metrics-tls\") pod \"dns-default-qphb2\" (UID: \"780a5995-cbd8-45ae-a25b-c1d4b2190542\") " pod="openshift-dns/dns-default-qphb2" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.476660 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/54895b59-e973-4dcf-90eb-f7660227bd6e-cert\") pod \"ingress-canary-4qdjn\" (UID: \"54895b59-e973-4dcf-90eb-f7660227bd6e\") " pod="openshift-ingress-canary/ingress-canary-4qdjn" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.476714 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtfk4\" (UniqueName: \"kubernetes.io/projected/7e58c09e-0c62-49d9-bd2c-8a9c6e1382ec-kube-api-access-qtfk4\") pod \"dns-operator-744455d44c-4wb4w\" (UID: \"7e58c09e-0c62-49d9-bd2c-8a9c6e1382ec\") " pod="openshift-dns-operator/dns-operator-744455d44c-4wb4w" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.476736 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bdfa45f5-3f15-4f20-823d-17b08bd674d7-registry-tls\") pod 
\"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.476772 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/34a020ed-794d-4b17-a8a4-a761c7fcc50d-signing-key\") pod \"service-ca-9c57cc56f-87g2m\" (UID: \"34a020ed-794d-4b17-a8a4-a761c7fcc50d\") " pod="openshift-service-ca/service-ca-9c57cc56f-87g2m" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.476789 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpxcp\" (UniqueName: \"kubernetes.io/projected/d43c3902-fd03-4c2c-a678-988b45dd62d0-kube-api-access-zpxcp\") pod \"packageserver-d55dfcdfc-d5mb8\" (UID: \"d43c3902-fd03-4c2c-a678-988b45dd62d0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d5mb8" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.476805 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt5kd\" (UniqueName: \"kubernetes.io/projected/54895b59-e973-4dcf-90eb-f7660227bd6e-kube-api-access-xt5kd\") pod \"ingress-canary-4qdjn\" (UID: \"54895b59-e973-4dcf-90eb-f7660227bd6e\") " pod="openshift-ingress-canary/ingress-canary-4qdjn" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.476842 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d43c3902-fd03-4c2c-a678-988b45dd62d0-tmpfs\") pod \"packageserver-d55dfcdfc-d5mb8\" (UID: \"d43c3902-fd03-4c2c-a678-988b45dd62d0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d5mb8" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.476884 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2f46bb13-0fc5-4acf-8f35-45b134bf590c-config\") pod \"kube-apiserver-operator-766d6c64bb-z5p2q\" (UID: \"2f46bb13-0fc5-4acf-8f35-45b134bf590c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z5p2q" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.476921 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bdfa45f5-3f15-4f20-823d-17b08bd674d7-bound-sa-token\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.476940 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/32dbc133-34ed-449e-a397-ff3f0b83418c-default-certificate\") pod \"router-default-5444994796-qcpml\" (UID: \"32dbc133-34ed-449e-a397-ff3f0b83418c\") " pod="openshift-ingress/router-default-5444994796-qcpml" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.476962 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/34a020ed-794d-4b17-a8a4-a761c7fcc50d-signing-cabundle\") pod \"service-ca-9c57cc56f-87g2m\" (UID: \"34a020ed-794d-4b17-a8a4-a761c7fcc50d\") " pod="openshift-service-ca/service-ca-9c57cc56f-87g2m" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.476984 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8shz\" (UniqueName: \"kubernetes.io/projected/34a020ed-794d-4b17-a8a4-a761c7fcc50d-kube-api-access-p8shz\") pod \"service-ca-9c57cc56f-87g2m\" (UID: \"34a020ed-794d-4b17-a8a4-a761c7fcc50d\") " pod="openshift-service-ca/service-ca-9c57cc56f-87g2m" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.477024 4793 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-dt5x8\" (UniqueName: \"kubernetes.io/projected/bdfa45f5-3f15-4f20-823d-17b08bd674d7-kube-api-access-dt5x8\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.477040 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2d1fec83-8d01-446f-80f1-2c5f815f44bc-etcd-client\") pod \"etcd-operator-b45778765-chhz9\" (UID: \"2d1fec83-8d01-446f-80f1-2c5f815f44bc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-chhz9" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.477056 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g7jt\" (UniqueName: \"kubernetes.io/projected/dff7e56c-8f87-477c-b1a5-908263f5fca5-kube-api-access-9g7jt\") pod \"catalog-operator-68c6474976-lltb7\" (UID: \"dff7e56c-8f87-477c-b1a5-908263f5fca5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltb7" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.477070 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/35681a96-27de-42f9-99a1-bf999199ecec-images\") pod \"machine-config-operator-74547568cd-s6t4n\" (UID: \"35681a96-27de-42f9-99a1-bf999199ecec\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s6t4n" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.477087 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2d1fec83-8d01-446f-80f1-2c5f815f44bc-etcd-service-ca\") pod \"etcd-operator-b45778765-chhz9\" (UID: 
\"2d1fec83-8d01-446f-80f1-2c5f815f44bc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-chhz9" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.477105 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bdfa45f5-3f15-4f20-823d-17b08bd674d7-installation-pull-secrets\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.477119 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/32dbc133-34ed-449e-a397-ff3f0b83418c-metrics-certs\") pod \"router-default-5444994796-qcpml\" (UID: \"32dbc133-34ed-449e-a397-ff3f0b83418c\") " pod="openshift-ingress/router-default-5444994796-qcpml" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.477142 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dgww\" (UniqueName: \"kubernetes.io/projected/97c744eb-5350-407d-8661-5d20a87110d5-kube-api-access-7dgww\") pod \"package-server-manager-789f6589d5-m4r4g\" (UID: \"97c744eb-5350-407d-8661-5d20a87110d5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m4r4g" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.477158 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6f77\" (UniqueName: \"kubernetes.io/projected/01569bd6-0c03-4cb4-ac0a-897db7189161-kube-api-access-f6f77\") pod \"csi-hostpathplugin-tktcd\" (UID: \"01569bd6-0c03-4cb4-ac0a-897db7189161\") " pod="hostpath-provisioner/csi-hostpathplugin-tktcd" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.477204 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/dff7e56c-8f87-477c-b1a5-908263f5fca5-profile-collector-cert\") pod \"catalog-operator-68c6474976-lltb7\" (UID: \"dff7e56c-8f87-477c-b1a5-908263f5fca5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltb7" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.477221 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bdfa45f5-3f15-4f20-823d-17b08bd674d7-trusted-ca\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.477234 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32dbc133-34ed-449e-a397-ff3f0b83418c-service-ca-bundle\") pod \"router-default-5444994796-qcpml\" (UID: \"32dbc133-34ed-449e-a397-ff3f0b83418c\") " pod="openshift-ingress/router-default-5444994796-qcpml" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.477261 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/83495f96-c3b0-4871-876c-07832519e1d8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-99gxb\" (UID: \"83495f96-c3b0-4871-876c-07832519e1d8\") " pod="openshift-marketplace/marketplace-operator-79b997595-99gxb" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.477276 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/01569bd6-0c03-4cb4-ac0a-897db7189161-plugins-dir\") pod \"csi-hostpathplugin-tktcd\" (UID: \"01569bd6-0c03-4cb4-ac0a-897db7189161\") " pod="hostpath-provisioner/csi-hostpathplugin-tktcd" Feb 17 
20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.477299 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7e58c09e-0c62-49d9-bd2c-8a9c6e1382ec-metrics-tls\") pod \"dns-operator-744455d44c-4wb4w\" (UID: \"7e58c09e-0c62-49d9-bd2c-8a9c6e1382ec\") " pod="openshift-dns-operator/dns-operator-744455d44c-4wb4w" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.477315 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnjck\" (UniqueName: \"kubernetes.io/projected/35681a96-27de-42f9-99a1-bf999199ecec-kube-api-access-tnjck\") pod \"machine-config-operator-74547568cd-s6t4n\" (UID: \"35681a96-27de-42f9-99a1-bf999199ecec\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s6t4n" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.477348 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bdfa45f5-3f15-4f20-823d-17b08bd674d7-ca-trust-extracted\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.477366 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dcb373df-1dcb-4517-b9a8-0d3e548c444a-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-hc4b6\" (UID: \"dcb373df-1dcb-4517-b9a8-0d3e548c444a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hc4b6" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.477380 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmpk8\" (UniqueName: 
\"kubernetes.io/projected/780a5995-cbd8-45ae-a25b-c1d4b2190542-kube-api-access-rmpk8\") pod \"dns-default-qphb2\" (UID: \"780a5995-cbd8-45ae-a25b-c1d4b2190542\") " pod="openshift-dns/dns-default-qphb2" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.477395 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f46bb13-0fc5-4acf-8f35-45b134bf590c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-z5p2q\" (UID: \"2f46bb13-0fc5-4acf-8f35-45b134bf590c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z5p2q" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.477438 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns9sz\" (UniqueName: \"kubernetes.io/projected/6bf6970b-ae08-4f63-b5e0-e0bd6ff1ea65-kube-api-access-ns9sz\") pod \"control-plane-machine-set-operator-78cbb6b69f-x9z7w\" (UID: \"6bf6970b-ae08-4f63-b5e0-e0bd6ff1ea65\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x9z7w" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.477466 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d43c3902-fd03-4c2c-a678-988b45dd62d0-webhook-cert\") pod \"packageserver-d55dfcdfc-d5mb8\" (UID: \"d43c3902-fd03-4c2c-a678-988b45dd62d0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d5mb8" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.477509 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6bf6970b-ae08-4f63-b5e0-e0bd6ff1ea65-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-x9z7w\" (UID: \"6bf6970b-ae08-4f63-b5e0-e0bd6ff1ea65\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x9z7w" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.477545 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/35681a96-27de-42f9-99a1-bf999199ecec-proxy-tls\") pod \"machine-config-operator-74547568cd-s6t4n\" (UID: \"35681a96-27de-42f9-99a1-bf999199ecec\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s6t4n" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.477595 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/97c744eb-5350-407d-8661-5d20a87110d5-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-m4r4g\" (UID: \"97c744eb-5350-407d-8661-5d20a87110d5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m4r4g" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.477620 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d43c3902-fd03-4c2c-a678-988b45dd62d0-apiservice-cert\") pod \"packageserver-d55dfcdfc-d5mb8\" (UID: \"d43c3902-fd03-4c2c-a678-988b45dd62d0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d5mb8" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.477644 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m94x\" (UniqueName: \"kubernetes.io/projected/32dbc133-34ed-449e-a397-ff3f0b83418c-kube-api-access-9m94x\") pod \"router-default-5444994796-qcpml\" (UID: \"32dbc133-34ed-449e-a397-ff3f0b83418c\") " pod="openshift-ingress/router-default-5444994796-qcpml" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.477659 4793 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/01569bd6-0c03-4cb4-ac0a-897db7189161-mountpoint-dir\") pod \"csi-hostpathplugin-tktcd\" (UID: \"01569bd6-0c03-4cb4-ac0a-897db7189161\") " pod="hostpath-provisioner/csi-hostpathplugin-tktcd" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.477735 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/35681a96-27de-42f9-99a1-bf999199ecec-auth-proxy-config\") pod \"machine-config-operator-74547568cd-s6t4n\" (UID: \"35681a96-27de-42f9-99a1-bf999199ecec\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s6t4n" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.477762 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdrpt\" (UniqueName: \"kubernetes.io/projected/080011e4-f1f0-46eb-81f0-64f82dd6e8e1-kube-api-access-cdrpt\") pod \"migrator-59844c95c7-hrmm7\" (UID: \"080011e4-f1f0-46eb-81f0-64f82dd6e8e1\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hrmm7" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.477780 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bdfa45f5-3f15-4f20-823d-17b08bd674d7-registry-certificates\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.477851 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d1fec83-8d01-446f-80f1-2c5f815f44bc-serving-cert\") pod \"etcd-operator-b45778765-chhz9\" (UID: \"2d1fec83-8d01-446f-80f1-2c5f815f44bc\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-chhz9" Feb 17 20:11:07 crc kubenswrapper[4793]: E0217 20:11:07.477871 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 20:11:07.977848447 +0000 UTC m=+143.269546748 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.477907 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/01569bd6-0c03-4cb4-ac0a-897db7189161-registration-dir\") pod \"csi-hostpathplugin-tktcd\" (UID: \"01569bd6-0c03-4cb4-ac0a-897db7189161\") " pod="hostpath-provisioner/csi-hostpathplugin-tktcd" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.477939 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vwfg\" (UniqueName: \"kubernetes.io/projected/dcb373df-1dcb-4517-b9a8-0d3e548c444a-kube-api-access-4vwfg\") pod \"multus-admission-controller-857f4d67dd-hc4b6\" (UID: \"dcb373df-1dcb-4517-b9a8-0d3e548c444a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hc4b6" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.478015 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2d1fec83-8d01-446f-80f1-2c5f815f44bc-config\") pod \"etcd-operator-b45778765-chhz9\" (UID: \"2d1fec83-8d01-446f-80f1-2c5f815f44bc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-chhz9" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.478037 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/83495f96-c3b0-4871-876c-07832519e1d8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-99gxb\" (UID: \"83495f96-c3b0-4871-876c-07832519e1d8\") " pod="openshift-marketplace/marketplace-operator-79b997595-99gxb" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.478067 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/32dbc133-34ed-449e-a397-ff3f0b83418c-stats-auth\") pod \"router-default-5444994796-qcpml\" (UID: \"32dbc133-34ed-449e-a397-ff3f0b83418c\") " pod="openshift-ingress/router-default-5444994796-qcpml" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.478085 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/780a5995-cbd8-45ae-a25b-c1d4b2190542-config-volume\") pod \"dns-default-qphb2\" (UID: \"780a5995-cbd8-45ae-a25b-c1d4b2190542\") " pod="openshift-dns/dns-default-qphb2" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.478100 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/16bf7ecb-6485-4941-9d64-181dff5a3da1-node-bootstrap-token\") pod \"machine-config-server-cspqc\" (UID: \"16bf7ecb-6485-4941-9d64-181dff5a3da1\") " pod="openshift-machine-config-operator/machine-config-server-cspqc" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.478138 4793 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2f46bb13-0fc5-4acf-8f35-45b134bf590c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-z5p2q\" (UID: \"2f46bb13-0fc5-4acf-8f35-45b134bf590c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z5p2q" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.478177 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/01569bd6-0c03-4cb4-ac0a-897db7189161-socket-dir\") pod \"csi-hostpathplugin-tktcd\" (UID: \"01569bd6-0c03-4cb4-ac0a-897db7189161\") " pod="hostpath-provisioner/csi-hostpathplugin-tktcd" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.478196 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/dff7e56c-8f87-477c-b1a5-908263f5fca5-srv-cert\") pod \"catalog-operator-68c6474976-lltb7\" (UID: \"dff7e56c-8f87-477c-b1a5-908263f5fca5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltb7" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.478225 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2d1fec83-8d01-446f-80f1-2c5f815f44bc-etcd-ca\") pod \"etcd-operator-b45778765-chhz9\" (UID: \"2d1fec83-8d01-446f-80f1-2c5f815f44bc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-chhz9" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.478240 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6ngt\" (UniqueName: \"kubernetes.io/projected/16bf7ecb-6485-4941-9d64-181dff5a3da1-kube-api-access-j6ngt\") pod \"machine-config-server-cspqc\" (UID: 
\"16bf7ecb-6485-4941-9d64-181dff5a3da1\") " pod="openshift-machine-config-operator/machine-config-server-cspqc" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.478297 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtg78\" (UniqueName: \"kubernetes.io/projected/2d1fec83-8d01-446f-80f1-2c5f815f44bc-kube-api-access-rtg78\") pod \"etcd-operator-b45778765-chhz9\" (UID: \"2d1fec83-8d01-446f-80f1-2c5f815f44bc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-chhz9" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.478324 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/01569bd6-0c03-4cb4-ac0a-897db7189161-csi-data-dir\") pod \"csi-hostpathplugin-tktcd\" (UID: \"01569bd6-0c03-4cb4-ac0a-897db7189161\") " pod="hostpath-provisioner/csi-hostpathplugin-tktcd" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.478358 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/16bf7ecb-6485-4941-9d64-181dff5a3da1-certs\") pod \"machine-config-server-cspqc\" (UID: \"16bf7ecb-6485-4941-9d64-181dff5a3da1\") " pod="openshift-machine-config-operator/machine-config-server-cspqc" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.478401 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkrq7\" (UniqueName: \"kubernetes.io/projected/83495f96-c3b0-4871-876c-07832519e1d8-kube-api-access-nkrq7\") pod \"marketplace-operator-79b997595-99gxb\" (UID: \"83495f96-c3b0-4871-876c-07832519e1d8\") " pod="openshift-marketplace/marketplace-operator-79b997595-99gxb" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.479276 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/bdfa45f5-3f15-4f20-823d-17b08bd674d7-trusted-ca\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.480091 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bdfa45f5-3f15-4f20-823d-17b08bd674d7-ca-trust-extracted\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.480604 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d43c3902-fd03-4c2c-a678-988b45dd62d0-tmpfs\") pod \"packageserver-d55dfcdfc-d5mb8\" (UID: \"d43c3902-fd03-4c2c-a678-988b45dd62d0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d5mb8" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.481440 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522640-pzkfn" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.483918 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d43c3902-fd03-4c2c-a678-988b45dd62d0-apiservice-cert\") pod \"packageserver-d55dfcdfc-d5mb8\" (UID: \"d43c3902-fd03-4c2c-a678-988b45dd62d0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d5mb8" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.484476 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bdfa45f5-3f15-4f20-823d-17b08bd674d7-registry-certificates\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.485593 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d43c3902-fd03-4c2c-a678-988b45dd62d0-webhook-cert\") pod \"packageserver-d55dfcdfc-d5mb8\" (UID: \"d43c3902-fd03-4c2c-a678-988b45dd62d0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d5mb8" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.485870 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/34a020ed-794d-4b17-a8a4-a761c7fcc50d-signing-cabundle\") pod \"service-ca-9c57cc56f-87g2m\" (UID: \"34a020ed-794d-4b17-a8a4-a761c7fcc50d\") " pod="openshift-service-ca/service-ca-9c57cc56f-87g2m" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.490225 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/34a020ed-794d-4b17-a8a4-a761c7fcc50d-signing-key\") pod \"service-ca-9c57cc56f-87g2m\" 
(UID: \"34a020ed-794d-4b17-a8a4-a761c7fcc50d\") " pod="openshift-service-ca/service-ca-9c57cc56f-87g2m" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.490532 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-v4fgc" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.493975 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bdfa45f5-3f15-4f20-823d-17b08bd674d7-registry-tls\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.494053 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tpcvd"] Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.495154 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zmnqw"] Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.499387 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bdfa45f5-3f15-4f20-823d-17b08bd674d7-installation-pull-secrets\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.508277 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fmmf6" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.509822 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdrpt\" (UniqueName: \"kubernetes.io/projected/080011e4-f1f0-46eb-81f0-64f82dd6e8e1-kube-api-access-cdrpt\") pod \"migrator-59844c95c7-hrmm7\" (UID: \"080011e4-f1f0-46eb-81f0-64f82dd6e8e1\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hrmm7" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.545026 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt5x8\" (UniqueName: \"kubernetes.io/projected/bdfa45f5-3f15-4f20-823d-17b08bd674d7-kube-api-access-dt5x8\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb" Feb 17 20:11:07 crc kubenswrapper[4793]: W0217 20:11:07.550102 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddba5aa8f_e4aa_4bb0_b8d4_dac193fa7ede.slice/crio-04ee3de3f789152184e3e2fc1ff1c9f293d88a3224050ecc59160bc980d24192 WatchSource:0}: Error finding container 04ee3de3f789152184e3e2fc1ff1c9f293d88a3224050ecc59160bc980d24192: Status 404 returned error can't find the container with id 04ee3de3f789152184e3e2fc1ff1c9f293d88a3224050ecc59160bc980d24192 Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.550151 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-mr259"] Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.550175 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-wr4p8"] Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.556943 4793 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-p8shz\" (UniqueName: \"kubernetes.io/projected/34a020ed-794d-4b17-a8a4-a761c7fcc50d-kube-api-access-p8shz\") pod \"service-ca-9c57cc56f-87g2m\" (UID: \"34a020ed-794d-4b17-a8a4-a761c7fcc50d\") " pod="openshift-service-ca/service-ca-9c57cc56f-87g2m" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.578130 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2bgzw"] Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.580599 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f46bb13-0fc5-4acf-8f35-45b134bf590c-config\") pod \"kube-apiserver-operator-766d6c64bb-z5p2q\" (UID: \"2f46bb13-0fc5-4acf-8f35-45b134bf590c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z5p2q" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.580640 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/32dbc133-34ed-449e-a397-ff3f0b83418c-default-certificate\") pod \"router-default-5444994796-qcpml\" (UID: \"32dbc133-34ed-449e-a397-ff3f0b83418c\") " pod="openshift-ingress/router-default-5444994796-qcpml" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.580664 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2d1fec83-8d01-446f-80f1-2c5f815f44bc-etcd-client\") pod \"etcd-operator-b45778765-chhz9\" (UID: \"2d1fec83-8d01-446f-80f1-2c5f815f44bc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-chhz9" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.580682 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9g7jt\" (UniqueName: \"kubernetes.io/projected/dff7e56c-8f87-477c-b1a5-908263f5fca5-kube-api-access-9g7jt\") pod \"catalog-operator-68c6474976-lltb7\" 
(UID: \"dff7e56c-8f87-477c-b1a5-908263f5fca5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltb7" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.580713 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/35681a96-27de-42f9-99a1-bf999199ecec-images\") pod \"machine-config-operator-74547568cd-s6t4n\" (UID: \"35681a96-27de-42f9-99a1-bf999199ecec\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s6t4n" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.580732 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/32dbc133-34ed-449e-a397-ff3f0b83418c-metrics-certs\") pod \"router-default-5444994796-qcpml\" (UID: \"32dbc133-34ed-449e-a397-ff3f0b83418c\") " pod="openshift-ingress/router-default-5444994796-qcpml" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.580746 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2d1fec83-8d01-446f-80f1-2c5f815f44bc-etcd-service-ca\") pod \"etcd-operator-b45778765-chhz9\" (UID: \"2d1fec83-8d01-446f-80f1-2c5f815f44bc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-chhz9" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.580764 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dgww\" (UniqueName: \"kubernetes.io/projected/97c744eb-5350-407d-8661-5d20a87110d5-kube-api-access-7dgww\") pod \"package-server-manager-789f6589d5-m4r4g\" (UID: \"97c744eb-5350-407d-8661-5d20a87110d5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m4r4g" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.580782 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.580798 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6f77\" (UniqueName: \"kubernetes.io/projected/01569bd6-0c03-4cb4-ac0a-897db7189161-kube-api-access-f6f77\") pod \"csi-hostpathplugin-tktcd\" (UID: \"01569bd6-0c03-4cb4-ac0a-897db7189161\") " pod="hostpath-provisioner/csi-hostpathplugin-tktcd" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.580813 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32dbc133-34ed-449e-a397-ff3f0b83418c-service-ca-bundle\") pod \"router-default-5444994796-qcpml\" (UID: \"32dbc133-34ed-449e-a397-ff3f0b83418c\") " pod="openshift-ingress/router-default-5444994796-qcpml" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.580828 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/dff7e56c-8f87-477c-b1a5-908263f5fca5-profile-collector-cert\") pod \"catalog-operator-68c6474976-lltb7\" (UID: \"dff7e56c-8f87-477c-b1a5-908263f5fca5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltb7" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.580846 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/83495f96-c3b0-4871-876c-07832519e1d8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-99gxb\" (UID: \"83495f96-c3b0-4871-876c-07832519e1d8\") " pod="openshift-marketplace/marketplace-operator-79b997595-99gxb" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 
20:11:07.580859 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/01569bd6-0c03-4cb4-ac0a-897db7189161-plugins-dir\") pod \"csi-hostpathplugin-tktcd\" (UID: \"01569bd6-0c03-4cb4-ac0a-897db7189161\") " pod="hostpath-provisioner/csi-hostpathplugin-tktcd" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.580881 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnjck\" (UniqueName: \"kubernetes.io/projected/35681a96-27de-42f9-99a1-bf999199ecec-kube-api-access-tnjck\") pod \"machine-config-operator-74547568cd-s6t4n\" (UID: \"35681a96-27de-42f9-99a1-bf999199ecec\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s6t4n" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.580897 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dcb373df-1dcb-4517-b9a8-0d3e548c444a-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-hc4b6\" (UID: \"dcb373df-1dcb-4517-b9a8-0d3e548c444a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hc4b6" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.580914 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmpk8\" (UniqueName: \"kubernetes.io/projected/780a5995-cbd8-45ae-a25b-c1d4b2190542-kube-api-access-rmpk8\") pod \"dns-default-qphb2\" (UID: \"780a5995-cbd8-45ae-a25b-c1d4b2190542\") " pod="openshift-dns/dns-default-qphb2" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.580929 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7e58c09e-0c62-49d9-bd2c-8a9c6e1382ec-metrics-tls\") pod \"dns-operator-744455d44c-4wb4w\" (UID: \"7e58c09e-0c62-49d9-bd2c-8a9c6e1382ec\") " pod="openshift-dns-operator/dns-operator-744455d44c-4wb4w" Feb 17 
20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.580945 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f46bb13-0fc5-4acf-8f35-45b134bf590c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-z5p2q\" (UID: \"2f46bb13-0fc5-4acf-8f35-45b134bf590c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z5p2q" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.580963 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ns9sz\" (UniqueName: \"kubernetes.io/projected/6bf6970b-ae08-4f63-b5e0-e0bd6ff1ea65-kube-api-access-ns9sz\") pod \"control-plane-machine-set-operator-78cbb6b69f-x9z7w\" (UID: \"6bf6970b-ae08-4f63-b5e0-e0bd6ff1ea65\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x9z7w" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.580982 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6bf6970b-ae08-4f63-b5e0-e0bd6ff1ea65-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-x9z7w\" (UID: \"6bf6970b-ae08-4f63-b5e0-e0bd6ff1ea65\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x9z7w" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.581002 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/35681a96-27de-42f9-99a1-bf999199ecec-proxy-tls\") pod \"machine-config-operator-74547568cd-s6t4n\" (UID: \"35681a96-27de-42f9-99a1-bf999199ecec\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s6t4n" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.581021 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/97c744eb-5350-407d-8661-5d20a87110d5-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-m4r4g\" (UID: \"97c744eb-5350-407d-8661-5d20a87110d5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m4r4g" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.581039 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9m94x\" (UniqueName: \"kubernetes.io/projected/32dbc133-34ed-449e-a397-ff3f0b83418c-kube-api-access-9m94x\") pod \"router-default-5444994796-qcpml\" (UID: \"32dbc133-34ed-449e-a397-ff3f0b83418c\") " pod="openshift-ingress/router-default-5444994796-qcpml" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.581055 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/01569bd6-0c03-4cb4-ac0a-897db7189161-mountpoint-dir\") pod \"csi-hostpathplugin-tktcd\" (UID: \"01569bd6-0c03-4cb4-ac0a-897db7189161\") " pod="hostpath-provisioner/csi-hostpathplugin-tktcd" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.581078 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/35681a96-27de-42f9-99a1-bf999199ecec-auth-proxy-config\") pod \"machine-config-operator-74547568cd-s6t4n\" (UID: \"35681a96-27de-42f9-99a1-bf999199ecec\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s6t4n" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.581099 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d1fec83-8d01-446f-80f1-2c5f815f44bc-serving-cert\") pod \"etcd-operator-b45778765-chhz9\" (UID: \"2d1fec83-8d01-446f-80f1-2c5f815f44bc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-chhz9" Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.581115 4793 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vwfg\" (UniqueName: \"kubernetes.io/projected/dcb373df-1dcb-4517-b9a8-0d3e548c444a-kube-api-access-4vwfg\") pod \"multus-admission-controller-857f4d67dd-hc4b6\" (UID: \"dcb373df-1dcb-4517-b9a8-0d3e548c444a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hc4b6"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.581131 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/01569bd6-0c03-4cb4-ac0a-897db7189161-registration-dir\") pod \"csi-hostpathplugin-tktcd\" (UID: \"01569bd6-0c03-4cb4-ac0a-897db7189161\") " pod="hostpath-provisioner/csi-hostpathplugin-tktcd"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.581146 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d1fec83-8d01-446f-80f1-2c5f815f44bc-config\") pod \"etcd-operator-b45778765-chhz9\" (UID: \"2d1fec83-8d01-446f-80f1-2c5f815f44bc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-chhz9"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.581167 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/83495f96-c3b0-4871-876c-07832519e1d8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-99gxb\" (UID: \"83495f96-c3b0-4871-876c-07832519e1d8\") " pod="openshift-marketplace/marketplace-operator-79b997595-99gxb"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.581181 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/32dbc133-34ed-449e-a397-ff3f0b83418c-stats-auth\") pod \"router-default-5444994796-qcpml\" (UID: \"32dbc133-34ed-449e-a397-ff3f0b83418c\") " pod="openshift-ingress/router-default-5444994796-qcpml"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.581196 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/780a5995-cbd8-45ae-a25b-c1d4b2190542-config-volume\") pod \"dns-default-qphb2\" (UID: \"780a5995-cbd8-45ae-a25b-c1d4b2190542\") " pod="openshift-dns/dns-default-qphb2"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.581211 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/16bf7ecb-6485-4941-9d64-181dff5a3da1-node-bootstrap-token\") pod \"machine-config-server-cspqc\" (UID: \"16bf7ecb-6485-4941-9d64-181dff5a3da1\") " pod="openshift-machine-config-operator/machine-config-server-cspqc"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.581225 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2f46bb13-0fc5-4acf-8f35-45b134bf590c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-z5p2q\" (UID: \"2f46bb13-0fc5-4acf-8f35-45b134bf590c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z5p2q"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.581249 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/01569bd6-0c03-4cb4-ac0a-897db7189161-socket-dir\") pod \"csi-hostpathplugin-tktcd\" (UID: \"01569bd6-0c03-4cb4-ac0a-897db7189161\") " pod="hostpath-provisioner/csi-hostpathplugin-tktcd"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.581263 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/dff7e56c-8f87-477c-b1a5-908263f5fca5-srv-cert\") pod \"catalog-operator-68c6474976-lltb7\" (UID: \"dff7e56c-8f87-477c-b1a5-908263f5fca5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltb7"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.581274 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f46bb13-0fc5-4acf-8f35-45b134bf590c-config\") pod \"kube-apiserver-operator-766d6c64bb-z5p2q\" (UID: \"2f46bb13-0fc5-4acf-8f35-45b134bf590c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z5p2q"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.581277 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2d1fec83-8d01-446f-80f1-2c5f815f44bc-etcd-ca\") pod \"etcd-operator-b45778765-chhz9\" (UID: \"2d1fec83-8d01-446f-80f1-2c5f815f44bc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-chhz9"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.581334 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6ngt\" (UniqueName: \"kubernetes.io/projected/16bf7ecb-6485-4941-9d64-181dff5a3da1-kube-api-access-j6ngt\") pod \"machine-config-server-cspqc\" (UID: \"16bf7ecb-6485-4941-9d64-181dff5a3da1\") " pod="openshift-machine-config-operator/machine-config-server-cspqc"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.581354 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtg78\" (UniqueName: \"kubernetes.io/projected/2d1fec83-8d01-446f-80f1-2c5f815f44bc-kube-api-access-rtg78\") pod \"etcd-operator-b45778765-chhz9\" (UID: \"2d1fec83-8d01-446f-80f1-2c5f815f44bc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-chhz9"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.581374 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/01569bd6-0c03-4cb4-ac0a-897db7189161-csi-data-dir\") pod \"csi-hostpathplugin-tktcd\" (UID: \"01569bd6-0c03-4cb4-ac0a-897db7189161\") " pod="hostpath-provisioner/csi-hostpathplugin-tktcd"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.581391 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/16bf7ecb-6485-4941-9d64-181dff5a3da1-certs\") pod \"machine-config-server-cspqc\" (UID: \"16bf7ecb-6485-4941-9d64-181dff5a3da1\") " pod="openshift-machine-config-operator/machine-config-server-cspqc"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.581408 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkrq7\" (UniqueName: \"kubernetes.io/projected/83495f96-c3b0-4871-876c-07832519e1d8-kube-api-access-nkrq7\") pod \"marketplace-operator-79b997595-99gxb\" (UID: \"83495f96-c3b0-4871-876c-07832519e1d8\") " pod="openshift-marketplace/marketplace-operator-79b997595-99gxb"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.581429 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/780a5995-cbd8-45ae-a25b-c1d4b2190542-metrics-tls\") pod \"dns-default-qphb2\" (UID: \"780a5995-cbd8-45ae-a25b-c1d4b2190542\") " pod="openshift-dns/dns-default-qphb2"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.581443 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/54895b59-e973-4dcf-90eb-f7660227bd6e-cert\") pod \"ingress-canary-4qdjn\" (UID: \"54895b59-e973-4dcf-90eb-f7660227bd6e\") " pod="openshift-ingress-canary/ingress-canary-4qdjn"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.581458 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtfk4\" (UniqueName: \"kubernetes.io/projected/7e58c09e-0c62-49d9-bd2c-8a9c6e1382ec-kube-api-access-qtfk4\") pod \"dns-operator-744455d44c-4wb4w\" (UID: \"7e58c09e-0c62-49d9-bd2c-8a9c6e1382ec\") " pod="openshift-dns-operator/dns-operator-744455d44c-4wb4w"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.581487 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xt5kd\" (UniqueName: \"kubernetes.io/projected/54895b59-e973-4dcf-90eb-f7660227bd6e-kube-api-access-xt5kd\") pod \"ingress-canary-4qdjn\" (UID: \"54895b59-e973-4dcf-90eb-f7660227bd6e\") " pod="openshift-ingress-canary/ingress-canary-4qdjn"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.581742 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/01569bd6-0c03-4cb4-ac0a-897db7189161-csi-data-dir\") pod \"csi-hostpathplugin-tktcd\" (UID: \"01569bd6-0c03-4cb4-ac0a-897db7189161\") " pod="hostpath-provisioner/csi-hostpathplugin-tktcd"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.582113 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2d1fec83-8d01-446f-80f1-2c5f815f44bc-etcd-ca\") pod \"etcd-operator-b45778765-chhz9\" (UID: \"2d1fec83-8d01-446f-80f1-2c5f815f44bc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-chhz9"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.584990 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/32dbc133-34ed-449e-a397-ff3f0b83418c-default-certificate\") pod \"router-default-5444994796-qcpml\" (UID: \"32dbc133-34ed-449e-a397-ff3f0b83418c\") " pod="openshift-ingress/router-default-5444994796-qcpml"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.585538 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/780a5995-cbd8-45ae-a25b-c1d4b2190542-metrics-tls\") pod \"dns-default-qphb2\" (UID: \"780a5995-cbd8-45ae-a25b-c1d4b2190542\") " pod="openshift-dns/dns-default-qphb2"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.586248 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/01569bd6-0c03-4cb4-ac0a-897db7189161-registration-dir\") pod \"csi-hostpathplugin-tktcd\" (UID: \"01569bd6-0c03-4cb4-ac0a-897db7189161\") " pod="hostpath-provisioner/csi-hostpathplugin-tktcd"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.586416 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/35681a96-27de-42f9-99a1-bf999199ecec-images\") pod \"machine-config-operator-74547568cd-s6t4n\" (UID: \"35681a96-27de-42f9-99a1-bf999199ecec\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s6t4n"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.587509 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/780a5995-cbd8-45ae-a25b-c1d4b2190542-config-volume\") pod \"dns-default-qphb2\" (UID: \"780a5995-cbd8-45ae-a25b-c1d4b2190542\") " pod="openshift-dns/dns-default-qphb2"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.587664 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2d1fec83-8d01-446f-80f1-2c5f815f44bc-etcd-client\") pod \"etcd-operator-b45778765-chhz9\" (UID: \"2d1fec83-8d01-446f-80f1-2c5f815f44bc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-chhz9"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.587762 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/01569bd6-0c03-4cb4-ac0a-897db7189161-socket-dir\") pod \"csi-hostpathplugin-tktcd\" (UID: \"01569bd6-0c03-4cb4-ac0a-897db7189161\") " pod="hostpath-provisioner/csi-hostpathplugin-tktcd"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.588002 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d1fec83-8d01-446f-80f1-2c5f815f44bc-config\") pod \"etcd-operator-b45778765-chhz9\" (UID: \"2d1fec83-8d01-446f-80f1-2c5f815f44bc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-chhz9"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.589861 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/01569bd6-0c03-4cb4-ac0a-897db7189161-mountpoint-dir\") pod \"csi-hostpathplugin-tktcd\" (UID: \"01569bd6-0c03-4cb4-ac0a-897db7189161\") " pod="hostpath-provisioner/csi-hostpathplugin-tktcd"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.589982 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6bf6970b-ae08-4f63-b5e0-e0bd6ff1ea65-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-x9z7w\" (UID: \"6bf6970b-ae08-4f63-b5e0-e0bd6ff1ea65\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x9z7w"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.590705 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/35681a96-27de-42f9-99a1-bf999199ecec-auth-proxy-config\") pod \"machine-config-operator-74547568cd-s6t4n\" (UID: \"35681a96-27de-42f9-99a1-bf999199ecec\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s6t4n"
Feb 17 20:11:07 crc kubenswrapper[4793]: E0217 20:11:07.591038 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 20:11:08.091023856 +0000 UTC m=+143.382722167 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8l7nb" (UID: "bdfa45f5-3f15-4f20-823d-17b08bd674d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.591911 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2d1fec83-8d01-446f-80f1-2c5f815f44bc-etcd-service-ca\") pod \"etcd-operator-b45778765-chhz9\" (UID: \"2d1fec83-8d01-446f-80f1-2c5f815f44bc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-chhz9"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.592450 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/83495f96-c3b0-4871-876c-07832519e1d8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-99gxb\" (UID: \"83495f96-c3b0-4871-876c-07832519e1d8\") " pod="openshift-marketplace/marketplace-operator-79b997595-99gxb"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.592615 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32dbc133-34ed-449e-a397-ff3f0b83418c-service-ca-bundle\") pod \"router-default-5444994796-qcpml\" (UID: \"32dbc133-34ed-449e-a397-ff3f0b83418c\") " pod="openshift-ingress/router-default-5444994796-qcpml"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.593934 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/01569bd6-0c03-4cb4-ac0a-897db7189161-plugins-dir\") pod \"csi-hostpathplugin-tktcd\" (UID: \"01569bd6-0c03-4cb4-ac0a-897db7189161\") " pod="hostpath-provisioner/csi-hostpathplugin-tktcd"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.602755 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/16bf7ecb-6485-4941-9d64-181dff5a3da1-certs\") pod \"machine-config-server-cspqc\" (UID: \"16bf7ecb-6485-4941-9d64-181dff5a3da1\") " pod="openshift-machine-config-operator/machine-config-server-cspqc"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.603024 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/83495f96-c3b0-4871-876c-07832519e1d8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-99gxb\" (UID: \"83495f96-c3b0-4871-876c-07832519e1d8\") " pod="openshift-marketplace/marketplace-operator-79b997595-99gxb"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.603047 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-lzd99"]
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.604165 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d1fec83-8d01-446f-80f1-2c5f815f44bc-serving-cert\") pod \"etcd-operator-b45778765-chhz9\" (UID: \"2d1fec83-8d01-446f-80f1-2c5f815f44bc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-chhz9"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.604455 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/32dbc133-34ed-449e-a397-ff3f0b83418c-metrics-certs\") pod \"router-default-5444994796-qcpml\" (UID: \"32dbc133-34ed-449e-a397-ff3f0b83418c\") " pod="openshift-ingress/router-default-5444994796-qcpml"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.605501 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpxcp\" (UniqueName: \"kubernetes.io/projected/d43c3902-fd03-4c2c-a678-988b45dd62d0-kube-api-access-zpxcp\") pod \"packageserver-d55dfcdfc-d5mb8\" (UID: \"d43c3902-fd03-4c2c-a678-988b45dd62d0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d5mb8"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.606050 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/dff7e56c-8f87-477c-b1a5-908263f5fca5-profile-collector-cert\") pod \"catalog-operator-68c6474976-lltb7\" (UID: \"dff7e56c-8f87-477c-b1a5-908263f5fca5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltb7"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.606774 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dcb373df-1dcb-4517-b9a8-0d3e548c444a-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-hc4b6\" (UID: \"dcb373df-1dcb-4517-b9a8-0d3e548c444a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hc4b6"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.608007 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/35681a96-27de-42f9-99a1-bf999199ecec-proxy-tls\") pod \"machine-config-operator-74547568cd-s6t4n\" (UID: \"35681a96-27de-42f9-99a1-bf999199ecec\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s6t4n"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.608404 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/dff7e56c-8f87-477c-b1a5-908263f5fca5-srv-cert\") pod \"catalog-operator-68c6474976-lltb7\" (UID: \"dff7e56c-8f87-477c-b1a5-908263f5fca5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltb7"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.608841 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/97c744eb-5350-407d-8661-5d20a87110d5-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-m4r4g\" (UID: \"97c744eb-5350-407d-8661-5d20a87110d5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m4r4g"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.612142 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/54895b59-e973-4dcf-90eb-f7660227bd6e-cert\") pod \"ingress-canary-4qdjn\" (UID: \"54895b59-e973-4dcf-90eb-f7660227bd6e\") " pod="openshift-ingress-canary/ingress-canary-4qdjn"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.612399 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-8rb9m"]
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.625839 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7e58c09e-0c62-49d9-bd2c-8a9c6e1382ec-metrics-tls\") pod \"dns-operator-744455d44c-4wb4w\" (UID: \"7e58c09e-0c62-49d9-bd2c-8a9c6e1382ec\") " pod="openshift-dns-operator/dns-operator-744455d44c-4wb4w"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.626372 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/16bf7ecb-6485-4941-9d64-181dff5a3da1-node-bootstrap-token\") pod \"machine-config-server-cspqc\" (UID: \"16bf7ecb-6485-4941-9d64-181dff5a3da1\") " pod="openshift-machine-config-operator/machine-config-server-cspqc"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.626448 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f46bb13-0fc5-4acf-8f35-45b134bf590c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-z5p2q\" (UID: \"2f46bb13-0fc5-4acf-8f35-45b134bf590c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z5p2q"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.627290 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/32dbc133-34ed-449e-a397-ff3f0b83418c-stats-auth\") pod \"router-default-5444994796-qcpml\" (UID: \"32dbc133-34ed-449e-a397-ff3f0b83418c\") " pod="openshift-ingress/router-default-5444994796-qcpml"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.627959 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bdfa45f5-3f15-4f20-823d-17b08bd674d7-bound-sa-token\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.634639 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-b9fhb"]
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.657721 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xt5kd\" (UniqueName: \"kubernetes.io/projected/54895b59-e973-4dcf-90eb-f7660227bd6e-kube-api-access-xt5kd\") pod \"ingress-canary-4qdjn\" (UID: \"54895b59-e973-4dcf-90eb-f7660227bd6e\") " pod="openshift-ingress-canary/ingress-canary-4qdjn"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.670587 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6ngt\" (UniqueName: \"kubernetes.io/projected/16bf7ecb-6485-4941-9d64-181dff5a3da1-kube-api-access-j6ngt\") pod \"machine-config-server-cspqc\" (UID: \"16bf7ecb-6485-4941-9d64-181dff5a3da1\") " pod="openshift-machine-config-operator/machine-config-server-cspqc"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.682704 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 20:11:07 crc kubenswrapper[4793]: E0217 20:11:07.682993 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 20:11:08.182936954 +0000 UTC m=+143.474635275 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.683416 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb"
Feb 17 20:11:07 crc kubenswrapper[4793]: E0217 20:11:07.683890 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 20:11:08.183879526 +0000 UTC m=+143.475577827 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8l7nb" (UID: "bdfa45f5-3f15-4f20-823d-17b08bd674d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.703019 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtg78\" (UniqueName: \"kubernetes.io/projected/2d1fec83-8d01-446f-80f1-2c5f815f44bc-kube-api-access-rtg78\") pod \"etcd-operator-b45778765-chhz9\" (UID: \"2d1fec83-8d01-446f-80f1-2c5f815f44bc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-chhz9"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.713047 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ns9sz\" (UniqueName: \"kubernetes.io/projected/6bf6970b-ae08-4f63-b5e0-e0bd6ff1ea65-kube-api-access-ns9sz\") pod \"control-plane-machine-set-operator-78cbb6b69f-x9z7w\" (UID: \"6bf6970b-ae08-4f63-b5e0-e0bd6ff1ea65\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x9z7w"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.718124 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-87g2m"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.731124 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9g7jt\" (UniqueName: \"kubernetes.io/projected/dff7e56c-8f87-477c-b1a5-908263f5fca5-kube-api-access-9g7jt\") pod \"catalog-operator-68c6474976-lltb7\" (UID: \"dff7e56c-8f87-477c-b1a5-908263f5fca5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltb7"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.734767 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hrmm7"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.771081 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d5mb8"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.774234 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkrq7\" (UniqueName: \"kubernetes.io/projected/83495f96-c3b0-4871-876c-07832519e1d8-kube-api-access-nkrq7\") pod \"marketplace-operator-79b997595-99gxb\" (UID: \"83495f96-c3b0-4871-876c-07832519e1d8\") " pod="openshift-marketplace/marketplace-operator-79b997595-99gxb"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.779445 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2f46bb13-0fc5-4acf-8f35-45b134bf590c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-z5p2q\" (UID: \"2f46bb13-0fc5-4acf-8f35-45b134bf590c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z5p2q"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.785652 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 20:11:07 crc kubenswrapper[4793]: E0217 20:11:07.786048 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 20:11:08.28601634 +0000 UTC m=+143.577714661 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.789203 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9m94x\" (UniqueName: \"kubernetes.io/projected/32dbc133-34ed-449e-a397-ff3f0b83418c-kube-api-access-9m94x\") pod \"router-default-5444994796-qcpml\" (UID: \"32dbc133-34ed-449e-a397-ff3f0b83418c\") " pod="openshift-ingress/router-default-5444994796-qcpml"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.816820 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtfk4\" (UniqueName: \"kubernetes.io/projected/7e58c09e-0c62-49d9-bd2c-8a9c6e1382ec-kube-api-access-qtfk4\") pod \"dns-operator-744455d44c-4wb4w\" (UID: \"7e58c09e-0c62-49d9-bd2c-8a9c6e1382ec\") " pod="openshift-dns-operator/dns-operator-744455d44c-4wb4w"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.820732 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-4wb4w"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.846010 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vwfg\" (UniqueName: \"kubernetes.io/projected/dcb373df-1dcb-4517-b9a8-0d3e548c444a-kube-api-access-4vwfg\") pod \"multus-admission-controller-857f4d67dd-hc4b6\" (UID: \"dcb373df-1dcb-4517-b9a8-0d3e548c444a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hc4b6"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.847541 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-4qdjn"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.851728 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z5p2q"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.853443 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dgww\" (UniqueName: \"kubernetes.io/projected/97c744eb-5350-407d-8661-5d20a87110d5-kube-api-access-7dgww\") pod \"package-server-manager-789f6589d5-m4r4g\" (UID: \"97c744eb-5350-407d-8661-5d20a87110d5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m4r4g"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.861628 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-hc4b6"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.870880 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6f77\" (UniqueName: \"kubernetes.io/projected/01569bd6-0c03-4cb4-ac0a-897db7189161-kube-api-access-f6f77\") pod \"csi-hostpathplugin-tktcd\" (UID: \"01569bd6-0c03-4cb4-ac0a-897db7189161\") " pod="hostpath-provisioner/csi-hostpathplugin-tktcd"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.872632 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-99gxb"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.879777 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x9z7w"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.885557 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-knrjc"]
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.887265 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qpst4"]
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.887742 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb"
Feb 17 20:11:07 crc kubenswrapper[4793]: E0217 20:11:07.888073 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 20:11:08.388061351 +0000 UTC m=+143.679759652 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8l7nb" (UID: "bdfa45f5-3f15-4f20-823d-17b08bd674d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.888436 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-chhz9"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.895236 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnjck\" (UniqueName: \"kubernetes.io/projected/35681a96-27de-42f9-99a1-bf999199ecec-kube-api-access-tnjck\") pod \"machine-config-operator-74547568cd-s6t4n\" (UID: \"35681a96-27de-42f9-99a1-bf999199ecec\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s6t4n"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.895807 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-br8vj"]
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.896655 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-qcpml"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.898085 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltb7"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.907450 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-cspqc"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.910316 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmpk8\" (UniqueName: \"kubernetes.io/projected/780a5995-cbd8-45ae-a25b-c1d4b2190542-kube-api-access-rmpk8\") pod \"dns-default-qphb2\" (UID: \"780a5995-cbd8-45ae-a25b-c1d4b2190542\") " pod="openshift-dns/dns-default-qphb2"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.927437 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-tktcd"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.935601 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-qphb2"
Feb 17 20:11:07 crc kubenswrapper[4793]: I0217 20:11:07.992810 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 20:11:07 crc kubenswrapper[4793]: E0217 20:11:07.993183 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 20:11:08.493159026 +0000 UTC m=+143.784857337 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.022493 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qgxk6"]
Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.027590 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9p2hl"]
Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.088057 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-zq7jg"]
Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.101269 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb"
Feb 17 20:11:08 crc kubenswrapper[4793]: E0217 20:11:08.101646 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 20:11:08.601632202 +0000 UTC m=+143.893330513 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8l7nb" (UID: "bdfa45f5-3f15-4f20-823d-17b08bd674d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.121645 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g69jb"] Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.138466 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-tj8df"] Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.141037 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m4r4g" Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.165807 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s6t4n" Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.204680 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 20:11:08 crc kubenswrapper[4793]: E0217 20:11:08.205298 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-17 20:11:08.705283062 +0000 UTC m=+143.996981373 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 20:11:08 crc kubenswrapper[4793]: W0217 20:11:08.215953 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff663e88_1dd8_4094_b281_7c995a83178f.slice/crio-5406bc754f8fdca2ec5e58911d3d49a40c4900eb2b6a3c44e6b5372ce23534ae WatchSource:0}: Error finding container 5406bc754f8fdca2ec5e58911d3d49a40c4900eb2b6a3c44e6b5372ce23534ae: Status 404 returned error can't find the container with id 5406bc754f8fdca2ec5e58911d3d49a40c4900eb2b6a3c44e6b5372ce23534ae Feb 17 20:11:08 crc kubenswrapper[4793]: W0217 20:11:08.225996 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3da29805_de08_447c_91da_61be0314e49b.slice/crio-6cef9c0c07d92a07d57e177e1adbd443dbabb415f65b8e5c5db87441d3af87b5 WatchSource:0}: Error finding container 6cef9c0c07d92a07d57e177e1adbd443dbabb415f65b8e5c5db87441d3af87b5: Status 404 returned error can't find the container with id 6cef9c0c07d92a07d57e177e1adbd443dbabb415f65b8e5c5db87441d3af87b5 Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.237201 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-v4fgc"] Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.265033 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-58vqk"] Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.307759 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb" Feb 17 20:11:08 crc kubenswrapper[4793]: E0217 20:11:08.309659 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 20:11:08.808259565 +0000 UTC m=+144.099957916 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8l7nb" (UID: "bdfa45f5-3f15-4f20-823d-17b08bd674d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.353207 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tj8df" event={"ID":"ff663e88-1dd8-4094-b281-7c995a83178f","Type":"ContainerStarted","Data":"5406bc754f8fdca2ec5e58911d3d49a40c4900eb2b6a3c44e6b5372ce23534ae"} Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.358359 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-b9fhb" 
event={"ID":"c576e65d-f7b1-46f3-976b-f4726ee1ddf3","Type":"ContainerStarted","Data":"cea809c3784a82076da4554652d863d920d0b8d5246d91c1e7b1894b65771964"} Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.360699 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qpst4" event={"ID":"9bbcc262-c708-4bd4-81c3-7bcbb485ddad","Type":"ContainerStarted","Data":"00560825fc359582b28971b6cc7055c23ec30c5d0094bb55deb921cda1d339e3"} Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.367085 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522640-pzkfn"] Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.373606 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kxxq7" event={"ID":"96cce4f5-80bd-4c5a-a6cb-2037ccb543a0","Type":"ContainerStarted","Data":"4edc1e4b12cbd8f15b31e7aff3054d3cab62af80713afce50861a45331ce5904"} Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.373641 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kxxq7" event={"ID":"96cce4f5-80bd-4c5a-a6cb-2037ccb543a0","Type":"ContainerStarted","Data":"1edc20757f5db75bbacfe9ddac98d8e635c98d36e8d5b63b25cdb11beab8b953"} Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.379194 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fmmf6"] Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.388700 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zmnqw" 
event={"ID":"a244a2e3-02bd-40e3-ad89-36de630bf3a8","Type":"ContainerStarted","Data":"77bb9eecba967f97484df4e065cdf39c2ad1c4db1bb005cf3200b2462d2a3cfd"} Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.388736 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zmnqw" event={"ID":"a244a2e3-02bd-40e3-ad89-36de630bf3a8","Type":"ContainerStarted","Data":"d5c51fd39cbd8388f5bc1959c0e0dbe03249d2cd5ab65728a862dbbc9ead16a8"} Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.394269 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mr259" event={"ID":"7f7e0542-cc87-4c79-8d0f-540577bb44e4","Type":"ContainerStarted","Data":"50098ad988c3e4ab6fb488c00ceb803af68f94c279dff624673f03e89480f06e"} Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.394302 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mr259" event={"ID":"7f7e0542-cc87-4c79-8d0f-540577bb44e4","Type":"ContainerStarted","Data":"2eb7cb5bef0b646c04cabd191b2ad0b5bcc6ff2a6d2a5ed9cc521d806a61ef36"} Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.396507 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tpcvd" event={"ID":"dba5aa8f-e4aa-4bb0-b8d4-dac193fa7ede","Type":"ContainerStarted","Data":"24321fb73b0c08766872ad6173e3ccc451d9261584dafe417c6618e30c087770"} Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.396543 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tpcvd" event={"ID":"dba5aa8f-e4aa-4bb0-b8d4-dac193fa7ede","Type":"ContainerStarted","Data":"04ee3de3f789152184e3e2fc1ff1c9f293d88a3224050ecc59160bc980d24192"} Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.398875 4793 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9p2hl" event={"ID":"aea26078-abf2-4d33-b7ab-9b5602799fe3","Type":"ContainerStarted","Data":"47d67c621c9c9225ddebd06c142c312b8e928f2a7b96a1700f85aa5782fca837"} Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.400911 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-wr4p8" event={"ID":"0cdcfee9-95e5-4030-b6e1-d0dac0bc9651","Type":"ContainerStarted","Data":"76b1a0ee96d9a2e8672e4b88e9eed671c375eab209c226a32f9cb09bb063ed22"} Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.421339 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 20:11:08 crc kubenswrapper[4793]: E0217 20:11:08.423886 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 20:11:08.923851822 +0000 UTC m=+144.215550143 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.428277 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-2bgzw" event={"ID":"26565b8e-93a3-4682-8c20-ee6cb2319543","Type":"ContainerStarted","Data":"725b1aeb35c64519353ae002bc920fe0faf9670d8b1943dce2274dd29ae52020"} Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.428504 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-2bgzw" Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.430150 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb" Feb 17 20:11:08 crc kubenswrapper[4793]: E0217 20:11:08.430585 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 20:11:08.930568394 +0000 UTC m=+144.222266765 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8l7nb" (UID: "bdfa45f5-3f15-4f20-823d-17b08bd674d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.436133 4793 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-2bgzw container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" start-of-body= Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.436176 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-2bgzw" podUID="26565b8e-93a3-4682-8c20-ee6cb2319543" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.456813 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hv6t4" event={"ID":"c168c730-aa75-43dd-8590-857f3d711391","Type":"ContainerStarted","Data":"d16e0ed4d05370259a3f6491a955806d9bc0f7a68685080ce542942864b7305a"} Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.456850 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hv6t4" event={"ID":"c168c730-aa75-43dd-8590-857f3d711391","Type":"ContainerStarted","Data":"85603b549e4bd879666d67d9bfa3de9f08ae9517dde69fa41c2fb2cb098b3938"} Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.456861 4793 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hv6t4" event={"ID":"c168c730-aa75-43dd-8590-857f3d711391","Type":"ContainerStarted","Data":"8093f74c90ec21bd4e093787989e70d85b0cc0dcb72c530abc56942635d7ec20"} Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.460886 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d6c8j" event={"ID":"cceb72ff-1234-4c76-9387-a3b9250d727a","Type":"ContainerStarted","Data":"7213b75ff86e6f5b0e56b1baae5ad52eb19be1091ac282406dc2e86e2fdf46f1"} Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.460940 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d6c8j" event={"ID":"cceb72ff-1234-4c76-9387-a3b9250d727a","Type":"ContainerStarted","Data":"28d4a7f0bb7e6c4d73c5cb9f1b3668d426bf700139424e3a6a1e88c66de912c8"} Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.466672 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-99gxb"] Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.473984 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-lzd99" event={"ID":"881cfa9d-a35a-4088-8a39-b4ecd52a3b37","Type":"ContainerStarted","Data":"bf868f25fb80555ab1ff82c6541b98aa057a5960788c3852e73f123a35b6a6f9"} Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.476907 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-lzd99" event={"ID":"881cfa9d-a35a-4088-8a39-b4ecd52a3b37","Type":"ContainerStarted","Data":"adacf15258fa24435ce73e9459861e777aa3dc3697d9260e765a8ed3699c2b07"} Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.515839 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-4qdjn"] Feb 17 20:11:08 crc 
kubenswrapper[4793]: I0217 20:11:08.531670 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 20:11:08 crc kubenswrapper[4793]: E0217 20:11:08.533171 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 20:11:09.033151708 +0000 UTC m=+144.324850019 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.547987 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-4wb4w"] Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.557903 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d5mb8"] Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.561324 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-87g2m"] Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.568649 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-hc4b6"] Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 
20:11:08.581494 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g69jb" event={"ID":"53fc14f9-23dc-4927-812f-507f9525b2f8","Type":"ContainerStarted","Data":"a532aa9900f02500e9136a31cd5cded7eae9d0ce9745ab5eb6a147c654424db9"} Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.596967 4793 generic.go:334] "Generic (PLEG): container finished" podID="ed3b6732-f761-4d0a-9697-03c81282c267" containerID="cdb2ab9dedd9ffc6d4f441e4c927e90ca40868951e379b945668aed212a4a7c0" exitCode=0 Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.597027 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhb75" event={"ID":"ed3b6732-f761-4d0a-9697-03c81282c267","Type":"ContainerDied","Data":"cdb2ab9dedd9ffc6d4f441e4c927e90ca40868951e379b945668aed212a4a7c0"} Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.597050 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhb75" event={"ID":"ed3b6732-f761-4d0a-9697-03c81282c267","Type":"ContainerStarted","Data":"34107cddf156f8b8549f5f09419653e17534b08a7681539f5d613ca24feae865"} Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.601512 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zq7jg" event={"ID":"3da29805-de08-447c-91da-61be0314e49b","Type":"ContainerStarted","Data":"6cef9c0c07d92a07d57e177e1adbd443dbabb415f65b8e5c5db87441d3af87b5"} Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.606015 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-br8vj" event={"ID":"4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c","Type":"ContainerStarted","Data":"6780f18ba40a8c3066c5a351a7776d6ae3ecd302d022c6f2564b79dc77ea9a86"} Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.614924 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z5p2q"] Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.616233 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-hrmm7"] Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.635017 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb" Feb 17 20:11:08 crc kubenswrapper[4793]: E0217 20:11:08.635316 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 20:11:09.135303492 +0000 UTC m=+144.427001803 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8l7nb" (UID: "bdfa45f5-3f15-4f20-823d-17b08bd674d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.664039 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-8rb9m" event={"ID":"0537500b-7cf2-4b68-b0d9-84dff1992304","Type":"ContainerStarted","Data":"276b82ad0e3da947a891607794af2ac547a58399e9be5859d1966afbf3824963"} Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.664321 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-8rb9m" event={"ID":"0537500b-7cf2-4b68-b0d9-84dff1992304","Type":"ContainerStarted","Data":"74a83eb04ccfd88140fa5518916ce5ddb9d6db3aa2b062bc57dc81545c1dcc73"} Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.664936 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-8rb9m" Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.668774 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-knrjc" event={"ID":"0ccc2b73-aefc-46a4-b1bd-bf0b09d425d8","Type":"ContainerStarted","Data":"a35f298b3259be23d2733f50d5e4d20a33c0de2dca7f078d4d03d51dd9be78ea"} Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.674969 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-qgxk6" event={"ID":"2f13644d-b44d-450c-ac22-88e8a8c6e41d","Type":"ContainerStarted","Data":"4dee58ec3405b6acfba1cc4a54215ebea60d030c95645c5c5798a556bb5b96c2"} Feb 17 
20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.679575 4793 patch_prober.go:28] interesting pod/console-operator-58897d9998-8rb9m container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.680029 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-8rb9m" podUID="0537500b-7cf2-4b68-b0d9-84dff1992304" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" Feb 17 20:11:08 crc kubenswrapper[4793]: W0217 20:11:08.692501 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddcb373df_1dcb_4517_b9a8_0d3e548c444a.slice/crio-b35e17e0cd102706b7d317a782ac7d832268dcdfb5ee0cd221a2e3cd7090b960 WatchSource:0}: Error finding container b35e17e0cd102706b7d317a782ac7d832268dcdfb5ee0cd221a2e3cd7090b960: Status 404 returned error can't find the container with id b35e17e0cd102706b7d317a782ac7d832268dcdfb5ee0cd221a2e3cd7090b960 Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.695304 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-chhz9"] Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.697146 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x9z7w"] Feb 17 20:11:08 crc kubenswrapper[4793]: E0217 20:11:08.736347 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-17 20:11:09.236318389 +0000 UTC m=+144.528016700 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.739294 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.739452 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m4r4g"] Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.740178 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb" Feb 17 20:11:08 crc kubenswrapper[4793]: E0217 20:11:08.740649 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 20:11:09.240637162 +0000 UTC m=+144.532335473 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8l7nb" (UID: "bdfa45f5-3f15-4f20-823d-17b08bd674d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.842909 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 20:11:08 crc kubenswrapper[4793]: E0217 20:11:08.846784 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 20:11:09.346727131 +0000 UTC m=+144.638425472 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.947351 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb" Feb 17 20:11:08 crc kubenswrapper[4793]: E0217 20:11:08.947755 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 20:11:09.447743057 +0000 UTC m=+144.739441368 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8l7nb" (UID: "bdfa45f5-3f15-4f20-823d-17b08bd674d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.953711 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-s6t4n"] Feb 17 20:11:08 crc kubenswrapper[4793]: I0217 20:11:08.955925 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-tktcd"] Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.049184 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 20:11:09 crc kubenswrapper[4793]: E0217 20:11:09.049453 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 20:11:09.54940238 +0000 UTC m=+144.841100701 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.049599 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb" Feb 17 20:11:09 crc kubenswrapper[4793]: E0217 20:11:09.052307 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 20:11:09.552294779 +0000 UTC m=+144.843993160 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8l7nb" (UID: "bdfa45f5-3f15-4f20-823d-17b08bd674d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.092655 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-qphb2"] Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.107353 4793 csr.go:261] certificate signing request csr-75vkw is approved, waiting to be issued Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.117453 4793 csr.go:257] certificate signing request csr-75vkw is issued Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.136898 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltb7"] Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.150316 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 20:11:09 crc kubenswrapper[4793]: E0217 20:11:09.150606 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 20:11:09.65058138 +0000 UTC m=+144.942279691 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.251969 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb" Feb 17 20:11:09 crc kubenswrapper[4793]: E0217 20:11:09.254349 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 20:11:09.754313002 +0000 UTC m=+145.046011313 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8l7nb" (UID: "bdfa45f5-3f15-4f20-823d-17b08bd674d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.355441 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 20:11:09 crc kubenswrapper[4793]: E0217 20:11:09.355587 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 20:11:09.855567555 +0000 UTC m=+145.147265876 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.360341 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb" Feb 17 20:11:09 crc kubenswrapper[4793]: E0217 20:11:09.360746 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 20:11:09.860731369 +0000 UTC m=+145.152429680 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8l7nb" (UID: "bdfa45f5-3f15-4f20-823d-17b08bd674d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.365469 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d6c8j" podStartSLOduration=121.365448642 podStartE2EDuration="2m1.365448642s" podCreationTimestamp="2026-02-17 20:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:11:09.363082635 +0000 UTC m=+144.654780946" watchObservedRunningTime="2026-02-17 20:11:09.365448642 +0000 UTC m=+144.657146953" Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.459225 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-8rb9m" podStartSLOduration=121.459199444 podStartE2EDuration="2m1.459199444s" podCreationTimestamp="2026-02-17 20:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:11:09.453244761 +0000 UTC m=+144.744943082" watchObservedRunningTime="2026-02-17 20:11:09.459199444 +0000 UTC m=+144.750897755" Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.461561 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" 
(UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 20:11:09 crc kubenswrapper[4793]: E0217 20:11:09.462039 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 20:11:09.962010542 +0000 UTC m=+145.253708853 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.520516 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zmnqw" podStartSLOduration=121.520501257 podStartE2EDuration="2m1.520501257s" podCreationTimestamp="2026-02-17 20:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:11:09.520140608 +0000 UTC m=+144.811838919" watchObservedRunningTime="2026-02-17 20:11:09.520501257 +0000 UTC m=+144.812199568" Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.584449 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb" Feb 17 20:11:09 crc kubenswrapper[4793]: 
E0217 20:11:09.585268 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 20:11:10.085252162 +0000 UTC m=+145.376950493 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8l7nb" (UID: "bdfa45f5-3f15-4f20-823d-17b08bd674d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.606877 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-hv6t4" podStartSLOduration=121.606860321 podStartE2EDuration="2m1.606860321s" podCreationTimestamp="2026-02-17 20:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:11:09.605865707 +0000 UTC m=+144.897564018" watchObservedRunningTime="2026-02-17 20:11:09.606860321 +0000 UTC m=+144.898558642" Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.648849 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kxxq7" podStartSLOduration=121.648834369 podStartE2EDuration="2m1.648834369s" podCreationTimestamp="2026-02-17 20:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:11:09.646324099 +0000 UTC m=+144.938022430" watchObservedRunningTime="2026-02-17 20:11:09.648834369 +0000 UTC 
m=+144.940532680" Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.692126 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 20:11:09 crc kubenswrapper[4793]: E0217 20:11:09.692360 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 20:11:10.192338665 +0000 UTC m=+145.484036976 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.692479 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb" Feb 17 20:11:09 crc kubenswrapper[4793]: E0217 20:11:09.692890 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-17 20:11:10.192880088 +0000 UTC m=+145.484578399 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8l7nb" (UID: "bdfa45f5-3f15-4f20-823d-17b08bd674d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.723387 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fmmf6" event={"ID":"934df675-8026-4579-97a6-c8e843581407","Type":"ContainerStarted","Data":"d729f6a968d2ebdf530ff263b51bd8838849bea568736ad9afd56474b4106626"} Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.723439 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fmmf6" event={"ID":"934df675-8026-4579-97a6-c8e843581407","Type":"ContainerStarted","Data":"6b783089a7036533346717c8a4a51ebce6ae445da118c5e695827a03477a7431"} Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.730814 4793 generic.go:334] "Generic (PLEG): container finished" podID="0cdcfee9-95e5-4030-b6e1-d0dac0bc9651" containerID="b7fa45806e64e89d5442a2797f4eac18d7314791a6c01cd3e7effa72e3cee553" exitCode=0 Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.730940 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-wr4p8" event={"ID":"0cdcfee9-95e5-4030-b6e1-d0dac0bc9651","Type":"ContainerDied","Data":"b7fa45806e64e89d5442a2797f4eac18d7314791a6c01cd3e7effa72e3cee553"} Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.734866 4793 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-2bgzw" podStartSLOduration=121.734847896 podStartE2EDuration="2m1.734847896s" podCreationTimestamp="2026-02-17 20:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:11:09.733970485 +0000 UTC m=+145.025668796" watchObservedRunningTime="2026-02-17 20:11:09.734847896 +0000 UTC m=+145.026546207" Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.749875 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qpst4" event={"ID":"9bbcc262-c708-4bd4-81c3-7bcbb485ddad","Type":"ContainerStarted","Data":"6c30e3acf4a473e6a9adcf8bfbc5c53e17583f9b9ca99fcf94d55abe29c898d0"} Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.750663 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qpst4" Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.755043 4793 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-qpst4 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.755088 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qpst4" podUID="9bbcc262-c708-4bd4-81c3-7bcbb485ddad" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.764295 4793 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="hostpath-provisioner/csi-hostpathplugin-tktcd" event={"ID":"01569bd6-0c03-4cb4-ac0a-897db7189161","Type":"ContainerStarted","Data":"01d249f4107a4fba14c25b95df1f572e9442dd49a0503dc92a04b76c83c5a031"} Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.776264 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-87g2m" event={"ID":"34a020ed-794d-4b17-a8a4-a761c7fcc50d","Type":"ContainerStarted","Data":"5b6f1bde3cef89ff8333d9195cb07c6a785e7fe730d6cb7ed17f6f990c693dde"} Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.786241 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hrmm7" event={"ID":"080011e4-f1f0-46eb-81f0-64f82dd6e8e1","Type":"ContainerStarted","Data":"dd19d760bbda942c7a38440c2414d11e9427b1b1c57873476868bd0678100af8"} Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.796140 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 20:11:09 crc kubenswrapper[4793]: E0217 20:11:09.796352 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 20:11:10.296331213 +0000 UTC m=+145.588029524 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.796529 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb" Feb 17 20:11:09 crc kubenswrapper[4793]: E0217 20:11:09.798195 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 20:11:10.298163977 +0000 UTC m=+145.589862388 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8l7nb" (UID: "bdfa45f5-3f15-4f20-823d-17b08bd674d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.798591 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z5p2q" event={"ID":"2f46bb13-0fc5-4acf-8f35-45b134bf590c","Type":"ContainerStarted","Data":"35f2afff59c808826f05d9d825254e63fd4f8d407364ecfbcf43c7590e2cd0e2"} Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.802481 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9p2hl" event={"ID":"aea26078-abf2-4d33-b7ab-9b5602799fe3","Type":"ContainerStarted","Data":"62c044f751cec05d546c8fc1f052b544035a8ff595303485113ae3525268c8c5"} Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.813186 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-b9fhb" event={"ID":"c576e65d-f7b1-46f3-976b-f4726ee1ddf3","Type":"ContainerStarted","Data":"3ce4c93c0c76102ab958a36381c8fbc5eeb3a54b83ab7205f8e93eebd1f4045b"} Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.830779 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhb75" event={"ID":"ed3b6732-f761-4d0a-9697-03c81282c267","Type":"ContainerStarted","Data":"c0ce63a720bf4f9f7b377862217849a447d10a5a304e21d2f03cf9773c8a50bf"} Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.842637 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tj8df" event={"ID":"ff663e88-1dd8-4094-b281-7c995a83178f","Type":"ContainerStarted","Data":"8b4ae67433e827d5d80dc515314b7a108ee1dfc29df38e80f70e26315adf7e0a"} Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.868219 4793 generic.go:334] "Generic (PLEG): container finished" podID="7f7e0542-cc87-4c79-8d0f-540577bb44e4" containerID="50098ad988c3e4ab6fb488c00ceb803af68f94c279dff624673f03e89480f06e" exitCode=0 Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.868315 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mr259" event={"ID":"7f7e0542-cc87-4c79-8d0f-540577bb44e4","Type":"ContainerDied","Data":"50098ad988c3e4ab6fb488c00ceb803af68f94c279dff624673f03e89480f06e"} Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.880120 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-4wb4w" event={"ID":"7e58c09e-0c62-49d9-bd2c-8a9c6e1382ec","Type":"ContainerStarted","Data":"307a3716ca85e079378c7d043960024e353a6ce914693ac119638631b29b37f2"} Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.880542 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-4wb4w" event={"ID":"7e58c09e-0c62-49d9-bd2c-8a9c6e1382ec","Type":"ContainerStarted","Data":"322d3dfd9fd409db692f8e90311925074602258e30c51ca6635037a4dab0cfe6"} Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.899051 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 20:11:09 crc kubenswrapper[4793]: E0217 20:11:09.899991 4793 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 20:11:10.399970611 +0000 UTC m=+145.691668922 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.910322 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-4qdjn" event={"ID":"54895b59-e973-4dcf-90eb-f7660227bd6e","Type":"ContainerStarted","Data":"1a5a49e89ea07224c134c15069f63d9c13073123f8f1692a130c436a0c64ae85"}
Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.910368 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-4qdjn" event={"ID":"54895b59-e973-4dcf-90eb-f7660227bd6e","Type":"ContainerStarted","Data":"5679fcbbcf15d0e279bf29415176dbb3646dc4557e85014b751bf2e52afcc1b0"}
Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.929870 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-2bgzw" event={"ID":"26565b8e-93a3-4682-8c20-ee6cb2319543","Type":"ContainerStarted","Data":"80941d0c0f65d5a181e575178bda0446698179c296ab242ad7b979e3d809592d"}
Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.942930 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522640-pzkfn" event={"ID":"82552c6f-59af-4d20-97ca-82384997434e","Type":"ContainerStarted","Data":"2d5c8abc676df6c99749c8d4d08bb36673b9ee955a6d1ad588580b8068f05b80"}
Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.942975 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522640-pzkfn" event={"ID":"82552c6f-59af-4d20-97ca-82384997434e","Type":"ContainerStarted","Data":"2520b1109be9686aed2edb884174298cc4388fcaa1884fb3cb47651c2b8508c7"}
Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.948981 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-99gxb" event={"ID":"83495f96-c3b0-4871-876c-07832519e1d8","Type":"ContainerStarted","Data":"a2fa9accca4cd0c455ad6dac3afe768ca61ca2dfd3f9b91aa601ebed84119280"}
Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.949027 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-99gxb" event={"ID":"83495f96-c3b0-4871-876c-07832519e1d8","Type":"ContainerStarted","Data":"101d5b62c4a322c7fcd74022427e3ea204614680f9fde746530c578ebf07ee24"}
Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.949787 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-99gxb"
Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.964819 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltb7" event={"ID":"dff7e56c-8f87-477c-b1a5-908263f5fca5","Type":"ContainerStarted","Data":"99d324ca299debe8740a66a1521f46b70d20fdf08187e4eeffebd0bdc555b15c"}
Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.972583 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-cspqc" event={"ID":"16bf7ecb-6485-4941-9d64-181dff5a3da1","Type":"ContainerStarted","Data":"bd394730eea4a0162ec4cbdf36d25689970033847931fb398d7d56ecbbd8e703"}
Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.975945 4793 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-99gxb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body=
Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.975998 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-99gxb" podUID="83495f96-c3b0-4871-876c-07832519e1d8" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused"
Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.983952 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x9z7w" event={"ID":"6bf6970b-ae08-4f63-b5e0-e0bd6ff1ea65","Type":"ContainerStarted","Data":"02136e12d234e139df4bf42a84059c9ff4ed39cab60f94af2c0c20e0f0eac4c6"}
Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.995471 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-chhz9" event={"ID":"2d1fec83-8d01-446f-80f1-2c5f815f44bc","Type":"ContainerStarted","Data":"26821c1f3bd90d2547890bff8ded7c5844494cb0db61522e927cb3df082ff4b3"}
Feb 17 20:11:09 crc kubenswrapper[4793]: I0217 20:11:09.999210 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-qgxk6" event={"ID":"2f13644d-b44d-450c-ac22-88e8a8c6e41d","Type":"ContainerStarted","Data":"a4b70aee74d002df238ba0a98898de73cb1ca95826116f3daf08307246a4f2e9"}
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.000190 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb"
Feb 17 20:11:10 crc kubenswrapper[4793]: E0217 20:11:10.001478 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 20:11:10.50146744 +0000 UTC m=+145.793165741 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8l7nb" (UID: "bdfa45f5-3f15-4f20-823d-17b08bd674d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.011195 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-qgxk6"
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.014861 4793 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-qgxk6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body=
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.014899 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-qgxk6" podUID="2f13644d-b44d-450c-ac22-88e8a8c6e41d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused"
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.015396 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-qcpml" event={"ID":"32dbc133-34ed-449e-a397-ff3f0b83418c","Type":"ContainerStarted","Data":"bf4e432c47dbfa26b1b33f171b33b1be99dc819d7d0e87e8a2423a97417e84cb"}
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.016915 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-hc4b6" event={"ID":"dcb373df-1dcb-4517-b9a8-0d3e548c444a","Type":"ContainerStarted","Data":"b35e17e0cd102706b7d317a782ac7d832268dcdfb5ee0cd221a2e3cd7090b960"}
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.026611 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-lzd99" event={"ID":"881cfa9d-a35a-4088-8a39-b4ecd52a3b37","Type":"ContainerStarted","Data":"90365f2f8775d63424093a290c5c043b7e4801b0fde93361bcee1bde1be021fc"}
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.030159 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tpcvd" podStartSLOduration=122.030143708 podStartE2EDuration="2m2.030143708s" podCreationTimestamp="2026-02-17 20:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:11:09.971189462 +0000 UTC m=+145.262887773" watchObservedRunningTime="2026-02-17 20:11:10.030143708 +0000 UTC m=+145.321842019"
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.039720 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-v4fgc" event={"ID":"7303fd77-01cc-4e85-94f9-26bee2290651","Type":"ContainerStarted","Data":"48d45831ee37f1df278c22b543b8f1593973bd46ac3c93750798e9766c29b69a"}
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.039758 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-v4fgc" event={"ID":"7303fd77-01cc-4e85-94f9-26bee2290651","Type":"ContainerStarted","Data":"d5f863e67e328e80cceb0eed050d700981df6de5dae111e87a6ff03615ae06e6"}
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.047948 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-58vqk" event={"ID":"d7f7e3c2-0748-4893-afaf-ca285648f7c0","Type":"ContainerStarted","Data":"6b206d350f5951e1f1202670d886dcb9d85359ac0a78262fe9126106f947f841"}
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.047993 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-58vqk" event={"ID":"d7f7e3c2-0748-4893-afaf-ca285648f7c0","Type":"ContainerStarted","Data":"af783e0688b1118306405489ab4c5b3bceaa04108435f26729ea84de8f38f96a"}
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.067357 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zq7jg" event={"ID":"3da29805-de08-447c-91da-61be0314e49b","Type":"ContainerStarted","Data":"4c3dc72ed3ee323e0b25ab2e67a5fc3f4e831a6218ae168faf0ea5fd58d73ffa"}
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.105230 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 20:11:10 crc kubenswrapper[4793]: E0217 20:11:10.107710 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 20:11:10.607670191 +0000 UTC m=+145.899368572 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.121106 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-17 20:06:09 +0000 UTC, rotation deadline is 2026-11-04 12:52:07.777800335 +0000 UTC
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.121164 4793 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6232h40m57.65663904s for next certificate rotation
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.122236 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s6t4n" event={"ID":"35681a96-27de-42f9-99a1-bf999199ecec","Type":"ContainerStarted","Data":"b719d2f2b87eb6f7f0f885ee25d201329838cf1f7395031914b0113b384730df"}
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.151289 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhb75" podStartSLOduration=121.151272428 podStartE2EDuration="2m1.151272428s" podCreationTimestamp="2026-02-17 20:09:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:11:10.151023512 +0000 UTC m=+145.442721823" watchObservedRunningTime="2026-02-17 20:11:10.151272428 +0000 UTC m=+145.442970739"
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.160089 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-knrjc" event={"ID":"0ccc2b73-aefc-46a4-b1bd-bf0b09d425d8","Type":"ContainerStarted","Data":"090c50efd0ec19cb8bba3cc2c3ff00569465c6afc7dfa69344b6f4744e8df1c3"}
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.161571 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-knrjc"
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.170267 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-knrjc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body=
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.170324 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-knrjc" podUID="0ccc2b73-aefc-46a4-b1bd-bf0b09d425d8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused"
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.174182 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qphb2" event={"ID":"780a5995-cbd8-45ae-a25b-c1d4b2190542","Type":"ContainerStarted","Data":"a1a992eca2d44e821102dc841c0e3e8455a3c55439a12dfe987420c9420c25d6"}
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.193408 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qpst4" podStartSLOduration=121.19337412 podStartE2EDuration="2m1.19337412s" podCreationTimestamp="2026-02-17 20:09:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:11:10.188232366 +0000 UTC m=+145.479930687" watchObservedRunningTime="2026-02-17 20:11:10.19337412 +0000 UTC m=+145.485072431"
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.197466 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-br8vj" event={"ID":"4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c","Type":"ContainerStarted","Data":"329cb1efb2dc0b613b420bc0c18994a58200e3c9f4839c70829125dabbceef6b"}
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.214572 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb"
Feb 17 20:11:10 crc kubenswrapper[4793]: E0217 20:11:10.215679 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 20:11:10.715667355 +0000 UTC m=+146.007365666 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8l7nb" (UID: "bdfa45f5-3f15-4f20-823d-17b08bd674d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.240852 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m4r4g" event={"ID":"97c744eb-5350-407d-8661-5d20a87110d5","Type":"ContainerStarted","Data":"644cafc2e0cdbe8628017e9f785ded8fd4c6f74b051dc20750570ac035fc0ad1"}
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.240907 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m4r4g"
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.240916 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m4r4g" event={"ID":"97c744eb-5350-407d-8661-5d20a87110d5","Type":"ContainerStarted","Data":"3c80f875774d4cb1721ffea05f6c7890dffe649f180f671a58fe3a8a8c0b264f"}
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.264172 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-87g2m" podStartSLOduration=121.26413967 podStartE2EDuration="2m1.26413967s" podCreationTimestamp="2026-02-17 20:09:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:11:10.213843251 +0000 UTC m=+145.505541562" watchObservedRunningTime="2026-02-17 20:11:10.26413967 +0000 UTC m=+145.555837981"
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.264377 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g69jb" event={"ID":"53fc14f9-23dc-4927-812f-507f9525b2f8","Type":"ContainerStarted","Data":"9eb699e84a440d61ce7c293ec9d0306355366a1288e8fd3041d307678da7cb31"}
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.265099 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g69jb"
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.267520 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-qcpml" podStartSLOduration=122.267509641 podStartE2EDuration="2m2.267509641s" podCreationTimestamp="2026-02-17 20:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:11:10.266293412 +0000 UTC m=+145.557991723" watchObservedRunningTime="2026-02-17 20:11:10.267509641 +0000 UTC m=+145.559207952"
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.283134 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d5mb8" event={"ID":"d43c3902-fd03-4c2c-a678-988b45dd62d0","Type":"ContainerStarted","Data":"da203ac259f9d2deee2a8dd5a96b4388a71856197c4cb2ef4b11dac4b140c04c"}
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.283179 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d5mb8" event={"ID":"d43c3902-fd03-4c2c-a678-988b45dd62d0","Type":"ContainerStarted","Data":"8c6858de4594970dd1bdb10fbb81cbacb00e363d1a0a65e59e76576db0a4758e"}
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.283197 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d5mb8"
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.291007 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fmmf6" podStartSLOduration=121.290992055 podStartE2EDuration="2m1.290992055s" podCreationTimestamp="2026-02-17 20:09:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:11:10.290506273 +0000 UTC m=+145.582204594" watchObservedRunningTime="2026-02-17 20:11:10.290992055 +0000 UTC m=+145.582690366"
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.302063 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-8rb9m"
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.306891 4793 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-d5mb8 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:5443/healthz\": dial tcp 10.217.0.25:5443: connect: connection refused" start-of-body=
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.307134 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d5mb8" podUID="d43c3902-fd03-4c2c-a678-988b45dd62d0" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.25:5443/healthz\": dial tcp 10.217.0.25:5443: connect: connection refused"
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.307052 4793 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-g69jb container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body=
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.307206 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g69jb" podUID="53fc14f9-23dc-4927-812f-507f9525b2f8" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused"
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.315866 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 20:11:10 crc kubenswrapper[4793]: E0217 20:11:10.320211 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 20:11:10.820192786 +0000 UTC m=+146.111891087 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.334447 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x9z7w" podStartSLOduration=121.334430018 podStartE2EDuration="2m1.334430018s" podCreationTimestamp="2026-02-17 20:09:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:11:10.33325334 +0000 UTC m=+145.624951651" watchObservedRunningTime="2026-02-17 20:11:10.334430018 +0000 UTC m=+145.626128329"
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.423477 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb"
Feb 17 20:11:10 crc kubenswrapper[4793]: E0217 20:11:10.423802 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 20:11:10.923789335 +0000 UTC m=+146.215487646 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8l7nb" (UID: "bdfa45f5-3f15-4f20-823d-17b08bd674d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.460041 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-lzd99" podStartSLOduration=121.460028105 podStartE2EDuration="2m1.460028105s" podCreationTimestamp="2026-02-17 20:09:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:11:10.458106839 +0000 UTC m=+145.749805150" watchObservedRunningTime="2026-02-17 20:11:10.460028105 +0000 UTC m=+145.751726416"
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.524031 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 20:11:10 crc kubenswrapper[4793]: E0217 20:11:10.524521 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 20:11:11.024508074 +0000 UTC m=+146.316206385 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.537064 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9p2hl" podStartSLOduration=121.537048556 podStartE2EDuration="2m1.537048556s" podCreationTimestamp="2026-02-17 20:09:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:11:10.536139324 +0000 UTC m=+145.827837635" watchObservedRunningTime="2026-02-17 20:11:10.537048556 +0000 UTC m=+145.828746867"
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.595211 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29522640-pzkfn" podStartSLOduration=122.595187762 podStartE2EDuration="2m2.595187762s" podCreationTimestamp="2026-02-17 20:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:11:10.587375455 +0000 UTC m=+145.879073766" watchObservedRunningTime="2026-02-17 20:11:10.595187762 +0000 UTC m=+145.886886083"
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.625192 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb"
Feb 17 20:11:10 crc kubenswrapper[4793]: E0217 20:11:10.625565 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 20:11:11.125552202 +0000 UTC m=+146.417250513 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8l7nb" (UID: "bdfa45f5-3f15-4f20-823d-17b08bd674d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.660127 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-99gxb" podStartSLOduration=121.660110722 podStartE2EDuration="2m1.660110722s" podCreationTimestamp="2026-02-17 20:09:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:11:10.625446589 +0000 UTC m=+145.917144900" watchObservedRunningTime="2026-02-17 20:11:10.660110722 +0000 UTC m=+145.951809033"
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.663834 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-58vqk" podStartSLOduration=121.66381934099999 podStartE2EDuration="2m1.663819341s" podCreationTimestamp="2026-02-17 20:09:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:11:10.661923346 +0000 UTC m=+145.953621657" watchObservedRunningTime="2026-02-17 20:11:10.663819341 +0000 UTC m=+145.955517652"
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.722940 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-qgxk6" podStartSLOduration=122.722924821 podStartE2EDuration="2m2.722924821s" podCreationTimestamp="2026-02-17 20:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:11:10.722239854 +0000 UTC m=+146.013938165" watchObservedRunningTime="2026-02-17 20:11:10.722924821 +0000 UTC m=+146.014623132"
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.726064 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 20:11:10 crc kubenswrapper[4793]: E0217 20:11:10.726410 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 20:11:11.226394794 +0000 UTC m=+146.518093105 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.728833 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-2bgzw"
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.749091 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-4qdjn" podStartSLOduration=6.749075039 podStartE2EDuration="6.749075039s" podCreationTimestamp="2026-02-17 20:11:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:11:10.747133592 +0000 UTC m=+146.038831903" watchObservedRunningTime="2026-02-17 20:11:10.749075039 +0000 UTC m=+146.040773350"
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.807441 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-v4fgc" podStartSLOduration=121.807422731 podStartE2EDuration="2m1.807422731s" podCreationTimestamp="2026-02-17 20:09:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:11:10.796662752 +0000 UTC m=+146.088361063" watchObservedRunningTime="2026-02-17 20:11:10.807422731 +0000 UTC m=+146.099121052"
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.827641 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb"
Feb 17 20:11:10 crc kubenswrapper[4793]: E0217 20:11:10.828135 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 20:11:11.328120348 +0000 UTC m=+146.619818659 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8l7nb" (UID: "bdfa45f5-3f15-4f20-823d-17b08bd674d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.842240 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-b9fhb" podStartSLOduration=122.842222337 podStartE2EDuration="2m2.842222337s" podCreationTimestamp="2026-02-17 20:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:11:10.840098966 +0000 UTC m=+146.131797267" watchObservedRunningTime="2026-02-17 20:11:10.842222337 +0000 UTC m=+146.133920648"
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.862326 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-cspqc" podStartSLOduration=6.862309259 podStartE2EDuration="6.862309259s" podCreationTimestamp="2026-02-17 20:11:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:11:10.861059579 +0000 UTC m=+146.152757890" watchObservedRunningTime="2026-02-17 20:11:10.862309259 +0000 UTC m=+146.154007570"
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.906413 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-qcpml"
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.911250 4793 patch_prober.go:28] interesting pod/router-default-5444994796-qcpml container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 20:11:10 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld
Feb 17 20:11:10 crc kubenswrapper[4793]: [+]process-running ok
Feb 17 20:11:10 crc kubenswrapper[4793]: healthz check failed
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.911284 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qcpml" podUID="32dbc133-34ed-449e-a397-ff3f0b83418c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.929127 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-br8vj" podStartSLOduration=122.929106134 podStartE2EDuration="2m2.929106134s" podCreationTimestamp="2026-02-17 20:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:11:10.901804688 +0000 UTC m=+146.193502999" watchObservedRunningTime="2026-02-17 20:11:10.929106134 +0000 UTC m=+146.220804445"
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.929443 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 20:11:10 crc kubenswrapper[4793]: E0217 20:11:10.929829 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 20:11:11.429812371 +0000 UTC m=+146.721510682 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 20:11:10 crc kubenswrapper[4793]: I0217 20:11:10.930567 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-knrjc" podStartSLOduration=122.930559679 podStartE2EDuration="2m2.930559679s" podCreationTimestamp="2026-02-17 20:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:11:10.929019412 +0000 UTC m=+146.220717723" watchObservedRunningTime="2026-02-17 20:11:10.930559679 +0000 UTC m=+146.222257990"
Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.011618 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m4r4g"
podStartSLOduration=122.011603656 podStartE2EDuration="2m2.011603656s" podCreationTimestamp="2026-02-17 20:09:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:11:11.011010691 +0000 UTC m=+146.302709002" watchObservedRunningTime="2026-02-17 20:11:11.011603656 +0000 UTC m=+146.303301967" Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.033377 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb" Feb 17 20:11:11 crc kubenswrapper[4793]: E0217 20:11:11.033709 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 20:11:11.533682846 +0000 UTC m=+146.825381157 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8l7nb" (UID: "bdfa45f5-3f15-4f20-823d-17b08bd674d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.054217 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d5mb8" podStartSLOduration=122.054197559 podStartE2EDuration="2m2.054197559s" podCreationTimestamp="2026-02-17 20:09:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:11:11.05216328 +0000 UTC m=+146.343861591" watchObservedRunningTime="2026-02-17 20:11:11.054197559 +0000 UTC m=+146.345895870" Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.137183 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 20:11:11 crc kubenswrapper[4793]: E0217 20:11:11.137509 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 20:11:11.63749353 +0000 UTC m=+146.929191841 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.241634 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb" Feb 17 20:11:11 crc kubenswrapper[4793]: E0217 20:11:11.241940 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 20:11:11.741928099 +0000 UTC m=+147.033626410 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8l7nb" (UID: "bdfa45f5-3f15-4f20-823d-17b08bd674d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.295271 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s6t4n" event={"ID":"35681a96-27de-42f9-99a1-bf999199ecec","Type":"ContainerStarted","Data":"6ec5ed3db87b3588c741d64b39afd293d231661388ff2cfd3d4a94659428d70d"} Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.295628 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s6t4n" event={"ID":"35681a96-27de-42f9-99a1-bf999199ecec","Type":"ContainerStarted","Data":"6a1f95da74db82358c9a73b732c7e486b7c50a7ea7bd6790b90bad13c08c62cf"} Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.297090 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-4wb4w" event={"ID":"7e58c09e-0c62-49d9-bd2c-8a9c6e1382ec","Type":"ContainerStarted","Data":"e4b293c59a814746a60b9fe93db46ddbc642250fea7bdf084ade7cf06da74ae6"} Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.298499 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x9z7w" event={"ID":"6bf6970b-ae08-4f63-b5e0-e0bd6ff1ea65","Type":"ContainerStarted","Data":"8c2d79141e35a306ec6c56127ab024318c52d77d8b1d6dbeaad0d5dc3deff03b"} Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.299844 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-service-ca/service-ca-9c57cc56f-87g2m" event={"ID":"34a020ed-794d-4b17-a8a4-a761c7fcc50d","Type":"ContainerStarted","Data":"83d628d7d51ed57f4abdb1040cd703eb78e68abbdbc2d7bf0a2219aa5d6e47b9"} Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.305506 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mr259" event={"ID":"7f7e0542-cc87-4c79-8d0f-540577bb44e4","Type":"ContainerStarted","Data":"64395da63ef77dac8e0692b0f0f2495b76ade3d5e744f18b7bc41222e2cad7dc"} Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.305660 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mr259" Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.307136 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hrmm7" event={"ID":"080011e4-f1f0-46eb-81f0-64f82dd6e8e1","Type":"ContainerStarted","Data":"6c7b4a406fb86f8976d7a527e7fb42bd5c1d149bcc31229cc430673b85bf9db1"} Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.307165 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hrmm7" event={"ID":"080011e4-f1f0-46eb-81f0-64f82dd6e8e1","Type":"ContainerStarted","Data":"adc8d7e294c6f564bf9a3d68696154a4a5c25fccaebf7f9114cac0ef15ccb018"} Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.308416 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-cspqc" event={"ID":"16bf7ecb-6485-4941-9d64-181dff5a3da1","Type":"ContainerStarted","Data":"35aa615a6564310451dcc3447ec2864d31337bd750449ebe3abaf03ce0487140"} Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.309888 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-hc4b6" 
event={"ID":"dcb373df-1dcb-4517-b9a8-0d3e548c444a","Type":"ContainerStarted","Data":"43bc3c88fb3c655e745f92b7da92cdcf9d0b7e31b0ca7240c35154b2190346f7"} Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.309913 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-hc4b6" event={"ID":"dcb373df-1dcb-4517-b9a8-0d3e548c444a","Type":"ContainerStarted","Data":"aad9433cda07f0cacc150904711e70cc392712c9d71b396af03f55391b428350"} Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.311484 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m4r4g" event={"ID":"97c744eb-5350-407d-8661-5d20a87110d5","Type":"ContainerStarted","Data":"889d64ef7a2686f497dcb9a5e3573fdd1fc7b281a6c901bd5a32ec2a28de7dcf"} Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.313170 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-wr4p8" event={"ID":"0cdcfee9-95e5-4030-b6e1-d0dac0bc9651","Type":"ContainerStarted","Data":"bd3f6bda05bdac6971bfdff3df03d7a119adf88b0c02a4945748e98344e345bf"} Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.314680 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qphb2" event={"ID":"780a5995-cbd8-45ae-a25b-c1d4b2190542","Type":"ContainerStarted","Data":"dc6091869b295cd91229b1fd5df77e3d04269c53af83363cedac8c1c163cb51f"} Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.314724 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qphb2" event={"ID":"780a5995-cbd8-45ae-a25b-c1d4b2190542","Type":"ContainerStarted","Data":"0568a9e3d80c14043901d3ace50b40e3eecb58236ba5b035cfb5de2fadc3b7ea"} Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.314837 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-qphb2" Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 
20:11:11.316336 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-chhz9" event={"ID":"2d1fec83-8d01-446f-80f1-2c5f815f44bc","Type":"ContainerStarted","Data":"3edac6470a53e2fbf7bb8bdedcf7ffda8ad6f2a89b956ee37f23a676b9c54e7e"} Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.317802 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-qcpml" event={"ID":"32dbc133-34ed-449e-a397-ff3f0b83418c","Type":"ContainerStarted","Data":"7aded013b280260d1f5f5330737b6b1d7327dc6d24f8e71f71f00f29e6245e71"} Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.319229 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltb7" event={"ID":"dff7e56c-8f87-477c-b1a5-908263f5fca5","Type":"ContainerStarted","Data":"d4d9964d4fcd3079f772f540045e24a04ade1846828f5b2ddc8eff0c32bf93f2"} Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.319363 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltb7" Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.321247 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zq7jg" event={"ID":"3da29805-de08-447c-91da-61be0314e49b","Type":"ContainerStarted","Data":"302f22bdb5101c817303f3d9aa58ac5544934cea4b919af98e21403dec32f974"} Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.323115 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tj8df" event={"ID":"ff663e88-1dd8-4094-b281-7c995a83178f","Type":"ContainerStarted","Data":"272221bfc5e35db273d72458a9e7d83fa329756105a6eca82206b9c5ad42de07"} Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.324740 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z5p2q" event={"ID":"2f46bb13-0fc5-4acf-8f35-45b134bf590c","Type":"ContainerStarted","Data":"7886f3af4ea6c1844d2f2a1adfa69234d561118d0793ea323a9b7b1e524ad6c9"} Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.325484 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-knrjc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.325525 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-knrjc" podUID="0ccc2b73-aefc-46a4-b1bd-bf0b09d425d8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.325640 4793 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-99gxb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.325752 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-99gxb" podUID="83495f96-c3b0-4871-876c-07832519e1d8" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.330028 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qpst4" Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.332878 4793 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g69jb" Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.334216 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g69jb" podStartSLOduration=122.334204585 podStartE2EDuration="2m2.334204585s" podCreationTimestamp="2026-02-17 20:09:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:11:11.149890108 +0000 UTC m=+146.441588419" watchObservedRunningTime="2026-02-17 20:11:11.334204585 +0000 UTC m=+146.625902896" Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.336497 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s6t4n" podStartSLOduration=122.33649069 podStartE2EDuration="2m2.33649069s" podCreationTimestamp="2026-02-17 20:09:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:11:11.333937729 +0000 UTC m=+146.625636040" watchObservedRunningTime="2026-02-17 20:11:11.33649069 +0000 UTC m=+146.628189001" Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.342956 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 20:11:11 crc kubenswrapper[4793]: E0217 20:11:11.344143 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-17 20:11:11.844127564 +0000 UTC m=+147.135825875 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.370975 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-qgxk6" Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.410841 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltb7" Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.418723 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-chhz9" podStartSLOduration=123.418710475 podStartE2EDuration="2m3.418710475s" podCreationTimestamp="2026-02-17 20:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:11:11.390755394 +0000 UTC m=+146.682453705" watchObservedRunningTime="2026-02-17 20:11:11.418710475 +0000 UTC m=+146.710408786" Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.450041 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb" Feb 17 20:11:11 crc kubenswrapper[4793]: E0217 20:11:11.454108 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 20:11:11.954093305 +0000 UTC m=+147.245791616 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8l7nb" (UID: "bdfa45f5-3f15-4f20-823d-17b08bd674d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.488054 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tj8df" podStartSLOduration=123.488038911 podStartE2EDuration="2m3.488038911s" podCreationTimestamp="2026-02-17 20:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:11:11.419250428 +0000 UTC m=+146.710948739" watchObservedRunningTime="2026-02-17 20:11:11.488038911 +0000 UTC m=+146.779737222" Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.500601 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mr259" podStartSLOduration=123.500583282 podStartE2EDuration="2m3.500583282s" podCreationTimestamp="2026-02-17 20:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:11:11.4875658 +0000 UTC m=+146.779264111" 
watchObservedRunningTime="2026-02-17 20:11:11.500583282 +0000 UTC m=+146.792281593" Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.529653 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-lltb7" podStartSLOduration=122.52963741 podStartE2EDuration="2m2.52963741s" podCreationTimestamp="2026-02-17 20:09:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:11:11.525001749 +0000 UTC m=+146.816700060" watchObservedRunningTime="2026-02-17 20:11:11.52963741 +0000 UTC m=+146.821335711" Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.551030 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 20:11:11 crc kubenswrapper[4793]: E0217 20:11:11.551149 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 20:11:12.051129166 +0000 UTC m=+147.342827477 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.551446 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb" Feb 17 20:11:11 crc kubenswrapper[4793]: E0217 20:11:11.551875 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 20:11:12.051864624 +0000 UTC m=+147.343562935 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8l7nb" (UID: "bdfa45f5-3f15-4f20-823d-17b08bd674d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.564047 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-qphb2" podStartSLOduration=7.5640315860000005 podStartE2EDuration="7.564031586s" podCreationTimestamp="2026-02-17 20:11:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:11:11.561823013 +0000 UTC m=+146.853521324" watchObservedRunningTime="2026-02-17 20:11:11.564031586 +0000 UTC m=+146.855729897" Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.623461 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-hc4b6" podStartSLOduration=122.623441574 podStartE2EDuration="2m2.623441574s" podCreationTimestamp="2026-02-17 20:09:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:11:11.614315054 +0000 UTC m=+146.906013365" watchObservedRunningTime="2026-02-17 20:11:11.623441574 +0000 UTC m=+146.915139895" Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.654137 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 20:11:11 crc kubenswrapper[4793]: E0217 20:11:11.654468    4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 20:11:12.154453509 +0000 UTC m=+147.446151820 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.737174    4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d5mb8"
Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.755537    4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb"
Feb 17 20:11:11 crc kubenswrapper[4793]: E0217 20:11:11.755969    4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 20:11:12.255953207 +0000 UTC m=+147.547651518 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8l7nb" (UID: "bdfa45f5-3f15-4f20-823d-17b08bd674d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.856745    4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 20:11:11 crc kubenswrapper[4793]: E0217 20:11:11.857109    4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 20:11:12.357085566 +0000 UTC m=+147.648783877 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.902425    4793 patch_prober.go:28] interesting pod/router-default-5444994796-qcpml container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 20:11:11 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld
Feb 17 20:11:11 crc kubenswrapper[4793]: [+]process-running ok
Feb 17 20:11:11 crc kubenswrapper[4793]: healthz check failed
Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.902500    4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qcpml" podUID="32dbc133-34ed-449e-a397-ff3f0b83418c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.909780    4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhb75"
Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.909838    4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhb75"
Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.918452    4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhb75"
Feb 17 20:11:11 crc kubenswrapper[4793]: I0217 20:11:11.958800    4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb"
Feb 17 20:11:11 crc kubenswrapper[4793]: E0217 20:11:11.959229    4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 20:11:12.45921309 +0000 UTC m=+147.750911401 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8l7nb" (UID: "bdfa45f5-3f15-4f20-823d-17b08bd674d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 20:11:12 crc kubenswrapper[4793]: I0217 20:11:12.026768    4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z5p2q" podStartSLOduration=123.026749302 podStartE2EDuration="2m3.026749302s" podCreationTimestamp="2026-02-17 20:09:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:11:11.967833057 +0000 UTC m=+147.259531368" watchObservedRunningTime="2026-02-17 20:11:12.026749302 +0000 UTC m=+147.318447613"
Feb 17 20:11:12 crc kubenswrapper[4793]: I0217 20:11:12.062287    4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 20:11:12 crc kubenswrapper[4793]: E0217 20:11:12.062609    4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 20:11:12.562595373 +0000 UTC m=+147.854293684 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 20:11:12 crc kubenswrapper[4793]: I0217 20:11:12.090022    4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hrmm7" podStartSLOduration=123.090002392 podStartE2EDuration="2m3.090002392s" podCreationTimestamp="2026-02-17 20:09:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:11:12.082655255 +0000 UTC m=+147.374353566" watchObservedRunningTime="2026-02-17 20:11:12.090002392 +0000 UTC m=+147.381700703"
Feb 17 20:11:12 crc kubenswrapper[4793]: I0217 20:11:12.120342    4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zq7jg" podStartSLOduration=123.12032576 podStartE2EDuration="2m3.12032576s" podCreationTimestamp="2026-02-17 20:09:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:11:12.115950245 +0000 UTC m=+147.407648546" watchObservedRunningTime="2026-02-17 20:11:12.12032576 +0000 UTC m=+147.412024071"
Feb 17 20:11:12 crc kubenswrapper[4793]: I0217 20:11:12.163098    4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb"
Feb 17 20:11:12 crc kubenswrapper[4793]: E0217 20:11:12.163451    4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 20:11:12.663437976 +0000 UTC m=+147.955136297 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8l7nb" (UID: "bdfa45f5-3f15-4f20-823d-17b08bd674d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 20:11:12 crc kubenswrapper[4793]: I0217 20:11:12.187130    4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-4wb4w" podStartSLOduration=124.187112494 podStartE2EDuration="2m4.187112494s" podCreationTimestamp="2026-02-17 20:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:11:12.153148219 +0000 UTC m=+147.444846530" watchObservedRunningTime="2026-02-17 20:11:12.187112494 +0000 UTC m=+147.478810805"
Feb 17 20:11:12 crc kubenswrapper[4793]: I0217 20:11:12.264128    4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 20:11:12 crc kubenswrapper[4793]: E0217 20:11:12.264462    4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 20:11:12.764446702 +0000 UTC m=+148.056145013 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 20:11:12 crc kubenswrapper[4793]: I0217 20:11:12.329461    4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tktcd" event={"ID":"01569bd6-0c03-4cb4-ac0a-897db7189161","Type":"ContainerStarted","Data":"ef0f8b9ba265bd5337aa583b1ea3de0ea1ad64ee2663acaac2cc1ace1a15cfb6"}
Feb 17 20:11:12 crc kubenswrapper[4793]: I0217 20:11:12.332211    4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-wr4p8" event={"ID":"0cdcfee9-95e5-4030-b6e1-d0dac0bc9651","Type":"ContainerStarted","Data":"4fc5fd47a752b337647d720b45e48aa4ec537a3495a51044a551d777d20f6f18"}
Feb 17 20:11:12 crc kubenswrapper[4793]: I0217 20:11:12.341155    4793 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-99gxb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body=
Feb 17 20:11:12 crc kubenswrapper[4793]: I0217 20:11:12.341196    4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-99gxb" podUID="83495f96-c3b0-4871-876c-07832519e1d8" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused"
Feb 17 20:11:12 crc kubenswrapper[4793]: I0217 20:11:12.341235    4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-knrjc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body=
Feb 17 20:11:12 crc kubenswrapper[4793]: I0217 20:11:12.341299    4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-knrjc" podUID="0ccc2b73-aefc-46a4-b1bd-bf0b09d425d8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused"
Feb 17 20:11:12 crc kubenswrapper[4793]: I0217 20:11:12.356172    4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhb75"
Feb 17 20:11:12 crc kubenswrapper[4793]: E0217 20:11:12.365654    4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 20:11:12.865640863 +0000 UTC m=+148.157339174 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8l7nb" (UID: "bdfa45f5-3f15-4f20-823d-17b08bd674d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 20:11:12 crc kubenswrapper[4793]: I0217 20:11:12.365276    4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb"
Feb 17 20:11:12 crc kubenswrapper[4793]: I0217 20:11:12.408119    4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-wr4p8" podStartSLOduration=124.408098723 podStartE2EDuration="2m4.408098723s" podCreationTimestamp="2026-02-17 20:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:11:12.407445047 +0000 UTC m=+147.699143358" watchObservedRunningTime="2026-02-17 20:11:12.408098723 +0000 UTC m=+147.699797034"
Feb 17 20:11:12 crc kubenswrapper[4793]: I0217 20:11:12.475270    4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 20:11:12 crc kubenswrapper[4793]: E0217 20:11:12.475664    4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 20:11:12.975646016 +0000 UTC m=+148.267344327 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 20:11:12 crc kubenswrapper[4793]: I0217 20:11:12.577329    4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb"
Feb 17 20:11:12 crc kubenswrapper[4793]: E0217 20:11:12.577633    4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 20:11:13.077622025 +0000 UTC m=+148.369320336 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8l7nb" (UID: "bdfa45f5-3f15-4f20-823d-17b08bd674d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 20:11:12 crc kubenswrapper[4793]: I0217 20:11:12.678642    4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 20:11:12 crc kubenswrapper[4793]: E0217 20:11:12.678827    4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 20:11:13.178801316 +0000 UTC m=+148.470499627 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 20:11:12 crc kubenswrapper[4793]: I0217 20:11:12.679268    4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb"
Feb 17 20:11:12 crc kubenswrapper[4793]: E0217 20:11:12.679598    4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 20:11:13.179590965 +0000 UTC m=+148.471289276 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8l7nb" (UID: "bdfa45f5-3f15-4f20-823d-17b08bd674d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 20:11:12 crc kubenswrapper[4793]: I0217 20:11:12.780111    4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 20:11:12 crc kubenswrapper[4793]: E0217 20:11:12.780263    4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 20:11:13.280240173 +0000 UTC m=+148.571938484 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 20:11:12 crc kubenswrapper[4793]: I0217 20:11:12.780467    4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb"
Feb 17 20:11:12 crc kubenswrapper[4793]: E0217 20:11:12.780772    4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 20:11:13.280765006 +0000 UTC m=+148.572463317 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8l7nb" (UID: "bdfa45f5-3f15-4f20-823d-17b08bd674d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 20:11:12 crc kubenswrapper[4793]: I0217 20:11:12.881436    4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 20:11:12 crc kubenswrapper[4793]: E0217 20:11:12.881615    4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 20:11:13.381588578 +0000 UTC m=+148.673286889 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 20:11:12 crc kubenswrapper[4793]: I0217 20:11:12.881767    4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb"
Feb 17 20:11:12 crc kubenswrapper[4793]: E0217 20:11:12.882034    4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 20:11:13.382021408 +0000 UTC m=+148.673719719 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8l7nb" (UID: "bdfa45f5-3f15-4f20-823d-17b08bd674d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 20:11:12 crc kubenswrapper[4793]: I0217 20:11:12.900918    4793 patch_prober.go:28] interesting pod/router-default-5444994796-qcpml container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 20:11:12 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld
Feb 17 20:11:12 crc kubenswrapper[4793]: [+]process-running ok
Feb 17 20:11:12 crc kubenswrapper[4793]: healthz check failed
Feb 17 20:11:12 crc kubenswrapper[4793]: I0217 20:11:12.900976    4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qcpml" podUID="32dbc133-34ed-449e-a397-ff3f0b83418c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 20:11:12 crc kubenswrapper[4793]: I0217 20:11:12.982967    4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 20:11:12 crc kubenswrapper[4793]: E0217 20:11:12.983221    4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 20:11:13.483191478 +0000 UTC m=+148.774889789 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 20:11:12 crc kubenswrapper[4793]: I0217 20:11:12.983437    4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb"
Feb 17 20:11:12 crc kubenswrapper[4793]: E0217 20:11:12.983747    4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 20:11:13.483735651 +0000 UTC m=+148.775433952 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8l7nb" (UID: "bdfa45f5-3f15-4f20-823d-17b08bd674d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.084864    4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 20:11:13 crc kubenswrapper[4793]: E0217 20:11:13.085053    4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 20:11:13.585025405 +0000 UTC m=+148.876723716 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.085122    4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb"
Feb 17 20:11:13 crc kubenswrapper[4793]: E0217 20:11:13.085446    4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 20:11:13.585436074 +0000 UTC m=+148.877134385 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8l7nb" (UID: "bdfa45f5-3f15-4f20-823d-17b08bd674d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.185914    4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 20:11:13 crc kubenswrapper[4793]: E0217 20:11:13.186019    4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 20:11:13.68600307 +0000 UTC m=+148.977701381 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.186246    4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb"
Feb 17 20:11:13 crc kubenswrapper[4793]: E0217 20:11:13.186624    4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 20:11:13.686607085 +0000 UTC m=+148.978305476 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8l7nb" (UID: "bdfa45f5-3f15-4f20-823d-17b08bd674d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.287618    4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 20:11:13 crc kubenswrapper[4793]: E0217 20:11:13.287804    4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 20:11:13.787773815 +0000 UTC m=+149.079472126 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.287927    4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb"
Feb 17 20:11:13 crc kubenswrapper[4793]: E0217 20:11:13.288216    4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 20:11:13.788201945 +0000 UTC m=+149.079900256 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8l7nb" (UID: "bdfa45f5-3f15-4f20-823d-17b08bd674d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.316511    4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nwsr4"]
Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.317622    4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nwsr4"
Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.322786    4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.336604    4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nwsr4"]
Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.340621    4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tktcd" event={"ID":"01569bd6-0c03-4cb4-ac0a-897db7189161","Type":"ContainerStarted","Data":"f603b7ac4a5f7f3ff0ecf07c5300d8f662a7de566697d73fba25570451c66285"}
Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.340659    4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tktcd" event={"ID":"01569bd6-0c03-4cb4-ac0a-897db7189161","Type":"ContainerStarted","Data":"52599db771c9d2144b9c4ad5fe4748bccb046b73e54fe03c2a098773064ecacd"}
Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.340672    4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tktcd"
event={"ID":"01569bd6-0c03-4cb4-ac0a-897db7189161","Type":"ContainerStarted","Data":"faa5fbd3fee3d03975bfa66900a616cc981fe80dae00ebe70c4f4dbf2722daeb"} Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.384754 4793 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.389398 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 20:11:13 crc kubenswrapper[4793]: E0217 20:11:13.389737 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 20:11:13.889710094 +0000 UTC m=+149.181408445 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.389825 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.389981 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcv7z\" (UniqueName: \"kubernetes.io/projected/b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019-kube-api-access-dcv7z\") pod \"certified-operators-nwsr4\" (UID: \"b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019\") " pod="openshift-marketplace/certified-operators-nwsr4" Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.390093 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019-utilities\") pod \"certified-operators-nwsr4\" (UID: \"b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019\") " pod="openshift-marketplace/certified-operators-nwsr4" Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.390146 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb" Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.390198 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.390291 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019-catalog-content\") pod \"certified-operators-nwsr4\" (UID: \"b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019\") " pod="openshift-marketplace/certified-operators-nwsr4" Feb 17 20:11:13 crc kubenswrapper[4793]: E0217 20:11:13.391926 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 20:11:13.891911487 +0000 UTC m=+149.183609898 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8l7nb" (UID: "bdfa45f5-3f15-4f20-823d-17b08bd674d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.394683 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-tktcd" podStartSLOduration=9.394672223 podStartE2EDuration="9.394672223s" podCreationTimestamp="2026-02-17 20:11:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:11:13.388892434 +0000 UTC m=+148.680590755" watchObservedRunningTime="2026-02-17 20:11:13.394672223 +0000 UTC m=+148.686370534" Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.395751 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.405617 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.459659 4793 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mr259" Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.462658 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.491028 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.491532 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.491640 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcv7z\" (UniqueName: \"kubernetes.io/projected/b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019-kube-api-access-dcv7z\") pod \"certified-operators-nwsr4\" (UID: \"b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019\") " pod="openshift-marketplace/certified-operators-nwsr4" Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.491800 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019-utilities\") pod \"certified-operators-nwsr4\" (UID: \"b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019\") " pod="openshift-marketplace/certified-operators-nwsr4" Feb 17 20:11:13 crc 
kubenswrapper[4793]: I0217 20:11:13.491918 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019-catalog-content\") pod \"certified-operators-nwsr4\" (UID: \"b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019\") " pod="openshift-marketplace/certified-operators-nwsr4" Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.492029 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:11:13 crc kubenswrapper[4793]: E0217 20:11:13.492885 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 20:11:13.992870861 +0000 UTC m=+149.284569172 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.494167 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019-utilities\") pod \"certified-operators-nwsr4\" (UID: \"b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019\") " pod="openshift-marketplace/certified-operators-nwsr4" Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.494393 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019-catalog-content\") pod \"certified-operators-nwsr4\" (UID: \"b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019\") " pod="openshift-marketplace/certified-operators-nwsr4" Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.497240 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.497900 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.518098 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dfdp2"] Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.519291 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dfdp2" Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.525904 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.539583 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcv7z\" (UniqueName: \"kubernetes.io/projected/b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019-kube-api-access-dcv7z\") pod \"certified-operators-nwsr4\" (UID: \"b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019\") " pod="openshift-marketplace/certified-operators-nwsr4" Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.552575 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dfdp2"] Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.592926 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb" Feb 17 20:11:13 crc kubenswrapper[4793]: E0217 20:11:13.593495 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-17 20:11:14.093478728 +0000 UTC m=+149.385177039 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8l7nb" (UID: "bdfa45f5-3f15-4f20-823d-17b08bd674d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.593808 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aafddcd-6479-40b9-95a0-fa07713b5068-catalog-content\") pod \"community-operators-dfdp2\" (UID: \"6aafddcd-6479-40b9-95a0-fa07713b5068\") " pod="openshift-marketplace/community-operators-dfdp2" Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.594001 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxf4r\" (UniqueName: \"kubernetes.io/projected/6aafddcd-6479-40b9-95a0-fa07713b5068-kube-api-access-gxf4r\") pod \"community-operators-dfdp2\" (UID: \"6aafddcd-6479-40b9-95a0-fa07713b5068\") " pod="openshift-marketplace/community-operators-dfdp2" Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.594116 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aafddcd-6479-40b9-95a0-fa07713b5068-utilities\") pod \"community-operators-dfdp2\" (UID: \"6aafddcd-6479-40b9-95a0-fa07713b5068\") " pod="openshift-marketplace/community-operators-dfdp2" Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.631870 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nwsr4" Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.695195 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.695725 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aafddcd-6479-40b9-95a0-fa07713b5068-catalog-content\") pod \"community-operators-dfdp2\" (UID: \"6aafddcd-6479-40b9-95a0-fa07713b5068\") " pod="openshift-marketplace/community-operators-dfdp2" Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.695767 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxf4r\" (UniqueName: \"kubernetes.io/projected/6aafddcd-6479-40b9-95a0-fa07713b5068-kube-api-access-gxf4r\") pod \"community-operators-dfdp2\" (UID: \"6aafddcd-6479-40b9-95a0-fa07713b5068\") " pod="openshift-marketplace/community-operators-dfdp2" Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.695795 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aafddcd-6479-40b9-95a0-fa07713b5068-utilities\") pod \"community-operators-dfdp2\" (UID: \"6aafddcd-6479-40b9-95a0-fa07713b5068\") " pod="openshift-marketplace/community-operators-dfdp2" Feb 17 20:11:13 crc kubenswrapper[4793]: E0217 20:11:13.695894 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-17 20:11:14.195853557 +0000 UTC m=+149.487551868 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.696158 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aafddcd-6479-40b9-95a0-fa07713b5068-utilities\") pod \"community-operators-dfdp2\" (UID: \"6aafddcd-6479-40b9-95a0-fa07713b5068\") " pod="openshift-marketplace/community-operators-dfdp2" Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.697036 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aafddcd-6479-40b9-95a0-fa07713b5068-catalog-content\") pod \"community-operators-dfdp2\" (UID: \"6aafddcd-6479-40b9-95a0-fa07713b5068\") " pod="openshift-marketplace/community-operators-dfdp2" Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.714785 4793 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-17T20:11:13.384784166Z","Handler":null,"Name":""} Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.726447 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cpdmp"] Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.737984 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cpdmp"] Feb 17 20:11:13 crc 
kubenswrapper[4793]: I0217 20:11:13.740498 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cpdmp" Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.741840 4793 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.741874 4793 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.747628 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxf4r\" (UniqueName: \"kubernetes.io/projected/6aafddcd-6479-40b9-95a0-fa07713b5068-kube-api-access-gxf4r\") pod \"community-operators-dfdp2\" (UID: \"6aafddcd-6479-40b9-95a0-fa07713b5068\") " pod="openshift-marketplace/community-operators-dfdp2" Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.755178 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.785459 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.797556 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb" Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.797634 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s226d\" (UniqueName: \"kubernetes.io/projected/a94875ee-0253-456e-a8c8-68be4676bb88-kube-api-access-s226d\") pod \"certified-operators-cpdmp\" (UID: \"a94875ee-0253-456e-a8c8-68be4676bb88\") " pod="openshift-marketplace/certified-operators-cpdmp" Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.797703 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a94875ee-0253-456e-a8c8-68be4676bb88-catalog-content\") pod \"certified-operators-cpdmp\" (UID: \"a94875ee-0253-456e-a8c8-68be4676bb88\") " pod="openshift-marketplace/certified-operators-cpdmp" Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.797763 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a94875ee-0253-456e-a8c8-68be4676bb88-utilities\") pod \"certified-operators-cpdmp\" (UID: \"a94875ee-0253-456e-a8c8-68be4676bb88\") " pod="openshift-marketplace/certified-operators-cpdmp" Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.804017 4793 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.804064 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb" Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.867972 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dfdp2" Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.900023 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a94875ee-0253-456e-a8c8-68be4676bb88-utilities\") pod \"certified-operators-cpdmp\" (UID: \"a94875ee-0253-456e-a8c8-68be4676bb88\") " pod="openshift-marketplace/certified-operators-cpdmp" Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.912899 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s226d\" (UniqueName: \"kubernetes.io/projected/a94875ee-0253-456e-a8c8-68be4676bb88-kube-api-access-s226d\") pod \"certified-operators-cpdmp\" (UID: \"a94875ee-0253-456e-a8c8-68be4676bb88\") " pod="openshift-marketplace/certified-operators-cpdmp" Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.912971 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a94875ee-0253-456e-a8c8-68be4676bb88-catalog-content\") pod \"certified-operators-cpdmp\" (UID: \"a94875ee-0253-456e-a8c8-68be4676bb88\") " 
pod="openshift-marketplace/certified-operators-cpdmp" Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.913348 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a94875ee-0253-456e-a8c8-68be4676bb88-catalog-content\") pod \"certified-operators-cpdmp\" (UID: \"a94875ee-0253-456e-a8c8-68be4676bb88\") " pod="openshift-marketplace/certified-operators-cpdmp" Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.900530 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a94875ee-0253-456e-a8c8-68be4676bb88-utilities\") pod \"certified-operators-cpdmp\" (UID: \"a94875ee-0253-456e-a8c8-68be4676bb88\") " pod="openshift-marketplace/certified-operators-cpdmp" Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.905046 4793 patch_prober.go:28] interesting pod/router-default-5444994796-qcpml container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 20:11:13 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld Feb 17 20:11:13 crc kubenswrapper[4793]: [+]process-running ok Feb 17 20:11:13 crc kubenswrapper[4793]: healthz check failed Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.913619 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qcpml" podUID="32dbc133-34ed-449e-a397-ff3f0b83418c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.926133 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ckt4v"] Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.927715 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ckt4v"
Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.949030 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ckt4v"]
Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.960431 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s226d\" (UniqueName: \"kubernetes.io/projected/a94875ee-0253-456e-a8c8-68be4676bb88-kube-api-access-s226d\") pod \"certified-operators-cpdmp\" (UID: \"a94875ee-0253-456e-a8c8-68be4676bb88\") " pod="openshift-marketplace/certified-operators-cpdmp"
Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.961821 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8l7nb\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb"
Feb 17 20:11:13 crc kubenswrapper[4793]: I0217 20:11:13.989529 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb"
Feb 17 20:11:14 crc kubenswrapper[4793]: I0217 20:11:14.016273 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 20:11:14 crc kubenswrapper[4793]: I0217 20:11:14.016466 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6030c7c6-83d3-4ee2-95de-2c82b9cf2e89-utilities\") pod \"community-operators-ckt4v\" (UID: \"6030c7c6-83d3-4ee2-95de-2c82b9cf2e89\") " pod="openshift-marketplace/community-operators-ckt4v"
Feb 17 20:11:14 crc kubenswrapper[4793]: I0217 20:11:14.016494 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zfbd\" (UniqueName: \"kubernetes.io/projected/6030c7c6-83d3-4ee2-95de-2c82b9cf2e89-kube-api-access-5zfbd\") pod \"community-operators-ckt4v\" (UID: \"6030c7c6-83d3-4ee2-95de-2c82b9cf2e89\") " pod="openshift-marketplace/community-operators-ckt4v"
Feb 17 20:11:14 crc kubenswrapper[4793]: I0217 20:11:14.016519 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6030c7c6-83d3-4ee2-95de-2c82b9cf2e89-catalog-content\") pod \"community-operators-ckt4v\" (UID: \"6030c7c6-83d3-4ee2-95de-2c82b9cf2e89\") " pod="openshift-marketplace/community-operators-ckt4v"
Feb 17 20:11:14 crc kubenswrapper[4793]: I0217 20:11:14.048964 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 17 20:11:14 crc kubenswrapper[4793]: I0217 20:11:14.098581 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cpdmp"
Feb 17 20:11:14 crc kubenswrapper[4793]: I0217 20:11:14.120261 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6030c7c6-83d3-4ee2-95de-2c82b9cf2e89-utilities\") pod \"community-operators-ckt4v\" (UID: \"6030c7c6-83d3-4ee2-95de-2c82b9cf2e89\") " pod="openshift-marketplace/community-operators-ckt4v"
Feb 17 20:11:14 crc kubenswrapper[4793]: I0217 20:11:14.120302 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zfbd\" (UniqueName: \"kubernetes.io/projected/6030c7c6-83d3-4ee2-95de-2c82b9cf2e89-kube-api-access-5zfbd\") pod \"community-operators-ckt4v\" (UID: \"6030c7c6-83d3-4ee2-95de-2c82b9cf2e89\") " pod="openshift-marketplace/community-operators-ckt4v"
Feb 17 20:11:14 crc kubenswrapper[4793]: I0217 20:11:14.120325 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6030c7c6-83d3-4ee2-95de-2c82b9cf2e89-catalog-content\") pod \"community-operators-ckt4v\" (UID: \"6030c7c6-83d3-4ee2-95de-2c82b9cf2e89\") " pod="openshift-marketplace/community-operators-ckt4v"
Feb 17 20:11:14 crc kubenswrapper[4793]: I0217 20:11:14.120771 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6030c7c6-83d3-4ee2-95de-2c82b9cf2e89-catalog-content\") pod \"community-operators-ckt4v\" (UID: \"6030c7c6-83d3-4ee2-95de-2c82b9cf2e89\") " pod="openshift-marketplace/community-operators-ckt4v"
Feb 17 20:11:14 crc kubenswrapper[4793]: I0217 20:11:14.120996 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6030c7c6-83d3-4ee2-95de-2c82b9cf2e89-utilities\") pod \"community-operators-ckt4v\" (UID: \"6030c7c6-83d3-4ee2-95de-2c82b9cf2e89\") " pod="openshift-marketplace/community-operators-ckt4v"
Feb 17 20:11:14 crc kubenswrapper[4793]: I0217 20:11:14.165916 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zfbd\" (UniqueName: \"kubernetes.io/projected/6030c7c6-83d3-4ee2-95de-2c82b9cf2e89-kube-api-access-5zfbd\") pod \"community-operators-ckt4v\" (UID: \"6030c7c6-83d3-4ee2-95de-2c82b9cf2e89\") " pod="openshift-marketplace/community-operators-ckt4v"
Feb 17 20:11:14 crc kubenswrapper[4793]: I0217 20:11:14.239943 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nwsr4"]
Feb 17 20:11:14 crc kubenswrapper[4793]: W0217 20:11:14.268462 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2bb9d71_ebdf_4df2_a5c9_6e4389cd7019.slice/crio-0dd7e7d53db5828d546de29761bf1643868a9d97ed30155227613e2fc03eb779 WatchSource:0}: Error finding container 0dd7e7d53db5828d546de29761bf1643868a9d97ed30155227613e2fc03eb779: Status 404 returned error can't find the container with id 0dd7e7d53db5828d546de29761bf1643868a9d97ed30155227613e2fc03eb779
Feb 17 20:11:14 crc kubenswrapper[4793]: I0217 20:11:14.290891 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ckt4v"
Feb 17 20:11:14 crc kubenswrapper[4793]: I0217 20:11:14.315612 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 17 20:11:14 crc kubenswrapper[4793]: I0217 20:11:14.316224 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 17 20:11:14 crc kubenswrapper[4793]: I0217 20:11:14.337074 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Feb 17 20:11:14 crc kubenswrapper[4793]: I0217 20:11:14.337668 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Feb 17 20:11:14 crc kubenswrapper[4793]: I0217 20:11:14.357719 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 17 20:11:14 crc kubenswrapper[4793]: I0217 20:11:14.400235 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"11bb2806d5bdc4a718ca19dba28e32578cf4628519cac4dac8c814ff4df3e1a3"}
Feb 17 20:11:14 crc kubenswrapper[4793]: I0217 20:11:14.400272 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"75838446ce377261e4c5c90216dad349e0030bb2d3d1d0cfcdbe4c303a361cb8"}
Feb 17 20:11:14 crc kubenswrapper[4793]: I0217 20:11:14.431329 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1610da4d-5427-4047-aaff-2cb9abf1fce6-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1610da4d-5427-4047-aaff-2cb9abf1fce6\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 17 20:11:14 crc kubenswrapper[4793]: I0217 20:11:14.431413 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1610da4d-5427-4047-aaff-2cb9abf1fce6-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1610da4d-5427-4047-aaff-2cb9abf1fce6\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 17 20:11:14 crc kubenswrapper[4793]: I0217 20:11:14.444367 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nwsr4" event={"ID":"b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019","Type":"ContainerStarted","Data":"0dd7e7d53db5828d546de29761bf1643868a9d97ed30155227613e2fc03eb779"}
Feb 17 20:11:14 crc kubenswrapper[4793]: I0217 20:11:14.535516 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1610da4d-5427-4047-aaff-2cb9abf1fce6-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1610da4d-5427-4047-aaff-2cb9abf1fce6\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 17 20:11:14 crc kubenswrapper[4793]: I0217 20:11:14.535623 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1610da4d-5427-4047-aaff-2cb9abf1fce6-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1610da4d-5427-4047-aaff-2cb9abf1fce6\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 17 20:11:14 crc kubenswrapper[4793]: I0217 20:11:14.536344 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1610da4d-5427-4047-aaff-2cb9abf1fce6-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1610da4d-5427-4047-aaff-2cb9abf1fce6\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 17 20:11:14 crc kubenswrapper[4793]: I0217 20:11:14.586867 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1610da4d-5427-4047-aaff-2cb9abf1fce6-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1610da4d-5427-4047-aaff-2cb9abf1fce6\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 17 20:11:14 crc kubenswrapper[4793]: I0217 20:11:14.644622 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dfdp2"]
Feb 17 20:11:14 crc kubenswrapper[4793]: I0217 20:11:14.682142 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 17 20:11:14 crc kubenswrapper[4793]: I0217 20:11:14.849857 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8l7nb"]
Feb 17 20:11:14 crc kubenswrapper[4793]: W0217 20:11:14.866834 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbdfa45f5_3f15_4f20_823d_17b08bd674d7.slice/crio-625a889e00d216fe75df42f545fef37e0cb1e41adf8feb59a4d694b3580b7f11 WatchSource:0}: Error finding container 625a889e00d216fe75df42f545fef37e0cb1e41adf8feb59a4d694b3580b7f11: Status 404 returned error can't find the container with id 625a889e00d216fe75df42f545fef37e0cb1e41adf8feb59a4d694b3580b7f11
Feb 17 20:11:14 crc kubenswrapper[4793]: I0217 20:11:14.886663 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ckt4v"]
Feb 17 20:11:14 crc kubenswrapper[4793]: I0217 20:11:14.903271 4793 patch_prober.go:28] interesting pod/router-default-5444994796-qcpml container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 20:11:14 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld
Feb 17 20:11:14 crc kubenswrapper[4793]: [+]process-running ok
Feb 17 20:11:14 crc kubenswrapper[4793]: healthz check failed
Feb 17 20:11:14 crc kubenswrapper[4793]: I0217 20:11:14.903312 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qcpml" podUID="32dbc133-34ed-449e-a397-ff3f0b83418c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 20:11:14 crc kubenswrapper[4793]: I0217 20:11:14.905857 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cpdmp"]
Feb 17 20:11:14 crc kubenswrapper[4793]: W0217 20:11:14.949210 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-ae048f2a23a1a16db81f8b55a1b273dd0e87d0c4e2e3d42e6c6195ac97c7cc93 WatchSource:0}: Error finding container ae048f2a23a1a16db81f8b55a1b273dd0e87d0c4e2e3d42e6c6195ac97c7cc93: Status 404 returned error can't find the container with id ae048f2a23a1a16db81f8b55a1b273dd0e87d0c4e2e3d42e6c6195ac97c7cc93
Feb 17 20:11:14 crc kubenswrapper[4793]: I0217 20:11:14.949791 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 17 20:11:14 crc kubenswrapper[4793]: W0217 20:11:14.951558 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda94875ee_0253_456e_a8c8_68be4676bb88.slice/crio-0428207f4a3c2dfe256176d1938f384bf32d3948b13f249964b245bb010897d0 WatchSource:0}: Error finding container 0428207f4a3c2dfe256176d1938f384bf32d3948b13f249964b245bb010897d0: Status 404 returned error can't find the container with id 0428207f4a3c2dfe256176d1938f384bf32d3948b13f249964b245bb010897d0
Feb 17 20:11:15 crc kubenswrapper[4793]: I0217 20:11:15.309060 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8fhps"]
Feb 17 20:11:15 crc kubenswrapper[4793]: I0217 20:11:15.310223 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8fhps"
Feb 17 20:11:15 crc kubenswrapper[4793]: I0217 20:11:15.316608 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 17 20:11:15 crc kubenswrapper[4793]: I0217 20:11:15.335267 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8fhps"]
Feb 17 20:11:15 crc kubenswrapper[4793]: I0217 20:11:15.349297 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b3a754f-730e-4e58-a6d0-fac36125b7f2-catalog-content\") pod \"redhat-marketplace-8fhps\" (UID: \"7b3a754f-730e-4e58-a6d0-fac36125b7f2\") " pod="openshift-marketplace/redhat-marketplace-8fhps"
Feb 17 20:11:15 crc kubenswrapper[4793]: I0217 20:11:15.349350 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44v4c\" (UniqueName: \"kubernetes.io/projected/7b3a754f-730e-4e58-a6d0-fac36125b7f2-kube-api-access-44v4c\") pod \"redhat-marketplace-8fhps\" (UID: \"7b3a754f-730e-4e58-a6d0-fac36125b7f2\") " pod="openshift-marketplace/redhat-marketplace-8fhps"
Feb 17 20:11:15 crc kubenswrapper[4793]: I0217 20:11:15.349507 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b3a754f-730e-4e58-a6d0-fac36125b7f2-utilities\") pod \"redhat-marketplace-8fhps\" (UID: \"7b3a754f-730e-4e58-a6d0-fac36125b7f2\") " pod="openshift-marketplace/redhat-marketplace-8fhps"
Feb 17 20:11:15 crc kubenswrapper[4793]: I0217 20:11:15.449786 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"bb066f2ce985eb38878c45f9133f6075fb86bdbf806ad4393d3f0aa8456ca332"}
Feb 17 20:11:15 crc kubenswrapper[4793]: I0217 20:11:15.449832 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"abb3dc5b6b1233c84fa17e6c43cc59ac268f1fcb2a75abd001bee18e9336f0d0"}
Feb 17 20:11:15 crc kubenswrapper[4793]: I0217 20:11:15.450716 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 20:11:15 crc kubenswrapper[4793]: I0217 20:11:15.451208 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b3a754f-730e-4e58-a6d0-fac36125b7f2-catalog-content\") pod \"redhat-marketplace-8fhps\" (UID: \"7b3a754f-730e-4e58-a6d0-fac36125b7f2\") " pod="openshift-marketplace/redhat-marketplace-8fhps"
Feb 17 20:11:15 crc kubenswrapper[4793]: I0217 20:11:15.451246 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44v4c\" (UniqueName: \"kubernetes.io/projected/7b3a754f-730e-4e58-a6d0-fac36125b7f2-kube-api-access-44v4c\") pod \"redhat-marketplace-8fhps\" (UID: \"7b3a754f-730e-4e58-a6d0-fac36125b7f2\") " pod="openshift-marketplace/redhat-marketplace-8fhps"
Feb 17 20:11:15 crc kubenswrapper[4793]: I0217 20:11:15.451305 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b3a754f-730e-4e58-a6d0-fac36125b7f2-utilities\") pod \"redhat-marketplace-8fhps\" (UID: \"7b3a754f-730e-4e58-a6d0-fac36125b7f2\") " pod="openshift-marketplace/redhat-marketplace-8fhps"
Feb 17 20:11:15 crc kubenswrapper[4793]: I0217 20:11:15.452165 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b3a754f-730e-4e58-a6d0-fac36125b7f2-utilities\") pod \"redhat-marketplace-8fhps\" (UID: \"7b3a754f-730e-4e58-a6d0-fac36125b7f2\") " pod="openshift-marketplace/redhat-marketplace-8fhps"
Feb 17 20:11:15 crc kubenswrapper[4793]: I0217 20:11:15.452275 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b3a754f-730e-4e58-a6d0-fac36125b7f2-catalog-content\") pod \"redhat-marketplace-8fhps\" (UID: \"7b3a754f-730e-4e58-a6d0-fac36125b7f2\") " pod="openshift-marketplace/redhat-marketplace-8fhps"
Feb 17 20:11:15 crc kubenswrapper[4793]: I0217 20:11:15.455137 4793 generic.go:334] "Generic (PLEG): container finished" podID="b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019" containerID="95624e64e3c2e2ecdeeca44476db690e798d981c6d13eb60f957553355c0b582" exitCode=0
Feb 17 20:11:15 crc kubenswrapper[4793]: I0217 20:11:15.455211 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nwsr4" event={"ID":"b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019","Type":"ContainerDied","Data":"95624e64e3c2e2ecdeeca44476db690e798d981c6d13eb60f957553355c0b582"}
Feb 17 20:11:15 crc kubenswrapper[4793]: I0217 20:11:15.456748 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 17 20:11:15 crc kubenswrapper[4793]: I0217 20:11:15.457764 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb" event={"ID":"bdfa45f5-3f15-4f20-823d-17b08bd674d7","Type":"ContainerStarted","Data":"56d1ae8a8a71ba2f8bbc211db7f4b84847bebd19b8f1abc293de91298c8f2a00"}
Feb 17 20:11:15 crc kubenswrapper[4793]: I0217 20:11:15.457802 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb" event={"ID":"bdfa45f5-3f15-4f20-823d-17b08bd674d7","Type":"ContainerStarted","Data":"625a889e00d216fe75df42f545fef37e0cb1e41adf8feb59a4d694b3580b7f11"}
Feb 17 20:11:15 crc kubenswrapper[4793]: I0217 20:11:15.458275 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb"
Feb 17 20:11:15 crc kubenswrapper[4793]: I0217 20:11:15.460743 4793 generic.go:334] "Generic (PLEG): container finished" podID="6aafddcd-6479-40b9-95a0-fa07713b5068" containerID="487da2a6f5d123218e4d22f7dbe00eb79bf2002ef418b909216a21a28250344b" exitCode=0
Feb 17 20:11:15 crc kubenswrapper[4793]: I0217 20:11:15.460798 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dfdp2" event={"ID":"6aafddcd-6479-40b9-95a0-fa07713b5068","Type":"ContainerDied","Data":"487da2a6f5d123218e4d22f7dbe00eb79bf2002ef418b909216a21a28250344b"}
Feb 17 20:11:15 crc kubenswrapper[4793]: I0217 20:11:15.460822 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dfdp2" event={"ID":"6aafddcd-6479-40b9-95a0-fa07713b5068","Type":"ContainerStarted","Data":"2e605f9d39725864f89910a07baaf388dd83bfa0450b93377d9c4aaba3f7ee97"}
Feb 17 20:11:15 crc kubenswrapper[4793]: I0217 20:11:15.464368 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"3dade01e55c9c79cf7e55fadd0a04d2bfc1db85330eddf1a89acd51059cb4d5e"}
Feb 17 20:11:15 crc kubenswrapper[4793]: I0217 20:11:15.464411 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"ae048f2a23a1a16db81f8b55a1b273dd0e87d0c4e2e3d42e6c6195ac97c7cc93"}
Feb 17 20:11:15 crc kubenswrapper[4793]: I0217 20:11:15.469530 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"1610da4d-5427-4047-aaff-2cb9abf1fce6","Type":"ContainerStarted","Data":"859b2b5b26f2e27c4707b55d56ccb1ce5147879aafef2b07c0ab7d9d216c3b0e"}
Feb 17 20:11:15 crc kubenswrapper[4793]: I0217 20:11:15.469574 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"1610da4d-5427-4047-aaff-2cb9abf1fce6","Type":"ContainerStarted","Data":"9cc0529fb3c945e892eb1c5f1930baba30d77796027f424cf2b327df8515afd8"}
Feb 17 20:11:15 crc kubenswrapper[4793]: I0217 20:11:15.475439 4793 generic.go:334] "Generic (PLEG): container finished" podID="a94875ee-0253-456e-a8c8-68be4676bb88" containerID="bced0582662ca5e60765b36b7fff3d0b32bc0c5a67218101e3c4cb5062bc142d" exitCode=0
Feb 17 20:11:15 crc kubenswrapper[4793]: I0217 20:11:15.475521 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cpdmp" event={"ID":"a94875ee-0253-456e-a8c8-68be4676bb88","Type":"ContainerDied","Data":"bced0582662ca5e60765b36b7fff3d0b32bc0c5a67218101e3c4cb5062bc142d"}
Feb 17 20:11:15 crc kubenswrapper[4793]: I0217 20:11:15.475947 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cpdmp" event={"ID":"a94875ee-0253-456e-a8c8-68be4676bb88","Type":"ContainerStarted","Data":"0428207f4a3c2dfe256176d1938f384bf32d3948b13f249964b245bb010897d0"}
Feb 17 20:11:15 crc kubenswrapper[4793]: I0217 20:11:15.478425 4793 generic.go:334] "Generic (PLEG): container finished" podID="6030c7c6-83d3-4ee2-95de-2c82b9cf2e89" containerID="068922ab91b0fd1713c6f44153705ca73c06a61ef5bf2511624397c1be20e509" exitCode=0
Feb 17 20:11:15 crc kubenswrapper[4793]: I0217 20:11:15.478477 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ckt4v" event={"ID":"6030c7c6-83d3-4ee2-95de-2c82b9cf2e89","Type":"ContainerDied","Data":"068922ab91b0fd1713c6f44153705ca73c06a61ef5bf2511624397c1be20e509"}
Feb 17 20:11:15 crc kubenswrapper[4793]: I0217 20:11:15.478512 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ckt4v" event={"ID":"6030c7c6-83d3-4ee2-95de-2c82b9cf2e89","Type":"ContainerStarted","Data":"2f108651de56abfc35f97178e34180f791b29aaa1cf5ed2e3434bef525f80954"}
Feb 17 20:11:15 crc kubenswrapper[4793]: I0217 20:11:15.483516 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44v4c\" (UniqueName: \"kubernetes.io/projected/7b3a754f-730e-4e58-a6d0-fac36125b7f2-kube-api-access-44v4c\") pod \"redhat-marketplace-8fhps\" (UID: \"7b3a754f-730e-4e58-a6d0-fac36125b7f2\") " pod="openshift-marketplace/redhat-marketplace-8fhps"
Feb 17 20:11:15 crc kubenswrapper[4793]: I0217 20:11:15.497471 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb" podStartSLOduration=127.497451416 podStartE2EDuration="2m7.497451416s" podCreationTimestamp="2026-02-17 20:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:11:15.495747285 +0000 UTC m=+150.787445596" watchObservedRunningTime="2026-02-17 20:11:15.497451416 +0000 UTC m=+150.789149727"
Feb 17 20:11:15 crc kubenswrapper[4793]: I0217 20:11:15.548502 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Feb 17 20:11:15 crc kubenswrapper[4793]: I0217 20:11:15.609386 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=1.609367765 podStartE2EDuration="1.609367765s" podCreationTimestamp="2026-02-17 20:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:11:15.608837152 +0000 UTC m=+150.900535463" watchObservedRunningTime="2026-02-17 20:11:15.609367765 +0000 UTC m=+150.901066076"
Feb 17 20:11:15 crc kubenswrapper[4793]: I0217 20:11:15.624797 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8fhps"
Feb 17 20:11:15 crc kubenswrapper[4793]: I0217 20:11:15.704731 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xr48d"]
Feb 17 20:11:15 crc kubenswrapper[4793]: I0217 20:11:15.705893 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xr48d"
Feb 17 20:11:15 crc kubenswrapper[4793]: I0217 20:11:15.724719 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xr48d"]
Feb 17 20:11:15 crc kubenswrapper[4793]: I0217 20:11:15.758365 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e998269d-55e1-4dd7-b050-08983af268e6-utilities\") pod \"redhat-marketplace-xr48d\" (UID: \"e998269d-55e1-4dd7-b050-08983af268e6\") " pod="openshift-marketplace/redhat-marketplace-xr48d"
Feb 17 20:11:15 crc kubenswrapper[4793]: I0217 20:11:15.758771 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twg64\" (UniqueName: \"kubernetes.io/projected/e998269d-55e1-4dd7-b050-08983af268e6-kube-api-access-twg64\") pod \"redhat-marketplace-xr48d\" (UID: \"e998269d-55e1-4dd7-b050-08983af268e6\") " pod="openshift-marketplace/redhat-marketplace-xr48d"
Feb 17 20:11:15 crc kubenswrapper[4793]: I0217 20:11:15.758815 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e998269d-55e1-4dd7-b050-08983af268e6-catalog-content\") pod \"redhat-marketplace-xr48d\" (UID: \"e998269d-55e1-4dd7-b050-08983af268e6\") " pod="openshift-marketplace/redhat-marketplace-xr48d"
Feb 17 20:11:15 crc kubenswrapper[4793]: I0217 20:11:15.860484 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e998269d-55e1-4dd7-b050-08983af268e6-utilities\") pod \"redhat-marketplace-xr48d\" (UID: \"e998269d-55e1-4dd7-b050-08983af268e6\") " pod="openshift-marketplace/redhat-marketplace-xr48d"
Feb 17 20:11:15 crc kubenswrapper[4793]: I0217 20:11:15.860551 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twg64\" (UniqueName: \"kubernetes.io/projected/e998269d-55e1-4dd7-b050-08983af268e6-kube-api-access-twg64\") pod \"redhat-marketplace-xr48d\" (UID: \"e998269d-55e1-4dd7-b050-08983af268e6\") " pod="openshift-marketplace/redhat-marketplace-xr48d"
Feb 17 20:11:15 crc kubenswrapper[4793]: I0217 20:11:15.860601 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e998269d-55e1-4dd7-b050-08983af268e6-catalog-content\") pod \"redhat-marketplace-xr48d\" (UID: \"e998269d-55e1-4dd7-b050-08983af268e6\") " pod="openshift-marketplace/redhat-marketplace-xr48d"
Feb 17 20:11:15 crc kubenswrapper[4793]: I0217 20:11:15.861016 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e998269d-55e1-4dd7-b050-08983af268e6-utilities\") pod \"redhat-marketplace-xr48d\" (UID: \"e998269d-55e1-4dd7-b050-08983af268e6\") " pod="openshift-marketplace/redhat-marketplace-xr48d"
Feb 17 20:11:15 crc kubenswrapper[4793]: I0217 20:11:15.861076 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e998269d-55e1-4dd7-b050-08983af268e6-catalog-content\") pod \"redhat-marketplace-xr48d\" (UID: \"e998269d-55e1-4dd7-b050-08983af268e6\") " pod="openshift-marketplace/redhat-marketplace-xr48d"
Feb 17 20:11:15 crc kubenswrapper[4793]: I0217 20:11:15.878225 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twg64\" (UniqueName: \"kubernetes.io/projected/e998269d-55e1-4dd7-b050-08983af268e6-kube-api-access-twg64\") pod \"redhat-marketplace-xr48d\" (UID: \"e998269d-55e1-4dd7-b050-08983af268e6\") " pod="openshift-marketplace/redhat-marketplace-xr48d"
Feb 17 20:11:15 crc kubenswrapper[4793]: I0217 20:11:15.899987 4793 patch_prober.go:28] interesting pod/router-default-5444994796-qcpml container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 20:11:15 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld
Feb 17 20:11:15 crc kubenswrapper[4793]: [+]process-running ok
Feb 17 20:11:15 crc kubenswrapper[4793]: healthz check failed
Feb 17 20:11:15 crc kubenswrapper[4793]: I0217 20:11:15.900041 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qcpml" podUID="32dbc133-34ed-449e-a397-ff3f0b83418c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 20:11:16 crc kubenswrapper[4793]: I0217 20:11:16.030622 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xr48d"
Feb 17 20:11:16 crc kubenswrapper[4793]: I0217 20:11:16.062334 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8fhps"]
Feb 17 20:11:16 crc kubenswrapper[4793]: W0217 20:11:16.083741 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b3a754f_730e_4e58_a6d0_fac36125b7f2.slice/crio-5d554ef1e45dd078fc98ecbf789c6892ad08fde6e1d2bbd6938630f63ee8d5c7 WatchSource:0}: Error finding container 5d554ef1e45dd078fc98ecbf789c6892ad08fde6e1d2bbd6938630f63ee8d5c7: Status 404 returned error can't find the container with id 5d554ef1e45dd078fc98ecbf789c6892ad08fde6e1d2bbd6938630f63ee8d5c7
Feb 17 20:11:16 crc kubenswrapper[4793]: I0217 20:11:16.259771 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xr48d"]
Feb 17 20:11:16 crc kubenswrapper[4793]: W0217 20:11:16.273851 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode998269d_55e1_4dd7_b050_08983af268e6.slice/crio-34aa63d3270bf9f1de1b0dc9c00170be6af90b542a03c8c797de3011c852758e WatchSource:0}: Error finding container 34aa63d3270bf9f1de1b0dc9c00170be6af90b542a03c8c797de3011c852758e: Status 404 returned error can't find the container with id 34aa63d3270bf9f1de1b0dc9c00170be6af90b542a03c8c797de3011c852758e
Feb 17 20:11:16 crc kubenswrapper[4793]: I0217 20:11:16.531643 4793 generic.go:334] "Generic (PLEG): container finished" podID="1610da4d-5427-4047-aaff-2cb9abf1fce6" containerID="859b2b5b26f2e27c4707b55d56ccb1ce5147879aafef2b07c0ab7d9d216c3b0e" exitCode=0
Feb 17 20:11:16 crc kubenswrapper[4793]: I0217 20:11:16.531893 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"1610da4d-5427-4047-aaff-2cb9abf1fce6","Type":"ContainerDied","Data":"859b2b5b26f2e27c4707b55d56ccb1ce5147879aafef2b07c0ab7d9d216c3b0e"}
Feb 17 20:11:16 crc kubenswrapper[4793]: I0217 20:11:16.540430 4793 generic.go:334] "Generic (PLEG): container finished" podID="82552c6f-59af-4d20-97ca-82384997434e" containerID="2d5c8abc676df6c99749c8d4d08bb36673b9ee955a6d1ad588580b8068f05b80" exitCode=0
Feb 17 20:11:16 crc kubenswrapper[4793]: I0217 20:11:16.540503 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522640-pzkfn" event={"ID":"82552c6f-59af-4d20-97ca-82384997434e","Type":"ContainerDied","Data":"2d5c8abc676df6c99749c8d4d08bb36673b9ee955a6d1ad588580b8068f05b80"}
Feb 17 20:11:16 crc kubenswrapper[4793]: I0217 20:11:16.548164 4793 generic.go:334] "Generic (PLEG): container finished" podID="7b3a754f-730e-4e58-a6d0-fac36125b7f2" containerID="2f696391f11f8b7ede78321554662b55a76ce7e9c71927d7a82c0ccaf4e32b76" exitCode=0
Feb 17 20:11:16 crc kubenswrapper[4793]: I0217 20:11:16.548271 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8fhps" event={"ID":"7b3a754f-730e-4e58-a6d0-fac36125b7f2","Type":"ContainerDied","Data":"2f696391f11f8b7ede78321554662b55a76ce7e9c71927d7a82c0ccaf4e32b76"}
Feb 17 20:11:16 crc kubenswrapper[4793]: I0217 20:11:16.548303 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8fhps" event={"ID":"7b3a754f-730e-4e58-a6d0-fac36125b7f2","Type":"ContainerStarted","Data":"5d554ef1e45dd078fc98ecbf789c6892ad08fde6e1d2bbd6938630f63ee8d5c7"}
Feb 17 20:11:16 crc kubenswrapper[4793]: I0217 20:11:16.567512 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xr48d" event={"ID":"e998269d-55e1-4dd7-b050-08983af268e6","Type":"ContainerStarted","Data":"34aa63d3270bf9f1de1b0dc9c00170be6af90b542a03c8c797de3011c852758e"}
Feb 17 20:11:16 crc kubenswrapper[4793]: E0217 20:11:16.650131 4793 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode998269d_55e1_4dd7_b050_08983af268e6.slice/crio-4c5727744dd6df074b2f021a1c72252f1f4d3cc7ae3c6e3e60af2b7714f7a3f7.scope\": RecentStats: unable to find data in memory cache]"
Feb 17 20:11:16 crc kubenswrapper[4793]: I0217 20:11:16.706502 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4x565"]
Feb 17 20:11:16 crc kubenswrapper[4793]: I0217 20:11:16.708478 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4x565"
Feb 17 20:11:16 crc kubenswrapper[4793]: I0217 20:11:16.711591 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 17 20:11:16 crc kubenswrapper[4793]: I0217 20:11:16.724016 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4x565"]
Feb 17 20:11:16 crc kubenswrapper[4793]: I0217 20:11:16.778785 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19a7ae66-8a05-4413-8085-8455f146e98c-catalog-content\") pod \"redhat-operators-4x565\" (UID: \"19a7ae66-8a05-4413-8085-8455f146e98c\") " pod="openshift-marketplace/redhat-operators-4x565"
Feb 17 20:11:16 crc kubenswrapper[4793]: I0217 20:11:16.778856 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szq5j\" (UniqueName: \"kubernetes.io/projected/19a7ae66-8a05-4413-8085-8455f146e98c-kube-api-access-szq5j\") pod \"redhat-operators-4x565\" (UID: \"19a7ae66-8a05-4413-8085-8455f146e98c\") " pod="openshift-marketplace/redhat-operators-4x565"
Feb 17 20:11:16 crc kubenswrapper[4793]: I0217 20:11:16.778892 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19a7ae66-8a05-4413-8085-8455f146e98c-utilities\") pod \"redhat-operators-4x565\" (UID: \"19a7ae66-8a05-4413-8085-8455f146e98c\") " pod="openshift-marketplace/redhat-operators-4x565"
Feb 17 20:11:16 crc kubenswrapper[4793]: I0217 20:11:16.887027 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szq5j\" (UniqueName: \"kubernetes.io/projected/19a7ae66-8a05-4413-8085-8455f146e98c-kube-api-access-szq5j\") pod \"redhat-operators-4x565\" (UID: \"19a7ae66-8a05-4413-8085-8455f146e98c\") " pod="openshift-marketplace/redhat-operators-4x565"
Feb 17 20:11:16 crc kubenswrapper[4793]: I0217 20:11:16.887083 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19a7ae66-8a05-4413-8085-8455f146e98c-utilities\") pod \"redhat-operators-4x565\" (UID: \"19a7ae66-8a05-4413-8085-8455f146e98c\") " pod="openshift-marketplace/redhat-operators-4x565"
Feb 17 20:11:16 crc kubenswrapper[4793]: I0217 20:11:16.887124 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19a7ae66-8a05-4413-8085-8455f146e98c-catalog-content\") pod \"redhat-operators-4x565\" (UID: \"19a7ae66-8a05-4413-8085-8455f146e98c\") " pod="openshift-marketplace/redhat-operators-4x565"
Feb 17 20:11:16 crc kubenswrapper[4793]: I0217 20:11:16.887565 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19a7ae66-8a05-4413-8085-8455f146e98c-catalog-content\") pod \"redhat-operators-4x565\" (UID: \"19a7ae66-8a05-4413-8085-8455f146e98c\") " pod="openshift-marketplace/redhat-operators-4x565"
Feb 17 20:11:16 crc kubenswrapper[4793]: I0217 20:11:16.888057 4793
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19a7ae66-8a05-4413-8085-8455f146e98c-utilities\") pod \"redhat-operators-4x565\" (UID: \"19a7ae66-8a05-4413-8085-8455f146e98c\") " pod="openshift-marketplace/redhat-operators-4x565" Feb 17 20:11:16 crc kubenswrapper[4793]: I0217 20:11:16.900127 4793 patch_prober.go:28] interesting pod/router-default-5444994796-qcpml container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 20:11:16 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld Feb 17 20:11:16 crc kubenswrapper[4793]: [+]process-running ok Feb 17 20:11:16 crc kubenswrapper[4793]: healthz check failed Feb 17 20:11:16 crc kubenswrapper[4793]: I0217 20:11:16.900365 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qcpml" podUID="32dbc133-34ed-449e-a397-ff3f0b83418c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 20:11:16 crc kubenswrapper[4793]: I0217 20:11:16.905839 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szq5j\" (UniqueName: \"kubernetes.io/projected/19a7ae66-8a05-4413-8085-8455f146e98c-kube-api-access-szq5j\") pod \"redhat-operators-4x565\" (UID: \"19a7ae66-8a05-4413-8085-8455f146e98c\") " pod="openshift-marketplace/redhat-operators-4x565" Feb 17 20:11:17 crc kubenswrapper[4793]: I0217 20:11:17.030831 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4x565" Feb 17 20:11:17 crc kubenswrapper[4793]: I0217 20:11:17.115632 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gm5th"] Feb 17 20:11:17 crc kubenswrapper[4793]: I0217 20:11:17.117831 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gm5th" Feb 17 20:11:17 crc kubenswrapper[4793]: I0217 20:11:17.117943 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-wr4p8" Feb 17 20:11:17 crc kubenswrapper[4793]: I0217 20:11:17.117982 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-wr4p8" Feb 17 20:11:17 crc kubenswrapper[4793]: I0217 20:11:17.120025 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gm5th"] Feb 17 20:11:17 crc kubenswrapper[4793]: I0217 20:11:17.125816 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-wr4p8" Feb 17 20:11:17 crc kubenswrapper[4793]: I0217 20:11:17.191147 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0eded1f-cb89-4a85-9e58-59ccef72584b-utilities\") pod \"redhat-operators-gm5th\" (UID: \"e0eded1f-cb89-4a85-9e58-59ccef72584b\") " pod="openshift-marketplace/redhat-operators-gm5th" Feb 17 20:11:17 crc kubenswrapper[4793]: I0217 20:11:17.191352 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0eded1f-cb89-4a85-9e58-59ccef72584b-catalog-content\") pod \"redhat-operators-gm5th\" (UID: \"e0eded1f-cb89-4a85-9e58-59ccef72584b\") " pod="openshift-marketplace/redhat-operators-gm5th" Feb 17 20:11:17 crc kubenswrapper[4793]: I0217 20:11:17.191410 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wtm9\" (UniqueName: \"kubernetes.io/projected/e0eded1f-cb89-4a85-9e58-59ccef72584b-kube-api-access-7wtm9\") pod \"redhat-operators-gm5th\" (UID: \"e0eded1f-cb89-4a85-9e58-59ccef72584b\") " 
pod="openshift-marketplace/redhat-operators-gm5th" Feb 17 20:11:17 crc kubenswrapper[4793]: I0217 20:11:17.279611 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-br8vj" Feb 17 20:11:17 crc kubenswrapper[4793]: I0217 20:11:17.279661 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-br8vj" Feb 17 20:11:17 crc kubenswrapper[4793]: I0217 20:11:17.281387 4793 patch_prober.go:28] interesting pod/console-f9d7485db-br8vj container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.21:8443/health\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Feb 17 20:11:17 crc kubenswrapper[4793]: I0217 20:11:17.281456 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-br8vj" podUID="4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c" containerName="console" probeResult="failure" output="Get \"https://10.217.0.21:8443/health\": dial tcp 10.217.0.21:8443: connect: connection refused" Feb 17 20:11:17 crc kubenswrapper[4793]: I0217 20:11:17.292418 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0eded1f-cb89-4a85-9e58-59ccef72584b-catalog-content\") pod \"redhat-operators-gm5th\" (UID: \"e0eded1f-cb89-4a85-9e58-59ccef72584b\") " pod="openshift-marketplace/redhat-operators-gm5th" Feb 17 20:11:17 crc kubenswrapper[4793]: I0217 20:11:17.292460 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wtm9\" (UniqueName: \"kubernetes.io/projected/e0eded1f-cb89-4a85-9e58-59ccef72584b-kube-api-access-7wtm9\") pod \"redhat-operators-gm5th\" (UID: \"e0eded1f-cb89-4a85-9e58-59ccef72584b\") " pod="openshift-marketplace/redhat-operators-gm5th" Feb 17 20:11:17 crc kubenswrapper[4793]: I0217 20:11:17.292510 4793 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0eded1f-cb89-4a85-9e58-59ccef72584b-utilities\") pod \"redhat-operators-gm5th\" (UID: \"e0eded1f-cb89-4a85-9e58-59ccef72584b\") " pod="openshift-marketplace/redhat-operators-gm5th" Feb 17 20:11:17 crc kubenswrapper[4793]: I0217 20:11:17.293024 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-knrjc container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Feb 17 20:11:17 crc kubenswrapper[4793]: I0217 20:11:17.293140 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-knrjc" podUID="0ccc2b73-aefc-46a4-b1bd-bf0b09d425d8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Feb 17 20:11:17 crc kubenswrapper[4793]: I0217 20:11:17.293311 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-knrjc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Feb 17 20:11:17 crc kubenswrapper[4793]: I0217 20:11:17.293333 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-knrjc" podUID="0ccc2b73-aefc-46a4-b1bd-bf0b09d425d8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Feb 17 20:11:17 crc kubenswrapper[4793]: I0217 20:11:17.293755 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0eded1f-cb89-4a85-9e58-59ccef72584b-utilities\") pod \"redhat-operators-gm5th\" (UID: \"e0eded1f-cb89-4a85-9e58-59ccef72584b\") 
" pod="openshift-marketplace/redhat-operators-gm5th" Feb 17 20:11:17 crc kubenswrapper[4793]: I0217 20:11:17.294112 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0eded1f-cb89-4a85-9e58-59ccef72584b-catalog-content\") pod \"redhat-operators-gm5th\" (UID: \"e0eded1f-cb89-4a85-9e58-59ccef72584b\") " pod="openshift-marketplace/redhat-operators-gm5th" Feb 17 20:11:17 crc kubenswrapper[4793]: I0217 20:11:17.309576 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wtm9\" (UniqueName: \"kubernetes.io/projected/e0eded1f-cb89-4a85-9e58-59ccef72584b-kube-api-access-7wtm9\") pod \"redhat-operators-gm5th\" (UID: \"e0eded1f-cb89-4a85-9e58-59ccef72584b\") " pod="openshift-marketplace/redhat-operators-gm5th" Feb 17 20:11:17 crc kubenswrapper[4793]: I0217 20:11:17.386023 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4x565"] Feb 17 20:11:17 crc kubenswrapper[4793]: W0217 20:11:17.395408 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19a7ae66_8a05_4413_8085_8455f146e98c.slice/crio-1cde82889c8aa70d6bed748c312fc57e9cb2f18f83560014e3ba80a0b3dd1aff WatchSource:0}: Error finding container 1cde82889c8aa70d6bed748c312fc57e9cb2f18f83560014e3ba80a0b3dd1aff: Status 404 returned error can't find the container with id 1cde82889c8aa70d6bed748c312fc57e9cb2f18f83560014e3ba80a0b3dd1aff Feb 17 20:11:17 crc kubenswrapper[4793]: I0217 20:11:17.478866 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gm5th" Feb 17 20:11:17 crc kubenswrapper[4793]: I0217 20:11:17.579268 4793 generic.go:334] "Generic (PLEG): container finished" podID="e998269d-55e1-4dd7-b050-08983af268e6" containerID="4c5727744dd6df074b2f021a1c72252f1f4d3cc7ae3c6e3e60af2b7714f7a3f7" exitCode=0 Feb 17 20:11:17 crc kubenswrapper[4793]: I0217 20:11:17.579344 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xr48d" event={"ID":"e998269d-55e1-4dd7-b050-08983af268e6","Type":"ContainerDied","Data":"4c5727744dd6df074b2f021a1c72252f1f4d3cc7ae3c6e3e60af2b7714f7a3f7"} Feb 17 20:11:17 crc kubenswrapper[4793]: I0217 20:11:17.581277 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4x565" event={"ID":"19a7ae66-8a05-4413-8085-8455f146e98c","Type":"ContainerStarted","Data":"1cde82889c8aa70d6bed748c312fc57e9cb2f18f83560014e3ba80a0b3dd1aff"} Feb 17 20:11:17 crc kubenswrapper[4793]: I0217 20:11:17.592026 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-wr4p8" Feb 17 20:11:17 crc kubenswrapper[4793]: I0217 20:11:17.880290 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-99gxb" Feb 17 20:11:17 crc kubenswrapper[4793]: I0217 20:11:17.905150 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gm5th"] Feb 17 20:11:17 crc kubenswrapper[4793]: I0217 20:11:17.909259 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-qcpml" Feb 17 20:11:17 crc kubenswrapper[4793]: I0217 20:11:17.911280 4793 patch_prober.go:28] interesting pod/router-default-5444994796-qcpml container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 17 20:11:17 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld Feb 17 20:11:17 crc kubenswrapper[4793]: [+]process-running ok Feb 17 20:11:17 crc kubenswrapper[4793]: healthz check failed Feb 17 20:11:17 crc kubenswrapper[4793]: I0217 20:11:17.911326 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qcpml" podUID="32dbc133-34ed-449e-a397-ff3f0b83418c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 20:11:17 crc kubenswrapper[4793]: W0217 20:11:17.932732 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0eded1f_cb89_4a85_9e58_59ccef72584b.slice/crio-32abd46e00e3636a11d5e2dba71a035aad090991a39f14576daddbf8ac74964d WatchSource:0}: Error finding container 32abd46e00e3636a11d5e2dba71a035aad090991a39f14576daddbf8ac74964d: Status 404 returned error can't find the container with id 32abd46e00e3636a11d5e2dba71a035aad090991a39f14576daddbf8ac74964d Feb 17 20:11:17 crc kubenswrapper[4793]: I0217 20:11:17.951828 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 20:11:17 crc kubenswrapper[4793]: I0217 20:11:17.967396 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522640-pzkfn" Feb 17 20:11:18 crc kubenswrapper[4793]: I0217 20:11:18.113865 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/82552c6f-59af-4d20-97ca-82384997434e-secret-volume\") pod \"82552c6f-59af-4d20-97ca-82384997434e\" (UID: \"82552c6f-59af-4d20-97ca-82384997434e\") " Feb 17 20:11:18 crc kubenswrapper[4793]: I0217 20:11:18.113911 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1610da4d-5427-4047-aaff-2cb9abf1fce6-kubelet-dir\") pod \"1610da4d-5427-4047-aaff-2cb9abf1fce6\" (UID: \"1610da4d-5427-4047-aaff-2cb9abf1fce6\") " Feb 17 20:11:18 crc kubenswrapper[4793]: I0217 20:11:18.113967 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82552c6f-59af-4d20-97ca-82384997434e-config-volume\") pod \"82552c6f-59af-4d20-97ca-82384997434e\" (UID: \"82552c6f-59af-4d20-97ca-82384997434e\") " Feb 17 20:11:18 crc kubenswrapper[4793]: I0217 20:11:18.114051 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1610da4d-5427-4047-aaff-2cb9abf1fce6-kube-api-access\") pod \"1610da4d-5427-4047-aaff-2cb9abf1fce6\" (UID: \"1610da4d-5427-4047-aaff-2cb9abf1fce6\") " Feb 17 20:11:18 crc kubenswrapper[4793]: I0217 20:11:18.114097 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jg99\" (UniqueName: \"kubernetes.io/projected/82552c6f-59af-4d20-97ca-82384997434e-kube-api-access-5jg99\") pod \"82552c6f-59af-4d20-97ca-82384997434e\" (UID: \"82552c6f-59af-4d20-97ca-82384997434e\") " Feb 17 20:11:18 crc kubenswrapper[4793]: I0217 20:11:18.116772 4793 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/host-path/1610da4d-5427-4047-aaff-2cb9abf1fce6-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1610da4d-5427-4047-aaff-2cb9abf1fce6" (UID: "1610da4d-5427-4047-aaff-2cb9abf1fce6"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 20:11:18 crc kubenswrapper[4793]: I0217 20:11:18.117752 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82552c6f-59af-4d20-97ca-82384997434e-config-volume" (OuterVolumeSpecName: "config-volume") pod "82552c6f-59af-4d20-97ca-82384997434e" (UID: "82552c6f-59af-4d20-97ca-82384997434e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:11:18 crc kubenswrapper[4793]: I0217 20:11:18.120560 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1610da4d-5427-4047-aaff-2cb9abf1fce6-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1610da4d-5427-4047-aaff-2cb9abf1fce6" (UID: "1610da4d-5427-4047-aaff-2cb9abf1fce6"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:11:18 crc kubenswrapper[4793]: I0217 20:11:18.120617 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82552c6f-59af-4d20-97ca-82384997434e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "82552c6f-59af-4d20-97ca-82384997434e" (UID: "82552c6f-59af-4d20-97ca-82384997434e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:11:18 crc kubenswrapper[4793]: I0217 20:11:18.120843 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82552c6f-59af-4d20-97ca-82384997434e-kube-api-access-5jg99" (OuterVolumeSpecName: "kube-api-access-5jg99") pod "82552c6f-59af-4d20-97ca-82384997434e" (UID: "82552c6f-59af-4d20-97ca-82384997434e"). 
InnerVolumeSpecName "kube-api-access-5jg99". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:11:18 crc kubenswrapper[4793]: I0217 20:11:18.216100 4793 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/82552c6f-59af-4d20-97ca-82384997434e-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 20:11:18 crc kubenswrapper[4793]: I0217 20:11:18.216131 4793 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1610da4d-5427-4047-aaff-2cb9abf1fce6-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 17 20:11:18 crc kubenswrapper[4793]: I0217 20:11:18.216140 4793 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82552c6f-59af-4d20-97ca-82384997434e-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 20:11:18 crc kubenswrapper[4793]: I0217 20:11:18.216150 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1610da4d-5427-4047-aaff-2cb9abf1fce6-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 20:11:18 crc kubenswrapper[4793]: I0217 20:11:18.216160 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jg99\" (UniqueName: \"kubernetes.io/projected/82552c6f-59af-4d20-97ca-82384997434e-kube-api-access-5jg99\") on node \"crc\" DevicePath \"\"" Feb 17 20:11:18 crc kubenswrapper[4793]: I0217 20:11:18.592515 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522640-pzkfn" Feb 17 20:11:18 crc kubenswrapper[4793]: I0217 20:11:18.592555 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522640-pzkfn" event={"ID":"82552c6f-59af-4d20-97ca-82384997434e","Type":"ContainerDied","Data":"2520b1109be9686aed2edb884174298cc4388fcaa1884fb3cb47651c2b8508c7"} Feb 17 20:11:18 crc kubenswrapper[4793]: I0217 20:11:18.593046 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2520b1109be9686aed2edb884174298cc4388fcaa1884fb3cb47651c2b8508c7" Feb 17 20:11:18 crc kubenswrapper[4793]: I0217 20:11:18.595983 4793 generic.go:334] "Generic (PLEG): container finished" podID="19a7ae66-8a05-4413-8085-8455f146e98c" containerID="40b8446060af37d4215966e0d25bf5fc894294e24cd7f12e15f00df8775d5802" exitCode=0 Feb 17 20:11:18 crc kubenswrapper[4793]: I0217 20:11:18.596034 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4x565" event={"ID":"19a7ae66-8a05-4413-8085-8455f146e98c","Type":"ContainerDied","Data":"40b8446060af37d4215966e0d25bf5fc894294e24cd7f12e15f00df8775d5802"} Feb 17 20:11:18 crc kubenswrapper[4793]: I0217 20:11:18.603580 4793 generic.go:334] "Generic (PLEG): container finished" podID="e0eded1f-cb89-4a85-9e58-59ccef72584b" containerID="fe1bc3f46dbe0ea81ba34879db085393120eed3c666e25b8f4eb67bb637de560" exitCode=0 Feb 17 20:11:18 crc kubenswrapper[4793]: I0217 20:11:18.604201 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gm5th" event={"ID":"e0eded1f-cb89-4a85-9e58-59ccef72584b","Type":"ContainerDied","Data":"fe1bc3f46dbe0ea81ba34879db085393120eed3c666e25b8f4eb67bb637de560"} Feb 17 20:11:18 crc kubenswrapper[4793]: I0217 20:11:18.604248 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gm5th" 
event={"ID":"e0eded1f-cb89-4a85-9e58-59ccef72584b","Type":"ContainerStarted","Data":"32abd46e00e3636a11d5e2dba71a035aad090991a39f14576daddbf8ac74964d"} Feb 17 20:11:18 crc kubenswrapper[4793]: I0217 20:11:18.631827 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"1610da4d-5427-4047-aaff-2cb9abf1fce6","Type":"ContainerDied","Data":"9cc0529fb3c945e892eb1c5f1930baba30d77796027f424cf2b327df8515afd8"} Feb 17 20:11:18 crc kubenswrapper[4793]: I0217 20:11:18.631864 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 20:11:18 crc kubenswrapper[4793]: I0217 20:11:18.631874 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cc0529fb3c945e892eb1c5f1930baba30d77796027f424cf2b327df8515afd8" Feb 17 20:11:18 crc kubenswrapper[4793]: I0217 20:11:18.900019 4793 patch_prober.go:28] interesting pod/router-default-5444994796-qcpml container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 20:11:18 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld Feb 17 20:11:18 crc kubenswrapper[4793]: [+]process-running ok Feb 17 20:11:18 crc kubenswrapper[4793]: healthz check failed Feb 17 20:11:18 crc kubenswrapper[4793]: I0217 20:11:18.900075 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qcpml" podUID="32dbc133-34ed-449e-a397-ff3f0b83418c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 20:11:19 crc kubenswrapper[4793]: I0217 20:11:19.166851 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 17 20:11:19 crc kubenswrapper[4793]: E0217 20:11:19.167049 4793 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="82552c6f-59af-4d20-97ca-82384997434e" containerName="collect-profiles" Feb 17 20:11:19 crc kubenswrapper[4793]: I0217 20:11:19.167060 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="82552c6f-59af-4d20-97ca-82384997434e" containerName="collect-profiles" Feb 17 20:11:19 crc kubenswrapper[4793]: E0217 20:11:19.167074 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1610da4d-5427-4047-aaff-2cb9abf1fce6" containerName="pruner" Feb 17 20:11:19 crc kubenswrapper[4793]: I0217 20:11:19.167081 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="1610da4d-5427-4047-aaff-2cb9abf1fce6" containerName="pruner" Feb 17 20:11:19 crc kubenswrapper[4793]: I0217 20:11:19.167191 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="1610da4d-5427-4047-aaff-2cb9abf1fce6" containerName="pruner" Feb 17 20:11:19 crc kubenswrapper[4793]: I0217 20:11:19.167204 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="82552c6f-59af-4d20-97ca-82384997434e" containerName="collect-profiles" Feb 17 20:11:19 crc kubenswrapper[4793]: I0217 20:11:19.167663 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 20:11:19 crc kubenswrapper[4793]: I0217 20:11:19.171205 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 17 20:11:19 crc kubenswrapper[4793]: I0217 20:11:19.180660 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 17 20:11:19 crc kubenswrapper[4793]: I0217 20:11:19.195835 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 17 20:11:19 crc kubenswrapper[4793]: I0217 20:11:19.331762 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/476577eb-59cc-4f57-96b4-ac82af3e9c93-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"476577eb-59cc-4f57-96b4-ac82af3e9c93\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 20:11:19 crc kubenswrapper[4793]: I0217 20:11:19.331854 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/476577eb-59cc-4f57-96b4-ac82af3e9c93-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"476577eb-59cc-4f57-96b4-ac82af3e9c93\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 20:11:19 crc kubenswrapper[4793]: I0217 20:11:19.433267 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/476577eb-59cc-4f57-96b4-ac82af3e9c93-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"476577eb-59cc-4f57-96b4-ac82af3e9c93\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 20:11:19 crc kubenswrapper[4793]: I0217 20:11:19.433309 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/476577eb-59cc-4f57-96b4-ac82af3e9c93-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"476577eb-59cc-4f57-96b4-ac82af3e9c93\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 20:11:19 crc kubenswrapper[4793]: I0217 20:11:19.434170 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/476577eb-59cc-4f57-96b4-ac82af3e9c93-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"476577eb-59cc-4f57-96b4-ac82af3e9c93\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 20:11:19 crc kubenswrapper[4793]: I0217 20:11:19.459592 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/476577eb-59cc-4f57-96b4-ac82af3e9c93-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"476577eb-59cc-4f57-96b4-ac82af3e9c93\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 20:11:19 crc kubenswrapper[4793]: I0217 20:11:19.512167 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 20:11:19 crc kubenswrapper[4793]: I0217 20:11:19.900425 4793 patch_prober.go:28] interesting pod/router-default-5444994796-qcpml container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 20:11:19 crc kubenswrapper[4793]: [-]has-synced failed: reason withheld Feb 17 20:11:19 crc kubenswrapper[4793]: [+]process-running ok Feb 17 20:11:19 crc kubenswrapper[4793]: healthz check failed Feb 17 20:11:19 crc kubenswrapper[4793]: I0217 20:11:19.900651 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qcpml" podUID="32dbc133-34ed-449e-a397-ff3f0b83418c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 20:11:19 crc kubenswrapper[4793]: I0217 20:11:19.994571 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 17 20:11:20 crc kubenswrapper[4793]: W0217 20:11:20.030552 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod476577eb_59cc_4f57_96b4_ac82af3e9c93.slice/crio-26790db87f414fb462b7e12a60106ccfd671bd1ee279036d13eeb54b39d05bee WatchSource:0}: Error finding container 26790db87f414fb462b7e12a60106ccfd671bd1ee279036d13eeb54b39d05bee: Status 404 returned error can't find the container with id 26790db87f414fb462b7e12a60106ccfd671bd1ee279036d13eeb54b39d05bee Feb 17 20:11:20 crc kubenswrapper[4793]: I0217 20:11:20.102416 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 20:11:20 crc kubenswrapper[4793]: I0217 
20:11:20.102467 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 20:11:20 crc kubenswrapper[4793]: I0217 20:11:20.652575 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"476577eb-59cc-4f57-96b4-ac82af3e9c93","Type":"ContainerStarted","Data":"26790db87f414fb462b7e12a60106ccfd671bd1ee279036d13eeb54b39d05bee"} Feb 17 20:11:20 crc kubenswrapper[4793]: I0217 20:11:20.903865 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-qcpml" Feb 17 20:11:20 crc kubenswrapper[4793]: I0217 20:11:20.913147 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-qcpml" Feb 17 20:11:21 crc kubenswrapper[4793]: I0217 20:11:21.702317 4793 generic.go:334] "Generic (PLEG): container finished" podID="476577eb-59cc-4f57-96b4-ac82af3e9c93" containerID="c980899097cc45f06520c3b4265ee31eecc12652960fd64813f613d3ee64579c" exitCode=0 Feb 17 20:11:21 crc kubenswrapper[4793]: I0217 20:11:21.702429 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"476577eb-59cc-4f57-96b4-ac82af3e9c93","Type":"ContainerDied","Data":"c980899097cc45f06520c3b4265ee31eecc12652960fd64813f613d3ee64579c"} Feb 17 20:11:22 crc kubenswrapper[4793]: I0217 20:11:22.941003 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-qphb2" Feb 17 20:11:27 crc kubenswrapper[4793]: I0217 20:11:27.291745 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-knrjc container/download-server namespace/openshift-console: 
Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Feb 17 20:11:27 crc kubenswrapper[4793]: I0217 20:11:27.292318 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-knrjc" podUID="0ccc2b73-aefc-46a4-b1bd-bf0b09d425d8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Feb 17 20:11:27 crc kubenswrapper[4793]: I0217 20:11:27.291842 4793 patch_prober.go:28] interesting pod/downloads-7954f5f757-knrjc container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Feb 17 20:11:27 crc kubenswrapper[4793]: I0217 20:11:27.292421 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-knrjc" podUID="0ccc2b73-aefc-46a4-b1bd-bf0b09d425d8" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Feb 17 20:11:27 crc kubenswrapper[4793]: I0217 20:11:27.319077 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-br8vj" Feb 17 20:11:27 crc kubenswrapper[4793]: I0217 20:11:27.326231 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-br8vj" Feb 17 20:11:27 crc kubenswrapper[4793]: I0217 20:11:27.750796 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 20:11:27 crc kubenswrapper[4793]: I0217 20:11:27.758847 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/476577eb-59cc-4f57-96b4-ac82af3e9c93-kube-api-access\") pod \"476577eb-59cc-4f57-96b4-ac82af3e9c93\" (UID: \"476577eb-59cc-4f57-96b4-ac82af3e9c93\") " Feb 17 20:11:27 crc kubenswrapper[4793]: I0217 20:11:27.758910 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/476577eb-59cc-4f57-96b4-ac82af3e9c93-kubelet-dir\") pod \"476577eb-59cc-4f57-96b4-ac82af3e9c93\" (UID: \"476577eb-59cc-4f57-96b4-ac82af3e9c93\") " Feb 17 20:11:27 crc kubenswrapper[4793]: I0217 20:11:27.759027 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/476577eb-59cc-4f57-96b4-ac82af3e9c93-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "476577eb-59cc-4f57-96b4-ac82af3e9c93" (UID: "476577eb-59cc-4f57-96b4-ac82af3e9c93"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 20:11:27 crc kubenswrapper[4793]: I0217 20:11:27.764012 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 20:11:27 crc kubenswrapper[4793]: I0217 20:11:27.764231 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"476577eb-59cc-4f57-96b4-ac82af3e9c93","Type":"ContainerDied","Data":"26790db87f414fb462b7e12a60106ccfd671bd1ee279036d13eeb54b39d05bee"} Feb 17 20:11:27 crc kubenswrapper[4793]: I0217 20:11:27.764260 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26790db87f414fb462b7e12a60106ccfd671bd1ee279036d13eeb54b39d05bee" Feb 17 20:11:27 crc kubenswrapper[4793]: I0217 20:11:27.769043 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/476577eb-59cc-4f57-96b4-ac82af3e9c93-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "476577eb-59cc-4f57-96b4-ac82af3e9c93" (UID: "476577eb-59cc-4f57-96b4-ac82af3e9c93"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:11:27 crc kubenswrapper[4793]: I0217 20:11:27.859717 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/476577eb-59cc-4f57-96b4-ac82af3e9c93-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 20:11:27 crc kubenswrapper[4793]: I0217 20:11:27.859765 4793 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/476577eb-59cc-4f57-96b4-ac82af3e9c93-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 17 20:11:29 crc kubenswrapper[4793]: I0217 20:11:29.871329 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qgxk6"] Feb 17 20:11:29 crc kubenswrapper[4793]: I0217 20:11:29.871908 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-qgxk6" podUID="2f13644d-b44d-450c-ac22-88e8a8c6e41d" containerName="controller-manager" containerID="cri-o://a4b70aee74d002df238ba0a98898de73cb1ca95826116f3daf08307246a4f2e9" gracePeriod=30 Feb 17 20:11:29 crc kubenswrapper[4793]: I0217 20:11:29.904166 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qpst4"] Feb 17 20:11:29 crc kubenswrapper[4793]: I0217 20:11:29.904443 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qpst4" podUID="9bbcc262-c708-4bd4-81c3-7bcbb485ddad" containerName="route-controller-manager" containerID="cri-o://6c30e3acf4a473e6a9adcf8bfbc5c53e17583f9b9ca99fcf94d55abe29c898d0" gracePeriod=30 Feb 17 20:11:31 crc kubenswrapper[4793]: I0217 20:11:31.310263 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd-metrics-certs\") pod \"network-metrics-daemon-6trvs\" (UID: \"0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd\") " pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:11:31 crc kubenswrapper[4793]: I0217 20:11:31.318681 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd-metrics-certs\") pod \"network-metrics-daemon-6trvs\" (UID: \"0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd\") " pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:11:31 crc kubenswrapper[4793]: I0217 20:11:31.485434 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6trvs" Feb 17 20:11:33 crc kubenswrapper[4793]: I0217 20:11:33.998184 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb" Feb 17 20:11:36 crc kubenswrapper[4793]: I0217 20:11:36.816670 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qpst4" Feb 17 20:11:36 crc kubenswrapper[4793]: I0217 20:11:36.858758 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76b8798656-vk5gl"] Feb 17 20:11:36 crc kubenswrapper[4793]: E0217 20:11:36.859019 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="476577eb-59cc-4f57-96b4-ac82af3e9c93" containerName="pruner" Feb 17 20:11:36 crc kubenswrapper[4793]: I0217 20:11:36.859033 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="476577eb-59cc-4f57-96b4-ac82af3e9c93" containerName="pruner" Feb 17 20:11:36 crc kubenswrapper[4793]: E0217 20:11:36.859048 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bbcc262-c708-4bd4-81c3-7bcbb485ddad" containerName="route-controller-manager" Feb 17 20:11:36 crc kubenswrapper[4793]: I0217 20:11:36.859058 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bbcc262-c708-4bd4-81c3-7bcbb485ddad" containerName="route-controller-manager" Feb 17 20:11:36 crc kubenswrapper[4793]: I0217 20:11:36.859171 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bbcc262-c708-4bd4-81c3-7bcbb485ddad" containerName="route-controller-manager" Feb 17 20:11:36 crc kubenswrapper[4793]: I0217 20:11:36.859190 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="476577eb-59cc-4f57-96b4-ac82af3e9c93" containerName="pruner" Feb 17 20:11:36 crc kubenswrapper[4793]: I0217 20:11:36.859568 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76b8798656-vk5gl" Feb 17 20:11:36 crc kubenswrapper[4793]: I0217 20:11:36.866431 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76b8798656-vk5gl"] Feb 17 20:11:36 crc kubenswrapper[4793]: I0217 20:11:36.987522 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bbcc262-c708-4bd4-81c3-7bcbb485ddad-serving-cert\") pod \"9bbcc262-c708-4bd4-81c3-7bcbb485ddad\" (UID: \"9bbcc262-c708-4bd4-81c3-7bcbb485ddad\") " Feb 17 20:11:36 crc kubenswrapper[4793]: I0217 20:11:36.987903 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9bbcc262-c708-4bd4-81c3-7bcbb485ddad-client-ca\") pod \"9bbcc262-c708-4bd4-81c3-7bcbb485ddad\" (UID: \"9bbcc262-c708-4bd4-81c3-7bcbb485ddad\") " Feb 17 20:11:36 crc kubenswrapper[4793]: I0217 20:11:36.988320 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bbcc262-c708-4bd4-81c3-7bcbb485ddad-client-ca" (OuterVolumeSpecName: "client-ca") pod "9bbcc262-c708-4bd4-81c3-7bcbb485ddad" (UID: "9bbcc262-c708-4bd4-81c3-7bcbb485ddad"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:11:36 crc kubenswrapper[4793]: I0217 20:11:36.988382 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6pzs\" (UniqueName: \"kubernetes.io/projected/9bbcc262-c708-4bd4-81c3-7bcbb485ddad-kube-api-access-t6pzs\") pod \"9bbcc262-c708-4bd4-81c3-7bcbb485ddad\" (UID: \"9bbcc262-c708-4bd4-81c3-7bcbb485ddad\") " Feb 17 20:11:36 crc kubenswrapper[4793]: I0217 20:11:36.988406 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bbcc262-c708-4bd4-81c3-7bcbb485ddad-config\") pod \"9bbcc262-c708-4bd4-81c3-7bcbb485ddad\" (UID: \"9bbcc262-c708-4bd4-81c3-7bcbb485ddad\") " Feb 17 20:11:36 crc kubenswrapper[4793]: I0217 20:11:36.989169 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxp4n\" (UniqueName: \"kubernetes.io/projected/21356257-3793-4104-9495-28721c4d51d9-kube-api-access-jxp4n\") pod \"route-controller-manager-76b8798656-vk5gl\" (UID: \"21356257-3793-4104-9495-28721c4d51d9\") " pod="openshift-route-controller-manager/route-controller-manager-76b8798656-vk5gl" Feb 17 20:11:36 crc kubenswrapper[4793]: I0217 20:11:36.989336 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21356257-3793-4104-9495-28721c4d51d9-serving-cert\") pod \"route-controller-manager-76b8798656-vk5gl\" (UID: \"21356257-3793-4104-9495-28721c4d51d9\") " pod="openshift-route-controller-manager/route-controller-manager-76b8798656-vk5gl" Feb 17 20:11:36 crc kubenswrapper[4793]: I0217 20:11:36.989517 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21356257-3793-4104-9495-28721c4d51d9-client-ca\") pod 
\"route-controller-manager-76b8798656-vk5gl\" (UID: \"21356257-3793-4104-9495-28721c4d51d9\") " pod="openshift-route-controller-manager/route-controller-manager-76b8798656-vk5gl" Feb 17 20:11:36 crc kubenswrapper[4793]: I0217 20:11:36.989668 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21356257-3793-4104-9495-28721c4d51d9-config\") pod \"route-controller-manager-76b8798656-vk5gl\" (UID: \"21356257-3793-4104-9495-28721c4d51d9\") " pod="openshift-route-controller-manager/route-controller-manager-76b8798656-vk5gl" Feb 17 20:11:36 crc kubenswrapper[4793]: I0217 20:11:36.989231 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bbcc262-c708-4bd4-81c3-7bcbb485ddad-config" (OuterVolumeSpecName: "config") pod "9bbcc262-c708-4bd4-81c3-7bcbb485ddad" (UID: "9bbcc262-c708-4bd4-81c3-7bcbb485ddad"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:11:36 crc kubenswrapper[4793]: I0217 20:11:36.989966 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9bbcc262-c708-4bd4-81c3-7bcbb485ddad-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 20:11:36 crc kubenswrapper[4793]: I0217 20:11:36.995460 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bbcc262-c708-4bd4-81c3-7bcbb485ddad-kube-api-access-t6pzs" (OuterVolumeSpecName: "kube-api-access-t6pzs") pod "9bbcc262-c708-4bd4-81c3-7bcbb485ddad" (UID: "9bbcc262-c708-4bd4-81c3-7bcbb485ddad"). InnerVolumeSpecName "kube-api-access-t6pzs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:11:37 crc kubenswrapper[4793]: I0217 20:11:37.000271 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bbcc262-c708-4bd4-81c3-7bcbb485ddad-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9bbcc262-c708-4bd4-81c3-7bcbb485ddad" (UID: "9bbcc262-c708-4bd4-81c3-7bcbb485ddad"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:11:37 crc kubenswrapper[4793]: I0217 20:11:37.091848 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxp4n\" (UniqueName: \"kubernetes.io/projected/21356257-3793-4104-9495-28721c4d51d9-kube-api-access-jxp4n\") pod \"route-controller-manager-76b8798656-vk5gl\" (UID: \"21356257-3793-4104-9495-28721c4d51d9\") " pod="openshift-route-controller-manager/route-controller-manager-76b8798656-vk5gl" Feb 17 20:11:37 crc kubenswrapper[4793]: I0217 20:11:37.091925 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21356257-3793-4104-9495-28721c4d51d9-serving-cert\") pod \"route-controller-manager-76b8798656-vk5gl\" (UID: \"21356257-3793-4104-9495-28721c4d51d9\") " pod="openshift-route-controller-manager/route-controller-manager-76b8798656-vk5gl" Feb 17 20:11:37 crc kubenswrapper[4793]: I0217 20:11:37.092006 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21356257-3793-4104-9495-28721c4d51d9-client-ca\") pod \"route-controller-manager-76b8798656-vk5gl\" (UID: \"21356257-3793-4104-9495-28721c4d51d9\") " pod="openshift-route-controller-manager/route-controller-manager-76b8798656-vk5gl" Feb 17 20:11:37 crc kubenswrapper[4793]: I0217 20:11:37.092071 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/21356257-3793-4104-9495-28721c4d51d9-config\") pod \"route-controller-manager-76b8798656-vk5gl\" (UID: \"21356257-3793-4104-9495-28721c4d51d9\") " pod="openshift-route-controller-manager/route-controller-manager-76b8798656-vk5gl" Feb 17 20:11:37 crc kubenswrapper[4793]: I0217 20:11:37.092144 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bbcc262-c708-4bd4-81c3-7bcbb485ddad-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 20:11:37 crc kubenswrapper[4793]: I0217 20:11:37.092166 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6pzs\" (UniqueName: \"kubernetes.io/projected/9bbcc262-c708-4bd4-81c3-7bcbb485ddad-kube-api-access-t6pzs\") on node \"crc\" DevicePath \"\"" Feb 17 20:11:37 crc kubenswrapper[4793]: I0217 20:11:37.092186 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bbcc262-c708-4bd4-81c3-7bcbb485ddad-config\") on node \"crc\" DevicePath \"\"" Feb 17 20:11:37 crc kubenswrapper[4793]: I0217 20:11:37.094334 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21356257-3793-4104-9495-28721c4d51d9-client-ca\") pod \"route-controller-manager-76b8798656-vk5gl\" (UID: \"21356257-3793-4104-9495-28721c4d51d9\") " pod="openshift-route-controller-manager/route-controller-manager-76b8798656-vk5gl" Feb 17 20:11:37 crc kubenswrapper[4793]: I0217 20:11:37.095438 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21356257-3793-4104-9495-28721c4d51d9-config\") pod \"route-controller-manager-76b8798656-vk5gl\" (UID: \"21356257-3793-4104-9495-28721c4d51d9\") " pod="openshift-route-controller-manager/route-controller-manager-76b8798656-vk5gl" Feb 17 20:11:37 crc kubenswrapper[4793]: I0217 20:11:37.096638 4793 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21356257-3793-4104-9495-28721c4d51d9-serving-cert\") pod \"route-controller-manager-76b8798656-vk5gl\" (UID: \"21356257-3793-4104-9495-28721c4d51d9\") " pod="openshift-route-controller-manager/route-controller-manager-76b8798656-vk5gl" Feb 17 20:11:37 crc kubenswrapper[4793]: I0217 20:11:37.107475 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxp4n\" (UniqueName: \"kubernetes.io/projected/21356257-3793-4104-9495-28721c4d51d9-kube-api-access-jxp4n\") pod \"route-controller-manager-76b8798656-vk5gl\" (UID: \"21356257-3793-4104-9495-28721c4d51d9\") " pod="openshift-route-controller-manager/route-controller-manager-76b8798656-vk5gl" Feb 17 20:11:37 crc kubenswrapper[4793]: I0217 20:11:37.181806 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76b8798656-vk5gl" Feb 17 20:11:37 crc kubenswrapper[4793]: I0217 20:11:37.297599 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-knrjc" Feb 17 20:11:37 crc kubenswrapper[4793]: I0217 20:11:37.500591 4793 generic.go:334] "Generic (PLEG): container finished" podID="2f13644d-b44d-450c-ac22-88e8a8c6e41d" containerID="a4b70aee74d002df238ba0a98898de73cb1ca95826116f3daf08307246a4f2e9" exitCode=0 Feb 17 20:11:37 crc kubenswrapper[4793]: I0217 20:11:37.500720 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-qgxk6" event={"ID":"2f13644d-b44d-450c-ac22-88e8a8c6e41d","Type":"ContainerDied","Data":"a4b70aee74d002df238ba0a98898de73cb1ca95826116f3daf08307246a4f2e9"} Feb 17 20:11:37 crc kubenswrapper[4793]: I0217 20:11:37.502982 4793 generic.go:334] "Generic (PLEG): container finished" podID="9bbcc262-c708-4bd4-81c3-7bcbb485ddad" 
containerID="6c30e3acf4a473e6a9adcf8bfbc5c53e17583f9b9ca99fcf94d55abe29c898d0" exitCode=0 Feb 17 20:11:37 crc kubenswrapper[4793]: I0217 20:11:37.503012 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qpst4" event={"ID":"9bbcc262-c708-4bd4-81c3-7bcbb485ddad","Type":"ContainerDied","Data":"6c30e3acf4a473e6a9adcf8bfbc5c53e17583f9b9ca99fcf94d55abe29c898d0"} Feb 17 20:11:37 crc kubenswrapper[4793]: I0217 20:11:37.503040 4793 scope.go:117] "RemoveContainer" containerID="6c30e3acf4a473e6a9adcf8bfbc5c53e17583f9b9ca99fcf94d55abe29c898d0" Feb 17 20:11:37 crc kubenswrapper[4793]: I0217 20:11:37.503084 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qpst4" Feb 17 20:11:37 crc kubenswrapper[4793]: I0217 20:11:37.530998 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qpst4"] Feb 17 20:11:37 crc kubenswrapper[4793]: I0217 20:11:37.533652 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qpst4"] Feb 17 20:11:37 crc kubenswrapper[4793]: I0217 20:11:37.545367 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bbcc262-c708-4bd4-81c3-7bcbb485ddad" path="/var/lib/kubelet/pods/9bbcc262-c708-4bd4-81c3-7bcbb485ddad/volumes" Feb 17 20:11:38 crc kubenswrapper[4793]: I0217 20:11:38.348900 4793 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-qgxk6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: i/o timeout" start-of-body= Feb 17 20:11:38 crc kubenswrapper[4793]: I0217 20:11:38.348959 4793 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-controller-manager/controller-manager-879f6c89f-qgxk6" podUID="2f13644d-b44d-450c-ac22-88e8a8c6e41d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: i/o timeout" Feb 17 20:11:42 crc kubenswrapper[4793]: I0217 20:11:42.807632 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-qgxk6" Feb 17 20:11:42 crc kubenswrapper[4793]: I0217 20:11:42.836834 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5f89d69db4-rfszz"] Feb 17 20:11:42 crc kubenswrapper[4793]: E0217 20:11:42.837329 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f13644d-b44d-450c-ac22-88e8a8c6e41d" containerName="controller-manager" Feb 17 20:11:42 crc kubenswrapper[4793]: I0217 20:11:42.838481 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f13644d-b44d-450c-ac22-88e8a8c6e41d" containerName="controller-manager" Feb 17 20:11:42 crc kubenswrapper[4793]: I0217 20:11:42.838988 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f13644d-b44d-450c-ac22-88e8a8c6e41d" containerName="controller-manager" Feb 17 20:11:42 crc kubenswrapper[4793]: I0217 20:11:42.839535 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5f89d69db4-rfszz" Feb 17 20:11:42 crc kubenswrapper[4793]: I0217 20:11:42.848763 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5f89d69db4-rfszz"] Feb 17 20:11:42 crc kubenswrapper[4793]: I0217 20:11:42.864348 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2f13644d-b44d-450c-ac22-88e8a8c6e41d-client-ca\") pod \"2f13644d-b44d-450c-ac22-88e8a8c6e41d\" (UID: \"2f13644d-b44d-450c-ac22-88e8a8c6e41d\") " Feb 17 20:11:42 crc kubenswrapper[4793]: I0217 20:11:42.864404 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9d6g5\" (UniqueName: \"kubernetes.io/projected/2f13644d-b44d-450c-ac22-88e8a8c6e41d-kube-api-access-9d6g5\") pod \"2f13644d-b44d-450c-ac22-88e8a8c6e41d\" (UID: \"2f13644d-b44d-450c-ac22-88e8a8c6e41d\") " Feb 17 20:11:42 crc kubenswrapper[4793]: I0217 20:11:42.864519 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f13644d-b44d-450c-ac22-88e8a8c6e41d-config\") pod \"2f13644d-b44d-450c-ac22-88e8a8c6e41d\" (UID: \"2f13644d-b44d-450c-ac22-88e8a8c6e41d\") " Feb 17 20:11:42 crc kubenswrapper[4793]: I0217 20:11:42.864541 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2f13644d-b44d-450c-ac22-88e8a8c6e41d-proxy-ca-bundles\") pod \"2f13644d-b44d-450c-ac22-88e8a8c6e41d\" (UID: \"2f13644d-b44d-450c-ac22-88e8a8c6e41d\") " Feb 17 20:11:42 crc kubenswrapper[4793]: I0217 20:11:42.864572 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f13644d-b44d-450c-ac22-88e8a8c6e41d-serving-cert\") pod \"2f13644d-b44d-450c-ac22-88e8a8c6e41d\" (UID: 
\"2f13644d-b44d-450c-ac22-88e8a8c6e41d\") " Feb 17 20:11:42 crc kubenswrapper[4793]: I0217 20:11:42.865498 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f13644d-b44d-450c-ac22-88e8a8c6e41d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2f13644d-b44d-450c-ac22-88e8a8c6e41d" (UID: "2f13644d-b44d-450c-ac22-88e8a8c6e41d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:11:42 crc kubenswrapper[4793]: I0217 20:11:42.865534 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f13644d-b44d-450c-ac22-88e8a8c6e41d-config" (OuterVolumeSpecName: "config") pod "2f13644d-b44d-450c-ac22-88e8a8c6e41d" (UID: "2f13644d-b44d-450c-ac22-88e8a8c6e41d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:11:42 crc kubenswrapper[4793]: I0217 20:11:42.865513 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f13644d-b44d-450c-ac22-88e8a8c6e41d-client-ca" (OuterVolumeSpecName: "client-ca") pod "2f13644d-b44d-450c-ac22-88e8a8c6e41d" (UID: "2f13644d-b44d-450c-ac22-88e8a8c6e41d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:11:42 crc kubenswrapper[4793]: I0217 20:11:42.871192 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f13644d-b44d-450c-ac22-88e8a8c6e41d-kube-api-access-9d6g5" (OuterVolumeSpecName: "kube-api-access-9d6g5") pod "2f13644d-b44d-450c-ac22-88e8a8c6e41d" (UID: "2f13644d-b44d-450c-ac22-88e8a8c6e41d"). InnerVolumeSpecName "kube-api-access-9d6g5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:11:42 crc kubenswrapper[4793]: I0217 20:11:42.871407 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f13644d-b44d-450c-ac22-88e8a8c6e41d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2f13644d-b44d-450c-ac22-88e8a8c6e41d" (UID: "2f13644d-b44d-450c-ac22-88e8a8c6e41d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:11:42 crc kubenswrapper[4793]: I0217 20:11:42.966227 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/357049a8-046f-4159-80e4-e2865eec6f24-serving-cert\") pod \"controller-manager-5f89d69db4-rfszz\" (UID: \"357049a8-046f-4159-80e4-e2865eec6f24\") " pod="openshift-controller-manager/controller-manager-5f89d69db4-rfszz" Feb 17 20:11:42 crc kubenswrapper[4793]: I0217 20:11:42.966340 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8l7k\" (UniqueName: \"kubernetes.io/projected/357049a8-046f-4159-80e4-e2865eec6f24-kube-api-access-j8l7k\") pod \"controller-manager-5f89d69db4-rfszz\" (UID: \"357049a8-046f-4159-80e4-e2865eec6f24\") " pod="openshift-controller-manager/controller-manager-5f89d69db4-rfszz" Feb 17 20:11:42 crc kubenswrapper[4793]: I0217 20:11:42.966374 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/357049a8-046f-4159-80e4-e2865eec6f24-client-ca\") pod \"controller-manager-5f89d69db4-rfszz\" (UID: \"357049a8-046f-4159-80e4-e2865eec6f24\") " pod="openshift-controller-manager/controller-manager-5f89d69db4-rfszz" Feb 17 20:11:42 crc kubenswrapper[4793]: I0217 20:11:42.966420 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/357049a8-046f-4159-80e4-e2865eec6f24-proxy-ca-bundles\") pod \"controller-manager-5f89d69db4-rfszz\" (UID: \"357049a8-046f-4159-80e4-e2865eec6f24\") " pod="openshift-controller-manager/controller-manager-5f89d69db4-rfszz" Feb 17 20:11:42 crc kubenswrapper[4793]: I0217 20:11:42.966447 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/357049a8-046f-4159-80e4-e2865eec6f24-config\") pod \"controller-manager-5f89d69db4-rfszz\" (UID: \"357049a8-046f-4159-80e4-e2865eec6f24\") " pod="openshift-controller-manager/controller-manager-5f89d69db4-rfszz" Feb 17 20:11:42 crc kubenswrapper[4793]: I0217 20:11:42.966538 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f13644d-b44d-450c-ac22-88e8a8c6e41d-config\") on node \"crc\" DevicePath \"\"" Feb 17 20:11:42 crc kubenswrapper[4793]: I0217 20:11:42.966553 4793 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2f13644d-b44d-450c-ac22-88e8a8c6e41d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 17 20:11:42 crc kubenswrapper[4793]: I0217 20:11:42.966584 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f13644d-b44d-450c-ac22-88e8a8c6e41d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 20:11:42 crc kubenswrapper[4793]: I0217 20:11:42.966594 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2f13644d-b44d-450c-ac22-88e8a8c6e41d-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 20:11:42 crc kubenswrapper[4793]: I0217 20:11:42.966606 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9d6g5\" (UniqueName: \"kubernetes.io/projected/2f13644d-b44d-450c-ac22-88e8a8c6e41d-kube-api-access-9d6g5\") on node \"crc\" 
DevicePath \"\"" Feb 17 20:11:43 crc kubenswrapper[4793]: I0217 20:11:43.067574 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/357049a8-046f-4159-80e4-e2865eec6f24-serving-cert\") pod \"controller-manager-5f89d69db4-rfszz\" (UID: \"357049a8-046f-4159-80e4-e2865eec6f24\") " pod="openshift-controller-manager/controller-manager-5f89d69db4-rfszz" Feb 17 20:11:43 crc kubenswrapper[4793]: I0217 20:11:43.067626 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8l7k\" (UniqueName: \"kubernetes.io/projected/357049a8-046f-4159-80e4-e2865eec6f24-kube-api-access-j8l7k\") pod \"controller-manager-5f89d69db4-rfszz\" (UID: \"357049a8-046f-4159-80e4-e2865eec6f24\") " pod="openshift-controller-manager/controller-manager-5f89d69db4-rfszz" Feb 17 20:11:43 crc kubenswrapper[4793]: I0217 20:11:43.067651 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/357049a8-046f-4159-80e4-e2865eec6f24-client-ca\") pod \"controller-manager-5f89d69db4-rfszz\" (UID: \"357049a8-046f-4159-80e4-e2865eec6f24\") " pod="openshift-controller-manager/controller-manager-5f89d69db4-rfszz" Feb 17 20:11:43 crc kubenswrapper[4793]: I0217 20:11:43.067673 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/357049a8-046f-4159-80e4-e2865eec6f24-proxy-ca-bundles\") pod \"controller-manager-5f89d69db4-rfszz\" (UID: \"357049a8-046f-4159-80e4-e2865eec6f24\") " pod="openshift-controller-manager/controller-manager-5f89d69db4-rfszz" Feb 17 20:11:43 crc kubenswrapper[4793]: I0217 20:11:43.067703 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/357049a8-046f-4159-80e4-e2865eec6f24-config\") pod \"controller-manager-5f89d69db4-rfszz\" (UID: 
\"357049a8-046f-4159-80e4-e2865eec6f24\") " pod="openshift-controller-manager/controller-manager-5f89d69db4-rfszz" Feb 17 20:11:43 crc kubenswrapper[4793]: I0217 20:11:43.068578 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/357049a8-046f-4159-80e4-e2865eec6f24-client-ca\") pod \"controller-manager-5f89d69db4-rfszz\" (UID: \"357049a8-046f-4159-80e4-e2865eec6f24\") " pod="openshift-controller-manager/controller-manager-5f89d69db4-rfszz" Feb 17 20:11:43 crc kubenswrapper[4793]: I0217 20:11:43.068804 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/357049a8-046f-4159-80e4-e2865eec6f24-proxy-ca-bundles\") pod \"controller-manager-5f89d69db4-rfszz\" (UID: \"357049a8-046f-4159-80e4-e2865eec6f24\") " pod="openshift-controller-manager/controller-manager-5f89d69db4-rfszz" Feb 17 20:11:43 crc kubenswrapper[4793]: I0217 20:11:43.068940 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/357049a8-046f-4159-80e4-e2865eec6f24-config\") pod \"controller-manager-5f89d69db4-rfszz\" (UID: \"357049a8-046f-4159-80e4-e2865eec6f24\") " pod="openshift-controller-manager/controller-manager-5f89d69db4-rfszz" Feb 17 20:11:43 crc kubenswrapper[4793]: I0217 20:11:43.072214 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/357049a8-046f-4159-80e4-e2865eec6f24-serving-cert\") pod \"controller-manager-5f89d69db4-rfszz\" (UID: \"357049a8-046f-4159-80e4-e2865eec6f24\") " pod="openshift-controller-manager/controller-manager-5f89d69db4-rfszz" Feb 17 20:11:43 crc kubenswrapper[4793]: I0217 20:11:43.081964 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8l7k\" (UniqueName: 
\"kubernetes.io/projected/357049a8-046f-4159-80e4-e2865eec6f24-kube-api-access-j8l7k\") pod \"controller-manager-5f89d69db4-rfszz\" (UID: \"357049a8-046f-4159-80e4-e2865eec6f24\") " pod="openshift-controller-manager/controller-manager-5f89d69db4-rfszz" Feb 17 20:11:43 crc kubenswrapper[4793]: I0217 20:11:43.161846 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5f89d69db4-rfszz" Feb 17 20:11:43 crc kubenswrapper[4793]: I0217 20:11:43.534191 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-qgxk6" event={"ID":"2f13644d-b44d-450c-ac22-88e8a8c6e41d","Type":"ContainerDied","Data":"4dee58ec3405b6acfba1cc4a54215ebea60d030c95645c5c5798a556bb5b96c2"} Feb 17 20:11:43 crc kubenswrapper[4793]: I0217 20:11:43.534251 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-qgxk6" Feb 17 20:11:43 crc kubenswrapper[4793]: I0217 20:11:43.561722 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qgxk6"] Feb 17 20:11:43 crc kubenswrapper[4793]: I0217 20:11:43.567392 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qgxk6"] Feb 17 20:11:45 crc kubenswrapper[4793]: I0217 20:11:45.544173 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f13644d-b44d-450c-ac22-88e8a8c6e41d" path="/var/lib/kubelet/pods/2f13644d-b44d-450c-ac22-88e8a8c6e41d/volumes" Feb 17 20:11:46 crc kubenswrapper[4793]: I0217 20:11:46.120563 4793 scope.go:117] "RemoveContainer" containerID="6c30e3acf4a473e6a9adcf8bfbc5c53e17583f9b9ca99fcf94d55abe29c898d0" Feb 17 20:11:46 crc kubenswrapper[4793]: E0217 20:11:46.121627 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not 
find container \"6c30e3acf4a473e6a9adcf8bfbc5c53e17583f9b9ca99fcf94d55abe29c898d0\": container with ID starting with 6c30e3acf4a473e6a9adcf8bfbc5c53e17583f9b9ca99fcf94d55abe29c898d0 not found: ID does not exist" containerID="6c30e3acf4a473e6a9adcf8bfbc5c53e17583f9b9ca99fcf94d55abe29c898d0" Feb 17 20:11:46 crc kubenswrapper[4793]: I0217 20:11:46.121658 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c30e3acf4a473e6a9adcf8bfbc5c53e17583f9b9ca99fcf94d55abe29c898d0"} err="failed to get container status \"6c30e3acf4a473e6a9adcf8bfbc5c53e17583f9b9ca99fcf94d55abe29c898d0\": rpc error: code = NotFound desc = could not find container \"6c30e3acf4a473e6a9adcf8bfbc5c53e17583f9b9ca99fcf94d55abe29c898d0\": container with ID starting with 6c30e3acf4a473e6a9adcf8bfbc5c53e17583f9b9ca99fcf94d55abe29c898d0 not found: ID does not exist" Feb 17 20:11:46 crc kubenswrapper[4793]: I0217 20:11:46.121712 4793 scope.go:117] "RemoveContainer" containerID="a4b70aee74d002df238ba0a98898de73cb1ca95826116f3daf08307246a4f2e9" Feb 17 20:11:46 crc kubenswrapper[4793]: I0217 20:11:46.490591 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-6trvs"] Feb 17 20:11:46 crc kubenswrapper[4793]: I0217 20:11:46.557891 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xr48d" event={"ID":"e998269d-55e1-4dd7-b050-08983af268e6","Type":"ContainerStarted","Data":"078885407b6b5044724031909b8d53b0a1ba1aafde3942ae9cb81559d5ba9b18"} Feb 17 20:11:46 crc kubenswrapper[4793]: I0217 20:11:46.562902 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4x565" event={"ID":"19a7ae66-8a05-4413-8085-8455f146e98c","Type":"ContainerStarted","Data":"0b4799b0db08fa6a42936320c9644050fec7f4afc829198648111bb9356b3ee8"} Feb 17 20:11:46 crc kubenswrapper[4793]: I0217 20:11:46.564593 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-nwsr4" event={"ID":"b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019","Type":"ContainerStarted","Data":"f5afc0c5854a5394e1d8ed6f038f9f009fc22e211c5e03eaa81aa1d29d203909"} Feb 17 20:11:46 crc kubenswrapper[4793]: I0217 20:11:46.589550 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cpdmp" event={"ID":"a94875ee-0253-456e-a8c8-68be4676bb88","Type":"ContainerStarted","Data":"0616e3c90ca60bf3175b34672c22624742e4a15164e445a959c9034522b647a7"} Feb 17 20:11:46 crc kubenswrapper[4793]: I0217 20:11:46.591372 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ckt4v" event={"ID":"6030c7c6-83d3-4ee2-95de-2c82b9cf2e89","Type":"ContainerStarted","Data":"8c0cf01e9ff1e36e291259617c38b93699bc1ad709be0779bd2c1e6d4102edd9"} Feb 17 20:11:46 crc kubenswrapper[4793]: W0217 20:11:46.609270 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b10b2a4_49e2_47bf_8de8_f5228bfd3ffd.slice/crio-dd8ca2356b83b3bf694b9d6bc04dc387b3b05095181631df50f43e3aa1292a98 WatchSource:0}: Error finding container dd8ca2356b83b3bf694b9d6bc04dc387b3b05095181631df50f43e3aa1292a98: Status 404 returned error can't find the container with id dd8ca2356b83b3bf694b9d6bc04dc387b3b05095181631df50f43e3aa1292a98 Feb 17 20:11:46 crc kubenswrapper[4793]: I0217 20:11:46.746476 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76b8798656-vk5gl"] Feb 17 20:11:46 crc kubenswrapper[4793]: I0217 20:11:46.887511 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5f89d69db4-rfszz"] Feb 17 20:11:47 crc kubenswrapper[4793]: W0217 20:11:47.003155 4793 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod357049a8_046f_4159_80e4_e2865eec6f24.slice/crio-0254e547e8e832a0344e429d708842633ed16cbc06e28636c2f9ec62e69cc71d WatchSource:0}: Error finding container 0254e547e8e832a0344e429d708842633ed16cbc06e28636c2f9ec62e69cc71d: Status 404 returned error can't find the container with id 0254e547e8e832a0344e429d708842633ed16cbc06e28636c2f9ec62e69cc71d Feb 17 20:11:47 crc kubenswrapper[4793]: E0217 20:11:47.057416 4793 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6aafddcd_6479_40b9_95a0_fa07713b5068.slice/crio-a9f32bdf2ded29735bfc40310d86c966b163e0681a2eb5a3929245886abba61a.scope\": RecentStats: unable to find data in memory cache]" Feb 17 20:11:47 crc kubenswrapper[4793]: I0217 20:11:47.599207 4793 generic.go:334] "Generic (PLEG): container finished" podID="a94875ee-0253-456e-a8c8-68be4676bb88" containerID="0616e3c90ca60bf3175b34672c22624742e4a15164e445a959c9034522b647a7" exitCode=0 Feb 17 20:11:47 crc kubenswrapper[4793]: I0217 20:11:47.599286 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cpdmp" event={"ID":"a94875ee-0253-456e-a8c8-68be4676bb88","Type":"ContainerDied","Data":"0616e3c90ca60bf3175b34672c22624742e4a15164e445a959c9034522b647a7"} Feb 17 20:11:47 crc kubenswrapper[4793]: I0217 20:11:47.600326 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-6trvs" event={"ID":"0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd","Type":"ContainerStarted","Data":"dd8ca2356b83b3bf694b9d6bc04dc387b3b05095181631df50f43e3aa1292a98"} Feb 17 20:11:47 crc kubenswrapper[4793]: I0217 20:11:47.604757 4793 generic.go:334] "Generic (PLEG): container finished" podID="e998269d-55e1-4dd7-b050-08983af268e6" containerID="078885407b6b5044724031909b8d53b0a1ba1aafde3942ae9cb81559d5ba9b18" exitCode=0 Feb 17 20:11:47 crc 
kubenswrapper[4793]: I0217 20:11:47.604824 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xr48d" event={"ID":"e998269d-55e1-4dd7-b050-08983af268e6","Type":"ContainerDied","Data":"078885407b6b5044724031909b8d53b0a1ba1aafde3942ae9cb81559d5ba9b18"} Feb 17 20:11:47 crc kubenswrapper[4793]: I0217 20:11:47.607901 4793 generic.go:334] "Generic (PLEG): container finished" podID="19a7ae66-8a05-4413-8085-8455f146e98c" containerID="0b4799b0db08fa6a42936320c9644050fec7f4afc829198648111bb9356b3ee8" exitCode=0 Feb 17 20:11:47 crc kubenswrapper[4793]: I0217 20:11:47.607970 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4x565" event={"ID":"19a7ae66-8a05-4413-8085-8455f146e98c","Type":"ContainerDied","Data":"0b4799b0db08fa6a42936320c9644050fec7f4afc829198648111bb9356b3ee8"} Feb 17 20:11:47 crc kubenswrapper[4793]: I0217 20:11:47.610478 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f89d69db4-rfszz" event={"ID":"357049a8-046f-4159-80e4-e2865eec6f24","Type":"ContainerStarted","Data":"0254e547e8e832a0344e429d708842633ed16cbc06e28636c2f9ec62e69cc71d"} Feb 17 20:11:47 crc kubenswrapper[4793]: I0217 20:11:47.612230 4793 generic.go:334] "Generic (PLEG): container finished" podID="e0eded1f-cb89-4a85-9e58-59ccef72584b" containerID="d7230bc7aadfbf84fe7faeee0589d0dda3beb7e449a4cea09e883c57c216ce0c" exitCode=0 Feb 17 20:11:47 crc kubenswrapper[4793]: I0217 20:11:47.612271 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gm5th" event={"ID":"e0eded1f-cb89-4a85-9e58-59ccef72584b","Type":"ContainerDied","Data":"d7230bc7aadfbf84fe7faeee0589d0dda3beb7e449a4cea09e883c57c216ce0c"} Feb 17 20:11:47 crc kubenswrapper[4793]: I0217 20:11:47.620963 4793 generic.go:334] "Generic (PLEG): container finished" podID="6aafddcd-6479-40b9-95a0-fa07713b5068" 
containerID="a9f32bdf2ded29735bfc40310d86c966b163e0681a2eb5a3929245886abba61a" exitCode=0 Feb 17 20:11:47 crc kubenswrapper[4793]: I0217 20:11:47.621071 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dfdp2" event={"ID":"6aafddcd-6479-40b9-95a0-fa07713b5068","Type":"ContainerDied","Data":"a9f32bdf2ded29735bfc40310d86c966b163e0681a2eb5a3929245886abba61a"} Feb 17 20:11:47 crc kubenswrapper[4793]: I0217 20:11:47.625599 4793 generic.go:334] "Generic (PLEG): container finished" podID="6030c7c6-83d3-4ee2-95de-2c82b9cf2e89" containerID="8c0cf01e9ff1e36e291259617c38b93699bc1ad709be0779bd2c1e6d4102edd9" exitCode=0 Feb 17 20:11:47 crc kubenswrapper[4793]: I0217 20:11:47.625659 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ckt4v" event={"ID":"6030c7c6-83d3-4ee2-95de-2c82b9cf2e89","Type":"ContainerDied","Data":"8c0cf01e9ff1e36e291259617c38b93699bc1ad709be0779bd2c1e6d4102edd9"} Feb 17 20:11:47 crc kubenswrapper[4793]: I0217 20:11:47.628768 4793 generic.go:334] "Generic (PLEG): container finished" podID="7b3a754f-730e-4e58-a6d0-fac36125b7f2" containerID="ca32e3d30e0a18086c05d52846cde711d763d68f1f8d29a8f2bcce3677aeb8b9" exitCode=0 Feb 17 20:11:47 crc kubenswrapper[4793]: I0217 20:11:47.628809 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8fhps" event={"ID":"7b3a754f-730e-4e58-a6d0-fac36125b7f2","Type":"ContainerDied","Data":"ca32e3d30e0a18086c05d52846cde711d763d68f1f8d29a8f2bcce3677aeb8b9"} Feb 17 20:11:47 crc kubenswrapper[4793]: I0217 20:11:47.631447 4793 generic.go:334] "Generic (PLEG): container finished" podID="b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019" containerID="f5afc0c5854a5394e1d8ed6f038f9f009fc22e211c5e03eaa81aa1d29d203909" exitCode=0 Feb 17 20:11:47 crc kubenswrapper[4793]: I0217 20:11:47.631507 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nwsr4" 
event={"ID":"b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019","Type":"ContainerDied","Data":"f5afc0c5854a5394e1d8ed6f038f9f009fc22e211c5e03eaa81aa1d29d203909"} Feb 17 20:11:47 crc kubenswrapper[4793]: I0217 20:11:47.634486 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76b8798656-vk5gl" event={"ID":"21356257-3793-4104-9495-28721c4d51d9","Type":"ContainerStarted","Data":"0a3e8ef83e896d3e7b4317476a77dde15d320de405c11c6d2c47efd4c7578d7b"} Feb 17 20:11:48 crc kubenswrapper[4793]: I0217 20:11:48.145435 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m4r4g" Feb 17 20:11:48 crc kubenswrapper[4793]: I0217 20:11:48.642109 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f89d69db4-rfszz" event={"ID":"357049a8-046f-4159-80e4-e2865eec6f24","Type":"ContainerStarted","Data":"c75aace190091088ae24598b250805d7cbba54886a3d36456a50f78d1ea0d6c0"} Feb 17 20:11:48 crc kubenswrapper[4793]: I0217 20:11:48.642468 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5f89d69db4-rfszz" Feb 17 20:11:48 crc kubenswrapper[4793]: I0217 20:11:48.643467 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76b8798656-vk5gl" event={"ID":"21356257-3793-4104-9495-28721c4d51d9","Type":"ContainerStarted","Data":"ea272302eb1a8e2fb262f9425ca3f3cb686541c18d19255ed6e9d9875c1f13fa"} Feb 17 20:11:48 crc kubenswrapper[4793]: I0217 20:11:48.643763 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-76b8798656-vk5gl" Feb 17 20:11:48 crc kubenswrapper[4793]: I0217 20:11:48.645818 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/network-metrics-daemon-6trvs" event={"ID":"0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd","Type":"ContainerStarted","Data":"aa6ed6aa4cb250a0edea95c3cc18e58d2baecf8ce440372c161a348e954a6993"} Feb 17 20:11:48 crc kubenswrapper[4793]: I0217 20:11:48.645857 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-6trvs" event={"ID":"0b10b2a4-49e2-47bf-8de8-f5228bfd3ffd","Type":"ContainerStarted","Data":"cd7727b2b5d14d64cc981ce43d05f27134ec357f8f8a90c8e7af26bebb8bfdde"} Feb 17 20:11:48 crc kubenswrapper[4793]: I0217 20:11:48.647235 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5f89d69db4-rfszz" Feb 17 20:11:48 crc kubenswrapper[4793]: I0217 20:11:48.650113 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-76b8798656-vk5gl" Feb 17 20:11:48 crc kubenswrapper[4793]: I0217 20:11:48.660745 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5f89d69db4-rfszz" podStartSLOduration=19.660730463 podStartE2EDuration="19.660730463s" podCreationTimestamp="2026-02-17 20:11:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:11:48.658395537 +0000 UTC m=+183.950093888" watchObservedRunningTime="2026-02-17 20:11:48.660730463 +0000 UTC m=+183.952428774" Feb 17 20:11:48 crc kubenswrapper[4793]: I0217 20:11:48.677682 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-6trvs" podStartSLOduration=160.67765709 podStartE2EDuration="2m40.67765709s" podCreationTimestamp="2026-02-17 20:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 
20:11:48.671715567 +0000 UTC m=+183.963413878" watchObservedRunningTime="2026-02-17 20:11:48.67765709 +0000 UTC m=+183.969355431" Feb 17 20:11:48 crc kubenswrapper[4793]: I0217 20:11:48.713528 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-76b8798656-vk5gl" podStartSLOduration=19.713507551 podStartE2EDuration="19.713507551s" podCreationTimestamp="2026-02-17 20:11:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:11:48.713195244 +0000 UTC m=+184.004893565" watchObservedRunningTime="2026-02-17 20:11:48.713507551 +0000 UTC m=+184.005205872" Feb 17 20:11:49 crc kubenswrapper[4793]: I0217 20:11:49.664305 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5f89d69db4-rfszz"] Feb 17 20:11:49 crc kubenswrapper[4793]: I0217 20:11:49.669379 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76b8798656-vk5gl"] Feb 17 20:11:50 crc kubenswrapper[4793]: I0217 20:11:50.101825 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 20:11:50 crc kubenswrapper[4793]: I0217 20:11:50.101886 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 20:11:50 crc kubenswrapper[4793]: I0217 20:11:50.685496 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-xr48d" event={"ID":"e998269d-55e1-4dd7-b050-08983af268e6","Type":"ContainerStarted","Data":"8a0a9cb088cf4c61e9b69b964821129ec73181ea80de2012323d4f3a7152a324"} Feb 17 20:11:50 crc kubenswrapper[4793]: I0217 20:11:50.701490 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xr48d" podStartSLOduration=3.180499999 podStartE2EDuration="35.701472096s" podCreationTimestamp="2026-02-17 20:11:15 +0000 UTC" firstStartedPulling="2026-02-17 20:11:17.591152951 +0000 UTC m=+152.882851262" lastFinishedPulling="2026-02-17 20:11:50.112125048 +0000 UTC m=+185.403823359" observedRunningTime="2026-02-17 20:11:50.699294554 +0000 UTC m=+185.990992885" watchObservedRunningTime="2026-02-17 20:11:50.701472096 +0000 UTC m=+185.993170407" Feb 17 20:11:51 crc kubenswrapper[4793]: I0217 20:11:51.691055 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5f89d69db4-rfszz" podUID="357049a8-046f-4159-80e4-e2865eec6f24" containerName="controller-manager" containerID="cri-o://c75aace190091088ae24598b250805d7cbba54886a3d36456a50f78d1ea0d6c0" gracePeriod=30 Feb 17 20:11:51 crc kubenswrapper[4793]: I0217 20:11:51.691673 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-76b8798656-vk5gl" podUID="21356257-3793-4104-9495-28721c4d51d9" containerName="route-controller-manager" containerID="cri-o://ea272302eb1a8e2fb262f9425ca3f3cb686541c18d19255ed6e9d9875c1f13fa" gracePeriod=30 Feb 17 20:11:52 crc kubenswrapper[4793]: I0217 20:11:52.705094 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4x565" event={"ID":"19a7ae66-8a05-4413-8085-8455f146e98c","Type":"ContainerStarted","Data":"4f5c578b0068b9613e0205b28693d728acb34ab224e175fb4d3511d506fb3d06"} Feb 17 20:11:52 crc kubenswrapper[4793]: 
I0217 20:11:52.709280 4793 generic.go:334] "Generic (PLEG): container finished" podID="357049a8-046f-4159-80e4-e2865eec6f24" containerID="c75aace190091088ae24598b250805d7cbba54886a3d36456a50f78d1ea0d6c0" exitCode=0 Feb 17 20:11:52 crc kubenswrapper[4793]: I0217 20:11:52.709352 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f89d69db4-rfszz" event={"ID":"357049a8-046f-4159-80e4-e2865eec6f24","Type":"ContainerDied","Data":"c75aace190091088ae24598b250805d7cbba54886a3d36456a50f78d1ea0d6c0"} Feb 17 20:11:52 crc kubenswrapper[4793]: I0217 20:11:52.724530 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4x565" podStartSLOduration=3.451702347 podStartE2EDuration="36.724510335s" podCreationTimestamp="2026-02-17 20:11:16 +0000 UTC" firstStartedPulling="2026-02-17 20:11:18.597587198 +0000 UTC m=+153.889285509" lastFinishedPulling="2026-02-17 20:11:51.870395176 +0000 UTC m=+187.162093497" observedRunningTime="2026-02-17 20:11:52.722166668 +0000 UTC m=+188.013864979" watchObservedRunningTime="2026-02-17 20:11:52.724510335 +0000 UTC m=+188.016208656" Feb 17 20:11:52 crc kubenswrapper[4793]: I0217 20:11:52.724589 4793 generic.go:334] "Generic (PLEG): container finished" podID="21356257-3793-4104-9495-28721c4d51d9" containerID="ea272302eb1a8e2fb262f9425ca3f3cb686541c18d19255ed6e9d9875c1f13fa" exitCode=0 Feb 17 20:11:52 crc kubenswrapper[4793]: I0217 20:11:52.724651 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76b8798656-vk5gl" event={"ID":"21356257-3793-4104-9495-28721c4d51d9","Type":"ContainerDied","Data":"ea272302eb1a8e2fb262f9425ca3f3cb686541c18d19255ed6e9d9875c1f13fa"} Feb 17 20:11:52 crc kubenswrapper[4793]: I0217 20:11:52.841198 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76b8798656-vk5gl" Feb 17 20:11:52 crc kubenswrapper[4793]: I0217 20:11:52.870428 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d6988cbfb-h8hck"] Feb 17 20:11:52 crc kubenswrapper[4793]: E0217 20:11:52.870618 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21356257-3793-4104-9495-28721c4d51d9" containerName="route-controller-manager" Feb 17 20:11:52 crc kubenswrapper[4793]: I0217 20:11:52.870628 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="21356257-3793-4104-9495-28721c4d51d9" containerName="route-controller-manager" Feb 17 20:11:52 crc kubenswrapper[4793]: I0217 20:11:52.870744 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="21356257-3793-4104-9495-28721c4d51d9" containerName="route-controller-manager" Feb 17 20:11:52 crc kubenswrapper[4793]: I0217 20:11:52.871096 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d6988cbfb-h8hck" Feb 17 20:11:52 crc kubenswrapper[4793]: I0217 20:11:52.876491 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d6988cbfb-h8hck"] Feb 17 20:11:52 crc kubenswrapper[4793]: I0217 20:11:52.933614 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5f89d69db4-rfszz" Feb 17 20:11:53 crc kubenswrapper[4793]: I0217 20:11:53.000523 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/357049a8-046f-4159-80e4-e2865eec6f24-client-ca\") pod \"357049a8-046f-4159-80e4-e2865eec6f24\" (UID: \"357049a8-046f-4159-80e4-e2865eec6f24\") " Feb 17 20:11:53 crc kubenswrapper[4793]: I0217 20:11:53.000579 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21356257-3793-4104-9495-28721c4d51d9-client-ca\") pod \"21356257-3793-4104-9495-28721c4d51d9\" (UID: \"21356257-3793-4104-9495-28721c4d51d9\") " Feb 17 20:11:53 crc kubenswrapper[4793]: I0217 20:11:53.000599 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/357049a8-046f-4159-80e4-e2865eec6f24-proxy-ca-bundles\") pod \"357049a8-046f-4159-80e4-e2865eec6f24\" (UID: \"357049a8-046f-4159-80e4-e2865eec6f24\") " Feb 17 20:11:53 crc kubenswrapper[4793]: I0217 20:11:53.000633 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxp4n\" (UniqueName: \"kubernetes.io/projected/21356257-3793-4104-9495-28721c4d51d9-kube-api-access-jxp4n\") pod \"21356257-3793-4104-9495-28721c4d51d9\" (UID: \"21356257-3793-4104-9495-28721c4d51d9\") " Feb 17 20:11:53 crc kubenswrapper[4793]: I0217 20:11:53.000651 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8l7k\" (UniqueName: \"kubernetes.io/projected/357049a8-046f-4159-80e4-e2865eec6f24-kube-api-access-j8l7k\") pod \"357049a8-046f-4159-80e4-e2865eec6f24\" (UID: \"357049a8-046f-4159-80e4-e2865eec6f24\") " Feb 17 20:11:53 crc kubenswrapper[4793]: I0217 20:11:53.000704 4793 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/357049a8-046f-4159-80e4-e2865eec6f24-config\") pod \"357049a8-046f-4159-80e4-e2865eec6f24\" (UID: \"357049a8-046f-4159-80e4-e2865eec6f24\") " Feb 17 20:11:53 crc kubenswrapper[4793]: I0217 20:11:53.000721 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21356257-3793-4104-9495-28721c4d51d9-config\") pod \"21356257-3793-4104-9495-28721c4d51d9\" (UID: \"21356257-3793-4104-9495-28721c4d51d9\") " Feb 17 20:11:53 crc kubenswrapper[4793]: I0217 20:11:53.000754 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21356257-3793-4104-9495-28721c4d51d9-serving-cert\") pod \"21356257-3793-4104-9495-28721c4d51d9\" (UID: \"21356257-3793-4104-9495-28721c4d51d9\") " Feb 17 20:11:53 crc kubenswrapper[4793]: I0217 20:11:53.000772 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/357049a8-046f-4159-80e4-e2865eec6f24-serving-cert\") pod \"357049a8-046f-4159-80e4-e2865eec6f24\" (UID: \"357049a8-046f-4159-80e4-e2865eec6f24\") " Feb 17 20:11:53 crc kubenswrapper[4793]: I0217 20:11:53.000890 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aef3e13c-918a-4296-8b80-109a6ead0eea-serving-cert\") pod \"route-controller-manager-6d6988cbfb-h8hck\" (UID: \"aef3e13c-918a-4296-8b80-109a6ead0eea\") " pod="openshift-route-controller-manager/route-controller-manager-6d6988cbfb-h8hck" Feb 17 20:11:53 crc kubenswrapper[4793]: I0217 20:11:53.000924 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6c6m\" (UniqueName: 
\"kubernetes.io/projected/aef3e13c-918a-4296-8b80-109a6ead0eea-kube-api-access-n6c6m\") pod \"route-controller-manager-6d6988cbfb-h8hck\" (UID: \"aef3e13c-918a-4296-8b80-109a6ead0eea\") " pod="openshift-route-controller-manager/route-controller-manager-6d6988cbfb-h8hck" Feb 17 20:11:53 crc kubenswrapper[4793]: I0217 20:11:53.000941 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aef3e13c-918a-4296-8b80-109a6ead0eea-client-ca\") pod \"route-controller-manager-6d6988cbfb-h8hck\" (UID: \"aef3e13c-918a-4296-8b80-109a6ead0eea\") " pod="openshift-route-controller-manager/route-controller-manager-6d6988cbfb-h8hck" Feb 17 20:11:53 crc kubenswrapper[4793]: I0217 20:11:53.000998 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aef3e13c-918a-4296-8b80-109a6ead0eea-config\") pod \"route-controller-manager-6d6988cbfb-h8hck\" (UID: \"aef3e13c-918a-4296-8b80-109a6ead0eea\") " pod="openshift-route-controller-manager/route-controller-manager-6d6988cbfb-h8hck" Feb 17 20:11:53 crc kubenswrapper[4793]: I0217 20:11:53.001586 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/357049a8-046f-4159-80e4-e2865eec6f24-client-ca" (OuterVolumeSpecName: "client-ca") pod "357049a8-046f-4159-80e4-e2865eec6f24" (UID: "357049a8-046f-4159-80e4-e2865eec6f24"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:11:53 crc kubenswrapper[4793]: I0217 20:11:53.001876 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21356257-3793-4104-9495-28721c4d51d9-client-ca" (OuterVolumeSpecName: "client-ca") pod "21356257-3793-4104-9495-28721c4d51d9" (UID: "21356257-3793-4104-9495-28721c4d51d9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:11:53 crc kubenswrapper[4793]: I0217 20:11:53.002117 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/357049a8-046f-4159-80e4-e2865eec6f24-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "357049a8-046f-4159-80e4-e2865eec6f24" (UID: "357049a8-046f-4159-80e4-e2865eec6f24"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:11:53 crc kubenswrapper[4793]: I0217 20:11:53.006803 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/357049a8-046f-4159-80e4-e2865eec6f24-config" (OuterVolumeSpecName: "config") pod "357049a8-046f-4159-80e4-e2865eec6f24" (UID: "357049a8-046f-4159-80e4-e2865eec6f24"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:11:53 crc kubenswrapper[4793]: I0217 20:11:53.007402 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21356257-3793-4104-9495-28721c4d51d9-config" (OuterVolumeSpecName: "config") pod "21356257-3793-4104-9495-28721c4d51d9" (UID: "21356257-3793-4104-9495-28721c4d51d9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:11:53 crc kubenswrapper[4793]: I0217 20:11:53.007621 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21356257-3793-4104-9495-28721c4d51d9-kube-api-access-jxp4n" (OuterVolumeSpecName: "kube-api-access-jxp4n") pod "21356257-3793-4104-9495-28721c4d51d9" (UID: "21356257-3793-4104-9495-28721c4d51d9"). InnerVolumeSpecName "kube-api-access-jxp4n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:11:53 crc kubenswrapper[4793]: I0217 20:11:53.009271 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/357049a8-046f-4159-80e4-e2865eec6f24-kube-api-access-j8l7k" (OuterVolumeSpecName: "kube-api-access-j8l7k") pod "357049a8-046f-4159-80e4-e2865eec6f24" (UID: "357049a8-046f-4159-80e4-e2865eec6f24"). InnerVolumeSpecName "kube-api-access-j8l7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:11:53 crc kubenswrapper[4793]: I0217 20:11:53.010054 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/357049a8-046f-4159-80e4-e2865eec6f24-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "357049a8-046f-4159-80e4-e2865eec6f24" (UID: "357049a8-046f-4159-80e4-e2865eec6f24"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:11:53 crc kubenswrapper[4793]: I0217 20:11:53.011570 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21356257-3793-4104-9495-28721c4d51d9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "21356257-3793-4104-9495-28721c4d51d9" (UID: "21356257-3793-4104-9495-28721c4d51d9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:11:53 crc kubenswrapper[4793]: I0217 20:11:53.102440 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aef3e13c-918a-4296-8b80-109a6ead0eea-config\") pod \"route-controller-manager-6d6988cbfb-h8hck\" (UID: \"aef3e13c-918a-4296-8b80-109a6ead0eea\") " pod="openshift-route-controller-manager/route-controller-manager-6d6988cbfb-h8hck" Feb 17 20:11:53 crc kubenswrapper[4793]: I0217 20:11:53.102811 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aef3e13c-918a-4296-8b80-109a6ead0eea-serving-cert\") pod \"route-controller-manager-6d6988cbfb-h8hck\" (UID: \"aef3e13c-918a-4296-8b80-109a6ead0eea\") " pod="openshift-route-controller-manager/route-controller-manager-6d6988cbfb-h8hck" Feb 17 20:11:53 crc kubenswrapper[4793]: I0217 20:11:53.102983 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6c6m\" (UniqueName: \"kubernetes.io/projected/aef3e13c-918a-4296-8b80-109a6ead0eea-kube-api-access-n6c6m\") pod \"route-controller-manager-6d6988cbfb-h8hck\" (UID: \"aef3e13c-918a-4296-8b80-109a6ead0eea\") " pod="openshift-route-controller-manager/route-controller-manager-6d6988cbfb-h8hck" Feb 17 20:11:53 crc kubenswrapper[4793]: I0217 20:11:53.103073 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aef3e13c-918a-4296-8b80-109a6ead0eea-client-ca\") pod \"route-controller-manager-6d6988cbfb-h8hck\" (UID: \"aef3e13c-918a-4296-8b80-109a6ead0eea\") " pod="openshift-route-controller-manager/route-controller-manager-6d6988cbfb-h8hck" Feb 17 20:11:53 crc kubenswrapper[4793]: I0217 20:11:53.103213 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/21356257-3793-4104-9495-28721c4d51d9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 20:11:53 crc kubenswrapper[4793]: I0217 20:11:53.103295 4793 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/357049a8-046f-4159-80e4-e2865eec6f24-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 17 20:11:53 crc kubenswrapper[4793]: I0217 20:11:53.103356 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxp4n\" (UniqueName: \"kubernetes.io/projected/21356257-3793-4104-9495-28721c4d51d9-kube-api-access-jxp4n\") on node \"crc\" DevicePath \"\"" Feb 17 20:11:53 crc kubenswrapper[4793]: I0217 20:11:53.104151 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8l7k\" (UniqueName: \"kubernetes.io/projected/357049a8-046f-4159-80e4-e2865eec6f24-kube-api-access-j8l7k\") on node \"crc\" DevicePath \"\"" Feb 17 20:11:53 crc kubenswrapper[4793]: I0217 20:11:53.104227 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/357049a8-046f-4159-80e4-e2865eec6f24-config\") on node \"crc\" DevicePath \"\"" Feb 17 20:11:53 crc kubenswrapper[4793]: I0217 20:11:53.104297 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21356257-3793-4104-9495-28721c4d51d9-config\") on node \"crc\" DevicePath \"\"" Feb 17 20:11:53 crc kubenswrapper[4793]: I0217 20:11:53.104365 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21356257-3793-4104-9495-28721c4d51d9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 20:11:53 crc kubenswrapper[4793]: I0217 20:11:53.104422 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/357049a8-046f-4159-80e4-e2865eec6f24-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 20:11:53 crc 
kubenswrapper[4793]: I0217 20:11:53.104475 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/357049a8-046f-4159-80e4-e2865eec6f24-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 20:11:53 crc kubenswrapper[4793]: I0217 20:11:53.125485 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aef3e13c-918a-4296-8b80-109a6ead0eea-client-ca\") pod \"route-controller-manager-6d6988cbfb-h8hck\" (UID: \"aef3e13c-918a-4296-8b80-109a6ead0eea\") " pod="openshift-route-controller-manager/route-controller-manager-6d6988cbfb-h8hck" Feb 17 20:11:53 crc kubenswrapper[4793]: I0217 20:11:53.125631 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aef3e13c-918a-4296-8b80-109a6ead0eea-config\") pod \"route-controller-manager-6d6988cbfb-h8hck\" (UID: \"aef3e13c-918a-4296-8b80-109a6ead0eea\") " pod="openshift-route-controller-manager/route-controller-manager-6d6988cbfb-h8hck" Feb 17 20:11:53 crc kubenswrapper[4793]: I0217 20:11:53.125824 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aef3e13c-918a-4296-8b80-109a6ead0eea-serving-cert\") pod \"route-controller-manager-6d6988cbfb-h8hck\" (UID: \"aef3e13c-918a-4296-8b80-109a6ead0eea\") " pod="openshift-route-controller-manager/route-controller-manager-6d6988cbfb-h8hck" Feb 17 20:11:53 crc kubenswrapper[4793]: I0217 20:11:53.125854 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6c6m\" (UniqueName: \"kubernetes.io/projected/aef3e13c-918a-4296-8b80-109a6ead0eea-kube-api-access-n6c6m\") pod \"route-controller-manager-6d6988cbfb-h8hck\" (UID: \"aef3e13c-918a-4296-8b80-109a6ead0eea\") " pod="openshift-route-controller-manager/route-controller-manager-6d6988cbfb-h8hck" Feb 17 20:11:53 crc kubenswrapper[4793]: I0217 
20:11:53.195329 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d6988cbfb-h8hck" Feb 17 20:11:53 crc kubenswrapper[4793]: I0217 20:11:53.731406 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f89d69db4-rfszz" event={"ID":"357049a8-046f-4159-80e4-e2865eec6f24","Type":"ContainerDied","Data":"0254e547e8e832a0344e429d708842633ed16cbc06e28636c2f9ec62e69cc71d"} Feb 17 20:11:53 crc kubenswrapper[4793]: I0217 20:11:53.731415 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5f89d69db4-rfszz" Feb 17 20:11:53 crc kubenswrapper[4793]: I0217 20:11:53.731866 4793 scope.go:117] "RemoveContainer" containerID="c75aace190091088ae24598b250805d7cbba54886a3d36456a50f78d1ea0d6c0" Feb 17 20:11:53 crc kubenswrapper[4793]: I0217 20:11:53.732734 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76b8798656-vk5gl" event={"ID":"21356257-3793-4104-9495-28721c4d51d9","Type":"ContainerDied","Data":"0a3e8ef83e896d3e7b4317476a77dde15d320de405c11c6d2c47efd4c7578d7b"} Feb 17 20:11:53 crc kubenswrapper[4793]: I0217 20:11:53.732870 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76b8798656-vk5gl" Feb 17 20:11:53 crc kubenswrapper[4793]: I0217 20:11:53.760177 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5f89d69db4-rfszz"] Feb 17 20:11:53 crc kubenswrapper[4793]: I0217 20:11:53.771621 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5f89d69db4-rfszz"] Feb 17 20:11:53 crc kubenswrapper[4793]: I0217 20:11:53.776779 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76b8798656-vk5gl"] Feb 17 20:11:53 crc kubenswrapper[4793]: I0217 20:11:53.781341 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76b8798656-vk5gl"] Feb 17 20:11:54 crc kubenswrapper[4793]: I0217 20:11:54.022018 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 20:11:54 crc kubenswrapper[4793]: I0217 20:11:54.100338 4793 scope.go:117] "RemoveContainer" containerID="ea272302eb1a8e2fb262f9425ca3f3cb686541c18d19255ed6e9d9875c1f13fa" Feb 17 20:11:54 crc kubenswrapper[4793]: I0217 20:11:54.926761 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-86554fb6bd-cl2hq"] Feb 17 20:11:54 crc kubenswrapper[4793]: E0217 20:11:54.927723 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="357049a8-046f-4159-80e4-e2865eec6f24" containerName="controller-manager" Feb 17 20:11:54 crc kubenswrapper[4793]: I0217 20:11:54.927735 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="357049a8-046f-4159-80e4-e2865eec6f24" containerName="controller-manager" Feb 17 20:11:54 crc kubenswrapper[4793]: I0217 20:11:54.927839 4793 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="357049a8-046f-4159-80e4-e2865eec6f24" containerName="controller-manager" Feb 17 20:11:54 crc kubenswrapper[4793]: I0217 20:11:54.928322 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86554fb6bd-cl2hq" Feb 17 20:11:54 crc kubenswrapper[4793]: I0217 20:11:54.934178 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 17 20:11:54 crc kubenswrapper[4793]: I0217 20:11:54.934527 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 17 20:11:54 crc kubenswrapper[4793]: I0217 20:11:54.934784 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 17 20:11:54 crc kubenswrapper[4793]: I0217 20:11:54.934834 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 17 20:11:54 crc kubenswrapper[4793]: I0217 20:11:54.935001 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 17 20:11:54 crc kubenswrapper[4793]: I0217 20:11:54.945477 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 17 20:11:54 crc kubenswrapper[4793]: I0217 20:11:54.945639 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 17 20:11:54 crc kubenswrapper[4793]: I0217 20:11:54.979731 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86554fb6bd-cl2hq"] Feb 17 20:11:54 crc kubenswrapper[4793]: I0217 20:11:54.983471 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d6988cbfb-h8hck"] Feb 17 20:11:54 crc 
kubenswrapper[4793]: W0217 20:11:54.988260 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaef3e13c_918a_4296_8b80_109a6ead0eea.slice/crio-57f4a2b85237938309b363486d32d9742e2962cad0db24e9d7114e1b128d9057 WatchSource:0}: Error finding container 57f4a2b85237938309b363486d32d9742e2962cad0db24e9d7114e1b128d9057: Status 404 returned error can't find the container with id 57f4a2b85237938309b363486d32d9742e2962cad0db24e9d7114e1b128d9057 Feb 17 20:11:55 crc kubenswrapper[4793]: I0217 20:11:55.124481 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0edc34f7-8747-4335-8a87-94d585057ac3-config\") pod \"controller-manager-86554fb6bd-cl2hq\" (UID: \"0edc34f7-8747-4335-8a87-94d585057ac3\") " pod="openshift-controller-manager/controller-manager-86554fb6bd-cl2hq" Feb 17 20:11:55 crc kubenswrapper[4793]: I0217 20:11:55.124523 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r62pv\" (UniqueName: \"kubernetes.io/projected/0edc34f7-8747-4335-8a87-94d585057ac3-kube-api-access-r62pv\") pod \"controller-manager-86554fb6bd-cl2hq\" (UID: \"0edc34f7-8747-4335-8a87-94d585057ac3\") " pod="openshift-controller-manager/controller-manager-86554fb6bd-cl2hq" Feb 17 20:11:55 crc kubenswrapper[4793]: I0217 20:11:55.124544 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0edc34f7-8747-4335-8a87-94d585057ac3-client-ca\") pod \"controller-manager-86554fb6bd-cl2hq\" (UID: \"0edc34f7-8747-4335-8a87-94d585057ac3\") " pod="openshift-controller-manager/controller-manager-86554fb6bd-cl2hq" Feb 17 20:11:55 crc kubenswrapper[4793]: I0217 20:11:55.124567 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0edc34f7-8747-4335-8a87-94d585057ac3-proxy-ca-bundles\") pod \"controller-manager-86554fb6bd-cl2hq\" (UID: \"0edc34f7-8747-4335-8a87-94d585057ac3\") " pod="openshift-controller-manager/controller-manager-86554fb6bd-cl2hq" Feb 17 20:11:55 crc kubenswrapper[4793]: I0217 20:11:55.124724 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0edc34f7-8747-4335-8a87-94d585057ac3-serving-cert\") pod \"controller-manager-86554fb6bd-cl2hq\" (UID: \"0edc34f7-8747-4335-8a87-94d585057ac3\") " pod="openshift-controller-manager/controller-manager-86554fb6bd-cl2hq" Feb 17 20:11:55 crc kubenswrapper[4793]: I0217 20:11:55.135587 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 17 20:11:55 crc kubenswrapper[4793]: I0217 20:11:55.136379 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 20:11:55 crc kubenswrapper[4793]: I0217 20:11:55.139246 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 17 20:11:55 crc kubenswrapper[4793]: I0217 20:11:55.139332 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 17 20:11:55 crc kubenswrapper[4793]: I0217 20:11:55.145100 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 17 20:11:55 crc kubenswrapper[4793]: I0217 20:11:55.225861 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0edc34f7-8747-4335-8a87-94d585057ac3-serving-cert\") pod \"controller-manager-86554fb6bd-cl2hq\" (UID: \"0edc34f7-8747-4335-8a87-94d585057ac3\") " 
pod="openshift-controller-manager/controller-manager-86554fb6bd-cl2hq" Feb 17 20:11:55 crc kubenswrapper[4793]: I0217 20:11:55.225937 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0edc34f7-8747-4335-8a87-94d585057ac3-config\") pod \"controller-manager-86554fb6bd-cl2hq\" (UID: \"0edc34f7-8747-4335-8a87-94d585057ac3\") " pod="openshift-controller-manager/controller-manager-86554fb6bd-cl2hq" Feb 17 20:11:55 crc kubenswrapper[4793]: I0217 20:11:55.225960 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r62pv\" (UniqueName: \"kubernetes.io/projected/0edc34f7-8747-4335-8a87-94d585057ac3-kube-api-access-r62pv\") pod \"controller-manager-86554fb6bd-cl2hq\" (UID: \"0edc34f7-8747-4335-8a87-94d585057ac3\") " pod="openshift-controller-manager/controller-manager-86554fb6bd-cl2hq" Feb 17 20:11:55 crc kubenswrapper[4793]: I0217 20:11:55.225976 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0edc34f7-8747-4335-8a87-94d585057ac3-client-ca\") pod \"controller-manager-86554fb6bd-cl2hq\" (UID: \"0edc34f7-8747-4335-8a87-94d585057ac3\") " pod="openshift-controller-manager/controller-manager-86554fb6bd-cl2hq" Feb 17 20:11:55 crc kubenswrapper[4793]: I0217 20:11:55.226012 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0edc34f7-8747-4335-8a87-94d585057ac3-proxy-ca-bundles\") pod \"controller-manager-86554fb6bd-cl2hq\" (UID: \"0edc34f7-8747-4335-8a87-94d585057ac3\") " pod="openshift-controller-manager/controller-manager-86554fb6bd-cl2hq" Feb 17 20:11:55 crc kubenswrapper[4793]: I0217 20:11:55.226049 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/b76c70fa-8f9c-4049-92db-0249b0832537-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b76c70fa-8f9c-4049-92db-0249b0832537\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 20:11:55 crc kubenswrapper[4793]: I0217 20:11:55.226090 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b76c70fa-8f9c-4049-92db-0249b0832537-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b76c70fa-8f9c-4049-92db-0249b0832537\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 20:11:55 crc kubenswrapper[4793]: I0217 20:11:55.227226 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0edc34f7-8747-4335-8a87-94d585057ac3-client-ca\") pod \"controller-manager-86554fb6bd-cl2hq\" (UID: \"0edc34f7-8747-4335-8a87-94d585057ac3\") " pod="openshift-controller-manager/controller-manager-86554fb6bd-cl2hq" Feb 17 20:11:55 crc kubenswrapper[4793]: I0217 20:11:55.227556 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0edc34f7-8747-4335-8a87-94d585057ac3-config\") pod \"controller-manager-86554fb6bd-cl2hq\" (UID: \"0edc34f7-8747-4335-8a87-94d585057ac3\") " pod="openshift-controller-manager/controller-manager-86554fb6bd-cl2hq" Feb 17 20:11:55 crc kubenswrapper[4793]: I0217 20:11:55.227602 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0edc34f7-8747-4335-8a87-94d585057ac3-proxy-ca-bundles\") pod \"controller-manager-86554fb6bd-cl2hq\" (UID: \"0edc34f7-8747-4335-8a87-94d585057ac3\") " pod="openshift-controller-manager/controller-manager-86554fb6bd-cl2hq" Feb 17 20:11:55 crc kubenswrapper[4793]: I0217 20:11:55.231734 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/0edc34f7-8747-4335-8a87-94d585057ac3-serving-cert\") pod \"controller-manager-86554fb6bd-cl2hq\" (UID: \"0edc34f7-8747-4335-8a87-94d585057ac3\") " pod="openshift-controller-manager/controller-manager-86554fb6bd-cl2hq" Feb 17 20:11:55 crc kubenswrapper[4793]: I0217 20:11:55.244981 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r62pv\" (UniqueName: \"kubernetes.io/projected/0edc34f7-8747-4335-8a87-94d585057ac3-kube-api-access-r62pv\") pod \"controller-manager-86554fb6bd-cl2hq\" (UID: \"0edc34f7-8747-4335-8a87-94d585057ac3\") " pod="openshift-controller-manager/controller-manager-86554fb6bd-cl2hq" Feb 17 20:11:55 crc kubenswrapper[4793]: I0217 20:11:55.247710 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86554fb6bd-cl2hq" Feb 17 20:11:55 crc kubenswrapper[4793]: I0217 20:11:55.327203 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b76c70fa-8f9c-4049-92db-0249b0832537-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b76c70fa-8f9c-4049-92db-0249b0832537\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 20:11:55 crc kubenswrapper[4793]: I0217 20:11:55.327464 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b76c70fa-8f9c-4049-92db-0249b0832537-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b76c70fa-8f9c-4049-92db-0249b0832537\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 20:11:55 crc kubenswrapper[4793]: I0217 20:11:55.327558 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b76c70fa-8f9c-4049-92db-0249b0832537-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b76c70fa-8f9c-4049-92db-0249b0832537\") " 
pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 20:11:55 crc kubenswrapper[4793]: I0217 20:11:55.345740 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b76c70fa-8f9c-4049-92db-0249b0832537-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b76c70fa-8f9c-4049-92db-0249b0832537\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 20:11:55 crc kubenswrapper[4793]: I0217 20:11:55.456092 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 20:11:55 crc kubenswrapper[4793]: I0217 20:11:55.554949 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21356257-3793-4104-9495-28721c4d51d9" path="/var/lib/kubelet/pods/21356257-3793-4104-9495-28721c4d51d9/volumes" Feb 17 20:11:55 crc kubenswrapper[4793]: I0217 20:11:55.555779 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="357049a8-046f-4159-80e4-e2865eec6f24" path="/var/lib/kubelet/pods/357049a8-046f-4159-80e4-e2865eec6f24/volumes" Feb 17 20:11:55 crc kubenswrapper[4793]: I0217 20:11:55.766125 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dfdp2" event={"ID":"6aafddcd-6479-40b9-95a0-fa07713b5068","Type":"ContainerStarted","Data":"3ffaa3ed36725b199e92bb3d54d9c0be5cd7f81dbe087d18027e9df214e4b48b"} Feb 17 20:11:55 crc kubenswrapper[4793]: I0217 20:11:55.768284 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gm5th" event={"ID":"e0eded1f-cb89-4a85-9e58-59ccef72584b","Type":"ContainerStarted","Data":"77de170bb0dc19985fbe01d609df7c308129da3e3f7743903c42893a816f7ddf"} Feb 17 20:11:55 crc kubenswrapper[4793]: I0217 20:11:55.776323 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cpdmp" 
event={"ID":"a94875ee-0253-456e-a8c8-68be4676bb88","Type":"ContainerStarted","Data":"a6b17b9aa1bd8c7f5e9ba846803aca1dbffee72ccd85801c85feabe11037973c"} Feb 17 20:11:55 crc kubenswrapper[4793]: I0217 20:11:55.781994 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ckt4v" event={"ID":"6030c7c6-83d3-4ee2-95de-2c82b9cf2e89","Type":"ContainerStarted","Data":"254229b959f5898c0af32cbfc28a1d7933b74e7a953d24c3a9592926511786ef"} Feb 17 20:11:55 crc kubenswrapper[4793]: I0217 20:11:55.783783 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d6988cbfb-h8hck" event={"ID":"aef3e13c-918a-4296-8b80-109a6ead0eea","Type":"ContainerStarted","Data":"fbaabfa9de919615a31d854891ee8a9168f956b514173a9088922aa0c5854ca0"} Feb 17 20:11:55 crc kubenswrapper[4793]: I0217 20:11:55.783838 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d6988cbfb-h8hck" event={"ID":"aef3e13c-918a-4296-8b80-109a6ead0eea","Type":"ContainerStarted","Data":"57f4a2b85237938309b363486d32d9742e2962cad0db24e9d7114e1b128d9057"} Feb 17 20:11:55 crc kubenswrapper[4793]: I0217 20:11:55.783932 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6d6988cbfb-h8hck" Feb 17 20:11:55 crc kubenswrapper[4793]: I0217 20:11:55.784431 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86554fb6bd-cl2hq"] Feb 17 20:11:55 crc kubenswrapper[4793]: I0217 20:11:55.785503 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8fhps" event={"ID":"7b3a754f-730e-4e58-a6d0-fac36125b7f2","Type":"ContainerStarted","Data":"312961e448761bf42929ae855312baf8554d75c0849c822720ba417a59b652fb"} Feb 17 20:11:55 crc kubenswrapper[4793]: I0217 20:11:55.787376 4793 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nwsr4" event={"ID":"b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019","Type":"ContainerStarted","Data":"c51acebcc4583310462d122fa5134a3d22e1cb9d5b17646ae26b20948a6e99cd"} Feb 17 20:11:55 crc kubenswrapper[4793]: I0217 20:11:55.795286 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6d6988cbfb-h8hck" Feb 17 20:11:55 crc kubenswrapper[4793]: I0217 20:11:55.797839 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dfdp2" podStartSLOduration=3.641185701 podStartE2EDuration="42.797816002s" podCreationTimestamp="2026-02-17 20:11:13 +0000 UTC" firstStartedPulling="2026-02-17 20:11:15.463102771 +0000 UTC m=+150.754801082" lastFinishedPulling="2026-02-17 20:11:54.619733052 +0000 UTC m=+189.911431383" observedRunningTime="2026-02-17 20:11:55.795543228 +0000 UTC m=+191.087241539" watchObservedRunningTime="2026-02-17 20:11:55.797816002 +0000 UTC m=+191.089514313" Feb 17 20:11:55 crc kubenswrapper[4793]: I0217 20:11:55.841229 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cpdmp" podStartSLOduration=3.664306317 podStartE2EDuration="42.841209045s" podCreationTimestamp="2026-02-17 20:11:13 +0000 UTC" firstStartedPulling="2026-02-17 20:11:15.477481716 +0000 UTC m=+150.769180027" lastFinishedPulling="2026-02-17 20:11:54.654384424 +0000 UTC m=+189.946082755" observedRunningTime="2026-02-17 20:11:55.82271885 +0000 UTC m=+191.114417161" watchObservedRunningTime="2026-02-17 20:11:55.841209045 +0000 UTC m=+191.132907356" Feb 17 20:11:55 crc kubenswrapper[4793]: I0217 20:11:55.869663 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ckt4v" podStartSLOduration=3.74850206 podStartE2EDuration="42.869641818s" 
podCreationTimestamp="2026-02-17 20:11:13 +0000 UTC" firstStartedPulling="2026-02-17 20:11:15.479777322 +0000 UTC m=+150.771475633" lastFinishedPulling="2026-02-17 20:11:54.60091707 +0000 UTC m=+189.892615391" observedRunningTime="2026-02-17 20:11:55.853892349 +0000 UTC m=+191.145590660" watchObservedRunningTime="2026-02-17 20:11:55.869641818 +0000 UTC m=+191.161340129" Feb 17 20:11:55 crc kubenswrapper[4793]: I0217 20:11:55.895840 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6d6988cbfb-h8hck" podStartSLOduration=6.895826657 podStartE2EDuration="6.895826657s" podCreationTimestamp="2026-02-17 20:11:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:11:55.87932851 +0000 UTC m=+191.171026821" watchObservedRunningTime="2026-02-17 20:11:55.895826657 +0000 UTC m=+191.187524968" Feb 17 20:11:55 crc kubenswrapper[4793]: I0217 20:11:55.896970 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nwsr4" podStartSLOduration=3.698334515 podStartE2EDuration="42.896961164s" podCreationTimestamp="2026-02-17 20:11:13 +0000 UTC" firstStartedPulling="2026-02-17 20:11:15.456424101 +0000 UTC m=+150.748122412" lastFinishedPulling="2026-02-17 20:11:54.65505074 +0000 UTC m=+189.946749061" observedRunningTime="2026-02-17 20:11:55.893924051 +0000 UTC m=+191.185622372" watchObservedRunningTime="2026-02-17 20:11:55.896961164 +0000 UTC m=+191.188659475" Feb 17 20:11:55 crc kubenswrapper[4793]: I0217 20:11:55.925429 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8fhps" podStartSLOduration=2.688316286 podStartE2EDuration="40.925404167s" podCreationTimestamp="2026-02-17 20:11:15 +0000 UTC" firstStartedPulling="2026-02-17 20:11:16.553874164 +0000 UTC m=+151.845572475" 
lastFinishedPulling="2026-02-17 20:11:54.790962045 +0000 UTC m=+190.082660356" observedRunningTime="2026-02-17 20:11:55.91801768 +0000 UTC m=+191.209716001" watchObservedRunningTime="2026-02-17 20:11:55.925404167 +0000 UTC m=+191.217102478" Feb 17 20:11:55 crc kubenswrapper[4793]: I0217 20:11:55.944295 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gm5th" podStartSLOduration=3.4538308 podStartE2EDuration="38.944277621s" podCreationTimestamp="2026-02-17 20:11:17 +0000 UTC" firstStartedPulling="2026-02-17 20:11:18.609991046 +0000 UTC m=+153.901689347" lastFinishedPulling="2026-02-17 20:11:54.100437857 +0000 UTC m=+189.392136168" observedRunningTime="2026-02-17 20:11:55.943584504 +0000 UTC m=+191.235282815" watchObservedRunningTime="2026-02-17 20:11:55.944277621 +0000 UTC m=+191.235975932" Feb 17 20:11:55 crc kubenswrapper[4793]: I0217 20:11:55.965839 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 17 20:11:56 crc kubenswrapper[4793]: I0217 20:11:56.030900 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xr48d" Feb 17 20:11:56 crc kubenswrapper[4793]: I0217 20:11:56.032222 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xr48d" Feb 17 20:11:56 crc kubenswrapper[4793]: I0217 20:11:56.141896 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2bgzw"] Feb 17 20:11:56 crc kubenswrapper[4793]: I0217 20:11:56.443197 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xr48d" Feb 17 20:11:56 crc kubenswrapper[4793]: I0217 20:11:56.793196 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86554fb6bd-cl2hq" 
event={"ID":"0edc34f7-8747-4335-8a87-94d585057ac3","Type":"ContainerStarted","Data":"d4194fdc7cbc2dbf44a63c6a046de7dd46551f267e7ad8e06cb4e490bc89e076"} Feb 17 20:11:56 crc kubenswrapper[4793]: I0217 20:11:56.793247 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86554fb6bd-cl2hq" event={"ID":"0edc34f7-8747-4335-8a87-94d585057ac3","Type":"ContainerStarted","Data":"4ede745c4f34a21e08e8161ded9f559172e04f1dccc665056fa1bbae8aab0fa0"} Feb 17 20:11:56 crc kubenswrapper[4793]: I0217 20:11:56.794471 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-86554fb6bd-cl2hq" Feb 17 20:11:56 crc kubenswrapper[4793]: I0217 20:11:56.796243 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"b76c70fa-8f9c-4049-92db-0249b0832537","Type":"ContainerStarted","Data":"9267d6235910c388affa2597b4e970c74b1ce24af94cb8e66f4146f0d3fda2a9"} Feb 17 20:11:56 crc kubenswrapper[4793]: I0217 20:11:56.796274 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"b76c70fa-8f9c-4049-92db-0249b0832537","Type":"ContainerStarted","Data":"882785d3a6537ea4ebd22433dda1a7eb9ae8fd3b9fd84e49896dcae3be8d0ed9"} Feb 17 20:11:56 crc kubenswrapper[4793]: I0217 20:11:56.798906 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-86554fb6bd-cl2hq" Feb 17 20:11:56 crc kubenswrapper[4793]: I0217 20:11:56.817067 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-86554fb6bd-cl2hq" podStartSLOduration=7.817044636 podStartE2EDuration="7.817044636s" podCreationTimestamp="2026-02-17 20:11:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-17 20:11:56.81473196 +0000 UTC m=+192.106430271" watchObservedRunningTime="2026-02-17 20:11:56.817044636 +0000 UTC m=+192.108742947" Feb 17 20:11:56 crc kubenswrapper[4793]: I0217 20:11:56.855758 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xr48d" Feb 17 20:11:56 crc kubenswrapper[4793]: I0217 20:11:56.875417 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=1.875394047 podStartE2EDuration="1.875394047s" podCreationTimestamp="2026-02-17 20:11:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:11:56.862281172 +0000 UTC m=+192.153979493" watchObservedRunningTime="2026-02-17 20:11:56.875394047 +0000 UTC m=+192.167092358" Feb 17 20:11:57 crc kubenswrapper[4793]: I0217 20:11:57.031175 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4x565" Feb 17 20:11:57 crc kubenswrapper[4793]: I0217 20:11:57.031438 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4x565" Feb 17 20:11:57 crc kubenswrapper[4793]: I0217 20:11:57.479905 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gm5th" Feb 17 20:11:57 crc kubenswrapper[4793]: I0217 20:11:57.480006 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gm5th" Feb 17 20:11:57 crc kubenswrapper[4793]: I0217 20:11:57.803381 4793 generic.go:334] "Generic (PLEG): container finished" podID="b76c70fa-8f9c-4049-92db-0249b0832537" containerID="9267d6235910c388affa2597b4e970c74b1ce24af94cb8e66f4146f0d3fda2a9" exitCode=0 Feb 17 20:11:57 crc kubenswrapper[4793]: I0217 20:11:57.803537 4793 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"b76c70fa-8f9c-4049-92db-0249b0832537","Type":"ContainerDied","Data":"9267d6235910c388affa2597b4e970c74b1ce24af94cb8e66f4146f0d3fda2a9"} Feb 17 20:11:58 crc kubenswrapper[4793]: I0217 20:11:58.077029 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4x565" podUID="19a7ae66-8a05-4413-8085-8455f146e98c" containerName="registry-server" probeResult="failure" output=< Feb 17 20:11:58 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Feb 17 20:11:58 crc kubenswrapper[4793]: > Feb 17 20:11:58 crc kubenswrapper[4793]: I0217 20:11:58.513972 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xr48d"] Feb 17 20:11:58 crc kubenswrapper[4793]: I0217 20:11:58.519831 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gm5th" podUID="e0eded1f-cb89-4a85-9e58-59ccef72584b" containerName="registry-server" probeResult="failure" output=< Feb 17 20:11:58 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Feb 17 20:11:58 crc kubenswrapper[4793]: > Feb 17 20:11:59 crc kubenswrapper[4793]: I0217 20:11:59.053610 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 20:11:59 crc kubenswrapper[4793]: I0217 20:11:59.079871 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b76c70fa-8f9c-4049-92db-0249b0832537-kube-api-access\") pod \"b76c70fa-8f9c-4049-92db-0249b0832537\" (UID: \"b76c70fa-8f9c-4049-92db-0249b0832537\") " Feb 17 20:11:59 crc kubenswrapper[4793]: I0217 20:11:59.079952 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b76c70fa-8f9c-4049-92db-0249b0832537-kubelet-dir\") pod \"b76c70fa-8f9c-4049-92db-0249b0832537\" (UID: \"b76c70fa-8f9c-4049-92db-0249b0832537\") " Feb 17 20:11:59 crc kubenswrapper[4793]: I0217 20:11:59.079994 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b76c70fa-8f9c-4049-92db-0249b0832537-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b76c70fa-8f9c-4049-92db-0249b0832537" (UID: "b76c70fa-8f9c-4049-92db-0249b0832537"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 20:11:59 crc kubenswrapper[4793]: I0217 20:11:59.080312 4793 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b76c70fa-8f9c-4049-92db-0249b0832537-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 17 20:11:59 crc kubenswrapper[4793]: I0217 20:11:59.085637 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b76c70fa-8f9c-4049-92db-0249b0832537-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b76c70fa-8f9c-4049-92db-0249b0832537" (UID: "b76c70fa-8f9c-4049-92db-0249b0832537"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:11:59 crc kubenswrapper[4793]: I0217 20:11:59.180903 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b76c70fa-8f9c-4049-92db-0249b0832537-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 20:11:59 crc kubenswrapper[4793]: I0217 20:11:59.817416 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"b76c70fa-8f9c-4049-92db-0249b0832537","Type":"ContainerDied","Data":"882785d3a6537ea4ebd22433dda1a7eb9ae8fd3b9fd84e49896dcae3be8d0ed9"} Feb 17 20:11:59 crc kubenswrapper[4793]: I0217 20:11:59.817506 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="882785d3a6537ea4ebd22433dda1a7eb9ae8fd3b9fd84e49896dcae3be8d0ed9" Feb 17 20:11:59 crc kubenswrapper[4793]: I0217 20:11:59.817442 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 20:11:59 crc kubenswrapper[4793]: I0217 20:11:59.817609 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xr48d" podUID="e998269d-55e1-4dd7-b050-08983af268e6" containerName="registry-server" containerID="cri-o://8a0a9cb088cf4c61e9b69b964821129ec73181ea80de2012323d4f3a7152a324" gracePeriod=2 Feb 17 20:12:00 crc kubenswrapper[4793]: I0217 20:12:00.838064 4793 generic.go:334] "Generic (PLEG): container finished" podID="e998269d-55e1-4dd7-b050-08983af268e6" containerID="8a0a9cb088cf4c61e9b69b964821129ec73181ea80de2012323d4f3a7152a324" exitCode=0 Feb 17 20:12:00 crc kubenswrapper[4793]: I0217 20:12:00.838113 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xr48d" 
event={"ID":"e998269d-55e1-4dd7-b050-08983af268e6","Type":"ContainerDied","Data":"8a0a9cb088cf4c61e9b69b964821129ec73181ea80de2012323d4f3a7152a324"} Feb 17 20:12:00 crc kubenswrapper[4793]: I0217 20:12:00.993851 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xr48d" Feb 17 20:12:01 crc kubenswrapper[4793]: I0217 20:12:01.111014 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twg64\" (UniqueName: \"kubernetes.io/projected/e998269d-55e1-4dd7-b050-08983af268e6-kube-api-access-twg64\") pod \"e998269d-55e1-4dd7-b050-08983af268e6\" (UID: \"e998269d-55e1-4dd7-b050-08983af268e6\") " Feb 17 20:12:01 crc kubenswrapper[4793]: I0217 20:12:01.111093 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e998269d-55e1-4dd7-b050-08983af268e6-catalog-content\") pod \"e998269d-55e1-4dd7-b050-08983af268e6\" (UID: \"e998269d-55e1-4dd7-b050-08983af268e6\") " Feb 17 20:12:01 crc kubenswrapper[4793]: I0217 20:12:01.111217 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e998269d-55e1-4dd7-b050-08983af268e6-utilities\") pod \"e998269d-55e1-4dd7-b050-08983af268e6\" (UID: \"e998269d-55e1-4dd7-b050-08983af268e6\") " Feb 17 20:12:01 crc kubenswrapper[4793]: I0217 20:12:01.112088 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e998269d-55e1-4dd7-b050-08983af268e6-utilities" (OuterVolumeSpecName: "utilities") pod "e998269d-55e1-4dd7-b050-08983af268e6" (UID: "e998269d-55e1-4dd7-b050-08983af268e6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:12:01 crc kubenswrapper[4793]: I0217 20:12:01.122887 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e998269d-55e1-4dd7-b050-08983af268e6-kube-api-access-twg64" (OuterVolumeSpecName: "kube-api-access-twg64") pod "e998269d-55e1-4dd7-b050-08983af268e6" (UID: "e998269d-55e1-4dd7-b050-08983af268e6"). InnerVolumeSpecName "kube-api-access-twg64". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:12:01 crc kubenswrapper[4793]: I0217 20:12:01.141984 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e998269d-55e1-4dd7-b050-08983af268e6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e998269d-55e1-4dd7-b050-08983af268e6" (UID: "e998269d-55e1-4dd7-b050-08983af268e6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:12:01 crc kubenswrapper[4793]: I0217 20:12:01.212764 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e998269d-55e1-4dd7-b050-08983af268e6-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 20:12:01 crc kubenswrapper[4793]: I0217 20:12:01.212808 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twg64\" (UniqueName: \"kubernetes.io/projected/e998269d-55e1-4dd7-b050-08983af268e6-kube-api-access-twg64\") on node \"crc\" DevicePath \"\"" Feb 17 20:12:01 crc kubenswrapper[4793]: I0217 20:12:01.212823 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e998269d-55e1-4dd7-b050-08983af268e6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 20:12:01 crc kubenswrapper[4793]: I0217 20:12:01.849326 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xr48d" 
event={"ID":"e998269d-55e1-4dd7-b050-08983af268e6","Type":"ContainerDied","Data":"34aa63d3270bf9f1de1b0dc9c00170be6af90b542a03c8c797de3011c852758e"} Feb 17 20:12:01 crc kubenswrapper[4793]: I0217 20:12:01.849388 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xr48d" Feb 17 20:12:01 crc kubenswrapper[4793]: I0217 20:12:01.849730 4793 scope.go:117] "RemoveContainer" containerID="8a0a9cb088cf4c61e9b69b964821129ec73181ea80de2012323d4f3a7152a324" Feb 17 20:12:01 crc kubenswrapper[4793]: I0217 20:12:01.866103 4793 scope.go:117] "RemoveContainer" containerID="078885407b6b5044724031909b8d53b0a1ba1aafde3942ae9cb81559d5ba9b18" Feb 17 20:12:01 crc kubenswrapper[4793]: I0217 20:12:01.869960 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xr48d"] Feb 17 20:12:01 crc kubenswrapper[4793]: I0217 20:12:01.873523 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xr48d"] Feb 17 20:12:01 crc kubenswrapper[4793]: I0217 20:12:01.890437 4793 scope.go:117] "RemoveContainer" containerID="4c5727744dd6df074b2f021a1c72252f1f4d3cc7ae3c6e3e60af2b7714f7a3f7" Feb 17 20:12:01 crc kubenswrapper[4793]: I0217 20:12:01.945844 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 17 20:12:01 crc kubenswrapper[4793]: E0217 20:12:01.946082 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e998269d-55e1-4dd7-b050-08983af268e6" containerName="extract-content" Feb 17 20:12:01 crc kubenswrapper[4793]: I0217 20:12:01.946093 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e998269d-55e1-4dd7-b050-08983af268e6" containerName="extract-content" Feb 17 20:12:01 crc kubenswrapper[4793]: E0217 20:12:01.946114 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b76c70fa-8f9c-4049-92db-0249b0832537" containerName="pruner" Feb 17 20:12:01 
crc kubenswrapper[4793]: I0217 20:12:01.946121 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="b76c70fa-8f9c-4049-92db-0249b0832537" containerName="pruner" Feb 17 20:12:01 crc kubenswrapper[4793]: E0217 20:12:01.946130 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e998269d-55e1-4dd7-b050-08983af268e6" containerName="registry-server" Feb 17 20:12:01 crc kubenswrapper[4793]: I0217 20:12:01.946138 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e998269d-55e1-4dd7-b050-08983af268e6" containerName="registry-server" Feb 17 20:12:01 crc kubenswrapper[4793]: E0217 20:12:01.946147 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e998269d-55e1-4dd7-b050-08983af268e6" containerName="extract-utilities" Feb 17 20:12:01 crc kubenswrapper[4793]: I0217 20:12:01.946153 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e998269d-55e1-4dd7-b050-08983af268e6" containerName="extract-utilities" Feb 17 20:12:01 crc kubenswrapper[4793]: I0217 20:12:01.946243 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="e998269d-55e1-4dd7-b050-08983af268e6" containerName="registry-server" Feb 17 20:12:01 crc kubenswrapper[4793]: I0217 20:12:01.946252 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="b76c70fa-8f9c-4049-92db-0249b0832537" containerName="pruner" Feb 17 20:12:01 crc kubenswrapper[4793]: I0217 20:12:01.946604 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 17 20:12:01 crc kubenswrapper[4793]: I0217 20:12:01.955918 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 17 20:12:01 crc kubenswrapper[4793]: I0217 20:12:01.955963 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 17 20:12:01 crc kubenswrapper[4793]: I0217 20:12:01.961421 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 17 20:12:02 crc kubenswrapper[4793]: I0217 20:12:02.038603 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6599fec2-8123-49ae-9668-df110bf07b3d-kube-api-access\") pod \"installer-9-crc\" (UID: \"6599fec2-8123-49ae-9668-df110bf07b3d\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 20:12:02 crc kubenswrapper[4793]: I0217 20:12:02.038664 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6599fec2-8123-49ae-9668-df110bf07b3d-kubelet-dir\") pod \"installer-9-crc\" (UID: \"6599fec2-8123-49ae-9668-df110bf07b3d\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 20:12:02 crc kubenswrapper[4793]: I0217 20:12:02.038715 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6599fec2-8123-49ae-9668-df110bf07b3d-var-lock\") pod \"installer-9-crc\" (UID: \"6599fec2-8123-49ae-9668-df110bf07b3d\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 20:12:02 crc kubenswrapper[4793]: I0217 20:12:02.139720 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/6599fec2-8123-49ae-9668-df110bf07b3d-kubelet-dir\") pod \"installer-9-crc\" (UID: \"6599fec2-8123-49ae-9668-df110bf07b3d\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 20:12:02 crc kubenswrapper[4793]: I0217 20:12:02.139829 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6599fec2-8123-49ae-9668-df110bf07b3d-var-lock\") pod \"installer-9-crc\" (UID: \"6599fec2-8123-49ae-9668-df110bf07b3d\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 20:12:02 crc kubenswrapper[4793]: I0217 20:12:02.139884 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6599fec2-8123-49ae-9668-df110bf07b3d-kubelet-dir\") pod \"installer-9-crc\" (UID: \"6599fec2-8123-49ae-9668-df110bf07b3d\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 20:12:02 crc kubenswrapper[4793]: I0217 20:12:02.139926 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6599fec2-8123-49ae-9668-df110bf07b3d-var-lock\") pod \"installer-9-crc\" (UID: \"6599fec2-8123-49ae-9668-df110bf07b3d\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 20:12:02 crc kubenswrapper[4793]: I0217 20:12:02.139905 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6599fec2-8123-49ae-9668-df110bf07b3d-kube-api-access\") pod \"installer-9-crc\" (UID: \"6599fec2-8123-49ae-9668-df110bf07b3d\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 20:12:02 crc kubenswrapper[4793]: I0217 20:12:02.160200 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6599fec2-8123-49ae-9668-df110bf07b3d-kube-api-access\") pod \"installer-9-crc\" (UID: \"6599fec2-8123-49ae-9668-df110bf07b3d\") " 
pod="openshift-kube-apiserver/installer-9-crc" Feb 17 20:12:02 crc kubenswrapper[4793]: I0217 20:12:02.267751 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 17 20:12:02 crc kubenswrapper[4793]: I0217 20:12:02.660752 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 17 20:12:02 crc kubenswrapper[4793]: W0217 20:12:02.669025 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod6599fec2_8123_49ae_9668_df110bf07b3d.slice/crio-96e06cf0d355b29e5f44821a5f1ea102acfbe3b292a588a30aa2b43fa14b9c85 WatchSource:0}: Error finding container 96e06cf0d355b29e5f44821a5f1ea102acfbe3b292a588a30aa2b43fa14b9c85: Status 404 returned error can't find the container with id 96e06cf0d355b29e5f44821a5f1ea102acfbe3b292a588a30aa2b43fa14b9c85 Feb 17 20:12:02 crc kubenswrapper[4793]: I0217 20:12:02.857091 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"6599fec2-8123-49ae-9668-df110bf07b3d","Type":"ContainerStarted","Data":"96e06cf0d355b29e5f44821a5f1ea102acfbe3b292a588a30aa2b43fa14b9c85"} Feb 17 20:12:03 crc kubenswrapper[4793]: I0217 20:12:03.545778 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e998269d-55e1-4dd7-b050-08983af268e6" path="/var/lib/kubelet/pods/e998269d-55e1-4dd7-b050-08983af268e6/volumes" Feb 17 20:12:03 crc kubenswrapper[4793]: I0217 20:12:03.632365 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nwsr4" Feb 17 20:12:03 crc kubenswrapper[4793]: I0217 20:12:03.632420 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nwsr4" Feb 17 20:12:03 crc kubenswrapper[4793]: I0217 20:12:03.690219 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-nwsr4" Feb 17 20:12:03 crc kubenswrapper[4793]: I0217 20:12:03.865361 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"6599fec2-8123-49ae-9668-df110bf07b3d","Type":"ContainerStarted","Data":"6d2e61f5b4d6c86e5bcd39c8d37bed1fb2c053a5def1e4f96f55d4fc9cbb9c01"} Feb 17 20:12:03 crc kubenswrapper[4793]: I0217 20:12:03.868947 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dfdp2" Feb 17 20:12:03 crc kubenswrapper[4793]: I0217 20:12:03.869000 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dfdp2" Feb 17 20:12:03 crc kubenswrapper[4793]: I0217 20:12:03.905735 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nwsr4" Feb 17 20:12:03 crc kubenswrapper[4793]: I0217 20:12:03.923185 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.923156778 podStartE2EDuration="2.923156778s" podCreationTimestamp="2026-02-17 20:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:12:03.890184199 +0000 UTC m=+199.181882560" watchObservedRunningTime="2026-02-17 20:12:03.923156778 +0000 UTC m=+199.214855099" Feb 17 20:12:03 crc kubenswrapper[4793]: I0217 20:12:03.925111 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dfdp2" Feb 17 20:12:04 crc kubenswrapper[4793]: I0217 20:12:04.099410 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cpdmp" Feb 17 20:12:04 crc kubenswrapper[4793]: I0217 20:12:04.099472 4793 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/certified-operators-cpdmp" Feb 17 20:12:04 crc kubenswrapper[4793]: I0217 20:12:04.136650 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cpdmp" Feb 17 20:12:04 crc kubenswrapper[4793]: I0217 20:12:04.292266 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ckt4v" Feb 17 20:12:04 crc kubenswrapper[4793]: I0217 20:12:04.292344 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ckt4v" Feb 17 20:12:04 crc kubenswrapper[4793]: I0217 20:12:04.361520 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ckt4v" Feb 17 20:12:04 crc kubenswrapper[4793]: I0217 20:12:04.914112 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ckt4v" Feb 17 20:12:04 crc kubenswrapper[4793]: I0217 20:12:04.914195 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cpdmp" Feb 17 20:12:04 crc kubenswrapper[4793]: I0217 20:12:04.920211 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dfdp2" Feb 17 20:12:05 crc kubenswrapper[4793]: I0217 20:12:05.625462 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8fhps" Feb 17 20:12:05 crc kubenswrapper[4793]: I0217 20:12:05.625510 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8fhps" Feb 17 20:12:05 crc kubenswrapper[4793]: I0217 20:12:05.659537 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8fhps" Feb 17 20:12:05 
crc kubenswrapper[4793]: I0217 20:12:05.917422 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8fhps" Feb 17 20:12:06 crc kubenswrapper[4793]: I0217 20:12:06.112956 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cpdmp"] Feb 17 20:12:06 crc kubenswrapper[4793]: I0217 20:12:06.311235 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ckt4v"] Feb 17 20:12:06 crc kubenswrapper[4793]: I0217 20:12:06.882843 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cpdmp" podUID="a94875ee-0253-456e-a8c8-68be4676bb88" containerName="registry-server" containerID="cri-o://a6b17b9aa1bd8c7f5e9ba846803aca1dbffee72ccd85801c85feabe11037973c" gracePeriod=2 Feb 17 20:12:06 crc kubenswrapper[4793]: I0217 20:12:06.882978 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ckt4v" podUID="6030c7c6-83d3-4ee2-95de-2c82b9cf2e89" containerName="registry-server" containerID="cri-o://254229b959f5898c0af32cbfc28a1d7933b74e7a953d24c3a9592926511786ef" gracePeriod=2 Feb 17 20:12:07 crc kubenswrapper[4793]: I0217 20:12:07.078889 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4x565" Feb 17 20:12:07 crc kubenswrapper[4793]: I0217 20:12:07.143321 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4x565" Feb 17 20:12:07 crc kubenswrapper[4793]: I0217 20:12:07.423101 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cpdmp"
Feb 17 20:12:07 crc kubenswrapper[4793]: I0217 20:12:07.434976 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a94875ee-0253-456e-a8c8-68be4676bb88-utilities\") pod \"a94875ee-0253-456e-a8c8-68be4676bb88\" (UID: \"a94875ee-0253-456e-a8c8-68be4676bb88\") "
Feb 17 20:12:07 crc kubenswrapper[4793]: I0217 20:12:07.435025 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s226d\" (UniqueName: \"kubernetes.io/projected/a94875ee-0253-456e-a8c8-68be4676bb88-kube-api-access-s226d\") pod \"a94875ee-0253-456e-a8c8-68be4676bb88\" (UID: \"a94875ee-0253-456e-a8c8-68be4676bb88\") "
Feb 17 20:12:07 crc kubenswrapper[4793]: I0217 20:12:07.435072 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a94875ee-0253-456e-a8c8-68be4676bb88-catalog-content\") pod \"a94875ee-0253-456e-a8c8-68be4676bb88\" (UID: \"a94875ee-0253-456e-a8c8-68be4676bb88\") "
Feb 17 20:12:07 crc kubenswrapper[4793]: I0217 20:12:07.437246 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a94875ee-0253-456e-a8c8-68be4676bb88-utilities" (OuterVolumeSpecName: "utilities") pod "a94875ee-0253-456e-a8c8-68be4676bb88" (UID: "a94875ee-0253-456e-a8c8-68be4676bb88"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 20:12:07 crc kubenswrapper[4793]: I0217 20:12:07.448774 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a94875ee-0253-456e-a8c8-68be4676bb88-kube-api-access-s226d" (OuterVolumeSpecName: "kube-api-access-s226d") pod "a94875ee-0253-456e-a8c8-68be4676bb88" (UID: "a94875ee-0253-456e-a8c8-68be4676bb88"). InnerVolumeSpecName "kube-api-access-s226d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 20:12:07 crc kubenswrapper[4793]: I0217 20:12:07.491663 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ckt4v"
Feb 17 20:12:07 crc kubenswrapper[4793]: I0217 20:12:07.520456 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a94875ee-0253-456e-a8c8-68be4676bb88-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a94875ee-0253-456e-a8c8-68be4676bb88" (UID: "a94875ee-0253-456e-a8c8-68be4676bb88"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 20:12:07 crc kubenswrapper[4793]: I0217 20:12:07.527142 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gm5th"
Feb 17 20:12:07 crc kubenswrapper[4793]: I0217 20:12:07.536324 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zfbd\" (UniqueName: \"kubernetes.io/projected/6030c7c6-83d3-4ee2-95de-2c82b9cf2e89-kube-api-access-5zfbd\") pod \"6030c7c6-83d3-4ee2-95de-2c82b9cf2e89\" (UID: \"6030c7c6-83d3-4ee2-95de-2c82b9cf2e89\") "
Feb 17 20:12:07 crc kubenswrapper[4793]: I0217 20:12:07.536407 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6030c7c6-83d3-4ee2-95de-2c82b9cf2e89-catalog-content\") pod \"6030c7c6-83d3-4ee2-95de-2c82b9cf2e89\" (UID: \"6030c7c6-83d3-4ee2-95de-2c82b9cf2e89\") "
Feb 17 20:12:07 crc kubenswrapper[4793]: I0217 20:12:07.536523 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6030c7c6-83d3-4ee2-95de-2c82b9cf2e89-utilities\") pod \"6030c7c6-83d3-4ee2-95de-2c82b9cf2e89\" (UID: \"6030c7c6-83d3-4ee2-95de-2c82b9cf2e89\") "
Feb 17 20:12:07 crc kubenswrapper[4793]: I0217 20:12:07.536784 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a94875ee-0253-456e-a8c8-68be4676bb88-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 20:12:07 crc kubenswrapper[4793]: I0217 20:12:07.536802 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a94875ee-0253-456e-a8c8-68be4676bb88-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 20:12:07 crc kubenswrapper[4793]: I0217 20:12:07.536813 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s226d\" (UniqueName: \"kubernetes.io/projected/a94875ee-0253-456e-a8c8-68be4676bb88-kube-api-access-s226d\") on node \"crc\" DevicePath \"\""
Feb 17 20:12:07 crc kubenswrapper[4793]: I0217 20:12:07.537854 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6030c7c6-83d3-4ee2-95de-2c82b9cf2e89-utilities" (OuterVolumeSpecName: "utilities") pod "6030c7c6-83d3-4ee2-95de-2c82b9cf2e89" (UID: "6030c7c6-83d3-4ee2-95de-2c82b9cf2e89"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 20:12:07 crc kubenswrapper[4793]: I0217 20:12:07.545969 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6030c7c6-83d3-4ee2-95de-2c82b9cf2e89-kube-api-access-5zfbd" (OuterVolumeSpecName: "kube-api-access-5zfbd") pod "6030c7c6-83d3-4ee2-95de-2c82b9cf2e89" (UID: "6030c7c6-83d3-4ee2-95de-2c82b9cf2e89"). InnerVolumeSpecName "kube-api-access-5zfbd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 20:12:07 crc kubenswrapper[4793]: I0217 20:12:07.572712 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gm5th"
Feb 17 20:12:07 crc kubenswrapper[4793]: I0217 20:12:07.614292 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6030c7c6-83d3-4ee2-95de-2c82b9cf2e89-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6030c7c6-83d3-4ee2-95de-2c82b9cf2e89" (UID: "6030c7c6-83d3-4ee2-95de-2c82b9cf2e89"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 20:12:07 crc kubenswrapper[4793]: I0217 20:12:07.637717 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zfbd\" (UniqueName: \"kubernetes.io/projected/6030c7c6-83d3-4ee2-95de-2c82b9cf2e89-kube-api-access-5zfbd\") on node \"crc\" DevicePath \"\""
Feb 17 20:12:07 crc kubenswrapper[4793]: I0217 20:12:07.637750 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6030c7c6-83d3-4ee2-95de-2c82b9cf2e89-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 20:12:07 crc kubenswrapper[4793]: I0217 20:12:07.637761 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6030c7c6-83d3-4ee2-95de-2c82b9cf2e89-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 20:12:07 crc kubenswrapper[4793]: I0217 20:12:07.890063 4793 generic.go:334] "Generic (PLEG): container finished" podID="a94875ee-0253-456e-a8c8-68be4676bb88" containerID="a6b17b9aa1bd8c7f5e9ba846803aca1dbffee72ccd85801c85feabe11037973c" exitCode=0
Feb 17 20:12:07 crc kubenswrapper[4793]: I0217 20:12:07.890127 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cpdmp" event={"ID":"a94875ee-0253-456e-a8c8-68be4676bb88","Type":"ContainerDied","Data":"a6b17b9aa1bd8c7f5e9ba846803aca1dbffee72ccd85801c85feabe11037973c"}
Feb 17 20:12:07 crc kubenswrapper[4793]: I0217 20:12:07.890157 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cpdmp" event={"ID":"a94875ee-0253-456e-a8c8-68be4676bb88","Type":"ContainerDied","Data":"0428207f4a3c2dfe256176d1938f384bf32d3948b13f249964b245bb010897d0"}
Feb 17 20:12:07 crc kubenswrapper[4793]: I0217 20:12:07.890188 4793 scope.go:117] "RemoveContainer" containerID="a6b17b9aa1bd8c7f5e9ba846803aca1dbffee72ccd85801c85feabe11037973c"
Feb 17 20:12:07 crc kubenswrapper[4793]: I0217 20:12:07.890299 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cpdmp"
Feb 17 20:12:07 crc kubenswrapper[4793]: I0217 20:12:07.893562 4793 generic.go:334] "Generic (PLEG): container finished" podID="6030c7c6-83d3-4ee2-95de-2c82b9cf2e89" containerID="254229b959f5898c0af32cbfc28a1d7933b74e7a953d24c3a9592926511786ef" exitCode=0
Feb 17 20:12:07 crc kubenswrapper[4793]: I0217 20:12:07.894072 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ckt4v"
Feb 17 20:12:07 crc kubenswrapper[4793]: I0217 20:12:07.894174 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ckt4v" event={"ID":"6030c7c6-83d3-4ee2-95de-2c82b9cf2e89","Type":"ContainerDied","Data":"254229b959f5898c0af32cbfc28a1d7933b74e7a953d24c3a9592926511786ef"}
Feb 17 20:12:07 crc kubenswrapper[4793]: I0217 20:12:07.894204 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ckt4v" event={"ID":"6030c7c6-83d3-4ee2-95de-2c82b9cf2e89","Type":"ContainerDied","Data":"2f108651de56abfc35f97178e34180f791b29aaa1cf5ed2e3434bef525f80954"}
Feb 17 20:12:07 crc kubenswrapper[4793]: I0217 20:12:07.912900 4793 scope.go:117] "RemoveContainer" containerID="0616e3c90ca60bf3175b34672c22624742e4a15164e445a959c9034522b647a7"
Feb 17 20:12:07 crc kubenswrapper[4793]: I0217 20:12:07.914403 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cpdmp"]
Feb 17 20:12:07 crc kubenswrapper[4793]: I0217 20:12:07.918465 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cpdmp"]
Feb 17 20:12:07 crc kubenswrapper[4793]: I0217 20:12:07.933750 4793 scope.go:117] "RemoveContainer" containerID="bced0582662ca5e60765b36b7fff3d0b32bc0c5a67218101e3c4cb5062bc142d"
Feb 17 20:12:07 crc kubenswrapper[4793]: I0217 20:12:07.934344 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ckt4v"]
Feb 17 20:12:07 crc kubenswrapper[4793]: I0217 20:12:07.938327 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ckt4v"]
Feb 17 20:12:07 crc kubenswrapper[4793]: I0217 20:12:07.955340 4793 scope.go:117] "RemoveContainer" containerID="a6b17b9aa1bd8c7f5e9ba846803aca1dbffee72ccd85801c85feabe11037973c"
Feb 17 20:12:07 crc kubenswrapper[4793]: E0217 20:12:07.955777 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6b17b9aa1bd8c7f5e9ba846803aca1dbffee72ccd85801c85feabe11037973c\": container with ID starting with a6b17b9aa1bd8c7f5e9ba846803aca1dbffee72ccd85801c85feabe11037973c not found: ID does not exist" containerID="a6b17b9aa1bd8c7f5e9ba846803aca1dbffee72ccd85801c85feabe11037973c"
Feb 17 20:12:07 crc kubenswrapper[4793]: I0217 20:12:07.955814 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6b17b9aa1bd8c7f5e9ba846803aca1dbffee72ccd85801c85feabe11037973c"} err="failed to get container status \"a6b17b9aa1bd8c7f5e9ba846803aca1dbffee72ccd85801c85feabe11037973c\": rpc error: code = NotFound desc = could not find container \"a6b17b9aa1bd8c7f5e9ba846803aca1dbffee72ccd85801c85feabe11037973c\": container with ID starting with a6b17b9aa1bd8c7f5e9ba846803aca1dbffee72ccd85801c85feabe11037973c not found: ID does not exist"
Feb 17 20:12:07 crc kubenswrapper[4793]: I0217 20:12:07.955840 4793 scope.go:117] "RemoveContainer" containerID="0616e3c90ca60bf3175b34672c22624742e4a15164e445a959c9034522b647a7"
Feb 17 20:12:07 crc kubenswrapper[4793]: E0217 20:12:07.956121 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0616e3c90ca60bf3175b34672c22624742e4a15164e445a959c9034522b647a7\": container with ID starting with 0616e3c90ca60bf3175b34672c22624742e4a15164e445a959c9034522b647a7 not found: ID does not exist" containerID="0616e3c90ca60bf3175b34672c22624742e4a15164e445a959c9034522b647a7"
Feb 17 20:12:07 crc kubenswrapper[4793]: I0217 20:12:07.956162 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0616e3c90ca60bf3175b34672c22624742e4a15164e445a959c9034522b647a7"} err="failed to get container status \"0616e3c90ca60bf3175b34672c22624742e4a15164e445a959c9034522b647a7\": rpc error: code = NotFound desc = could not find container \"0616e3c90ca60bf3175b34672c22624742e4a15164e445a959c9034522b647a7\": container with ID starting with 0616e3c90ca60bf3175b34672c22624742e4a15164e445a959c9034522b647a7 not found: ID does not exist"
Feb 17 20:12:07 crc kubenswrapper[4793]: I0217 20:12:07.956189 4793 scope.go:117] "RemoveContainer" containerID="bced0582662ca5e60765b36b7fff3d0b32bc0c5a67218101e3c4cb5062bc142d"
Feb 17 20:12:07 crc kubenswrapper[4793]: E0217 20:12:07.957139 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bced0582662ca5e60765b36b7fff3d0b32bc0c5a67218101e3c4cb5062bc142d\": container with ID starting with bced0582662ca5e60765b36b7fff3d0b32bc0c5a67218101e3c4cb5062bc142d not found: ID does not exist" containerID="bced0582662ca5e60765b36b7fff3d0b32bc0c5a67218101e3c4cb5062bc142d"
Feb 17 20:12:07 crc kubenswrapper[4793]: I0217 20:12:07.957174 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bced0582662ca5e60765b36b7fff3d0b32bc0c5a67218101e3c4cb5062bc142d"} err="failed to get container status \"bced0582662ca5e60765b36b7fff3d0b32bc0c5a67218101e3c4cb5062bc142d\": rpc error: code = NotFound desc = could not find container \"bced0582662ca5e60765b36b7fff3d0b32bc0c5a67218101e3c4cb5062bc142d\": container with ID starting with bced0582662ca5e60765b36b7fff3d0b32bc0c5a67218101e3c4cb5062bc142d not found: ID does not exist"
Feb 17 20:12:07 crc kubenswrapper[4793]: I0217 20:12:07.957194 4793 scope.go:117] "RemoveContainer" containerID="254229b959f5898c0af32cbfc28a1d7933b74e7a953d24c3a9592926511786ef"
Feb 17 20:12:07 crc kubenswrapper[4793]: I0217 20:12:07.973242 4793 scope.go:117] "RemoveContainer" containerID="8c0cf01e9ff1e36e291259617c38b93699bc1ad709be0779bd2c1e6d4102edd9"
Feb 17 20:12:07 crc kubenswrapper[4793]: I0217 20:12:07.984777 4793 scope.go:117] "RemoveContainer" containerID="068922ab91b0fd1713c6f44153705ca73c06a61ef5bf2511624397c1be20e509"
Feb 17 20:12:08 crc kubenswrapper[4793]: I0217 20:12:08.003269 4793 scope.go:117] "RemoveContainer" containerID="254229b959f5898c0af32cbfc28a1d7933b74e7a953d24c3a9592926511786ef"
Feb 17 20:12:08 crc kubenswrapper[4793]: E0217 20:12:08.003809 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"254229b959f5898c0af32cbfc28a1d7933b74e7a953d24c3a9592926511786ef\": container with ID starting with 254229b959f5898c0af32cbfc28a1d7933b74e7a953d24c3a9592926511786ef not found: ID does not exist" containerID="254229b959f5898c0af32cbfc28a1d7933b74e7a953d24c3a9592926511786ef"
Feb 17 20:12:08 crc kubenswrapper[4793]: I0217 20:12:08.003866 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"254229b959f5898c0af32cbfc28a1d7933b74e7a953d24c3a9592926511786ef"} err="failed to get container status \"254229b959f5898c0af32cbfc28a1d7933b74e7a953d24c3a9592926511786ef\": rpc error: code = NotFound desc = could not find container \"254229b959f5898c0af32cbfc28a1d7933b74e7a953d24c3a9592926511786ef\": container with ID starting with 254229b959f5898c0af32cbfc28a1d7933b74e7a953d24c3a9592926511786ef not found: ID does not exist"
Feb 17 20:12:08 crc kubenswrapper[4793]: I0217 20:12:08.003914 4793 scope.go:117] "RemoveContainer" containerID="8c0cf01e9ff1e36e291259617c38b93699bc1ad709be0779bd2c1e6d4102edd9"
Feb 17 20:12:08 crc kubenswrapper[4793]: E0217 20:12:08.004351 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c0cf01e9ff1e36e291259617c38b93699bc1ad709be0779bd2c1e6d4102edd9\": container with ID starting with 8c0cf01e9ff1e36e291259617c38b93699bc1ad709be0779bd2c1e6d4102edd9 not found: ID does not exist" containerID="8c0cf01e9ff1e36e291259617c38b93699bc1ad709be0779bd2c1e6d4102edd9"
Feb 17 20:12:08 crc kubenswrapper[4793]: I0217 20:12:08.004391 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c0cf01e9ff1e36e291259617c38b93699bc1ad709be0779bd2c1e6d4102edd9"} err="failed to get container status \"8c0cf01e9ff1e36e291259617c38b93699bc1ad709be0779bd2c1e6d4102edd9\": rpc error: code = NotFound desc = could not find container \"8c0cf01e9ff1e36e291259617c38b93699bc1ad709be0779bd2c1e6d4102edd9\": container with ID starting with 8c0cf01e9ff1e36e291259617c38b93699bc1ad709be0779bd2c1e6d4102edd9 not found: ID does not exist"
Feb 17 20:12:08 crc kubenswrapper[4793]: I0217 20:12:08.004416 4793 scope.go:117] "RemoveContainer" containerID="068922ab91b0fd1713c6f44153705ca73c06a61ef5bf2511624397c1be20e509"
Feb 17 20:12:08 crc kubenswrapper[4793]: E0217 20:12:08.004721 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"068922ab91b0fd1713c6f44153705ca73c06a61ef5bf2511624397c1be20e509\": container with ID starting with 068922ab91b0fd1713c6f44153705ca73c06a61ef5bf2511624397c1be20e509 not found: ID does not exist" containerID="068922ab91b0fd1713c6f44153705ca73c06a61ef5bf2511624397c1be20e509"
Feb 17 20:12:08 crc kubenswrapper[4793]: I0217 20:12:08.004765 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"068922ab91b0fd1713c6f44153705ca73c06a61ef5bf2511624397c1be20e509"} err="failed to get container status \"068922ab91b0fd1713c6f44153705ca73c06a61ef5bf2511624397c1be20e509\": rpc error: code = NotFound desc = could not find container \"068922ab91b0fd1713c6f44153705ca73c06a61ef5bf2511624397c1be20e509\": container with ID starting with 068922ab91b0fd1713c6f44153705ca73c06a61ef5bf2511624397c1be20e509 not found: ID does not exist"
Feb 17 20:12:09 crc kubenswrapper[4793]: I0217 20:12:09.547007 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6030c7c6-83d3-4ee2-95de-2c82b9cf2e89" path="/var/lib/kubelet/pods/6030c7c6-83d3-4ee2-95de-2c82b9cf2e89/volumes"
Feb 17 20:12:09 crc kubenswrapper[4793]: I0217 20:12:09.548236 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a94875ee-0253-456e-a8c8-68be4676bb88" path="/var/lib/kubelet/pods/a94875ee-0253-456e-a8c8-68be4676bb88/volumes"
Feb 17 20:12:09 crc kubenswrapper[4793]: I0217 20:12:09.658973 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-86554fb6bd-cl2hq"]
Feb 17 20:12:09 crc kubenswrapper[4793]: I0217 20:12:09.659222 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-86554fb6bd-cl2hq" podUID="0edc34f7-8747-4335-8a87-94d585057ac3" containerName="controller-manager" containerID="cri-o://d4194fdc7cbc2dbf44a63c6a046de7dd46551f267e7ad8e06cb4e490bc89e076" gracePeriod=30
Feb 17 20:12:09 crc kubenswrapper[4793]: I0217 20:12:09.680067 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d6988cbfb-h8hck"]
Feb 17 20:12:09 crc kubenswrapper[4793]: I0217 20:12:09.680328 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6d6988cbfb-h8hck" podUID="aef3e13c-918a-4296-8b80-109a6ead0eea" containerName="route-controller-manager" containerID="cri-o://fbaabfa9de919615a31d854891ee8a9168f956b514173a9088922aa0c5854ca0" gracePeriod=30
Feb 17 20:12:09 crc kubenswrapper[4793]: I0217 20:12:09.909480 4793 generic.go:334] "Generic (PLEG): container finished" podID="0edc34f7-8747-4335-8a87-94d585057ac3" containerID="d4194fdc7cbc2dbf44a63c6a046de7dd46551f267e7ad8e06cb4e490bc89e076" exitCode=0
Feb 17 20:12:09 crc kubenswrapper[4793]: I0217 20:12:09.909573 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86554fb6bd-cl2hq" event={"ID":"0edc34f7-8747-4335-8a87-94d585057ac3","Type":"ContainerDied","Data":"d4194fdc7cbc2dbf44a63c6a046de7dd46551f267e7ad8e06cb4e490bc89e076"}
Feb 17 20:12:09 crc kubenswrapper[4793]: I0217 20:12:09.911079 4793 generic.go:334] "Generic (PLEG): container finished" podID="aef3e13c-918a-4296-8b80-109a6ead0eea" containerID="fbaabfa9de919615a31d854891ee8a9168f956b514173a9088922aa0c5854ca0" exitCode=0
Feb 17 20:12:09 crc kubenswrapper[4793]: I0217 20:12:09.911116 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d6988cbfb-h8hck" event={"ID":"aef3e13c-918a-4296-8b80-109a6ead0eea","Type":"ContainerDied","Data":"fbaabfa9de919615a31d854891ee8a9168f956b514173a9088922aa0c5854ca0"}
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.166141 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86554fb6bd-cl2hq"
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.172955 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d6988cbfb-h8hck"
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.277281 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r62pv\" (UniqueName: \"kubernetes.io/projected/0edc34f7-8747-4335-8a87-94d585057ac3-kube-api-access-r62pv\") pod \"0edc34f7-8747-4335-8a87-94d585057ac3\" (UID: \"0edc34f7-8747-4335-8a87-94d585057ac3\") "
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.277350 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0edc34f7-8747-4335-8a87-94d585057ac3-config\") pod \"0edc34f7-8747-4335-8a87-94d585057ac3\" (UID: \"0edc34f7-8747-4335-8a87-94d585057ac3\") "
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.277401 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aef3e13c-918a-4296-8b80-109a6ead0eea-client-ca\") pod \"aef3e13c-918a-4296-8b80-109a6ead0eea\" (UID: \"aef3e13c-918a-4296-8b80-109a6ead0eea\") "
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.277482 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aef3e13c-918a-4296-8b80-109a6ead0eea-config\") pod \"aef3e13c-918a-4296-8b80-109a6ead0eea\" (UID: \"aef3e13c-918a-4296-8b80-109a6ead0eea\") "
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.277518 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aef3e13c-918a-4296-8b80-109a6ead0eea-serving-cert\") pod \"aef3e13c-918a-4296-8b80-109a6ead0eea\" (UID: \"aef3e13c-918a-4296-8b80-109a6ead0eea\") "
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.277584 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0edc34f7-8747-4335-8a87-94d585057ac3-client-ca\") pod \"0edc34f7-8747-4335-8a87-94d585057ac3\" (UID: \"0edc34f7-8747-4335-8a87-94d585057ac3\") "
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.277619 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6c6m\" (UniqueName: \"kubernetes.io/projected/aef3e13c-918a-4296-8b80-109a6ead0eea-kube-api-access-n6c6m\") pod \"aef3e13c-918a-4296-8b80-109a6ead0eea\" (UID: \"aef3e13c-918a-4296-8b80-109a6ead0eea\") "
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.277664 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0edc34f7-8747-4335-8a87-94d585057ac3-proxy-ca-bundles\") pod \"0edc34f7-8747-4335-8a87-94d585057ac3\" (UID: \"0edc34f7-8747-4335-8a87-94d585057ac3\") "
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.277721 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0edc34f7-8747-4335-8a87-94d585057ac3-serving-cert\") pod \"0edc34f7-8747-4335-8a87-94d585057ac3\" (UID: \"0edc34f7-8747-4335-8a87-94d585057ac3\") "
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.278492 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aef3e13c-918a-4296-8b80-109a6ead0eea-client-ca" (OuterVolumeSpecName: "client-ca") pod "aef3e13c-918a-4296-8b80-109a6ead0eea" (UID: "aef3e13c-918a-4296-8b80-109a6ead0eea"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.278566 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aef3e13c-918a-4296-8b80-109a6ead0eea-config" (OuterVolumeSpecName: "config") pod "aef3e13c-918a-4296-8b80-109a6ead0eea" (UID: "aef3e13c-918a-4296-8b80-109a6ead0eea"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.278637 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0edc34f7-8747-4335-8a87-94d585057ac3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0edc34f7-8747-4335-8a87-94d585057ac3" (UID: "0edc34f7-8747-4335-8a87-94d585057ac3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.278726 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0edc34f7-8747-4335-8a87-94d585057ac3-config" (OuterVolumeSpecName: "config") pod "0edc34f7-8747-4335-8a87-94d585057ac3" (UID: "0edc34f7-8747-4335-8a87-94d585057ac3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.279526 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0edc34f7-8747-4335-8a87-94d585057ac3-client-ca" (OuterVolumeSpecName: "client-ca") pod "0edc34f7-8747-4335-8a87-94d585057ac3" (UID: "0edc34f7-8747-4335-8a87-94d585057ac3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.284988 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aef3e13c-918a-4296-8b80-109a6ead0eea-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "aef3e13c-918a-4296-8b80-109a6ead0eea" (UID: "aef3e13c-918a-4296-8b80-109a6ead0eea"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.285001 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aef3e13c-918a-4296-8b80-109a6ead0eea-kube-api-access-n6c6m" (OuterVolumeSpecName: "kube-api-access-n6c6m") pod "aef3e13c-918a-4296-8b80-109a6ead0eea" (UID: "aef3e13c-918a-4296-8b80-109a6ead0eea"). InnerVolumeSpecName "kube-api-access-n6c6m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.285022 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0edc34f7-8747-4335-8a87-94d585057ac3-kube-api-access-r62pv" (OuterVolumeSpecName: "kube-api-access-r62pv") pod "0edc34f7-8747-4335-8a87-94d585057ac3" (UID: "0edc34f7-8747-4335-8a87-94d585057ac3"). InnerVolumeSpecName "kube-api-access-r62pv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.285730 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0edc34f7-8747-4335-8a87-94d585057ac3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0edc34f7-8747-4335-8a87-94d585057ac3" (UID: "0edc34f7-8747-4335-8a87-94d585057ac3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.378611 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0edc34f7-8747-4335-8a87-94d585057ac3-client-ca\") on node \"crc\" DevicePath \"\""
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.378654 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6c6m\" (UniqueName: \"kubernetes.io/projected/aef3e13c-918a-4296-8b80-109a6ead0eea-kube-api-access-n6c6m\") on node \"crc\" DevicePath \"\""
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.378675 4793 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0edc34f7-8747-4335-8a87-94d585057ac3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.378704 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0edc34f7-8747-4335-8a87-94d585057ac3-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.378745 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r62pv\" (UniqueName: \"kubernetes.io/projected/0edc34f7-8747-4335-8a87-94d585057ac3-kube-api-access-r62pv\") on node \"crc\" DevicePath \"\""
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.378757 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0edc34f7-8747-4335-8a87-94d585057ac3-config\") on node \"crc\" DevicePath \"\""
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.378767 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aef3e13c-918a-4296-8b80-109a6ead0eea-client-ca\") on node \"crc\" DevicePath \"\""
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.378777 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aef3e13c-918a-4296-8b80-109a6ead0eea-config\") on node \"crc\" DevicePath \"\""
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.378787 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aef3e13c-918a-4296-8b80-109a6ead0eea-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.715321 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gm5th"]
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.715657 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gm5th" podUID="e0eded1f-cb89-4a85-9e58-59ccef72584b" containerName="registry-server" containerID="cri-o://77de170bb0dc19985fbe01d609df7c308129da3e3f7743903c42893a816f7ddf" gracePeriod=2
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.916872 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86554fb6bd-cl2hq"
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.916875 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86554fb6bd-cl2hq" event={"ID":"0edc34f7-8747-4335-8a87-94d585057ac3","Type":"ContainerDied","Data":"4ede745c4f34a21e08e8161ded9f559172e04f1dccc665056fa1bbae8aab0fa0"}
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.917030 4793 scope.go:117] "RemoveContainer" containerID="d4194fdc7cbc2dbf44a63c6a046de7dd46551f267e7ad8e06cb4e490bc89e076"
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.918948 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d6988cbfb-h8hck" event={"ID":"aef3e13c-918a-4296-8b80-109a6ead0eea","Type":"ContainerDied","Data":"57f4a2b85237938309b363486d32d9742e2962cad0db24e9d7114e1b128d9057"}
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.919041 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d6988cbfb-h8hck"
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.924422 4793 generic.go:334] "Generic (PLEG): container finished" podID="e0eded1f-cb89-4a85-9e58-59ccef72584b" containerID="77de170bb0dc19985fbe01d609df7c308129da3e3f7743903c42893a816f7ddf" exitCode=0
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.924460 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gm5th" event={"ID":"e0eded1f-cb89-4a85-9e58-59ccef72584b","Type":"ContainerDied","Data":"77de170bb0dc19985fbe01d609df7c308129da3e3f7743903c42893a816f7ddf"}
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.939851 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-786b797dc4-tzht5"]
Feb 17 20:12:10 crc kubenswrapper[4793]: E0217 20:12:10.940040 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6030c7c6-83d3-4ee2-95de-2c82b9cf2e89" containerName="extract-content"
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.940052 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="6030c7c6-83d3-4ee2-95de-2c82b9cf2e89" containerName="extract-content"
Feb 17 20:12:10 crc kubenswrapper[4793]: E0217 20:12:10.940066 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aef3e13c-918a-4296-8b80-109a6ead0eea" containerName="route-controller-manager"
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.940072 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="aef3e13c-918a-4296-8b80-109a6ead0eea" containerName="route-controller-manager"
Feb 17 20:12:10 crc kubenswrapper[4793]: E0217 20:12:10.940081 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0edc34f7-8747-4335-8a87-94d585057ac3" containerName="controller-manager"
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.940088 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="0edc34f7-8747-4335-8a87-94d585057ac3" containerName="controller-manager"
Feb 17 20:12:10 crc kubenswrapper[4793]: E0217 20:12:10.940105 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6030c7c6-83d3-4ee2-95de-2c82b9cf2e89" containerName="extract-utilities"
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.940112 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="6030c7c6-83d3-4ee2-95de-2c82b9cf2e89" containerName="extract-utilities"
Feb 17 20:12:10 crc kubenswrapper[4793]: E0217 20:12:10.940121 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a94875ee-0253-456e-a8c8-68be4676bb88" containerName="extract-utilities"
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.940127 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="a94875ee-0253-456e-a8c8-68be4676bb88" containerName="extract-utilities"
Feb 17 20:12:10 crc kubenswrapper[4793]: E0217 20:12:10.940137 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a94875ee-0253-456e-a8c8-68be4676bb88" containerName="extract-content"
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.940143 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="a94875ee-0253-456e-a8c8-68be4676bb88" containerName="extract-content"
Feb 17 20:12:10 crc kubenswrapper[4793]: E0217 20:12:10.940151 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a94875ee-0253-456e-a8c8-68be4676bb88" containerName="registry-server"
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.940157 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="a94875ee-0253-456e-a8c8-68be4676bb88" containerName="registry-server"
Feb 17 20:12:10 crc kubenswrapper[4793]: E0217 20:12:10.940165 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6030c7c6-83d3-4ee2-95de-2c82b9cf2e89" containerName="registry-server"
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.940172 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="6030c7c6-83d3-4ee2-95de-2c82b9cf2e89" containerName="registry-server"
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.940256 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="0edc34f7-8747-4335-8a87-94d585057ac3" containerName="controller-manager"
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.940265 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="aef3e13c-918a-4296-8b80-109a6ead0eea" containerName="route-controller-manager"
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.940273 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="6030c7c6-83d3-4ee2-95de-2c82b9cf2e89" containerName="registry-server"
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.940280 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="a94875ee-0253-456e-a8c8-68be4676bb88" containerName="registry-server"
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.940619 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-786b797dc4-tzht5"
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.944002 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.944814 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.944934 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.944997 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.945047 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.945047 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.950478 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-674b7675f4-dvskg"]
Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.951532 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-674b7675f4-dvskg" Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.955864 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-786b797dc4-tzht5"] Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.957561 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.957737 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.957886 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.958032 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.958179 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.958717 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.962149 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.963782 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-674b7675f4-dvskg"] Feb 17 20:12:10 crc kubenswrapper[4793]: I0217 20:12:10.987060 4793 scope.go:117] "RemoveContainer" containerID="fbaabfa9de919615a31d854891ee8a9168f956b514173a9088922aa0c5854ca0" Feb 17 20:12:10 crc 
kubenswrapper[4793]: I0217 20:12:10.996502 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-86554fb6bd-cl2hq"] Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.001550 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c807689a-74be-4449-ba8f-e0a20d565829-proxy-ca-bundles\") pod \"controller-manager-674b7675f4-dvskg\" (UID: \"c807689a-74be-4449-ba8f-e0a20d565829\") " pod="openshift-controller-manager/controller-manager-674b7675f4-dvskg" Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.001666 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/51bad4ef-46c5-4932-9490-14c92b2fbdc6-client-ca\") pod \"route-controller-manager-786b797dc4-tzht5\" (UID: \"51bad4ef-46c5-4932-9490-14c92b2fbdc6\") " pod="openshift-route-controller-manager/route-controller-manager-786b797dc4-tzht5" Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.001786 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c807689a-74be-4449-ba8f-e0a20d565829-config\") pod \"controller-manager-674b7675f4-dvskg\" (UID: \"c807689a-74be-4449-ba8f-e0a20d565829\") " pod="openshift-controller-manager/controller-manager-674b7675f4-dvskg" Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.001815 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c807689a-74be-4449-ba8f-e0a20d565829-client-ca\") pod \"controller-manager-674b7675f4-dvskg\" (UID: \"c807689a-74be-4449-ba8f-e0a20d565829\") " pod="openshift-controller-manager/controller-manager-674b7675f4-dvskg" Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.001885 4793 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-86554fb6bd-cl2hq"] Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.001975 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d6988cbfb-h8hck"] Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.001922 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhrk5\" (UniqueName: \"kubernetes.io/projected/c807689a-74be-4449-ba8f-e0a20d565829-kube-api-access-mhrk5\") pod \"controller-manager-674b7675f4-dvskg\" (UID: \"c807689a-74be-4449-ba8f-e0a20d565829\") " pod="openshift-controller-manager/controller-manager-674b7675f4-dvskg" Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.002095 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51bad4ef-46c5-4932-9490-14c92b2fbdc6-config\") pod \"route-controller-manager-786b797dc4-tzht5\" (UID: \"51bad4ef-46c5-4932-9490-14c92b2fbdc6\") " pod="openshift-route-controller-manager/route-controller-manager-786b797dc4-tzht5" Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.002203 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51bad4ef-46c5-4932-9490-14c92b2fbdc6-serving-cert\") pod \"route-controller-manager-786b797dc4-tzht5\" (UID: \"51bad4ef-46c5-4932-9490-14c92b2fbdc6\") " pod="openshift-route-controller-manager/route-controller-manager-786b797dc4-tzht5" Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.002267 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c807689a-74be-4449-ba8f-e0a20d565829-serving-cert\") pod \"controller-manager-674b7675f4-dvskg\" (UID: 
\"c807689a-74be-4449-ba8f-e0a20d565829\") " pod="openshift-controller-manager/controller-manager-674b7675f4-dvskg" Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.002352 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2vzz\" (UniqueName: \"kubernetes.io/projected/51bad4ef-46c5-4932-9490-14c92b2fbdc6-kube-api-access-p2vzz\") pod \"route-controller-manager-786b797dc4-tzht5\" (UID: \"51bad4ef-46c5-4932-9490-14c92b2fbdc6\") " pod="openshift-route-controller-manager/route-controller-manager-786b797dc4-tzht5" Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.011044 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d6988cbfb-h8hck"] Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.103980 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2vzz\" (UniqueName: \"kubernetes.io/projected/51bad4ef-46c5-4932-9490-14c92b2fbdc6-kube-api-access-p2vzz\") pod \"route-controller-manager-786b797dc4-tzht5\" (UID: \"51bad4ef-46c5-4932-9490-14c92b2fbdc6\") " pod="openshift-route-controller-manager/route-controller-manager-786b797dc4-tzht5" Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.104061 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c807689a-74be-4449-ba8f-e0a20d565829-proxy-ca-bundles\") pod \"controller-manager-674b7675f4-dvskg\" (UID: \"c807689a-74be-4449-ba8f-e0a20d565829\") " pod="openshift-controller-manager/controller-manager-674b7675f4-dvskg" Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.104085 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/51bad4ef-46c5-4932-9490-14c92b2fbdc6-client-ca\") pod \"route-controller-manager-786b797dc4-tzht5\" (UID: 
\"51bad4ef-46c5-4932-9490-14c92b2fbdc6\") " pod="openshift-route-controller-manager/route-controller-manager-786b797dc4-tzht5" Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.104123 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c807689a-74be-4449-ba8f-e0a20d565829-client-ca\") pod \"controller-manager-674b7675f4-dvskg\" (UID: \"c807689a-74be-4449-ba8f-e0a20d565829\") " pod="openshift-controller-manager/controller-manager-674b7675f4-dvskg" Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.104440 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c807689a-74be-4449-ba8f-e0a20d565829-config\") pod \"controller-manager-674b7675f4-dvskg\" (UID: \"c807689a-74be-4449-ba8f-e0a20d565829\") " pod="openshift-controller-manager/controller-manager-674b7675f4-dvskg" Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.104478 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhrk5\" (UniqueName: \"kubernetes.io/projected/c807689a-74be-4449-ba8f-e0a20d565829-kube-api-access-mhrk5\") pod \"controller-manager-674b7675f4-dvskg\" (UID: \"c807689a-74be-4449-ba8f-e0a20d565829\") " pod="openshift-controller-manager/controller-manager-674b7675f4-dvskg" Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.104512 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51bad4ef-46c5-4932-9490-14c92b2fbdc6-config\") pod \"route-controller-manager-786b797dc4-tzht5\" (UID: \"51bad4ef-46c5-4932-9490-14c92b2fbdc6\") " pod="openshift-route-controller-manager/route-controller-manager-786b797dc4-tzht5" Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.104666 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/51bad4ef-46c5-4932-9490-14c92b2fbdc6-serving-cert\") pod \"route-controller-manager-786b797dc4-tzht5\" (UID: \"51bad4ef-46c5-4932-9490-14c92b2fbdc6\") " pod="openshift-route-controller-manager/route-controller-manager-786b797dc4-tzht5" Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.104782 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c807689a-74be-4449-ba8f-e0a20d565829-serving-cert\") pod \"controller-manager-674b7675f4-dvskg\" (UID: \"c807689a-74be-4449-ba8f-e0a20d565829\") " pod="openshift-controller-manager/controller-manager-674b7675f4-dvskg" Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.106147 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c807689a-74be-4449-ba8f-e0a20d565829-config\") pod \"controller-manager-674b7675f4-dvskg\" (UID: \"c807689a-74be-4449-ba8f-e0a20d565829\") " pod="openshift-controller-manager/controller-manager-674b7675f4-dvskg" Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.106388 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c807689a-74be-4449-ba8f-e0a20d565829-client-ca\") pod \"controller-manager-674b7675f4-dvskg\" (UID: \"c807689a-74be-4449-ba8f-e0a20d565829\") " pod="openshift-controller-manager/controller-manager-674b7675f4-dvskg" Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.107022 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51bad4ef-46c5-4932-9490-14c92b2fbdc6-config\") pod \"route-controller-manager-786b797dc4-tzht5\" (UID: \"51bad4ef-46c5-4932-9490-14c92b2fbdc6\") " pod="openshift-route-controller-manager/route-controller-manager-786b797dc4-tzht5" Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.107317 4793 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/51bad4ef-46c5-4932-9490-14c92b2fbdc6-client-ca\") pod \"route-controller-manager-786b797dc4-tzht5\" (UID: \"51bad4ef-46c5-4932-9490-14c92b2fbdc6\") " pod="openshift-route-controller-manager/route-controller-manager-786b797dc4-tzht5" Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.108888 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c807689a-74be-4449-ba8f-e0a20d565829-proxy-ca-bundles\") pod \"controller-manager-674b7675f4-dvskg\" (UID: \"c807689a-74be-4449-ba8f-e0a20d565829\") " pod="openshift-controller-manager/controller-manager-674b7675f4-dvskg" Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.110843 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c807689a-74be-4449-ba8f-e0a20d565829-serving-cert\") pod \"controller-manager-674b7675f4-dvskg\" (UID: \"c807689a-74be-4449-ba8f-e0a20d565829\") " pod="openshift-controller-manager/controller-manager-674b7675f4-dvskg" Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.124281 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51bad4ef-46c5-4932-9490-14c92b2fbdc6-serving-cert\") pod \"route-controller-manager-786b797dc4-tzht5\" (UID: \"51bad4ef-46c5-4932-9490-14c92b2fbdc6\") " pod="openshift-route-controller-manager/route-controller-manager-786b797dc4-tzht5" Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.128092 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhrk5\" (UniqueName: \"kubernetes.io/projected/c807689a-74be-4449-ba8f-e0a20d565829-kube-api-access-mhrk5\") pod \"controller-manager-674b7675f4-dvskg\" (UID: \"c807689a-74be-4449-ba8f-e0a20d565829\") " 
pod="openshift-controller-manager/controller-manager-674b7675f4-dvskg" Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.134769 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2vzz\" (UniqueName: \"kubernetes.io/projected/51bad4ef-46c5-4932-9490-14c92b2fbdc6-kube-api-access-p2vzz\") pod \"route-controller-manager-786b797dc4-tzht5\" (UID: \"51bad4ef-46c5-4932-9490-14c92b2fbdc6\") " pod="openshift-route-controller-manager/route-controller-manager-786b797dc4-tzht5" Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.186870 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gm5th" Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.268800 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-786b797dc4-tzht5" Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.286963 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-674b7675f4-dvskg" Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.308533 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0eded1f-cb89-4a85-9e58-59ccef72584b-utilities\") pod \"e0eded1f-cb89-4a85-9e58-59ccef72584b\" (UID: \"e0eded1f-cb89-4a85-9e58-59ccef72584b\") " Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.308625 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wtm9\" (UniqueName: \"kubernetes.io/projected/e0eded1f-cb89-4a85-9e58-59ccef72584b-kube-api-access-7wtm9\") pod \"e0eded1f-cb89-4a85-9e58-59ccef72584b\" (UID: \"e0eded1f-cb89-4a85-9e58-59ccef72584b\") " Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.308686 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0eded1f-cb89-4a85-9e58-59ccef72584b-catalog-content\") pod \"e0eded1f-cb89-4a85-9e58-59ccef72584b\" (UID: \"e0eded1f-cb89-4a85-9e58-59ccef72584b\") " Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.310958 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0eded1f-cb89-4a85-9e58-59ccef72584b-utilities" (OuterVolumeSpecName: "utilities") pod "e0eded1f-cb89-4a85-9e58-59ccef72584b" (UID: "e0eded1f-cb89-4a85-9e58-59ccef72584b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.311810 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0eded1f-cb89-4a85-9e58-59ccef72584b-kube-api-access-7wtm9" (OuterVolumeSpecName: "kube-api-access-7wtm9") pod "e0eded1f-cb89-4a85-9e58-59ccef72584b" (UID: "e0eded1f-cb89-4a85-9e58-59ccef72584b"). InnerVolumeSpecName "kube-api-access-7wtm9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.410578 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wtm9\" (UniqueName: \"kubernetes.io/projected/e0eded1f-cb89-4a85-9e58-59ccef72584b-kube-api-access-7wtm9\") on node \"crc\" DevicePath \"\"" Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.410615 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0eded1f-cb89-4a85-9e58-59ccef72584b-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.428854 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0eded1f-cb89-4a85-9e58-59ccef72584b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e0eded1f-cb89-4a85-9e58-59ccef72584b" (UID: "e0eded1f-cb89-4a85-9e58-59ccef72584b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.491392 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-674b7675f4-dvskg"] Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.512085 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0eded1f-cb89-4a85-9e58-59ccef72584b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.546948 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0edc34f7-8747-4335-8a87-94d585057ac3" path="/var/lib/kubelet/pods/0edc34f7-8747-4335-8a87-94d585057ac3/volumes" Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.547905 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aef3e13c-918a-4296-8b80-109a6ead0eea" 
path="/var/lib/kubelet/pods/aef3e13c-918a-4296-8b80-109a6ead0eea/volumes" Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.671967 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-786b797dc4-tzht5"] Feb 17 20:12:11 crc kubenswrapper[4793]: W0217 20:12:11.681041 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51bad4ef_46c5_4932_9490_14c92b2fbdc6.slice/crio-5def0af59f2dd0c32224f8b8ee4a7bccdfd5821ea995e8ec7ca4783fc6339bdd WatchSource:0}: Error finding container 5def0af59f2dd0c32224f8b8ee4a7bccdfd5821ea995e8ec7ca4783fc6339bdd: Status 404 returned error can't find the container with id 5def0af59f2dd0c32224f8b8ee4a7bccdfd5821ea995e8ec7ca4783fc6339bdd Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.930593 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-786b797dc4-tzht5" event={"ID":"51bad4ef-46c5-4932-9490-14c92b2fbdc6","Type":"ContainerStarted","Data":"dd50a5568f73fdfd4f508cff0c53b08cc5e9c06a0686a5b57ae503d5c33bd6e9"} Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.930638 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-786b797dc4-tzht5" event={"ID":"51bad4ef-46c5-4932-9490-14c92b2fbdc6","Type":"ContainerStarted","Data":"5def0af59f2dd0c32224f8b8ee4a7bccdfd5821ea995e8ec7ca4783fc6339bdd"} Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.930730 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-786b797dc4-tzht5" Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.932805 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gm5th" 
event={"ID":"e0eded1f-cb89-4a85-9e58-59ccef72584b","Type":"ContainerDied","Data":"32abd46e00e3636a11d5e2dba71a035aad090991a39f14576daddbf8ac74964d"} Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.932836 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gm5th" Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.932852 4793 scope.go:117] "RemoveContainer" containerID="77de170bb0dc19985fbe01d609df7c308129da3e3f7743903c42893a816f7ddf" Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.936023 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-674b7675f4-dvskg" event={"ID":"c807689a-74be-4449-ba8f-e0a20d565829","Type":"ContainerStarted","Data":"7c697b9e944417ddea7c768b2e7ffdd6f9fd7d8aadeb6eb492983827bc801bae"} Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.936074 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-674b7675f4-dvskg" event={"ID":"c807689a-74be-4449-ba8f-e0a20d565829","Type":"ContainerStarted","Data":"a0ef304fff6e33ef47669cffbcbe2d5e2ad790010e36a97ded5e2b8a2575655f"} Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.936093 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-674b7675f4-dvskg" Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.946278 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-674b7675f4-dvskg" Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.947854 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-786b797dc4-tzht5" podStartSLOduration=2.9478388300000002 podStartE2EDuration="2.94783883s" podCreationTimestamp="2026-02-17 20:12:09 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:12:11.946525599 +0000 UTC m=+207.238223910" watchObservedRunningTime="2026-02-17 20:12:11.94783883 +0000 UTC m=+207.239537141" Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.949121 4793 scope.go:117] "RemoveContainer" containerID="d7230bc7aadfbf84fe7faeee0589d0dda3beb7e449a4cea09e883c57c216ce0c" Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.965359 4793 scope.go:117] "RemoveContainer" containerID="fe1bc3f46dbe0ea81ba34879db085393120eed3c666e25b8f4eb67bb637de560" Feb 17 20:12:11 crc kubenswrapper[4793]: I0217 20:12:11.994954 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-674b7675f4-dvskg" podStartSLOduration=2.9949350839999997 podStartE2EDuration="2.994935084s" podCreationTimestamp="2026-02-17 20:12:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:12:11.992480397 +0000 UTC m=+207.284178708" watchObservedRunningTime="2026-02-17 20:12:11.994935084 +0000 UTC m=+207.286633395" Feb 17 20:12:12 crc kubenswrapper[4793]: I0217 20:12:12.006495 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gm5th"] Feb 17 20:12:12 crc kubenswrapper[4793]: I0217 20:12:12.008889 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gm5th"] Feb 17 20:12:12 crc kubenswrapper[4793]: I0217 20:12:12.460944 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-786b797dc4-tzht5" Feb 17 20:12:13 crc kubenswrapper[4793]: I0217 20:12:13.545293 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0eded1f-cb89-4a85-9e58-59ccef72584b" 
path="/var/lib/kubelet/pods/e0eded1f-cb89-4a85-9e58-59ccef72584b/volumes" Feb 17 20:12:20 crc kubenswrapper[4793]: I0217 20:12:20.101757 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 20:12:20 crc kubenswrapper[4793]: I0217 20:12:20.102384 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 20:12:20 crc kubenswrapper[4793]: I0217 20:12:20.102443 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" Feb 17 20:12:20 crc kubenswrapper[4793]: I0217 20:12:20.103186 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c883db680c5e015621e98e5b3c06351d9dd6d33ea28b502241344af8df0fd1e7"} pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 20:12:20 crc kubenswrapper[4793]: I0217 20:12:20.103248 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" containerID="cri-o://c883db680c5e015621e98e5b3c06351d9dd6d33ea28b502241344af8df0fd1e7" gracePeriod=600 Feb 17 20:12:21 crc kubenswrapper[4793]: I0217 20:12:21.004576 4793 generic.go:334] "Generic (PLEG): container finished" 
podID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerID="c883db680c5e015621e98e5b3c06351d9dd6d33ea28b502241344af8df0fd1e7" exitCode=0 Feb 17 20:12:21 crc kubenswrapper[4793]: I0217 20:12:21.004645 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerDied","Data":"c883db680c5e015621e98e5b3c06351d9dd6d33ea28b502241344af8df0fd1e7"} Feb 17 20:12:21 crc kubenswrapper[4793]: I0217 20:12:21.004982 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerStarted","Data":"3a474aac3a35cef45f7adaa960db8f39172d3e9c658e5e9a60808d964d835508"} Feb 17 20:12:21 crc kubenswrapper[4793]: I0217 20:12:21.188781 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-2bgzw" podUID="26565b8e-93a3-4682-8c20-ee6cb2319543" containerName="oauth-openshift" containerID="cri-o://80941d0c0f65d5a181e575178bda0446698179c296ab242ad7b979e3d809592d" gracePeriod=15 Feb 17 20:12:21 crc kubenswrapper[4793]: I0217 20:12:21.628365 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-2bgzw" Feb 17 20:12:21 crc kubenswrapper[4793]: I0217 20:12:21.754264 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-user-template-error\") pod \"26565b8e-93a3-4682-8c20-ee6cb2319543\" (UID: \"26565b8e-93a3-4682-8c20-ee6cb2319543\") " Feb 17 20:12:21 crc kubenswrapper[4793]: I0217 20:12:21.754352 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-user-idp-0-file-data\") pod \"26565b8e-93a3-4682-8c20-ee6cb2319543\" (UID: \"26565b8e-93a3-4682-8c20-ee6cb2319543\") " Feb 17 20:12:21 crc kubenswrapper[4793]: I0217 20:12:21.754391 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-system-router-certs\") pod \"26565b8e-93a3-4682-8c20-ee6cb2319543\" (UID: \"26565b8e-93a3-4682-8c20-ee6cb2319543\") " Feb 17 20:12:21 crc kubenswrapper[4793]: I0217 20:12:21.754411 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-system-ocp-branding-template\") pod \"26565b8e-93a3-4682-8c20-ee6cb2319543\" (UID: \"26565b8e-93a3-4682-8c20-ee6cb2319543\") " Feb 17 20:12:21 crc kubenswrapper[4793]: I0217 20:12:21.754429 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-system-service-ca\") pod \"26565b8e-93a3-4682-8c20-ee6cb2319543\" (UID: 
\"26565b8e-93a3-4682-8c20-ee6cb2319543\") " Feb 17 20:12:21 crc kubenswrapper[4793]: I0217 20:12:21.754447 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52mjb\" (UniqueName: \"kubernetes.io/projected/26565b8e-93a3-4682-8c20-ee6cb2319543-kube-api-access-52mjb\") pod \"26565b8e-93a3-4682-8c20-ee6cb2319543\" (UID: \"26565b8e-93a3-4682-8c20-ee6cb2319543\") " Feb 17 20:12:21 crc kubenswrapper[4793]: I0217 20:12:21.754483 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-system-serving-cert\") pod \"26565b8e-93a3-4682-8c20-ee6cb2319543\" (UID: \"26565b8e-93a3-4682-8c20-ee6cb2319543\") " Feb 17 20:12:21 crc kubenswrapper[4793]: I0217 20:12:21.754499 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-user-template-provider-selection\") pod \"26565b8e-93a3-4682-8c20-ee6cb2319543\" (UID: \"26565b8e-93a3-4682-8c20-ee6cb2319543\") " Feb 17 20:12:21 crc kubenswrapper[4793]: I0217 20:12:21.754525 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-system-cliconfig\") pod \"26565b8e-93a3-4682-8c20-ee6cb2319543\" (UID: \"26565b8e-93a3-4682-8c20-ee6cb2319543\") " Feb 17 20:12:21 crc kubenswrapper[4793]: I0217 20:12:21.754546 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/26565b8e-93a3-4682-8c20-ee6cb2319543-audit-dir\") pod \"26565b8e-93a3-4682-8c20-ee6cb2319543\" (UID: \"26565b8e-93a3-4682-8c20-ee6cb2319543\") " Feb 17 20:12:21 crc kubenswrapper[4793]: I0217 20:12:21.754576 
4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/26565b8e-93a3-4682-8c20-ee6cb2319543-audit-policies\") pod \"26565b8e-93a3-4682-8c20-ee6cb2319543\" (UID: \"26565b8e-93a3-4682-8c20-ee6cb2319543\") " Feb 17 20:12:21 crc kubenswrapper[4793]: I0217 20:12:21.754607 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-user-template-login\") pod \"26565b8e-93a3-4682-8c20-ee6cb2319543\" (UID: \"26565b8e-93a3-4682-8c20-ee6cb2319543\") " Feb 17 20:12:21 crc kubenswrapper[4793]: I0217 20:12:21.754624 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-system-trusted-ca-bundle\") pod \"26565b8e-93a3-4682-8c20-ee6cb2319543\" (UID: \"26565b8e-93a3-4682-8c20-ee6cb2319543\") " Feb 17 20:12:21 crc kubenswrapper[4793]: I0217 20:12:21.754640 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-system-session\") pod \"26565b8e-93a3-4682-8c20-ee6cb2319543\" (UID: \"26565b8e-93a3-4682-8c20-ee6cb2319543\") " Feb 17 20:12:21 crc kubenswrapper[4793]: I0217 20:12:21.755246 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "26565b8e-93a3-4682-8c20-ee6cb2319543" (UID: "26565b8e-93a3-4682-8c20-ee6cb2319543"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:12:21 crc kubenswrapper[4793]: I0217 20:12:21.755310 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26565b8e-93a3-4682-8c20-ee6cb2319543-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "26565b8e-93a3-4682-8c20-ee6cb2319543" (UID: "26565b8e-93a3-4682-8c20-ee6cb2319543"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 20:12:21 crc kubenswrapper[4793]: I0217 20:12:21.755367 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "26565b8e-93a3-4682-8c20-ee6cb2319543" (UID: "26565b8e-93a3-4682-8c20-ee6cb2319543"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:12:21 crc kubenswrapper[4793]: I0217 20:12:21.755567 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26565b8e-93a3-4682-8c20-ee6cb2319543-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "26565b8e-93a3-4682-8c20-ee6cb2319543" (UID: "26565b8e-93a3-4682-8c20-ee6cb2319543"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:12:21 crc kubenswrapper[4793]: I0217 20:12:21.755917 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "26565b8e-93a3-4682-8c20-ee6cb2319543" (UID: "26565b8e-93a3-4682-8c20-ee6cb2319543"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:12:21 crc kubenswrapper[4793]: I0217 20:12:21.761094 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "26565b8e-93a3-4682-8c20-ee6cb2319543" (UID: "26565b8e-93a3-4682-8c20-ee6cb2319543"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:12:21 crc kubenswrapper[4793]: I0217 20:12:21.761377 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "26565b8e-93a3-4682-8c20-ee6cb2319543" (UID: "26565b8e-93a3-4682-8c20-ee6cb2319543"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:12:21 crc kubenswrapper[4793]: I0217 20:12:21.761786 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "26565b8e-93a3-4682-8c20-ee6cb2319543" (UID: "26565b8e-93a3-4682-8c20-ee6cb2319543"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:12:21 crc kubenswrapper[4793]: I0217 20:12:21.763347 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "26565b8e-93a3-4682-8c20-ee6cb2319543" (UID: "26565b8e-93a3-4682-8c20-ee6cb2319543"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:12:21 crc kubenswrapper[4793]: I0217 20:12:21.763980 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26565b8e-93a3-4682-8c20-ee6cb2319543-kube-api-access-52mjb" (OuterVolumeSpecName: "kube-api-access-52mjb") pod "26565b8e-93a3-4682-8c20-ee6cb2319543" (UID: "26565b8e-93a3-4682-8c20-ee6cb2319543"). InnerVolumeSpecName "kube-api-access-52mjb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:12:21 crc kubenswrapper[4793]: I0217 20:12:21.764065 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "26565b8e-93a3-4682-8c20-ee6cb2319543" (UID: "26565b8e-93a3-4682-8c20-ee6cb2319543"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:12:21 crc kubenswrapper[4793]: I0217 20:12:21.764364 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "26565b8e-93a3-4682-8c20-ee6cb2319543" (UID: "26565b8e-93a3-4682-8c20-ee6cb2319543"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:12:21 crc kubenswrapper[4793]: I0217 20:12:21.764714 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "26565b8e-93a3-4682-8c20-ee6cb2319543" (UID: "26565b8e-93a3-4682-8c20-ee6cb2319543"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:12:21 crc kubenswrapper[4793]: I0217 20:12:21.765081 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "26565b8e-93a3-4682-8c20-ee6cb2319543" (UID: "26565b8e-93a3-4682-8c20-ee6cb2319543"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:12:21 crc kubenswrapper[4793]: I0217 20:12:21.856134 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 17 20:12:21 crc kubenswrapper[4793]: I0217 20:12:21.856192 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 17 20:12:21 crc kubenswrapper[4793]: I0217 20:12:21.856211 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 20:12:21 crc kubenswrapper[4793]: I0217 20:12:21.856227 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 17 20:12:21 crc kubenswrapper[4793]: I0217 20:12:21.856246 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 17 20:12:21 crc kubenswrapper[4793]: I0217 20:12:21.856263 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52mjb\" (UniqueName: \"kubernetes.io/projected/26565b8e-93a3-4682-8c20-ee6cb2319543-kube-api-access-52mjb\") on node \"crc\" DevicePath \"\"" Feb 17 20:12:21 crc kubenswrapper[4793]: I0217 20:12:21.856281 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 20:12:21 crc kubenswrapper[4793]: I0217 20:12:21.856301 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 17 20:12:21 crc kubenswrapper[4793]: I0217 20:12:21.856321 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 17 20:12:21 crc kubenswrapper[4793]: I0217 20:12:21.856339 4793 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/26565b8e-93a3-4682-8c20-ee6cb2319543-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 17 20:12:21 crc kubenswrapper[4793]: I0217 20:12:21.856354 4793 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/26565b8e-93a3-4682-8c20-ee6cb2319543-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 17 20:12:21 crc kubenswrapper[4793]: I0217 20:12:21.856370 4793 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 17 20:12:21 crc kubenswrapper[4793]: I0217 20:12:21.856386 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 17 20:12:21 crc kubenswrapper[4793]: I0217 20:12:21.856403 4793 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26565b8e-93a3-4682-8c20-ee6cb2319543-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 20:12:22 crc kubenswrapper[4793]: I0217 20:12:22.021225 4793 generic.go:334] "Generic (PLEG): container finished" podID="26565b8e-93a3-4682-8c20-ee6cb2319543" containerID="80941d0c0f65d5a181e575178bda0446698179c296ab242ad7b979e3d809592d" exitCode=0 Feb 17 20:12:22 crc kubenswrapper[4793]: I0217 20:12:22.021283 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-2bgzw" event={"ID":"26565b8e-93a3-4682-8c20-ee6cb2319543","Type":"ContainerDied","Data":"80941d0c0f65d5a181e575178bda0446698179c296ab242ad7b979e3d809592d"} Feb 17 20:12:22 crc kubenswrapper[4793]: I0217 20:12:22.021319 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-2bgzw" event={"ID":"26565b8e-93a3-4682-8c20-ee6cb2319543","Type":"ContainerDied","Data":"725b1aeb35c64519353ae002bc920fe0faf9670d8b1943dce2274dd29ae52020"} Feb 17 20:12:22 crc kubenswrapper[4793]: I0217 20:12:22.021339 4793 scope.go:117] "RemoveContainer" containerID="80941d0c0f65d5a181e575178bda0446698179c296ab242ad7b979e3d809592d" Feb 17 20:12:22 crc kubenswrapper[4793]: I0217 20:12:22.021363 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-2bgzw" Feb 17 20:12:22 crc kubenswrapper[4793]: I0217 20:12:22.049982 4793 scope.go:117] "RemoveContainer" containerID="80941d0c0f65d5a181e575178bda0446698179c296ab242ad7b979e3d809592d" Feb 17 20:12:22 crc kubenswrapper[4793]: E0217 20:12:22.050902 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80941d0c0f65d5a181e575178bda0446698179c296ab242ad7b979e3d809592d\": container with ID starting with 80941d0c0f65d5a181e575178bda0446698179c296ab242ad7b979e3d809592d not found: ID does not exist" containerID="80941d0c0f65d5a181e575178bda0446698179c296ab242ad7b979e3d809592d" Feb 17 20:12:22 crc kubenswrapper[4793]: I0217 20:12:22.050947 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80941d0c0f65d5a181e575178bda0446698179c296ab242ad7b979e3d809592d"} err="failed to get container status \"80941d0c0f65d5a181e575178bda0446698179c296ab242ad7b979e3d809592d\": rpc error: code = NotFound desc = could not find container \"80941d0c0f65d5a181e575178bda0446698179c296ab242ad7b979e3d809592d\": container with ID starting with 80941d0c0f65d5a181e575178bda0446698179c296ab242ad7b979e3d809592d not found: ID does not exist" Feb 17 20:12:22 crc kubenswrapper[4793]: I0217 20:12:22.069490 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2bgzw"] Feb 17 20:12:22 crc kubenswrapper[4793]: I0217 20:12:22.072629 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2bgzw"] Feb 17 20:12:23 crc kubenswrapper[4793]: I0217 20:12:23.544836 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26565b8e-93a3-4682-8c20-ee6cb2319543" path="/var/lib/kubelet/pods/26565b8e-93a3-4682-8c20-ee6cb2319543/volumes" Feb 17 20:12:25 crc kubenswrapper[4793]: I0217 
20:12:25.955021 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7c4995446c-gs6hl"] Feb 17 20:12:25 crc kubenswrapper[4793]: E0217 20:12:25.955734 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0eded1f-cb89-4a85-9e58-59ccef72584b" containerName="registry-server" Feb 17 20:12:25 crc kubenswrapper[4793]: I0217 20:12:25.955750 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0eded1f-cb89-4a85-9e58-59ccef72584b" containerName="registry-server" Feb 17 20:12:25 crc kubenswrapper[4793]: E0217 20:12:25.955774 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0eded1f-cb89-4a85-9e58-59ccef72584b" containerName="extract-content" Feb 17 20:12:25 crc kubenswrapper[4793]: I0217 20:12:25.955784 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0eded1f-cb89-4a85-9e58-59ccef72584b" containerName="extract-content" Feb 17 20:12:25 crc kubenswrapper[4793]: E0217 20:12:25.955798 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26565b8e-93a3-4682-8c20-ee6cb2319543" containerName="oauth-openshift" Feb 17 20:12:25 crc kubenswrapper[4793]: I0217 20:12:25.955806 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="26565b8e-93a3-4682-8c20-ee6cb2319543" containerName="oauth-openshift" Feb 17 20:12:25 crc kubenswrapper[4793]: E0217 20:12:25.955916 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0eded1f-cb89-4a85-9e58-59ccef72584b" containerName="extract-utilities" Feb 17 20:12:25 crc kubenswrapper[4793]: I0217 20:12:25.955928 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0eded1f-cb89-4a85-9e58-59ccef72584b" containerName="extract-utilities" Feb 17 20:12:25 crc kubenswrapper[4793]: I0217 20:12:25.956094 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0eded1f-cb89-4a85-9e58-59ccef72584b" containerName="registry-server" Feb 17 20:12:25 crc kubenswrapper[4793]: I0217 20:12:25.956118 4793 
memory_manager.go:354] "RemoveStaleState removing state" podUID="26565b8e-93a3-4682-8c20-ee6cb2319543" containerName="oauth-openshift" Feb 17 20:12:25 crc kubenswrapper[4793]: I0217 20:12:25.956811 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7c4995446c-gs6hl" Feb 17 20:12:25 crc kubenswrapper[4793]: I0217 20:12:25.959144 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 17 20:12:25 crc kubenswrapper[4793]: I0217 20:12:25.959423 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 17 20:12:25 crc kubenswrapper[4793]: I0217 20:12:25.960545 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 17 20:12:25 crc kubenswrapper[4793]: I0217 20:12:25.964425 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 17 20:12:25 crc kubenswrapper[4793]: I0217 20:12:25.965013 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 17 20:12:25 crc kubenswrapper[4793]: I0217 20:12:25.965620 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 17 20:12:25 crc kubenswrapper[4793]: I0217 20:12:25.965847 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 17 20:12:25 crc kubenswrapper[4793]: I0217 20:12:25.965975 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 17 20:12:25 crc kubenswrapper[4793]: I0217 20:12:25.966217 4793 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"kube-root-ca.crt" Feb 17 20:12:25 crc kubenswrapper[4793]: I0217 20:12:25.966587 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 17 20:12:25 crc kubenswrapper[4793]: I0217 20:12:25.966941 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 17 20:12:25 crc kubenswrapper[4793]: I0217 20:12:25.967888 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 17 20:12:25 crc kubenswrapper[4793]: I0217 20:12:25.977672 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7c4995446c-gs6hl"] Feb 17 20:12:25 crc kubenswrapper[4793]: I0217 20:12:25.989438 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 17 20:12:25 crc kubenswrapper[4793]: I0217 20:12:25.990133 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 17 20:12:25 crc kubenswrapper[4793]: I0217 20:12:25.999682 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 17 20:12:26 crc kubenswrapper[4793]: I0217 20:12:26.009934 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2d37d323-78e2-4c91-9b70-b9c3f3306a85-v4-0-config-user-template-error\") pod \"oauth-openshift-7c4995446c-gs6hl\" (UID: \"2d37d323-78e2-4c91-9b70-b9c3f3306a85\") " pod="openshift-authentication/oauth-openshift-7c4995446c-gs6hl" Feb 17 20:12:26 crc kubenswrapper[4793]: I0217 20:12:26.010266 4793 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d37d323-78e2-4c91-9b70-b9c3f3306a85-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7c4995446c-gs6hl\" (UID: \"2d37d323-78e2-4c91-9b70-b9c3f3306a85\") " pod="openshift-authentication/oauth-openshift-7c4995446c-gs6hl" Feb 17 20:12:26 crc kubenswrapper[4793]: I0217 20:12:26.010456 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2d37d323-78e2-4c91-9b70-b9c3f3306a85-v4-0-config-user-template-login\") pod \"oauth-openshift-7c4995446c-gs6hl\" (UID: \"2d37d323-78e2-4c91-9b70-b9c3f3306a85\") " pod="openshift-authentication/oauth-openshift-7c4995446c-gs6hl" Feb 17 20:12:26 crc kubenswrapper[4793]: I0217 20:12:26.010555 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2d37d323-78e2-4c91-9b70-b9c3f3306a85-v4-0-config-system-session\") pod \"oauth-openshift-7c4995446c-gs6hl\" (UID: \"2d37d323-78e2-4c91-9b70-b9c3f3306a85\") " pod="openshift-authentication/oauth-openshift-7c4995446c-gs6hl" Feb 17 20:12:26 crc kubenswrapper[4793]: I0217 20:12:26.010603 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2d37d323-78e2-4c91-9b70-b9c3f3306a85-v4-0-config-system-service-ca\") pod \"oauth-openshift-7c4995446c-gs6hl\" (UID: \"2d37d323-78e2-4c91-9b70-b9c3f3306a85\") " pod="openshift-authentication/oauth-openshift-7c4995446c-gs6hl" Feb 17 20:12:26 crc kubenswrapper[4793]: I0217 20:12:26.010648 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/2d37d323-78e2-4c91-9b70-b9c3f3306a85-audit-dir\") pod \"oauth-openshift-7c4995446c-gs6hl\" (UID: \"2d37d323-78e2-4c91-9b70-b9c3f3306a85\") " pod="openshift-authentication/oauth-openshift-7c4995446c-gs6hl" Feb 17 20:12:26 crc kubenswrapper[4793]: I0217 20:12:26.010824 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2d37d323-78e2-4c91-9b70-b9c3f3306a85-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7c4995446c-gs6hl\" (UID: \"2d37d323-78e2-4c91-9b70-b9c3f3306a85\") " pod="openshift-authentication/oauth-openshift-7c4995446c-gs6hl" Feb 17 20:12:26 crc kubenswrapper[4793]: I0217 20:12:26.010940 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2d37d323-78e2-4c91-9b70-b9c3f3306a85-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7c4995446c-gs6hl\" (UID: \"2d37d323-78e2-4c91-9b70-b9c3f3306a85\") " pod="openshift-authentication/oauth-openshift-7c4995446c-gs6hl" Feb 17 20:12:26 crc kubenswrapper[4793]: I0217 20:12:26.010987 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fwvm\" (UniqueName: \"kubernetes.io/projected/2d37d323-78e2-4c91-9b70-b9c3f3306a85-kube-api-access-7fwvm\") pod \"oauth-openshift-7c4995446c-gs6hl\" (UID: \"2d37d323-78e2-4c91-9b70-b9c3f3306a85\") " pod="openshift-authentication/oauth-openshift-7c4995446c-gs6hl" Feb 17 20:12:26 crc kubenswrapper[4793]: I0217 20:12:26.011028 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2d37d323-78e2-4c91-9b70-b9c3f3306a85-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7c4995446c-gs6hl\" (UID: \"2d37d323-78e2-4c91-9b70-b9c3f3306a85\") " 
pod="openshift-authentication/oauth-openshift-7c4995446c-gs6hl" Feb 17 20:12:26 crc kubenswrapper[4793]: I0217 20:12:26.011068 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2d37d323-78e2-4c91-9b70-b9c3f3306a85-v4-0-config-system-router-certs\") pod \"oauth-openshift-7c4995446c-gs6hl\" (UID: \"2d37d323-78e2-4c91-9b70-b9c3f3306a85\") " pod="openshift-authentication/oauth-openshift-7c4995446c-gs6hl" Feb 17 20:12:26 crc kubenswrapper[4793]: I0217 20:12:26.011103 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2d37d323-78e2-4c91-9b70-b9c3f3306a85-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7c4995446c-gs6hl\" (UID: \"2d37d323-78e2-4c91-9b70-b9c3f3306a85\") " pod="openshift-authentication/oauth-openshift-7c4995446c-gs6hl" Feb 17 20:12:26 crc kubenswrapper[4793]: I0217 20:12:26.011143 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2d37d323-78e2-4c91-9b70-b9c3f3306a85-audit-policies\") pod \"oauth-openshift-7c4995446c-gs6hl\" (UID: \"2d37d323-78e2-4c91-9b70-b9c3f3306a85\") " pod="openshift-authentication/oauth-openshift-7c4995446c-gs6hl" Feb 17 20:12:26 crc kubenswrapper[4793]: I0217 20:12:26.011205 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2d37d323-78e2-4c91-9b70-b9c3f3306a85-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7c4995446c-gs6hl\" (UID: \"2d37d323-78e2-4c91-9b70-b9c3f3306a85\") " pod="openshift-authentication/oauth-openshift-7c4995446c-gs6hl" Feb 17 20:12:26 crc kubenswrapper[4793]: I0217 20:12:26.113096 4793 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2d37d323-78e2-4c91-9b70-b9c3f3306a85-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7c4995446c-gs6hl\" (UID: \"2d37d323-78e2-4c91-9b70-b9c3f3306a85\") " pod="openshift-authentication/oauth-openshift-7c4995446c-gs6hl" Feb 17 20:12:26 crc kubenswrapper[4793]: I0217 20:12:26.113523 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fwvm\" (UniqueName: \"kubernetes.io/projected/2d37d323-78e2-4c91-9b70-b9c3f3306a85-kube-api-access-7fwvm\") pod \"oauth-openshift-7c4995446c-gs6hl\" (UID: \"2d37d323-78e2-4c91-9b70-b9c3f3306a85\") " pod="openshift-authentication/oauth-openshift-7c4995446c-gs6hl" Feb 17 20:12:26 crc kubenswrapper[4793]: I0217 20:12:26.113790 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2d37d323-78e2-4c91-9b70-b9c3f3306a85-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7c4995446c-gs6hl\" (UID: \"2d37d323-78e2-4c91-9b70-b9c3f3306a85\") " pod="openshift-authentication/oauth-openshift-7c4995446c-gs6hl" Feb 17 20:12:26 crc kubenswrapper[4793]: I0217 20:12:26.114057 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2d37d323-78e2-4c91-9b70-b9c3f3306a85-v4-0-config-system-router-certs\") pod \"oauth-openshift-7c4995446c-gs6hl\" (UID: \"2d37d323-78e2-4c91-9b70-b9c3f3306a85\") " pod="openshift-authentication/oauth-openshift-7c4995446c-gs6hl" Feb 17 20:12:26 crc kubenswrapper[4793]: I0217 20:12:26.114321 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2d37d323-78e2-4c91-9b70-b9c3f3306a85-v4-0-config-user-template-provider-selection\") 
pod \"oauth-openshift-7c4995446c-gs6hl\" (UID: \"2d37d323-78e2-4c91-9b70-b9c3f3306a85\") " pod="openshift-authentication/oauth-openshift-7c4995446c-gs6hl" Feb 17 20:12:26 crc kubenswrapper[4793]: I0217 20:12:26.114543 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2d37d323-78e2-4c91-9b70-b9c3f3306a85-audit-policies\") pod \"oauth-openshift-7c4995446c-gs6hl\" (UID: \"2d37d323-78e2-4c91-9b70-b9c3f3306a85\") " pod="openshift-authentication/oauth-openshift-7c4995446c-gs6hl" Feb 17 20:12:26 crc kubenswrapper[4793]: I0217 20:12:26.114814 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2d37d323-78e2-4c91-9b70-b9c3f3306a85-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7c4995446c-gs6hl\" (UID: \"2d37d323-78e2-4c91-9b70-b9c3f3306a85\") " pod="openshift-authentication/oauth-openshift-7c4995446c-gs6hl" Feb 17 20:12:26 crc kubenswrapper[4793]: I0217 20:12:26.115106 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2d37d323-78e2-4c91-9b70-b9c3f3306a85-v4-0-config-user-template-error\") pod \"oauth-openshift-7c4995446c-gs6hl\" (UID: \"2d37d323-78e2-4c91-9b70-b9c3f3306a85\") " pod="openshift-authentication/oauth-openshift-7c4995446c-gs6hl" Feb 17 20:12:26 crc kubenswrapper[4793]: I0217 20:12:26.115365 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d37d323-78e2-4c91-9b70-b9c3f3306a85-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7c4995446c-gs6hl\" (UID: \"2d37d323-78e2-4c91-9b70-b9c3f3306a85\") " pod="openshift-authentication/oauth-openshift-7c4995446c-gs6hl" Feb 17 20:12:26 crc kubenswrapper[4793]: I0217 20:12:26.114348 4793 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2d37d323-78e2-4c91-9b70-b9c3f3306a85-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7c4995446c-gs6hl\" (UID: \"2d37d323-78e2-4c91-9b70-b9c3f3306a85\") " pod="openshift-authentication/oauth-openshift-7c4995446c-gs6hl" Feb 17 20:12:26 crc kubenswrapper[4793]: I0217 20:12:26.115431 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2d37d323-78e2-4c91-9b70-b9c3f3306a85-audit-policies\") pod \"oauth-openshift-7c4995446c-gs6hl\" (UID: \"2d37d323-78e2-4c91-9b70-b9c3f3306a85\") " pod="openshift-authentication/oauth-openshift-7c4995446c-gs6hl" Feb 17 20:12:26 crc kubenswrapper[4793]: I0217 20:12:26.115669 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2d37d323-78e2-4c91-9b70-b9c3f3306a85-v4-0-config-user-template-login\") pod \"oauth-openshift-7c4995446c-gs6hl\" (UID: \"2d37d323-78e2-4c91-9b70-b9c3f3306a85\") " pod="openshift-authentication/oauth-openshift-7c4995446c-gs6hl" Feb 17 20:12:26 crc kubenswrapper[4793]: I0217 20:12:26.116199 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2d37d323-78e2-4c91-9b70-b9c3f3306a85-v4-0-config-system-session\") pod \"oauth-openshift-7c4995446c-gs6hl\" (UID: \"2d37d323-78e2-4c91-9b70-b9c3f3306a85\") " pod="openshift-authentication/oauth-openshift-7c4995446c-gs6hl" Feb 17 20:12:26 crc kubenswrapper[4793]: I0217 20:12:26.116750 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2d37d323-78e2-4c91-9b70-b9c3f3306a85-v4-0-config-system-service-ca\") pod \"oauth-openshift-7c4995446c-gs6hl\" (UID: 
\"2d37d323-78e2-4c91-9b70-b9c3f3306a85\") " pod="openshift-authentication/oauth-openshift-7c4995446c-gs6hl" Feb 17 20:12:26 crc kubenswrapper[4793]: I0217 20:12:26.117009 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2d37d323-78e2-4c91-9b70-b9c3f3306a85-audit-dir\") pod \"oauth-openshift-7c4995446c-gs6hl\" (UID: \"2d37d323-78e2-4c91-9b70-b9c3f3306a85\") " pod="openshift-authentication/oauth-openshift-7c4995446c-gs6hl" Feb 17 20:12:26 crc kubenswrapper[4793]: I0217 20:12:26.117239 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2d37d323-78e2-4c91-9b70-b9c3f3306a85-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7c4995446c-gs6hl\" (UID: \"2d37d323-78e2-4c91-9b70-b9c3f3306a85\") " pod="openshift-authentication/oauth-openshift-7c4995446c-gs6hl" Feb 17 20:12:26 crc kubenswrapper[4793]: I0217 20:12:26.116490 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d37d323-78e2-4c91-9b70-b9c3f3306a85-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7c4995446c-gs6hl\" (UID: \"2d37d323-78e2-4c91-9b70-b9c3f3306a85\") " pod="openshift-authentication/oauth-openshift-7c4995446c-gs6hl" Feb 17 20:12:26 crc kubenswrapper[4793]: I0217 20:12:26.117877 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2d37d323-78e2-4c91-9b70-b9c3f3306a85-audit-dir\") pod \"oauth-openshift-7c4995446c-gs6hl\" (UID: \"2d37d323-78e2-4c91-9b70-b9c3f3306a85\") " pod="openshift-authentication/oauth-openshift-7c4995446c-gs6hl" Feb 17 20:12:26 crc kubenswrapper[4793]: I0217 20:12:26.118631 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/2d37d323-78e2-4c91-9b70-b9c3f3306a85-v4-0-config-user-template-error\") pod \"oauth-openshift-7c4995446c-gs6hl\" (UID: \"2d37d323-78e2-4c91-9b70-b9c3f3306a85\") " pod="openshift-authentication/oauth-openshift-7c4995446c-gs6hl" Feb 17 20:12:26 crc kubenswrapper[4793]: I0217 20:12:26.119046 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2d37d323-78e2-4c91-9b70-b9c3f3306a85-v4-0-config-system-router-certs\") pod \"oauth-openshift-7c4995446c-gs6hl\" (UID: \"2d37d323-78e2-4c91-9b70-b9c3f3306a85\") " pod="openshift-authentication/oauth-openshift-7c4995446c-gs6hl" Feb 17 20:12:26 crc kubenswrapper[4793]: I0217 20:12:26.119180 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2d37d323-78e2-4c91-9b70-b9c3f3306a85-v4-0-config-system-service-ca\") pod \"oauth-openshift-7c4995446c-gs6hl\" (UID: \"2d37d323-78e2-4c91-9b70-b9c3f3306a85\") " pod="openshift-authentication/oauth-openshift-7c4995446c-gs6hl" Feb 17 20:12:26 crc kubenswrapper[4793]: I0217 20:12:26.119729 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2d37d323-78e2-4c91-9b70-b9c3f3306a85-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7c4995446c-gs6hl\" (UID: \"2d37d323-78e2-4c91-9b70-b9c3f3306a85\") " pod="openshift-authentication/oauth-openshift-7c4995446c-gs6hl" Feb 17 20:12:26 crc kubenswrapper[4793]: I0217 20:12:26.120651 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2d37d323-78e2-4c91-9b70-b9c3f3306a85-v4-0-config-system-session\") pod \"oauth-openshift-7c4995446c-gs6hl\" (UID: \"2d37d323-78e2-4c91-9b70-b9c3f3306a85\") " 
pod="openshift-authentication/oauth-openshift-7c4995446c-gs6hl" Feb 17 20:12:26 crc kubenswrapper[4793]: I0217 20:12:26.120709 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2d37d323-78e2-4c91-9b70-b9c3f3306a85-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7c4995446c-gs6hl\" (UID: \"2d37d323-78e2-4c91-9b70-b9c3f3306a85\") " pod="openshift-authentication/oauth-openshift-7c4995446c-gs6hl" Feb 17 20:12:26 crc kubenswrapper[4793]: I0217 20:12:26.120797 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2d37d323-78e2-4c91-9b70-b9c3f3306a85-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7c4995446c-gs6hl\" (UID: \"2d37d323-78e2-4c91-9b70-b9c3f3306a85\") " pod="openshift-authentication/oauth-openshift-7c4995446c-gs6hl" Feb 17 20:12:26 crc kubenswrapper[4793]: I0217 20:12:26.121764 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2d37d323-78e2-4c91-9b70-b9c3f3306a85-v4-0-config-user-template-login\") pod \"oauth-openshift-7c4995446c-gs6hl\" (UID: \"2d37d323-78e2-4c91-9b70-b9c3f3306a85\") " pod="openshift-authentication/oauth-openshift-7c4995446c-gs6hl" Feb 17 20:12:26 crc kubenswrapper[4793]: I0217 20:12:26.121951 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2d37d323-78e2-4c91-9b70-b9c3f3306a85-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7c4995446c-gs6hl\" (UID: \"2d37d323-78e2-4c91-9b70-b9c3f3306a85\") " pod="openshift-authentication/oauth-openshift-7c4995446c-gs6hl" Feb 17 20:12:26 crc kubenswrapper[4793]: I0217 20:12:26.135908 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fwvm\" (UniqueName: 
\"kubernetes.io/projected/2d37d323-78e2-4c91-9b70-b9c3f3306a85-kube-api-access-7fwvm\") pod \"oauth-openshift-7c4995446c-gs6hl\" (UID: \"2d37d323-78e2-4c91-9b70-b9c3f3306a85\") " pod="openshift-authentication/oauth-openshift-7c4995446c-gs6hl" Feb 17 20:12:26 crc kubenswrapper[4793]: I0217 20:12:26.278243 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7c4995446c-gs6hl" Feb 17 20:12:26 crc kubenswrapper[4793]: I0217 20:12:26.667707 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7c4995446c-gs6hl"] Feb 17 20:12:27 crc kubenswrapper[4793]: I0217 20:12:27.057973 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7c4995446c-gs6hl" event={"ID":"2d37d323-78e2-4c91-9b70-b9c3f3306a85","Type":"ContainerStarted","Data":"01b776908d6b842f3823ac699e647037aec5e622119e56a41c3f4448ac721218"} Feb 17 20:12:27 crc kubenswrapper[4793]: I0217 20:12:27.059184 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7c4995446c-gs6hl" event={"ID":"2d37d323-78e2-4c91-9b70-b9c3f3306a85","Type":"ContainerStarted","Data":"fa2702845c2a1c42ad7fe7c49cb05a2ff7b28764c29195d9dfa5149483d0eac2"} Feb 17 20:12:27 crc kubenswrapper[4793]: I0217 20:12:27.059273 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7c4995446c-gs6hl" Feb 17 20:12:27 crc kubenswrapper[4793]: I0217 20:12:27.077782 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7c4995446c-gs6hl" podStartSLOduration=31.077763297 podStartE2EDuration="31.077763297s" podCreationTimestamp="2026-02-17 20:11:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:12:27.076019672 +0000 UTC m=+222.367717993" 
watchObservedRunningTime="2026-02-17 20:12:27.077763297 +0000 UTC m=+222.369461608" Feb 17 20:12:27 crc kubenswrapper[4793]: I0217 20:12:27.778929 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7c4995446c-gs6hl" Feb 17 20:12:29 crc kubenswrapper[4793]: I0217 20:12:29.695201 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-674b7675f4-dvskg"] Feb 17 20:12:29 crc kubenswrapper[4793]: I0217 20:12:29.695844 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-674b7675f4-dvskg" podUID="c807689a-74be-4449-ba8f-e0a20d565829" containerName="controller-manager" containerID="cri-o://7c697b9e944417ddea7c768b2e7ffdd6f9fd7d8aadeb6eb492983827bc801bae" gracePeriod=30 Feb 17 20:12:29 crc kubenswrapper[4793]: I0217 20:12:29.770531 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-786b797dc4-tzht5"] Feb 17 20:12:29 crc kubenswrapper[4793]: I0217 20:12:29.770748 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-786b797dc4-tzht5" podUID="51bad4ef-46c5-4932-9490-14c92b2fbdc6" containerName="route-controller-manager" containerID="cri-o://dd50a5568f73fdfd4f508cff0c53b08cc5e9c06a0686a5b57ae503d5c33bd6e9" gracePeriod=30 Feb 17 20:12:30 crc kubenswrapper[4793]: I0217 20:12:30.075739 4793 generic.go:334] "Generic (PLEG): container finished" podID="c807689a-74be-4449-ba8f-e0a20d565829" containerID="7c697b9e944417ddea7c768b2e7ffdd6f9fd7d8aadeb6eb492983827bc801bae" exitCode=0 Feb 17 20:12:30 crc kubenswrapper[4793]: I0217 20:12:30.075854 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-674b7675f4-dvskg" 
event={"ID":"c807689a-74be-4449-ba8f-e0a20d565829","Type":"ContainerDied","Data":"7c697b9e944417ddea7c768b2e7ffdd6f9fd7d8aadeb6eb492983827bc801bae"} Feb 17 20:12:30 crc kubenswrapper[4793]: I0217 20:12:30.078191 4793 generic.go:334] "Generic (PLEG): container finished" podID="51bad4ef-46c5-4932-9490-14c92b2fbdc6" containerID="dd50a5568f73fdfd4f508cff0c53b08cc5e9c06a0686a5b57ae503d5c33bd6e9" exitCode=0 Feb 17 20:12:30 crc kubenswrapper[4793]: I0217 20:12:30.078244 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-786b797dc4-tzht5" event={"ID":"51bad4ef-46c5-4932-9490-14c92b2fbdc6","Type":"ContainerDied","Data":"dd50a5568f73fdfd4f508cff0c53b08cc5e9c06a0686a5b57ae503d5c33bd6e9"} Feb 17 20:12:30 crc kubenswrapper[4793]: I0217 20:12:30.286893 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-786b797dc4-tzht5" Feb 17 20:12:30 crc kubenswrapper[4793]: I0217 20:12:30.293435 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-674b7675f4-dvskg" Feb 17 20:12:30 crc kubenswrapper[4793]: I0217 20:12:30.373364 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhrk5\" (UniqueName: \"kubernetes.io/projected/c807689a-74be-4449-ba8f-e0a20d565829-kube-api-access-mhrk5\") pod \"c807689a-74be-4449-ba8f-e0a20d565829\" (UID: \"c807689a-74be-4449-ba8f-e0a20d565829\") " Feb 17 20:12:30 crc kubenswrapper[4793]: I0217 20:12:30.373415 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c807689a-74be-4449-ba8f-e0a20d565829-client-ca\") pod \"c807689a-74be-4449-ba8f-e0a20d565829\" (UID: \"c807689a-74be-4449-ba8f-e0a20d565829\") " Feb 17 20:12:30 crc kubenswrapper[4793]: I0217 20:12:30.373442 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c807689a-74be-4449-ba8f-e0a20d565829-config\") pod \"c807689a-74be-4449-ba8f-e0a20d565829\" (UID: \"c807689a-74be-4449-ba8f-e0a20d565829\") " Feb 17 20:12:30 crc kubenswrapper[4793]: I0217 20:12:30.373489 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51bad4ef-46c5-4932-9490-14c92b2fbdc6-serving-cert\") pod \"51bad4ef-46c5-4932-9490-14c92b2fbdc6\" (UID: \"51bad4ef-46c5-4932-9490-14c92b2fbdc6\") " Feb 17 20:12:30 crc kubenswrapper[4793]: I0217 20:12:30.373535 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c807689a-74be-4449-ba8f-e0a20d565829-proxy-ca-bundles\") pod \"c807689a-74be-4449-ba8f-e0a20d565829\" (UID: \"c807689a-74be-4449-ba8f-e0a20d565829\") " Feb 17 20:12:30 crc kubenswrapper[4793]: I0217 20:12:30.373553 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/c807689a-74be-4449-ba8f-e0a20d565829-serving-cert\") pod \"c807689a-74be-4449-ba8f-e0a20d565829\" (UID: \"c807689a-74be-4449-ba8f-e0a20d565829\") " Feb 17 20:12:30 crc kubenswrapper[4793]: I0217 20:12:30.373574 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2vzz\" (UniqueName: \"kubernetes.io/projected/51bad4ef-46c5-4932-9490-14c92b2fbdc6-kube-api-access-p2vzz\") pod \"51bad4ef-46c5-4932-9490-14c92b2fbdc6\" (UID: \"51bad4ef-46c5-4932-9490-14c92b2fbdc6\") " Feb 17 20:12:30 crc kubenswrapper[4793]: I0217 20:12:30.373617 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51bad4ef-46c5-4932-9490-14c92b2fbdc6-config\") pod \"51bad4ef-46c5-4932-9490-14c92b2fbdc6\" (UID: \"51bad4ef-46c5-4932-9490-14c92b2fbdc6\") " Feb 17 20:12:30 crc kubenswrapper[4793]: I0217 20:12:30.373645 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/51bad4ef-46c5-4932-9490-14c92b2fbdc6-client-ca\") pod \"51bad4ef-46c5-4932-9490-14c92b2fbdc6\" (UID: \"51bad4ef-46c5-4932-9490-14c92b2fbdc6\") " Feb 17 20:12:30 crc kubenswrapper[4793]: I0217 20:12:30.375649 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51bad4ef-46c5-4932-9490-14c92b2fbdc6-client-ca" (OuterVolumeSpecName: "client-ca") pod "51bad4ef-46c5-4932-9490-14c92b2fbdc6" (UID: "51bad4ef-46c5-4932-9490-14c92b2fbdc6"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:12:30 crc kubenswrapper[4793]: I0217 20:12:30.375753 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51bad4ef-46c5-4932-9490-14c92b2fbdc6-config" (OuterVolumeSpecName: "config") pod "51bad4ef-46c5-4932-9490-14c92b2fbdc6" (UID: "51bad4ef-46c5-4932-9490-14c92b2fbdc6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:12:30 crc kubenswrapper[4793]: I0217 20:12:30.376508 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c807689a-74be-4449-ba8f-e0a20d565829-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c807689a-74be-4449-ba8f-e0a20d565829" (UID: "c807689a-74be-4449-ba8f-e0a20d565829"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:12:30 crc kubenswrapper[4793]: I0217 20:12:30.376879 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c807689a-74be-4449-ba8f-e0a20d565829-config" (OuterVolumeSpecName: "config") pod "c807689a-74be-4449-ba8f-e0a20d565829" (UID: "c807689a-74be-4449-ba8f-e0a20d565829"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:12:30 crc kubenswrapper[4793]: I0217 20:12:30.380339 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c807689a-74be-4449-ba8f-e0a20d565829-client-ca" (OuterVolumeSpecName: "client-ca") pod "c807689a-74be-4449-ba8f-e0a20d565829" (UID: "c807689a-74be-4449-ba8f-e0a20d565829"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:12:30 crc kubenswrapper[4793]: I0217 20:12:30.383549 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c807689a-74be-4449-ba8f-e0a20d565829-kube-api-access-mhrk5" (OuterVolumeSpecName: "kube-api-access-mhrk5") pod "c807689a-74be-4449-ba8f-e0a20d565829" (UID: "c807689a-74be-4449-ba8f-e0a20d565829"). InnerVolumeSpecName "kube-api-access-mhrk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:12:30 crc kubenswrapper[4793]: I0217 20:12:30.383793 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c807689a-74be-4449-ba8f-e0a20d565829-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c807689a-74be-4449-ba8f-e0a20d565829" (UID: "c807689a-74be-4449-ba8f-e0a20d565829"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:12:30 crc kubenswrapper[4793]: I0217 20:12:30.386699 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51bad4ef-46c5-4932-9490-14c92b2fbdc6-kube-api-access-p2vzz" (OuterVolumeSpecName: "kube-api-access-p2vzz") pod "51bad4ef-46c5-4932-9490-14c92b2fbdc6" (UID: "51bad4ef-46c5-4932-9490-14c92b2fbdc6"). InnerVolumeSpecName "kube-api-access-p2vzz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:12:30 crc kubenswrapper[4793]: I0217 20:12:30.390643 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51bad4ef-46c5-4932-9490-14c92b2fbdc6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "51bad4ef-46c5-4932-9490-14c92b2fbdc6" (UID: "51bad4ef-46c5-4932-9490-14c92b2fbdc6"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:12:30 crc kubenswrapper[4793]: I0217 20:12:30.474901 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhrk5\" (UniqueName: \"kubernetes.io/projected/c807689a-74be-4449-ba8f-e0a20d565829-kube-api-access-mhrk5\") on node \"crc\" DevicePath \"\"" Feb 17 20:12:30 crc kubenswrapper[4793]: I0217 20:12:30.474948 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c807689a-74be-4449-ba8f-e0a20d565829-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 20:12:30 crc kubenswrapper[4793]: I0217 20:12:30.474963 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c807689a-74be-4449-ba8f-e0a20d565829-config\") on node \"crc\" DevicePath \"\"" Feb 17 20:12:30 crc kubenswrapper[4793]: I0217 20:12:30.474974 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51bad4ef-46c5-4932-9490-14c92b2fbdc6-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 20:12:30 crc kubenswrapper[4793]: I0217 20:12:30.474986 4793 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c807689a-74be-4449-ba8f-e0a20d565829-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 17 20:12:30 crc kubenswrapper[4793]: I0217 20:12:30.474997 4793 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c807689a-74be-4449-ba8f-e0a20d565829-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 20:12:30 crc kubenswrapper[4793]: I0217 20:12:30.475007 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2vzz\" (UniqueName: \"kubernetes.io/projected/51bad4ef-46c5-4932-9490-14c92b2fbdc6-kube-api-access-p2vzz\") on node \"crc\" DevicePath \"\"" Feb 17 20:12:30 crc kubenswrapper[4793]: I0217 20:12:30.475018 4793 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51bad4ef-46c5-4932-9490-14c92b2fbdc6-config\") on node \"crc\" DevicePath \"\"" Feb 17 20:12:30 crc kubenswrapper[4793]: I0217 20:12:30.475027 4793 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/51bad4ef-46c5-4932-9490-14c92b2fbdc6-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 20:12:30 crc kubenswrapper[4793]: I0217 20:12:30.953170 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-85dfbf9f9f-5xcj9"] Feb 17 20:12:30 crc kubenswrapper[4793]: E0217 20:12:30.954484 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c807689a-74be-4449-ba8f-e0a20d565829" containerName="controller-manager" Feb 17 20:12:30 crc kubenswrapper[4793]: I0217 20:12:30.954580 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c807689a-74be-4449-ba8f-e0a20d565829" containerName="controller-manager" Feb 17 20:12:30 crc kubenswrapper[4793]: E0217 20:12:30.954679 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51bad4ef-46c5-4932-9490-14c92b2fbdc6" containerName="route-controller-manager" Feb 17 20:12:30 crc kubenswrapper[4793]: I0217 20:12:30.954774 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="51bad4ef-46c5-4932-9490-14c92b2fbdc6" containerName="route-controller-manager" Feb 17 20:12:30 crc kubenswrapper[4793]: I0217 20:12:30.954944 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="51bad4ef-46c5-4932-9490-14c92b2fbdc6" containerName="route-controller-manager" Feb 17 20:12:30 crc kubenswrapper[4793]: I0217 20:12:30.955077 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="c807689a-74be-4449-ba8f-e0a20d565829" containerName="controller-manager" Feb 17 20:12:30 crc kubenswrapper[4793]: I0217 20:12:30.955563 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-85dfbf9f9f-5xcj9" Feb 17 20:12:30 crc kubenswrapper[4793]: I0217 20:12:30.956178 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7845b8dc6f-7mgh6"] Feb 17 20:12:30 crc kubenswrapper[4793]: I0217 20:12:30.956620 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7845b8dc6f-7mgh6" Feb 17 20:12:30 crc kubenswrapper[4793]: I0217 20:12:30.969810 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-85dfbf9f9f-5xcj9"] Feb 17 20:12:31 crc kubenswrapper[4793]: I0217 20:12:31.008052 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7845b8dc6f-7mgh6"] Feb 17 20:12:31 crc kubenswrapper[4793]: I0217 20:12:31.081589 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/770df45d-2ea1-495e-b246-b2cdace1689e-serving-cert\") pod \"route-controller-manager-7845b8dc6f-7mgh6\" (UID: \"770df45d-2ea1-495e-b246-b2cdace1689e\") " pod="openshift-route-controller-manager/route-controller-manager-7845b8dc6f-7mgh6" Feb 17 20:12:31 crc kubenswrapper[4793]: I0217 20:12:31.081635 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/65ffb888-7bb2-4a99-80ad-31913070d0fd-client-ca\") pod \"controller-manager-85dfbf9f9f-5xcj9\" (UID: \"65ffb888-7bb2-4a99-80ad-31913070d0fd\") " pod="openshift-controller-manager/controller-manager-85dfbf9f9f-5xcj9" Feb 17 20:12:31 crc kubenswrapper[4793]: I0217 20:12:31.081665 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/65ffb888-7bb2-4a99-80ad-31913070d0fd-proxy-ca-bundles\") pod \"controller-manager-85dfbf9f9f-5xcj9\" (UID: \"65ffb888-7bb2-4a99-80ad-31913070d0fd\") " pod="openshift-controller-manager/controller-manager-85dfbf9f9f-5xcj9" Feb 17 20:12:31 crc kubenswrapper[4793]: I0217 20:12:31.081776 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65ffb888-7bb2-4a99-80ad-31913070d0fd-config\") pod \"controller-manager-85dfbf9f9f-5xcj9\" (UID: \"65ffb888-7bb2-4a99-80ad-31913070d0fd\") " pod="openshift-controller-manager/controller-manager-85dfbf9f9f-5xcj9" Feb 17 20:12:31 crc kubenswrapper[4793]: I0217 20:12:31.081853 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkl4s\" (UniqueName: \"kubernetes.io/projected/770df45d-2ea1-495e-b246-b2cdace1689e-kube-api-access-bkl4s\") pod \"route-controller-manager-7845b8dc6f-7mgh6\" (UID: \"770df45d-2ea1-495e-b246-b2cdace1689e\") " pod="openshift-route-controller-manager/route-controller-manager-7845b8dc6f-7mgh6" Feb 17 20:12:31 crc kubenswrapper[4793]: I0217 20:12:31.081883 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/770df45d-2ea1-495e-b246-b2cdace1689e-config\") pod \"route-controller-manager-7845b8dc6f-7mgh6\" (UID: \"770df45d-2ea1-495e-b246-b2cdace1689e\") " pod="openshift-route-controller-manager/route-controller-manager-7845b8dc6f-7mgh6" Feb 17 20:12:31 crc kubenswrapper[4793]: I0217 20:12:31.081908 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/770df45d-2ea1-495e-b246-b2cdace1689e-client-ca\") pod \"route-controller-manager-7845b8dc6f-7mgh6\" (UID: \"770df45d-2ea1-495e-b246-b2cdace1689e\") " 
pod="openshift-route-controller-manager/route-controller-manager-7845b8dc6f-7mgh6" Feb 17 20:12:31 crc kubenswrapper[4793]: I0217 20:12:31.081930 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvsz9\" (UniqueName: \"kubernetes.io/projected/65ffb888-7bb2-4a99-80ad-31913070d0fd-kube-api-access-lvsz9\") pod \"controller-manager-85dfbf9f9f-5xcj9\" (UID: \"65ffb888-7bb2-4a99-80ad-31913070d0fd\") " pod="openshift-controller-manager/controller-manager-85dfbf9f9f-5xcj9" Feb 17 20:12:31 crc kubenswrapper[4793]: I0217 20:12:31.081962 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65ffb888-7bb2-4a99-80ad-31913070d0fd-serving-cert\") pod \"controller-manager-85dfbf9f9f-5xcj9\" (UID: \"65ffb888-7bb2-4a99-80ad-31913070d0fd\") " pod="openshift-controller-manager/controller-manager-85dfbf9f9f-5xcj9" Feb 17 20:12:31 crc kubenswrapper[4793]: I0217 20:12:31.087367 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-674b7675f4-dvskg" event={"ID":"c807689a-74be-4449-ba8f-e0a20d565829","Type":"ContainerDied","Data":"a0ef304fff6e33ef47669cffbcbe2d5e2ad790010e36a97ded5e2b8a2575655f"} Feb 17 20:12:31 crc kubenswrapper[4793]: I0217 20:12:31.087442 4793 scope.go:117] "RemoveContainer" containerID="7c697b9e944417ddea7c768b2e7ffdd6f9fd7d8aadeb6eb492983827bc801bae" Feb 17 20:12:31 crc kubenswrapper[4793]: I0217 20:12:31.087590 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-674b7675f4-dvskg" Feb 17 20:12:31 crc kubenswrapper[4793]: I0217 20:12:31.090158 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-786b797dc4-tzht5" event={"ID":"51bad4ef-46c5-4932-9490-14c92b2fbdc6","Type":"ContainerDied","Data":"5def0af59f2dd0c32224f8b8ee4a7bccdfd5821ea995e8ec7ca4783fc6339bdd"} Feb 17 20:12:31 crc kubenswrapper[4793]: I0217 20:12:31.090275 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-786b797dc4-tzht5" Feb 17 20:12:31 crc kubenswrapper[4793]: I0217 20:12:31.110671 4793 scope.go:117] "RemoveContainer" containerID="dd50a5568f73fdfd4f508cff0c53b08cc5e9c06a0686a5b57ae503d5c33bd6e9" Feb 17 20:12:31 crc kubenswrapper[4793]: I0217 20:12:31.119807 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-674b7675f4-dvskg"] Feb 17 20:12:31 crc kubenswrapper[4793]: I0217 20:12:31.132038 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-674b7675f4-dvskg"] Feb 17 20:12:31 crc kubenswrapper[4793]: I0217 20:12:31.143001 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-786b797dc4-tzht5"] Feb 17 20:12:31 crc kubenswrapper[4793]: I0217 20:12:31.148175 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-786b797dc4-tzht5"] Feb 17 20:12:31 crc kubenswrapper[4793]: I0217 20:12:31.185137 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkl4s\" (UniqueName: \"kubernetes.io/projected/770df45d-2ea1-495e-b246-b2cdace1689e-kube-api-access-bkl4s\") pod \"route-controller-manager-7845b8dc6f-7mgh6\" (UID: 
\"770df45d-2ea1-495e-b246-b2cdace1689e\") " pod="openshift-route-controller-manager/route-controller-manager-7845b8dc6f-7mgh6" Feb 17 20:12:31 crc kubenswrapper[4793]: I0217 20:12:31.185178 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/770df45d-2ea1-495e-b246-b2cdace1689e-config\") pod \"route-controller-manager-7845b8dc6f-7mgh6\" (UID: \"770df45d-2ea1-495e-b246-b2cdace1689e\") " pod="openshift-route-controller-manager/route-controller-manager-7845b8dc6f-7mgh6" Feb 17 20:12:31 crc kubenswrapper[4793]: I0217 20:12:31.185200 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvsz9\" (UniqueName: \"kubernetes.io/projected/65ffb888-7bb2-4a99-80ad-31913070d0fd-kube-api-access-lvsz9\") pod \"controller-manager-85dfbf9f9f-5xcj9\" (UID: \"65ffb888-7bb2-4a99-80ad-31913070d0fd\") " pod="openshift-controller-manager/controller-manager-85dfbf9f9f-5xcj9" Feb 17 20:12:31 crc kubenswrapper[4793]: I0217 20:12:31.185216 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/770df45d-2ea1-495e-b246-b2cdace1689e-client-ca\") pod \"route-controller-manager-7845b8dc6f-7mgh6\" (UID: \"770df45d-2ea1-495e-b246-b2cdace1689e\") " pod="openshift-route-controller-manager/route-controller-manager-7845b8dc6f-7mgh6" Feb 17 20:12:31 crc kubenswrapper[4793]: I0217 20:12:31.185240 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65ffb888-7bb2-4a99-80ad-31913070d0fd-serving-cert\") pod \"controller-manager-85dfbf9f9f-5xcj9\" (UID: \"65ffb888-7bb2-4a99-80ad-31913070d0fd\") " pod="openshift-controller-manager/controller-manager-85dfbf9f9f-5xcj9" Feb 17 20:12:31 crc kubenswrapper[4793]: I0217 20:12:31.185267 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/770df45d-2ea1-495e-b246-b2cdace1689e-serving-cert\") pod \"route-controller-manager-7845b8dc6f-7mgh6\" (UID: \"770df45d-2ea1-495e-b246-b2cdace1689e\") " pod="openshift-route-controller-manager/route-controller-manager-7845b8dc6f-7mgh6" Feb 17 20:12:31 crc kubenswrapper[4793]: I0217 20:12:31.185288 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/65ffb888-7bb2-4a99-80ad-31913070d0fd-client-ca\") pod \"controller-manager-85dfbf9f9f-5xcj9\" (UID: \"65ffb888-7bb2-4a99-80ad-31913070d0fd\") " pod="openshift-controller-manager/controller-manager-85dfbf9f9f-5xcj9" Feb 17 20:12:31 crc kubenswrapper[4793]: I0217 20:12:31.185311 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/65ffb888-7bb2-4a99-80ad-31913070d0fd-proxy-ca-bundles\") pod \"controller-manager-85dfbf9f9f-5xcj9\" (UID: \"65ffb888-7bb2-4a99-80ad-31913070d0fd\") " pod="openshift-controller-manager/controller-manager-85dfbf9f9f-5xcj9" Feb 17 20:12:31 crc kubenswrapper[4793]: I0217 20:12:31.185335 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65ffb888-7bb2-4a99-80ad-31913070d0fd-config\") pod \"controller-manager-85dfbf9f9f-5xcj9\" (UID: \"65ffb888-7bb2-4a99-80ad-31913070d0fd\") " pod="openshift-controller-manager/controller-manager-85dfbf9f9f-5xcj9" Feb 17 20:12:31 crc kubenswrapper[4793]: I0217 20:12:31.186521 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/770df45d-2ea1-495e-b246-b2cdace1689e-config\") pod \"route-controller-manager-7845b8dc6f-7mgh6\" (UID: \"770df45d-2ea1-495e-b246-b2cdace1689e\") " pod="openshift-route-controller-manager/route-controller-manager-7845b8dc6f-7mgh6" Feb 17 20:12:31 crc kubenswrapper[4793]: I0217 20:12:31.186649 4793 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65ffb888-7bb2-4a99-80ad-31913070d0fd-config\") pod \"controller-manager-85dfbf9f9f-5xcj9\" (UID: \"65ffb888-7bb2-4a99-80ad-31913070d0fd\") " pod="openshift-controller-manager/controller-manager-85dfbf9f9f-5xcj9" Feb 17 20:12:31 crc kubenswrapper[4793]: I0217 20:12:31.186649 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/65ffb888-7bb2-4a99-80ad-31913070d0fd-client-ca\") pod \"controller-manager-85dfbf9f9f-5xcj9\" (UID: \"65ffb888-7bb2-4a99-80ad-31913070d0fd\") " pod="openshift-controller-manager/controller-manager-85dfbf9f9f-5xcj9" Feb 17 20:12:31 crc kubenswrapper[4793]: I0217 20:12:31.187320 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/770df45d-2ea1-495e-b246-b2cdace1689e-client-ca\") pod \"route-controller-manager-7845b8dc6f-7mgh6\" (UID: \"770df45d-2ea1-495e-b246-b2cdace1689e\") " pod="openshift-route-controller-manager/route-controller-manager-7845b8dc6f-7mgh6" Feb 17 20:12:31 crc kubenswrapper[4793]: I0217 20:12:31.187409 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/65ffb888-7bb2-4a99-80ad-31913070d0fd-proxy-ca-bundles\") pod \"controller-manager-85dfbf9f9f-5xcj9\" (UID: \"65ffb888-7bb2-4a99-80ad-31913070d0fd\") " pod="openshift-controller-manager/controller-manager-85dfbf9f9f-5xcj9" Feb 17 20:12:31 crc kubenswrapper[4793]: I0217 20:12:31.189548 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/770df45d-2ea1-495e-b246-b2cdace1689e-serving-cert\") pod \"route-controller-manager-7845b8dc6f-7mgh6\" (UID: \"770df45d-2ea1-495e-b246-b2cdace1689e\") " pod="openshift-route-controller-manager/route-controller-manager-7845b8dc6f-7mgh6" 
Feb 17 20:12:31 crc kubenswrapper[4793]: I0217 20:12:31.195266 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65ffb888-7bb2-4a99-80ad-31913070d0fd-serving-cert\") pod \"controller-manager-85dfbf9f9f-5xcj9\" (UID: \"65ffb888-7bb2-4a99-80ad-31913070d0fd\") " pod="openshift-controller-manager/controller-manager-85dfbf9f9f-5xcj9" Feb 17 20:12:31 crc kubenswrapper[4793]: I0217 20:12:31.210617 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkl4s\" (UniqueName: \"kubernetes.io/projected/770df45d-2ea1-495e-b246-b2cdace1689e-kube-api-access-bkl4s\") pod \"route-controller-manager-7845b8dc6f-7mgh6\" (UID: \"770df45d-2ea1-495e-b246-b2cdace1689e\") " pod="openshift-route-controller-manager/route-controller-manager-7845b8dc6f-7mgh6" Feb 17 20:12:31 crc kubenswrapper[4793]: I0217 20:12:31.220488 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvsz9\" (UniqueName: \"kubernetes.io/projected/65ffb888-7bb2-4a99-80ad-31913070d0fd-kube-api-access-lvsz9\") pod \"controller-manager-85dfbf9f9f-5xcj9\" (UID: \"65ffb888-7bb2-4a99-80ad-31913070d0fd\") " pod="openshift-controller-manager/controller-manager-85dfbf9f9f-5xcj9" Feb 17 20:12:31 crc kubenswrapper[4793]: I0217 20:12:31.271427 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-85dfbf9f9f-5xcj9" Feb 17 20:12:31 crc kubenswrapper[4793]: I0217 20:12:31.278535 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7845b8dc6f-7mgh6" Feb 17 20:12:31 crc kubenswrapper[4793]: I0217 20:12:31.516282 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-85dfbf9f9f-5xcj9"] Feb 17 20:12:31 crc kubenswrapper[4793]: I0217 20:12:31.549936 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51bad4ef-46c5-4932-9490-14c92b2fbdc6" path="/var/lib/kubelet/pods/51bad4ef-46c5-4932-9490-14c92b2fbdc6/volumes" Feb 17 20:12:31 crc kubenswrapper[4793]: I0217 20:12:31.551058 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c807689a-74be-4449-ba8f-e0a20d565829" path="/var/lib/kubelet/pods/c807689a-74be-4449-ba8f-e0a20d565829/volumes" Feb 17 20:12:31 crc kubenswrapper[4793]: I0217 20:12:31.790048 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7845b8dc6f-7mgh6"] Feb 17 20:12:31 crc kubenswrapper[4793]: W0217 20:12:31.799348 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod770df45d_2ea1_495e_b246_b2cdace1689e.slice/crio-90d09645b0eeff4fb091fd184ff6265edf6c4a1c5e319fe70f0ed382ffdaf592 WatchSource:0}: Error finding container 90d09645b0eeff4fb091fd184ff6265edf6c4a1c5e319fe70f0ed382ffdaf592: Status 404 returned error can't find the container with id 90d09645b0eeff4fb091fd184ff6265edf6c4a1c5e319fe70f0ed382ffdaf592 Feb 17 20:12:32 crc kubenswrapper[4793]: I0217 20:12:32.095507 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-85dfbf9f9f-5xcj9" event={"ID":"65ffb888-7bb2-4a99-80ad-31913070d0fd","Type":"ContainerStarted","Data":"48d994605fa046089f1e8f698bac584367ff22ca5a8a3280b2d596bfdebe49f8"} Feb 17 20:12:32 crc kubenswrapper[4793]: I0217 20:12:32.095556 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-85dfbf9f9f-5xcj9" event={"ID":"65ffb888-7bb2-4a99-80ad-31913070d0fd","Type":"ContainerStarted","Data":"f6ea0b0b598802d0ce3ca52d445557e3102dc0de825b7cf72e0849fc0d004fe0"} Feb 17 20:12:32 crc kubenswrapper[4793]: I0217 20:12:32.095787 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-85dfbf9f9f-5xcj9" Feb 17 20:12:32 crc kubenswrapper[4793]: I0217 20:12:32.099538 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7845b8dc6f-7mgh6" event={"ID":"770df45d-2ea1-495e-b246-b2cdace1689e","Type":"ContainerStarted","Data":"8d8ae90c8401598e7d122b4d87ef184951cae6933b74e253a71c43591452a202"} Feb 17 20:12:32 crc kubenswrapper[4793]: I0217 20:12:32.099581 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7845b8dc6f-7mgh6" event={"ID":"770df45d-2ea1-495e-b246-b2cdace1689e","Type":"ContainerStarted","Data":"90d09645b0eeff4fb091fd184ff6265edf6c4a1c5e319fe70f0ed382ffdaf592"} Feb 17 20:12:32 crc kubenswrapper[4793]: I0217 20:12:32.099610 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7845b8dc6f-7mgh6" Feb 17 20:12:32 crc kubenswrapper[4793]: I0217 20:12:32.101496 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-85dfbf9f9f-5xcj9" Feb 17 20:12:32 crc kubenswrapper[4793]: I0217 20:12:32.114160 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-85dfbf9f9f-5xcj9" podStartSLOduration=3.114144031 podStartE2EDuration="3.114144031s" podCreationTimestamp="2026-02-17 20:12:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-17 20:12:32.113930345 +0000 UTC m=+227.405628656" watchObservedRunningTime="2026-02-17 20:12:32.114144031 +0000 UTC m=+227.405842342" Feb 17 20:12:32 crc kubenswrapper[4793]: I0217 20:12:32.129197 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7845b8dc6f-7mgh6" podStartSLOduration=3.129180725 podStartE2EDuration="3.129180725s" podCreationTimestamp="2026-02-17 20:12:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:12:32.127825132 +0000 UTC m=+227.419523443" watchObservedRunningTime="2026-02-17 20:12:32.129180725 +0000 UTC m=+227.420879036" Feb 17 20:12:32 crc kubenswrapper[4793]: I0217 20:12:32.323564 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7845b8dc6f-7mgh6" Feb 17 20:12:41 crc kubenswrapper[4793]: I0217 20:12:41.110877 4793 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 17 20:12:41 crc kubenswrapper[4793]: I0217 20:12:41.112246 4793 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 20:12:41 crc kubenswrapper[4793]: I0217 20:12:41.112380 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 20:12:41 crc kubenswrapper[4793]: I0217 20:12:41.112741 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401" gracePeriod=15 Feb 17 20:12:41 crc kubenswrapper[4793]: I0217 20:12:41.112824 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c" gracePeriod=15 Feb 17 20:12:41 crc kubenswrapper[4793]: I0217 20:12:41.112843 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276" gracePeriod=15 Feb 17 20:12:41 crc kubenswrapper[4793]: I0217 20:12:41.112887 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9" gracePeriod=15 Feb 17 20:12:41 crc kubenswrapper[4793]: I0217 20:12:41.113016 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140" gracePeriod=15 Feb 17 20:12:41 crc 
kubenswrapper[4793]: I0217 20:12:41.114022 4793 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 20:12:41 crc kubenswrapper[4793]: E0217 20:12:41.114521 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 17 20:12:41 crc kubenswrapper[4793]: I0217 20:12:41.114539 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 17 20:12:41 crc kubenswrapper[4793]: E0217 20:12:41.114552 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 17 20:12:41 crc kubenswrapper[4793]: I0217 20:12:41.114560 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 17 20:12:41 crc kubenswrapper[4793]: E0217 20:12:41.114572 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 20:12:41 crc kubenswrapper[4793]: I0217 20:12:41.114580 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 20:12:41 crc kubenswrapper[4793]: E0217 20:12:41.114590 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 17 20:12:41 crc kubenswrapper[4793]: I0217 20:12:41.114597 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 17 20:12:41 crc kubenswrapper[4793]: E0217 20:12:41.114610 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="setup" Feb 17 20:12:41 crc kubenswrapper[4793]: I0217 20:12:41.114618 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 17 20:12:41 crc kubenswrapper[4793]: E0217 20:12:41.114629 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 17 20:12:41 crc kubenswrapper[4793]: I0217 20:12:41.114636 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 17 20:12:41 crc kubenswrapper[4793]: E0217 20:12:41.114649 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 20:12:41 crc kubenswrapper[4793]: I0217 20:12:41.114657 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 20:12:41 crc kubenswrapper[4793]: I0217 20:12:41.114855 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 17 20:12:41 crc kubenswrapper[4793]: I0217 20:12:41.114871 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 17 20:12:41 crc kubenswrapper[4793]: I0217 20:12:41.114881 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 17 20:12:41 crc kubenswrapper[4793]: I0217 20:12:41.114894 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 17 20:12:41 crc kubenswrapper[4793]: I0217 20:12:41.114903 4793 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 20:12:41 crc kubenswrapper[4793]: I0217 20:12:41.115177 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 20:12:41 crc kubenswrapper[4793]: I0217 20:12:41.221321 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 20:12:41 crc kubenswrapper[4793]: I0217 20:12:41.221379 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 20:12:41 crc kubenswrapper[4793]: I0217 20:12:41.221425 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 20:12:41 crc kubenswrapper[4793]: I0217 20:12:41.221474 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 20:12:41 crc kubenswrapper[4793]: 
I0217 20:12:41.221495 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 20:12:41 crc kubenswrapper[4793]: I0217 20:12:41.221529 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 20:12:41 crc kubenswrapper[4793]: I0217 20:12:41.221553 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 20:12:41 crc kubenswrapper[4793]: I0217 20:12:41.221582 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 20:12:41 crc kubenswrapper[4793]: I0217 20:12:41.322511 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 20:12:41 crc kubenswrapper[4793]: I0217 
20:12:41.322592 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 20:12:41 crc kubenswrapper[4793]: I0217 20:12:41.322627 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 20:12:41 crc kubenswrapper[4793]: I0217 20:12:41.322646 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 20:12:41 crc kubenswrapper[4793]: I0217 20:12:41.322702 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 20:12:41 crc kubenswrapper[4793]: I0217 20:12:41.322729 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 20:12:41 crc kubenswrapper[4793]: I0217 20:12:41.322730 4793 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 20:12:41 crc kubenswrapper[4793]: I0217 20:12:41.322753 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 20:12:41 crc kubenswrapper[4793]: I0217 20:12:41.322781 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 20:12:41 crc kubenswrapper[4793]: I0217 20:12:41.322811 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 20:12:41 crc kubenswrapper[4793]: I0217 20:12:41.322856 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 20:12:41 crc kubenswrapper[4793]: I0217 20:12:41.322726 4793 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 20:12:41 crc kubenswrapper[4793]: I0217 20:12:41.322863 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 20:12:41 crc kubenswrapper[4793]: I0217 20:12:41.322864 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 20:12:41 crc kubenswrapper[4793]: I0217 20:12:41.322947 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 20:12:41 crc kubenswrapper[4793]: I0217 20:12:41.323009 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 20:12:42 crc kubenswrapper[4793]: I0217 20:12:42.174369 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 17 20:12:42 
crc kubenswrapper[4793]: I0217 20:12:42.176081 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 17 20:12:42 crc kubenswrapper[4793]: I0217 20:12:42.177008 4793 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c" exitCode=0 Feb 17 20:12:42 crc kubenswrapper[4793]: I0217 20:12:42.177045 4793 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140" exitCode=0 Feb 17 20:12:42 crc kubenswrapper[4793]: I0217 20:12:42.177055 4793 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276" exitCode=0 Feb 17 20:12:42 crc kubenswrapper[4793]: I0217 20:12:42.177064 4793 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9" exitCode=2 Feb 17 20:12:42 crc kubenswrapper[4793]: I0217 20:12:42.177098 4793 scope.go:117] "RemoveContainer" containerID="b6f3722f78f83475af38958c1c6a4b0593a4c51be7d8dcbd0d32d3b068dee61b" Feb 17 20:12:42 crc kubenswrapper[4793]: I0217 20:12:42.183222 4793 generic.go:334] "Generic (PLEG): container finished" podID="6599fec2-8123-49ae-9668-df110bf07b3d" containerID="6d2e61f5b4d6c86e5bcd39c8d37bed1fb2c053a5def1e4f96f55d4fc9cbb9c01" exitCode=0 Feb 17 20:12:42 crc kubenswrapper[4793]: I0217 20:12:42.183301 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"6599fec2-8123-49ae-9668-df110bf07b3d","Type":"ContainerDied","Data":"6d2e61f5b4d6c86e5bcd39c8d37bed1fb2c053a5def1e4f96f55d4fc9cbb9c01"} Feb 17 20:12:42 crc 
kubenswrapper[4793]: I0217 20:12:42.184544 4793 status_manager.go:851] "Failed to get status for pod" podUID="6599fec2-8123-49ae-9668-df110bf07b3d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.155:6443: connect: connection refused" Feb 17 20:12:43 crc kubenswrapper[4793]: I0217 20:12:43.221430 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 17 20:12:43 crc kubenswrapper[4793]: I0217 20:12:43.514926 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 17 20:12:43 crc kubenswrapper[4793]: I0217 20:12:43.515987 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 20:12:43 crc kubenswrapper[4793]: I0217 20:12:43.516813 4793 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.155:6443: connect: connection refused" Feb 17 20:12:43 crc kubenswrapper[4793]: I0217 20:12:43.517572 4793 status_manager.go:851] "Failed to get status for pod" podUID="6599fec2-8123-49ae-9668-df110bf07b3d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.155:6443: connect: connection refused" Feb 17 20:12:43 crc kubenswrapper[4793]: I0217 20:12:43.624698 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 17 20:12:43 crc kubenswrapper[4793]: I0217 20:12:43.625270 4793 status_manager.go:851] "Failed to get status for pod" podUID="6599fec2-8123-49ae-9668-df110bf07b3d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.155:6443: connect: connection refused" Feb 17 20:12:43 crc kubenswrapper[4793]: I0217 20:12:43.649623 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 17 20:12:43 crc kubenswrapper[4793]: I0217 20:12:43.649722 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 17 20:12:43 crc kubenswrapper[4793]: I0217 20:12:43.649794 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 17 20:12:43 crc kubenswrapper[4793]: I0217 20:12:43.649823 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 20:12:43 crc kubenswrapper[4793]: I0217 20:12:43.649867 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 20:12:43 crc kubenswrapper[4793]: I0217 20:12:43.649952 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 20:12:43 crc kubenswrapper[4793]: I0217 20:12:43.650162 4793 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 17 20:12:43 crc kubenswrapper[4793]: I0217 20:12:43.650183 4793 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 17 20:12:43 crc kubenswrapper[4793]: I0217 20:12:43.650195 4793 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 17 20:12:43 crc kubenswrapper[4793]: I0217 20:12:43.750776 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6599fec2-8123-49ae-9668-df110bf07b3d-var-lock\") pod \"6599fec2-8123-49ae-9668-df110bf07b3d\" (UID: \"6599fec2-8123-49ae-9668-df110bf07b3d\") " Feb 17 
20:12:43 crc kubenswrapper[4793]: I0217 20:12:43.750990 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6599fec2-8123-49ae-9668-df110bf07b3d-kubelet-dir\") pod \"6599fec2-8123-49ae-9668-df110bf07b3d\" (UID: \"6599fec2-8123-49ae-9668-df110bf07b3d\") " Feb 17 20:12:43 crc kubenswrapper[4793]: I0217 20:12:43.750991 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6599fec2-8123-49ae-9668-df110bf07b3d-var-lock" (OuterVolumeSpecName: "var-lock") pod "6599fec2-8123-49ae-9668-df110bf07b3d" (UID: "6599fec2-8123-49ae-9668-df110bf07b3d"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 20:12:43 crc kubenswrapper[4793]: I0217 20:12:43.751067 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6599fec2-8123-49ae-9668-df110bf07b3d-kube-api-access\") pod \"6599fec2-8123-49ae-9668-df110bf07b3d\" (UID: \"6599fec2-8123-49ae-9668-df110bf07b3d\") " Feb 17 20:12:43 crc kubenswrapper[4793]: I0217 20:12:43.751118 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6599fec2-8123-49ae-9668-df110bf07b3d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6599fec2-8123-49ae-9668-df110bf07b3d" (UID: "6599fec2-8123-49ae-9668-df110bf07b3d"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 20:12:43 crc kubenswrapper[4793]: I0217 20:12:43.751371 4793 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6599fec2-8123-49ae-9668-df110bf07b3d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 17 20:12:43 crc kubenswrapper[4793]: I0217 20:12:43.751397 4793 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6599fec2-8123-49ae-9668-df110bf07b3d-var-lock\") on node \"crc\" DevicePath \"\"" Feb 17 20:12:43 crc kubenswrapper[4793]: I0217 20:12:43.759905 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6599fec2-8123-49ae-9668-df110bf07b3d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6599fec2-8123-49ae-9668-df110bf07b3d" (UID: "6599fec2-8123-49ae-9668-df110bf07b3d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:12:43 crc kubenswrapper[4793]: I0217 20:12:43.852807 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6599fec2-8123-49ae-9668-df110bf07b3d-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 20:12:44 crc kubenswrapper[4793]: I0217 20:12:44.240186 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 17 20:12:44 crc kubenswrapper[4793]: I0217 20:12:44.240861 4793 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401" exitCode=0 Feb 17 20:12:44 crc kubenswrapper[4793]: I0217 20:12:44.240947 4793 scope.go:117] "RemoveContainer" containerID="1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c" Feb 17 20:12:44 crc kubenswrapper[4793]: 
I0217 20:12:44.241119 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 20:12:44 crc kubenswrapper[4793]: I0217 20:12:44.242491 4793 status_manager.go:851] "Failed to get status for pod" podUID="6599fec2-8123-49ae-9668-df110bf07b3d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.155:6443: connect: connection refused" Feb 17 20:12:44 crc kubenswrapper[4793]: I0217 20:12:44.243109 4793 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.155:6443: connect: connection refused" Feb 17 20:12:44 crc kubenswrapper[4793]: I0217 20:12:44.243374 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"6599fec2-8123-49ae-9668-df110bf07b3d","Type":"ContainerDied","Data":"96e06cf0d355b29e5f44821a5f1ea102acfbe3b292a588a30aa2b43fa14b9c85"} Feb 17 20:12:44 crc kubenswrapper[4793]: I0217 20:12:44.243408 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96e06cf0d355b29e5f44821a5f1ea102acfbe3b292a588a30aa2b43fa14b9c85" Feb 17 20:12:44 crc kubenswrapper[4793]: I0217 20:12:44.243425 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 17 20:12:44 crc kubenswrapper[4793]: I0217 20:12:44.258837 4793 status_manager.go:851] "Failed to get status for pod" podUID="6599fec2-8123-49ae-9668-df110bf07b3d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.155:6443: connect: connection refused" Feb 17 20:12:44 crc kubenswrapper[4793]: I0217 20:12:44.259553 4793 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.155:6443: connect: connection refused" Feb 17 20:12:44 crc kubenswrapper[4793]: I0217 20:12:44.261984 4793 scope.go:117] "RemoveContainer" containerID="324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140" Feb 17 20:12:44 crc kubenswrapper[4793]: I0217 20:12:44.268737 4793 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.155:6443: connect: connection refused" Feb 17 20:12:44 crc kubenswrapper[4793]: I0217 20:12:44.269087 4793 status_manager.go:851] "Failed to get status for pod" podUID="6599fec2-8123-49ae-9668-df110bf07b3d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.155:6443: connect: connection refused" Feb 17 20:12:44 crc kubenswrapper[4793]: I0217 20:12:44.275881 4793 scope.go:117] "RemoveContainer" containerID="5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276" Feb 17 20:12:44 crc 
kubenswrapper[4793]: I0217 20:12:44.291653 4793 scope.go:117] "RemoveContainer" containerID="9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9" Feb 17 20:12:44 crc kubenswrapper[4793]: I0217 20:12:44.307995 4793 scope.go:117] "RemoveContainer" containerID="7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401" Feb 17 20:12:44 crc kubenswrapper[4793]: I0217 20:12:44.324427 4793 scope.go:117] "RemoveContainer" containerID="93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022" Feb 17 20:12:44 crc kubenswrapper[4793]: I0217 20:12:44.348894 4793 scope.go:117] "RemoveContainer" containerID="1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c" Feb 17 20:12:44 crc kubenswrapper[4793]: E0217 20:12:44.349421 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c\": container with ID starting with 1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c not found: ID does not exist" containerID="1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c" Feb 17 20:12:44 crc kubenswrapper[4793]: I0217 20:12:44.349456 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c"} err="failed to get container status \"1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c\": rpc error: code = NotFound desc = could not find container \"1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c\": container with ID starting with 1bf373ce6c8115951201d04a1da9eea4d06e723bb7838ab1176524b83384189c not found: ID does not exist" Feb 17 20:12:44 crc kubenswrapper[4793]: I0217 20:12:44.349483 4793 scope.go:117] "RemoveContainer" containerID="324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140" Feb 17 20:12:44 crc kubenswrapper[4793]: E0217 20:12:44.349943 
4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140\": container with ID starting with 324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140 not found: ID does not exist" containerID="324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140" Feb 17 20:12:44 crc kubenswrapper[4793]: I0217 20:12:44.350001 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140"} err="failed to get container status \"324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140\": rpc error: code = NotFound desc = could not find container \"324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140\": container with ID starting with 324e652cd7ef9f39b5949b567ac202f98c0d5cceb9940218d12f82320541f140 not found: ID does not exist" Feb 17 20:12:44 crc kubenswrapper[4793]: I0217 20:12:44.350028 4793 scope.go:117] "RemoveContainer" containerID="5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276" Feb 17 20:12:44 crc kubenswrapper[4793]: E0217 20:12:44.350401 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276\": container with ID starting with 5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276 not found: ID does not exist" containerID="5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276" Feb 17 20:12:44 crc kubenswrapper[4793]: I0217 20:12:44.350432 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276"} err="failed to get container status \"5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276\": rpc error: code = 
NotFound desc = could not find container \"5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276\": container with ID starting with 5684d1f220ed985555f431d80cb31b947c35481c7a3b0595aaa06d15a0f97276 not found: ID does not exist" Feb 17 20:12:44 crc kubenswrapper[4793]: I0217 20:12:44.350452 4793 scope.go:117] "RemoveContainer" containerID="9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9" Feb 17 20:12:44 crc kubenswrapper[4793]: E0217 20:12:44.350961 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9\": container with ID starting with 9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9 not found: ID does not exist" containerID="9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9" Feb 17 20:12:44 crc kubenswrapper[4793]: I0217 20:12:44.350993 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9"} err="failed to get container status \"9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9\": rpc error: code = NotFound desc = could not find container \"9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9\": container with ID starting with 9a740b007df848bc185f96144f0f88f40ac1fd53e13eada8f17201f553fb0fd9 not found: ID does not exist" Feb 17 20:12:44 crc kubenswrapper[4793]: I0217 20:12:44.351014 4793 scope.go:117] "RemoveContainer" containerID="7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401" Feb 17 20:12:44 crc kubenswrapper[4793]: E0217 20:12:44.351241 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401\": container with ID starting with 
7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401 not found: ID does not exist" containerID="7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401" Feb 17 20:12:44 crc kubenswrapper[4793]: I0217 20:12:44.351270 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401"} err="failed to get container status \"7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401\": rpc error: code = NotFound desc = could not find container \"7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401\": container with ID starting with 7070768bef4ca935a92d2d2ff4396e648d9a7509138e54d6bb72946056f1c401 not found: ID does not exist" Feb 17 20:12:44 crc kubenswrapper[4793]: I0217 20:12:44.351288 4793 scope.go:117] "RemoveContainer" containerID="93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022" Feb 17 20:12:44 crc kubenswrapper[4793]: E0217 20:12:44.351496 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\": container with ID starting with 93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022 not found: ID does not exist" containerID="93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022" Feb 17 20:12:44 crc kubenswrapper[4793]: I0217 20:12:44.351525 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022"} err="failed to get container status \"93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\": rpc error: code = NotFound desc = could not find container \"93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022\": container with ID starting with 93549174a13a5c93a682cb26d1969703c9c7256df1d868a9c98b53d0ac21e022 not found: ID does not 
exist" Feb 17 20:12:45 crc kubenswrapper[4793]: I0217 20:12:45.540641 4793 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.155:6443: connect: connection refused" Feb 17 20:12:45 crc kubenswrapper[4793]: I0217 20:12:45.540951 4793 status_manager.go:851] "Failed to get status for pod" podUID="6599fec2-8123-49ae-9668-df110bf07b3d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.155:6443: connect: connection refused" Feb 17 20:12:45 crc kubenswrapper[4793]: I0217 20:12:45.546035 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 17 20:12:45 crc kubenswrapper[4793]: E0217 20:12:45.547434 4793 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.155:6443: connect: connection refused" Feb 17 20:12:45 crc kubenswrapper[4793]: E0217 20:12:45.547951 4793 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.155:6443: connect: connection refused" Feb 17 20:12:45 crc kubenswrapper[4793]: E0217 20:12:45.548463 4793 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.155:6443: connect: connection refused" Feb 17 20:12:45 crc kubenswrapper[4793]: E0217 20:12:45.548864 4793 controller.go:195] "Failed 
to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.155:6443: connect: connection refused" Feb 17 20:12:45 crc kubenswrapper[4793]: E0217 20:12:45.549222 4793 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.155:6443: connect: connection refused" Feb 17 20:12:45 crc kubenswrapper[4793]: I0217 20:12:45.549262 4793 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 17 20:12:45 crc kubenswrapper[4793]: E0217 20:12:45.549542 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.155:6443: connect: connection refused" interval="200ms" Feb 17 20:12:45 crc kubenswrapper[4793]: E0217 20:12:45.751130 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.155:6443: connect: connection refused" interval="400ms" Feb 17 20:12:46 crc kubenswrapper[4793]: E0217 20:12:46.152832 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.155:6443: connect: connection refused" interval="800ms" Feb 17 20:12:46 crc kubenswrapper[4793]: E0217 20:12:46.159195 4793 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.155:6443: connect: connection refused" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 20:12:46 crc kubenswrapper[4793]: I0217 20:12:46.159964 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 20:12:46 crc kubenswrapper[4793]: W0217 20:12:46.203327 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-b40a68cacf46129e63420461e6c0930023e39096eb0277d1a42ba244021ae8b7 WatchSource:0}: Error finding container b40a68cacf46129e63420461e6c0930023e39096eb0277d1a42ba244021ae8b7: Status 404 returned error can't find the container with id b40a68cacf46129e63420461e6c0930023e39096eb0277d1a42ba244021ae8b7 Feb 17 20:12:46 crc kubenswrapper[4793]: E0217 20:12:46.217601 4793 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.155:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189521c6cf91aefe openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 20:12:46.209830654 +0000 UTC m=+241.501528995,LastTimestamp:2026-02-17 20:12:46.209830654 +0000 UTC m=+241.501528995,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 20:12:46 crc kubenswrapper[4793]: I0217 20:12:46.257234 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"b40a68cacf46129e63420461e6c0930023e39096eb0277d1a42ba244021ae8b7"} Feb 17 20:12:46 crc kubenswrapper[4793]: E0217 20:12:46.954092 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.155:6443: connect: connection refused" interval="1.6s" Feb 17 20:12:47 crc kubenswrapper[4793]: E0217 20:12:47.044141 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:12:47Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:12:47Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:12:47Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T20:12:47Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.155:6443: connect: connection 
refused" Feb 17 20:12:47 crc kubenswrapper[4793]: E0217 20:12:47.044629 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.155:6443: connect: connection refused" Feb 17 20:12:47 crc kubenswrapper[4793]: E0217 20:12:47.045019 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.155:6443: connect: connection refused" Feb 17 20:12:47 crc kubenswrapper[4793]: E0217 20:12:47.045382 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.155:6443: connect: connection refused" Feb 17 20:12:47 crc kubenswrapper[4793]: E0217 20:12:47.045764 4793 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.155:6443: connect: connection refused" Feb 17 20:12:47 crc kubenswrapper[4793]: E0217 20:12:47.045792 4793 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 20:12:47 crc kubenswrapper[4793]: I0217 20:12:47.267652 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"843a9a01792b19efc7c590bf1a2f301b1b51cec483dea81ebbe7d8b903a9ad35"} Feb 17 20:12:47 crc kubenswrapper[4793]: I0217 20:12:47.268436 4793 status_manager.go:851] "Failed to get status for pod" podUID="6599fec2-8123-49ae-9668-df110bf07b3d" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.155:6443: connect: connection refused" Feb 17 20:12:47 crc kubenswrapper[4793]: E0217 20:12:47.268443 4793 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.155:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 20:12:47 crc kubenswrapper[4793]: E0217 20:12:47.491352 4793 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.155:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189521c6cf91aefe openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 20:12:46.209830654 +0000 UTC m=+241.501528995,LastTimestamp:2026-02-17 20:12:46.209830654 +0000 UTC m=+241.501528995,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 20:12:48 crc kubenswrapper[4793]: E0217 20:12:48.275020 4793 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.155:6443: connect: connection refused" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 20:12:48 crc kubenswrapper[4793]: E0217 20:12:48.554716 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.155:6443: connect: connection refused" interval="3.2s" Feb 17 20:12:51 crc kubenswrapper[4793]: E0217 20:12:51.755907 4793 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.155:6443: connect: connection refused" interval="6.4s" Feb 17 20:12:53 crc kubenswrapper[4793]: I0217 20:12:53.450179 4793 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": read tcp 192.168.126.11:35696->192.168.126.11:10257: read: connection reset by peer" start-of-body= Feb 17 20:12:53 crc kubenswrapper[4793]: I0217 20:12:53.450255 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": read tcp 192.168.126.11:35696->192.168.126.11:10257: read: connection reset by peer" Feb 17 20:12:54 crc kubenswrapper[4793]: I0217 20:12:54.313662 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 17 20:12:54 crc kubenswrapper[4793]: I0217 20:12:54.313754 4793 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" 
containerID="57a18cf839c267df5d9a6d343fbc38dbe195fad6d9fff89663d0958a0492a335" exitCode=1 Feb 17 20:12:54 crc kubenswrapper[4793]: I0217 20:12:54.313790 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"57a18cf839c267df5d9a6d343fbc38dbe195fad6d9fff89663d0958a0492a335"} Feb 17 20:12:54 crc kubenswrapper[4793]: I0217 20:12:54.314338 4793 scope.go:117] "RemoveContainer" containerID="57a18cf839c267df5d9a6d343fbc38dbe195fad6d9fff89663d0958a0492a335" Feb 17 20:12:54 crc kubenswrapper[4793]: I0217 20:12:54.314795 4793 status_manager.go:851] "Failed to get status for pod" podUID="6599fec2-8123-49ae-9668-df110bf07b3d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.155:6443: connect: connection refused" Feb 17 20:12:54 crc kubenswrapper[4793]: I0217 20:12:54.315290 4793 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.155:6443: connect: connection refused" Feb 17 20:12:54 crc kubenswrapper[4793]: I0217 20:12:54.538421 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 20:12:54 crc kubenswrapper[4793]: I0217 20:12:54.539752 4793 status_manager.go:851] "Failed to get status for pod" podUID="6599fec2-8123-49ae-9668-df110bf07b3d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.155:6443: connect: connection refused" Feb 17 20:12:54 crc kubenswrapper[4793]: I0217 20:12:54.540302 4793 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.155:6443: connect: connection refused" Feb 17 20:12:54 crc kubenswrapper[4793]: I0217 20:12:54.567008 4793 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9208cc24-8e8d-4027-af06-291b3b46ee59" Feb 17 20:12:54 crc kubenswrapper[4793]: I0217 20:12:54.567054 4793 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9208cc24-8e8d-4027-af06-291b3b46ee59" Feb 17 20:12:54 crc kubenswrapper[4793]: E0217 20:12:54.567814 4793 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.155:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 20:12:54 crc kubenswrapper[4793]: I0217 20:12:54.568375 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 20:12:54 crc kubenswrapper[4793]: W0217 20:12:54.593657 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-8ba0ee3361e395a22d3a1c797f2a3070418c2127e8b9d61940b7ec2189ad5d5c WatchSource:0}: Error finding container 8ba0ee3361e395a22d3a1c797f2a3070418c2127e8b9d61940b7ec2189ad5d5c: Status 404 returned error can't find the container with id 8ba0ee3361e395a22d3a1c797f2a3070418c2127e8b9d61940b7ec2189ad5d5c Feb 17 20:12:55 crc kubenswrapper[4793]: I0217 20:12:55.325717 4793 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="6f5ea0819d2c7ef14c649c493929d1ba1e5246e49336fc3f3377109cd05b79f0" exitCode=0 Feb 17 20:12:55 crc kubenswrapper[4793]: I0217 20:12:55.325785 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"6f5ea0819d2c7ef14c649c493929d1ba1e5246e49336fc3f3377109cd05b79f0"} Feb 17 20:12:55 crc kubenswrapper[4793]: I0217 20:12:55.325818 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8ba0ee3361e395a22d3a1c797f2a3070418c2127e8b9d61940b7ec2189ad5d5c"} Feb 17 20:12:55 crc kubenswrapper[4793]: I0217 20:12:55.326092 4793 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9208cc24-8e8d-4027-af06-291b3b46ee59" Feb 17 20:12:55 crc kubenswrapper[4793]: I0217 20:12:55.326123 4793 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9208cc24-8e8d-4027-af06-291b3b46ee59" Feb 17 20:12:55 crc kubenswrapper[4793]: I0217 20:12:55.326790 4793 status_manager.go:851] 
"Failed to get status for pod" podUID="6599fec2-8123-49ae-9668-df110bf07b3d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.155:6443: connect: connection refused" Feb 17 20:12:55 crc kubenswrapper[4793]: E0217 20:12:55.327012 4793 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.155:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 20:12:55 crc kubenswrapper[4793]: I0217 20:12:55.327160 4793 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.155:6443: connect: connection refused" Feb 17 20:12:55 crc kubenswrapper[4793]: I0217 20:12:55.331634 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 17 20:12:55 crc kubenswrapper[4793]: I0217 20:12:55.331765 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"96d054ede5153b1afa643738937fb9c5dd015d13950d2f57667c303d1b60ce4f"} Feb 17 20:12:55 crc kubenswrapper[4793]: I0217 20:12:55.332460 4793 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.155:6443: connect: connection refused" Feb 17 20:12:55 crc kubenswrapper[4793]: I0217 20:12:55.332724 4793 status_manager.go:851] "Failed to get status for pod" podUID="6599fec2-8123-49ae-9668-df110bf07b3d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.155:6443: connect: connection refused" Feb 17 20:12:55 crc kubenswrapper[4793]: I0217 20:12:55.548293 4793 status_manager.go:851] "Failed to get status for pod" podUID="6599fec2-8123-49ae-9668-df110bf07b3d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.155:6443: connect: connection refused" Feb 17 20:12:55 crc kubenswrapper[4793]: I0217 20:12:55.548748 4793 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.155:6443: connect: connection refused" Feb 17 20:12:55 crc kubenswrapper[4793]: I0217 20:12:55.549095 4793 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.155:6443: connect: connection refused" Feb 17 20:12:56 crc kubenswrapper[4793]: I0217 20:12:56.342570 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"190fb2798cb381ae9da5e318ab5e52fb63dda23bd6e25f2f93ed9b4561eca3a9"} Feb 17 20:12:56 crc kubenswrapper[4793]: I0217 20:12:56.343192 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"63192014cb0e8ada484a21b8e54ff9c750048be543cd1994d48f212b250da824"} Feb 17 20:12:56 crc kubenswrapper[4793]: I0217 20:12:56.343293 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"bd4ed3e77b16426080421e72aa31d6660808384e893f17b14def5255ec035e27"} Feb 17 20:12:56 crc kubenswrapper[4793]: I0217 20:12:56.343382 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"238299629c425596751de84bb81526372e7adec914f59b68afbe72769f37c45b"} Feb 17 20:12:57 crc kubenswrapper[4793]: I0217 20:12:57.350225 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"559f6cbe3c8e76ff76fb0283c8f6f28e096bb1d692e87d9a2fafb556c4ff9130"} Feb 17 20:12:57 crc kubenswrapper[4793]: I0217 20:12:57.350374 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 20:12:57 crc kubenswrapper[4793]: I0217 20:12:57.350445 4793 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9208cc24-8e8d-4027-af06-291b3b46ee59" Feb 17 20:12:57 crc kubenswrapper[4793]: I0217 20:12:57.350460 4793 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="9208cc24-8e8d-4027-af06-291b3b46ee59" Feb 17 20:12:59 crc kubenswrapper[4793]: I0217 20:12:59.569612 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 20:12:59 crc kubenswrapper[4793]: I0217 20:12:59.570008 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 20:12:59 crc kubenswrapper[4793]: I0217 20:12:59.576321 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 20:12:59 crc kubenswrapper[4793]: I0217 20:12:59.874465 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 20:12:59 crc kubenswrapper[4793]: I0217 20:12:59.883165 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 20:13:00 crc kubenswrapper[4793]: I0217 20:13:00.373360 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 20:13:02 crc kubenswrapper[4793]: I0217 20:13:02.370923 4793 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 20:13:02 crc kubenswrapper[4793]: I0217 20:13:02.394281 4793 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9208cc24-8e8d-4027-af06-291b3b46ee59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:12:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:12:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:12:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T20:12:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://238299629c425596751de84bb81526372e7adec914f59b68afbe72769f37c45b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:12:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\
":\\\"cri-o://63192014cb0e8ada484a21b8e54ff9c750048be543cd1994d48f212b250da824\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:12:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4ed3e77b16426080421e72aa31d6660808384e893f17b14def5255ec035e27\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:12:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://559f6cbe3c8e76ff76fb0283c8f6f28e096bb1d692e87d9a2fafb556c4ff9130\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:12:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://190fb2798cb381ae9da5e318ab5e52fb63dda23bd6e25f2f93ed9b4561eca3a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T20:12:56Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f5ea0819d2c7ef14c649c493929d1ba1e5246e49336fc3f3377109cd05b79f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f5ea0819d2c7ef14c649c493929d1ba1e5246e49336fc3f3377109cd05b79f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T20:12:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T20:12:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}]}}\" for 
pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Pod \"kube-apiserver-crc\" is invalid: metadata.uid: Invalid value: \"9208cc24-8e8d-4027-af06-291b3b46ee59\": field is immutable" Feb 17 20:13:02 crc kubenswrapper[4793]: I0217 20:13:02.439366 4793 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="c8adb9b1-f66c-4d56-bc2f-c730150115b6" Feb 17 20:13:03 crc kubenswrapper[4793]: I0217 20:13:03.388853 4793 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9208cc24-8e8d-4027-af06-291b3b46ee59" Feb 17 20:13:03 crc kubenswrapper[4793]: I0217 20:13:03.388906 4793 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9208cc24-8e8d-4027-af06-291b3b46ee59" Feb 17 20:13:03 crc kubenswrapper[4793]: I0217 20:13:03.395216 4793 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="c8adb9b1-f66c-4d56-bc2f-c730150115b6" Feb 17 20:13:03 crc kubenswrapper[4793]: I0217 20:13:03.396288 4793 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://238299629c425596751de84bb81526372e7adec914f59b68afbe72769f37c45b" Feb 17 20:13:03 crc kubenswrapper[4793]: I0217 20:13:03.396332 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 20:13:04 crc kubenswrapper[4793]: I0217 20:13:04.395890 4793 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9208cc24-8e8d-4027-af06-291b3b46ee59" Feb 17 20:13:04 crc kubenswrapper[4793]: I0217 20:13:04.396367 4793 mirror_client.go:130] "Deleting a mirror pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9208cc24-8e8d-4027-af06-291b3b46ee59" Feb 17 20:13:04 crc kubenswrapper[4793]: I0217 20:13:04.401233 4793 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="c8adb9b1-f66c-4d56-bc2f-c730150115b6" Feb 17 20:13:11 crc kubenswrapper[4793]: I0217 20:13:11.542651 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 17 20:13:11 crc kubenswrapper[4793]: I0217 20:13:11.934201 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 17 20:13:11 crc kubenswrapper[4793]: I0217 20:13:11.998085 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 17 20:13:12 crc kubenswrapper[4793]: I0217 20:13:12.150598 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 17 20:13:12 crc kubenswrapper[4793]: I0217 20:13:12.225434 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 17 20:13:12 crc kubenswrapper[4793]: I0217 20:13:12.274911 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 17 20:13:12 crc kubenswrapper[4793]: I0217 20:13:12.587116 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 20:13:13 crc kubenswrapper[4793]: I0217 20:13:13.348399 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 17 20:13:13 crc kubenswrapper[4793]: I0217 20:13:13.404994 4793 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 17 20:13:13 crc kubenswrapper[4793]: I0217 20:13:13.778862 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 17 20:13:13 crc kubenswrapper[4793]: I0217 20:13:13.900811 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 17 20:13:13 crc kubenswrapper[4793]: I0217 20:13:13.910021 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 17 20:13:14 crc kubenswrapper[4793]: I0217 20:13:14.144194 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 17 20:13:14 crc kubenswrapper[4793]: I0217 20:13:14.152219 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 17 20:13:14 crc kubenswrapper[4793]: I0217 20:13:14.208510 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 17 20:13:14 crc kubenswrapper[4793]: I0217 20:13:14.385833 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 17 20:13:14 crc kubenswrapper[4793]: I0217 20:13:14.396653 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 17 20:13:14 crc kubenswrapper[4793]: I0217 20:13:14.477987 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 17 20:13:14 crc kubenswrapper[4793]: I0217 20:13:14.562188 4793 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 17 20:13:14 crc kubenswrapper[4793]: I0217 20:13:14.579353 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 17 20:13:14 crc kubenswrapper[4793]: I0217 20:13:14.711725 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 17 20:13:14 crc kubenswrapper[4793]: I0217 20:13:14.862727 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 17 20:13:14 crc kubenswrapper[4793]: I0217 20:13:14.902142 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 17 20:13:15 crc kubenswrapper[4793]: I0217 20:13:15.037304 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 17 20:13:15 crc kubenswrapper[4793]: I0217 20:13:15.318720 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 17 20:13:15 crc kubenswrapper[4793]: I0217 20:13:15.364748 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 17 20:13:15 crc kubenswrapper[4793]: I0217 20:13:15.368734 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 17 20:13:15 crc kubenswrapper[4793]: I0217 20:13:15.404995 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 17 20:13:15 crc kubenswrapper[4793]: I0217 20:13:15.405914 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 17 20:13:15 crc kubenswrapper[4793]: I0217 20:13:15.538135 4793 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 17 20:13:15 crc kubenswrapper[4793]: I0217 20:13:15.750089 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 17 20:13:15 crc kubenswrapper[4793]: I0217 20:13:15.918632 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 17 20:13:16 crc kubenswrapper[4793]: I0217 20:13:16.011904 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 17 20:13:16 crc kubenswrapper[4793]: I0217 20:13:16.042633 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 17 20:13:16 crc kubenswrapper[4793]: I0217 20:13:16.048769 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 17 20:13:16 crc kubenswrapper[4793]: I0217 20:13:16.256476 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 17 20:13:16 crc kubenswrapper[4793]: I0217 20:13:16.331091 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 17 20:13:16 crc kubenswrapper[4793]: I0217 20:13:16.529308 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 17 20:13:16 crc kubenswrapper[4793]: I0217 20:13:16.557812 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 17 20:13:16 crc kubenswrapper[4793]: I0217 20:13:16.606648 4793 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 17 20:13:16 crc kubenswrapper[4793]: I0217 20:13:16.643843 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 17 20:13:16 crc kubenswrapper[4793]: I0217 20:13:16.667485 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 17 20:13:16 crc kubenswrapper[4793]: I0217 20:13:16.673511 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 17 20:13:16 crc kubenswrapper[4793]: I0217 20:13:16.712505 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 17 20:13:16 crc kubenswrapper[4793]: I0217 20:13:16.749999 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 17 20:13:16 crc kubenswrapper[4793]: I0217 20:13:16.782440 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 17 20:13:16 crc kubenswrapper[4793]: I0217 20:13:16.784243 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 17 20:13:16 crc kubenswrapper[4793]: I0217 20:13:16.810911 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 17 20:13:16 crc kubenswrapper[4793]: I0217 20:13:16.830444 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 17 20:13:16 crc kubenswrapper[4793]: I0217 20:13:16.836441 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 17 20:13:16 crc kubenswrapper[4793]: I0217 
20:13:16.905216 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Feb 17 20:13:16 crc kubenswrapper[4793]: I0217 20:13:16.947822 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Feb 17 20:13:16 crc kubenswrapper[4793]: I0217 20:13:16.952033 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Feb 17 20:13:17 crc kubenswrapper[4793]: I0217 20:13:17.096862 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Feb 17 20:13:17 crc kubenswrapper[4793]: I0217 20:13:17.123252 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Feb 17 20:13:17 crc kubenswrapper[4793]: I0217 20:13:17.375446 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 17 20:13:17 crc kubenswrapper[4793]: I0217 20:13:17.425164 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 17 20:13:17 crc kubenswrapper[4793]: I0217 20:13:17.434483 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 17 20:13:17 crc kubenswrapper[4793]: I0217 20:13:17.466989 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Feb 17 20:13:17 crc kubenswrapper[4793]: I0217 20:13:17.687052 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 17 20:13:17 crc kubenswrapper[4793]: I0217 20:13:17.716038 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 17 20:13:17 crc kubenswrapper[4793]: I0217 20:13:17.719668 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 17 20:13:17 crc kubenswrapper[4793]: I0217 20:13:17.763793 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 17 20:13:17 crc kubenswrapper[4793]: I0217 20:13:17.807646 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 17 20:13:17 crc kubenswrapper[4793]: I0217 20:13:17.856308 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 17 20:13:17 crc kubenswrapper[4793]: I0217 20:13:17.875302 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 17 20:13:18 crc kubenswrapper[4793]: I0217 20:13:18.015884 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 17 20:13:18 crc kubenswrapper[4793]: I0217 20:13:18.020429 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 17 20:13:18 crc kubenswrapper[4793]: I0217 20:13:18.154581 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 17 20:13:18 crc kubenswrapper[4793]: I0217 20:13:18.177313 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 17 20:13:18 crc kubenswrapper[4793]: I0217 20:13:18.197964 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 17 20:13:18 crc kubenswrapper[4793]: I0217 20:13:18.342474 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 17 20:13:18 crc kubenswrapper[4793]: I0217 20:13:18.370346 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 17 20:13:18 crc kubenswrapper[4793]: I0217 20:13:18.427399 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 17 20:13:18 crc kubenswrapper[4793]: I0217 20:13:18.459050 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 17 20:13:18 crc kubenswrapper[4793]: I0217 20:13:18.553074 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 17 20:13:18 crc kubenswrapper[4793]: I0217 20:13:18.568477 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Feb 17 20:13:18 crc kubenswrapper[4793]: I0217 20:13:18.598939 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 17 20:13:18 crc kubenswrapper[4793]: I0217 20:13:18.652281 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Feb 17 20:13:18 crc kubenswrapper[4793]: I0217 20:13:18.699431 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 17 20:13:18 crc kubenswrapper[4793]: I0217 20:13:18.770614 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 17 20:13:18 crc kubenswrapper[4793]: I0217 20:13:18.789252 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 17 20:13:18 crc kubenswrapper[4793]: I0217 20:13:18.814602 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Feb 17 20:13:18 crc kubenswrapper[4793]: I0217 20:13:18.826834 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 17 20:13:18 crc kubenswrapper[4793]: I0217 20:13:18.894145 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 17 20:13:19 crc kubenswrapper[4793]: I0217 20:13:19.004451 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 17 20:13:19 crc kubenswrapper[4793]: I0217 20:13:19.073038 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Feb 17 20:13:19 crc kubenswrapper[4793]: I0217 20:13:19.098067 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 17 20:13:19 crc kubenswrapper[4793]: I0217 20:13:19.188452 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Feb 17 20:13:19 crc kubenswrapper[4793]: I0217 20:13:19.190094 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 17 20:13:19 crc kubenswrapper[4793]: I0217 20:13:19.302341 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 17 20:13:19 crc kubenswrapper[4793]: I0217 20:13:19.390840 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 17 20:13:19 crc kubenswrapper[4793]: I0217 20:13:19.398760 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 17 20:13:19 crc kubenswrapper[4793]: I0217 20:13:19.531039 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 17 20:13:19 crc kubenswrapper[4793]: I0217 20:13:19.601852 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 17 20:13:19 crc kubenswrapper[4793]: I0217 20:13:19.683337 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Feb 17 20:13:19 crc kubenswrapper[4793]: I0217 20:13:19.685021 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 17 20:13:19 crc kubenswrapper[4793]: I0217 20:13:19.698904 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Feb 17 20:13:19 crc kubenswrapper[4793]: I0217 20:13:19.710895 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 17 20:13:19 crc kubenswrapper[4793]: I0217 20:13:19.732287 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 17 20:13:19 crc kubenswrapper[4793]: I0217 20:13:19.775075 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 17 20:13:19 crc kubenswrapper[4793]: I0217 20:13:19.786913 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 17 20:13:19 crc kubenswrapper[4793]: I0217 20:13:19.842907 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 17 20:13:19 crc kubenswrapper[4793]: I0217 20:13:19.875367 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 17 20:13:19 crc kubenswrapper[4793]: I0217 20:13:19.887846 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 17 20:13:19 crc kubenswrapper[4793]: I0217 20:13:19.931266 4793 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 17 20:13:20 crc kubenswrapper[4793]: I0217 20:13:20.029433 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 17 20:13:20 crc kubenswrapper[4793]: I0217 20:13:20.049428 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 17 20:13:20 crc kubenswrapper[4793]: I0217 20:13:20.173020 4793 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 17 20:13:20 crc kubenswrapper[4793]: I0217 20:13:20.190650 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 17 20:13:20 crc kubenswrapper[4793]: I0217 20:13:20.209587 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 17 20:13:20 crc kubenswrapper[4793]: I0217 20:13:20.220337 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Feb 17 20:13:20 crc kubenswrapper[4793]: I0217 20:13:20.257002 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 17 20:13:20 crc kubenswrapper[4793]: I0217 20:13:20.266625 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Feb 17 20:13:20 crc kubenswrapper[4793]: I0217 20:13:20.304495 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 17 20:13:20 crc kubenswrapper[4793]: I0217 20:13:20.369480 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 17 20:13:20 crc kubenswrapper[4793]: I0217 20:13:20.401477 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 17 20:13:20 crc kubenswrapper[4793]: I0217 20:13:20.435398 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 17 20:13:20 crc kubenswrapper[4793]: I0217 20:13:20.512052 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 17 20:13:20 crc kubenswrapper[4793]: I0217 20:13:20.528077 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 17 20:13:20 crc kubenswrapper[4793]: I0217 20:13:20.586603 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Feb 17 20:13:20 crc kubenswrapper[4793]: I0217 20:13:20.658626 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 17 20:13:20 crc kubenswrapper[4793]: I0217 20:13:20.661885 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Feb 17 20:13:20 crc kubenswrapper[4793]: I0217 20:13:20.668356 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 17 20:13:20 crc kubenswrapper[4793]: I0217 20:13:20.679192 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 17 20:13:20 crc kubenswrapper[4793]: I0217 20:13:20.702772 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 17 20:13:20 crc kubenswrapper[4793]: I0217 20:13:20.795767 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 17 20:13:20 crc kubenswrapper[4793]: I0217 20:13:20.844145 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 17 20:13:20 crc kubenswrapper[4793]: I0217 20:13:20.872861 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 17 20:13:20 crc kubenswrapper[4793]: I0217 20:13:20.874987 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 17 20:13:20 crc kubenswrapper[4793]: I0217 20:13:20.895754 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 17 20:13:20 crc kubenswrapper[4793]: I0217 20:13:20.933317 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 17 20:13:21 crc kubenswrapper[4793]: I0217 20:13:21.034038 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 17 20:13:21 crc kubenswrapper[4793]: I0217 20:13:21.101942 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 17 20:13:21 crc kubenswrapper[4793]: I0217 20:13:21.114194 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 17 20:13:21 crc kubenswrapper[4793]: I0217 20:13:21.145173 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 17 20:13:21 crc kubenswrapper[4793]: I0217 20:13:21.158001 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 17 20:13:21 crc kubenswrapper[4793]: I0217 20:13:21.208648 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 17 20:13:21 crc kubenswrapper[4793]: I0217 20:13:21.264880 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Feb 17 20:13:21 crc kubenswrapper[4793]: I0217 20:13:21.265986 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 17 20:13:21 crc kubenswrapper[4793]: I0217 20:13:21.283792 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 17 20:13:21 crc kubenswrapper[4793]: I0217 20:13:21.308703 4793 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 17 20:13:21 crc kubenswrapper[4793]: I0217 20:13:21.321245 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 17 20:13:21 crc kubenswrapper[4793]: I0217 20:13:21.363311 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Feb 17 20:13:21 crc kubenswrapper[4793]: I0217 20:13:21.411572 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 17 20:13:21 crc kubenswrapper[4793]: I0217 20:13:21.611301 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 17 20:13:21 crc kubenswrapper[4793]: I0217 20:13:21.635945 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Feb 17 20:13:21 crc kubenswrapper[4793]: I0217 20:13:21.723435 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 17 20:13:21 crc kubenswrapper[4793]: I0217 20:13:21.869023 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 17 20:13:21 crc kubenswrapper[4793]: I0217 20:13:21.919082 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 17 20:13:21 crc kubenswrapper[4793]: I0217 20:13:21.923716 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 17 20:13:21 crc kubenswrapper[4793]: I0217 20:13:21.994781 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 17 20:13:22 crc kubenswrapper[4793]: I0217 20:13:22.012151 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 17 20:13:22 crc kubenswrapper[4793]: I0217 20:13:22.035454 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Feb 17 20:13:22 crc kubenswrapper[4793]: I0217 20:13:22.051863 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 17 20:13:22 crc kubenswrapper[4793]: I0217 20:13:22.062653 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 17 20:13:22 crc kubenswrapper[4793]: I0217 20:13:22.068858 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 17 20:13:22 crc kubenswrapper[4793]: I0217 20:13:22.077357 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 17 20:13:22 crc kubenswrapper[4793]: I0217 20:13:22.095424 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 17 20:13:22 crc kubenswrapper[4793]: I0217 20:13:22.130677 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Feb 17 20:13:22 crc kubenswrapper[4793]: I0217 20:13:22.198829 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 17 20:13:22 crc kubenswrapper[4793]: I0217 20:13:22.201450 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 17 20:13:22 crc kubenswrapper[4793]: I0217 20:13:22.318331 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 17 20:13:22 crc kubenswrapper[4793]: I0217 20:13:22.350020 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 17 20:13:22 crc kubenswrapper[4793]: I0217 20:13:22.388766 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 17 20:13:22 crc kubenswrapper[4793]: I0217 20:13:22.396380 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 17 20:13:22 crc kubenswrapper[4793]: I0217 20:13:22.404104 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Feb 17 20:13:22 crc kubenswrapper[4793]: I0217 20:13:22.411256 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 17 20:13:22 crc kubenswrapper[4793]: I0217 20:13:22.560233 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 17 20:13:22 crc kubenswrapper[4793]: I0217 20:13:22.676533 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 17 20:13:22 crc kubenswrapper[4793]: I0217 20:13:22.680643 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 17 20:13:22 crc kubenswrapper[4793]: I0217 20:13:22.768083 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 17 20:13:22 crc kubenswrapper[4793]: I0217 20:13:22.832771 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 17 20:13:22 crc kubenswrapper[4793]: I0217 20:13:22.857232 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 17 20:13:22 crc kubenswrapper[4793]: I0217 20:13:22.938108 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 17 20:13:22 crc kubenswrapper[4793]: I0217 20:13:22.966677 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Feb 17 20:13:23 crc kubenswrapper[4793]: I0217 20:13:23.010349 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Feb 17 20:13:23 crc kubenswrapper[4793]: I0217 20:13:23.177516 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 17 20:13:23 crc kubenswrapper[4793]: I0217 20:13:23.196021 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 17 20:13:23 crc kubenswrapper[4793]: I0217 20:13:23.317382 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Feb 17 20:13:23 crc kubenswrapper[4793]: I0217 20:13:23.342830 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Feb 17 20:13:23 crc kubenswrapper[4793]: I0217 20:13:23.364499 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 17 20:13:23 crc kubenswrapper[4793]: I0217 20:13:23.432932 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 17 20:13:23 crc kubenswrapper[4793]: I0217 20:13:23.452449 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Feb 17 20:13:23 crc kubenswrapper[4793]: I0217 20:13:23.510878 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Feb 17 20:13:23 crc kubenswrapper[4793]: I0217 20:13:23.644564 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 17 20:13:23 crc kubenswrapper[4793]: I0217 20:13:23.704540 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 17 20:13:23 crc kubenswrapper[4793]: I0217 20:13:23.718441 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 17 20:13:23 crc kubenswrapper[4793]: I0217 20:13:23.763813 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 17 20:13:23 crc kubenswrapper[4793]: I0217 20:13:23.842375 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 17 20:13:23 crc kubenswrapper[4793]: I0217 20:13:23.982903 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 17 20:13:24 crc kubenswrapper[4793]: I0217 20:13:24.006942 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 17 20:13:24 crc kubenswrapper[4793]: I0217 20:13:24.011624 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 17 20:13:24 crc kubenswrapper[4793]: I0217 20:13:24.011758 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 17 20:13:24 crc kubenswrapper[4793]: I0217 20:13:24.255203 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 17 20:13:24 crc kubenswrapper[4793]: I0217 20:13:24.276762 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 17 20:13:24 crc kubenswrapper[4793]: I0217 20:13:24.319015 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 17 20:13:24 crc kubenswrapper[4793]: I0217 20:13:24.358612 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 17 20:13:24 crc kubenswrapper[4793]: I0217 20:13:24.362506 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Feb 17 20:13:24 crc kubenswrapper[4793]: I0217 20:13:24.462173 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 17 20:13:24 crc kubenswrapper[4793]: I0217 20:13:24.475349 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 17 20:13:24 crc kubenswrapper[4793]: I0217 20:13:24.516656 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 17 20:13:24 crc kubenswrapper[4793]: I0217 20:13:24.536396 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 17 20:13:24 crc kubenswrapper[4793]: I0217 20:13:24.593506 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 17 20:13:24 crc kubenswrapper[4793]: I0217 20:13:24.609579 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 17 20:13:24 crc kubenswrapper[4793]: I0217 20:13:24.700349 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Feb 17 20:13:24 crc kubenswrapper[4793]: I0217 20:13:24.715518 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 17 20:13:24 crc kubenswrapper[4793]: I0217 20:13:24.764009 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 17 20:13:24 crc kubenswrapper[4793]: I0217 20:13:24.817734 4793 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 17 20:13:24 crc kubenswrapper[4793]: I0217 20:13:24.959062 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 17 20:13:25 crc kubenswrapper[4793]: I0217 20:13:25.001352 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 17 20:13:25 crc kubenswrapper[4793]: I0217 20:13:25.056877 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 17 20:13:25 crc kubenswrapper[4793]: I0217 20:13:25.224947 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 17 20:13:25 crc kubenswrapper[4793]: I0217 20:13:25.227097 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 17 20:13:25 crc kubenswrapper[4793]: I0217 20:13:25.263963 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 17 20:13:25 crc kubenswrapper[4793]: I0217 20:13:25.372363 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 17 20:13:25 crc kubenswrapper[4793]: I0217 20:13:25.410213 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 17 20:13:25 crc kubenswrapper[4793]: I0217 20:13:25.469026 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Feb 17 20:13:25 crc kubenswrapper[4793]: I0217 20:13:25.518723 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 17 20:13:25 crc kubenswrapper[4793]: I0217 20:13:25.540785 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 17 20:13:25 crc kubenswrapper[4793]: I0217 20:13:25.619818 4793 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 17 20:13:25 crc kubenswrapper[4793]: I0217 20:13:25.822705 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Feb 17 20:13:25 crc kubenswrapper[4793]: I0217 20:13:25.879109 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 17 20:13:25 crc kubenswrapper[4793]: I0217 20:13:25.951129 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 17 20:13:26 crc kubenswrapper[4793]: I0217 20:13:26.011062 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 17 20:13:26 crc kubenswrapper[4793]: I0217 20:13:26.025321 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 17 20:13:26 crc kubenswrapper[4793]: I0217 20:13:26.201648 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Feb 17 20:13:26 crc kubenswrapper[4793]: I0217 20:13:26.221493 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 17 20:13:26 crc kubenswrapper[4793]: I0217 20:13:26.338402 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 17 20:13:26 crc kubenswrapper[4793]: I0217 20:13:26.458642 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 17 20:13:26 crc kubenswrapper[4793]: I0217 20:13:26.529935 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 17 20:13:26 crc kubenswrapper[4793]: I0217 20:13:26.602479 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 17 20:13:26 crc kubenswrapper[4793]: I0217 20:13:26.694041 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 17 20:13:26 crc kubenswrapper[4793]: I0217 20:13:26.905106 4793 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 17 20:13:26 crc kubenswrapper[4793]: I0217 20:13:26.909963 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 17 20:13:26 crc kubenswrapper[4793]: I0217 20:13:26.910028 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dd764","openshift-kube-apiserver/kube-apiserver-crc"]
Feb 17 20:13:26 crc kubenswrapper[4793]: E0217 20:13:26.910236 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6599fec2-8123-49ae-9668-df110bf07b3d" containerName="installer"
Feb 17 20:13:26 crc kubenswrapper[4793]: I0217 20:13:26.910255 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="6599fec2-8123-49ae-9668-df110bf07b3d" containerName="installer"
Feb 17 20:13:26 crc kubenswrapper[4793]: I0217 20:13:26.910377 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="6599fec2-8123-49ae-9668-df110bf07b3d" containerName="installer"
Feb 17 20:13:26 crc kubenswrapper[4793]: I0217 20:13:26.910740 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dfdp2","openshift-marketplace/certified-operators-nwsr4","openshift-marketplace/redhat-operators-4x565","openshift-marketplace/marketplace-operator-79b997595-99gxb","openshift-marketplace/redhat-marketplace-8fhps"]
Feb 17 20:13:26 crc kubenswrapper[4793]: I0217 20:13:26.910870 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-dd764"
Feb 17 20:13:26 crc kubenswrapper[4793]: I0217 20:13:26.911210 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dfdp2" podUID="6aafddcd-6479-40b9-95a0-fa07713b5068" containerName="registry-server" containerID="cri-o://3ffaa3ed36725b199e92bb3d54d9c0be5cd7f81dbe087d18027e9df214e4b48b" gracePeriod=30
Feb 17 20:13:26 crc kubenswrapper[4793]: I0217 20:13:26.911641 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8fhps" podUID="7b3a754f-730e-4e58-a6d0-fac36125b7f2" containerName="registry-server" containerID="cri-o://312961e448761bf42929ae855312baf8554d75c0849c822720ba417a59b652fb" gracePeriod=30
Feb 17 20:13:26 crc kubenswrapper[4793]: I0217 20:13:26.911743 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4x565" podUID="19a7ae66-8a05-4413-8085-8455f146e98c" containerName="registry-server" containerID="cri-o://4f5c578b0068b9613e0205b28693d728acb34ab224e175fb4d3511d506fb3d06" gracePeriod=30
Feb 17 20:13:26 crc kubenswrapper[4793]: I0217 20:13:26.911857 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nwsr4" podUID="b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019" containerName="registry-server" containerID="cri-o://c51acebcc4583310462d122fa5134a3d22e1cb9d5b17646ae26b20948a6e99cd" gracePeriod=30
Feb 17 20:13:26 crc kubenswrapper[4793]: I0217 20:13:26.911916 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-99gxb" podUID="83495f96-c3b0-4871-876c-07832519e1d8" containerName="marketplace-operator" containerID="cri-o://a2fa9accca4cd0c455ad6dac3afe768ca61ca2dfd3f9b91aa601ebed84119280" gracePeriod=30
Feb 17 20:13:26 crc kubenswrapper[4793]: I0217 20:13:26.917870 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 20:13:26 crc kubenswrapper[4793]: I0217 20:13:26.982738 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=24.982712828 podStartE2EDuration="24.982712828s" podCreationTimestamp="2026-02-17 20:13:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:13:26.979550535 +0000 UTC m=+282.271248856" watchObservedRunningTime="2026-02-17 20:13:26.982712828 +0000 UTC m=+282.274411159"
Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.035896 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjk5j\" (UniqueName: \"kubernetes.io/projected/9e76cf5c-f36b-4e98-8e69-f40f066ba874-kube-api-access-qjk5j\") pod \"marketplace-operator-79b997595-dd764\" (UID: \"9e76cf5c-f36b-4e98-8e69-f40f066ba874\") " pod="openshift-marketplace/marketplace-operator-79b997595-dd764"
Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.036614 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e76cf5c-f36b-4e98-8e69-f40f066ba874-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-dd764\" (UID: \"9e76cf5c-f36b-4e98-8e69-f40f066ba874\") " pod="openshift-marketplace/marketplace-operator-79b997595-dd764"
Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.036708 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9e76cf5c-f36b-4e98-8e69-f40f066ba874-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-dd764\" (UID: \"9e76cf5c-f36b-4e98-8e69-f40f066ba874\") " pod="openshift-marketplace/marketplace-operator-79b997595-dd764"
Feb 17 20:13:27 crc kubenswrapper[4793]: E0217 20:13:27.046195 4793 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4f5c578b0068b9613e0205b28693d728acb34ab224e175fb4d3511d506fb3d06 is running failed: container process not found" containerID="4f5c578b0068b9613e0205b28693d728acb34ab224e175fb4d3511d506fb3d06" cmd=["grpc_health_probe","-addr=:50051"]
Feb 17 20:13:27 crc kubenswrapper[4793]: E0217 20:13:27.047217 4793 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4f5c578b0068b9613e0205b28693d728acb34ab224e175fb4d3511d506fb3d06 is running failed: container process not found" containerID="4f5c578b0068b9613e0205b28693d728acb34ab224e175fb4d3511d506fb3d06" cmd=["grpc_health_probe","-addr=:50051"]
Feb 17 20:13:27 crc kubenswrapper[4793]: E0217 20:13:27.047614 4793 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4f5c578b0068b9613e0205b28693d728acb34ab224e175fb4d3511d506fb3d06 is running failed: container process not found" containerID="4f5c578b0068b9613e0205b28693d728acb34ab224e175fb4d3511d506fb3d06" cmd=["grpc_health_probe","-addr=:50051"]
Feb 17 20:13:27 crc kubenswrapper[4793]: E0217 20:13:27.047769 4793 prober.go:104] "Probe errored" err="rpc error: code = 
NotFound desc = container is not created or running: checking if PID of 4f5c578b0068b9613e0205b28693d728acb34ab224e175fb4d3511d506fb3d06 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-4x565" podUID="19a7ae66-8a05-4413-8085-8455f146e98c" containerName="registry-server" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.122527 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.138431 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjk5j\" (UniqueName: \"kubernetes.io/projected/9e76cf5c-f36b-4e98-8e69-f40f066ba874-kube-api-access-qjk5j\") pod \"marketplace-operator-79b997595-dd764\" (UID: \"9e76cf5c-f36b-4e98-8e69-f40f066ba874\") " pod="openshift-marketplace/marketplace-operator-79b997595-dd764" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.138510 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e76cf5c-f36b-4e98-8e69-f40f066ba874-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-dd764\" (UID: \"9e76cf5c-f36b-4e98-8e69-f40f066ba874\") " pod="openshift-marketplace/marketplace-operator-79b997595-dd764" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.138544 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9e76cf5c-f36b-4e98-8e69-f40f066ba874-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-dd764\" (UID: \"9e76cf5c-f36b-4e98-8e69-f40f066ba874\") " pod="openshift-marketplace/marketplace-operator-79b997595-dd764" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.140637 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/9e76cf5c-f36b-4e98-8e69-f40f066ba874-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-dd764\" (UID: \"9e76cf5c-f36b-4e98-8e69-f40f066ba874\") " pod="openshift-marketplace/marketplace-operator-79b997595-dd764" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.146551 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9e76cf5c-f36b-4e98-8e69-f40f066ba874-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-dd764\" (UID: \"9e76cf5c-f36b-4e98-8e69-f40f066ba874\") " pod="openshift-marketplace/marketplace-operator-79b997595-dd764" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.162417 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjk5j\" (UniqueName: \"kubernetes.io/projected/9e76cf5c-f36b-4e98-8e69-f40f066ba874-kube-api-access-qjk5j\") pod \"marketplace-operator-79b997595-dd764\" (UID: \"9e76cf5c-f36b-4e98-8e69-f40f066ba874\") " pod="openshift-marketplace/marketplace-operator-79b997595-dd764" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.239447 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-dd764" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.242229 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.266619 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.314020 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8fhps" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.314920 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.332732 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.374029 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.391676 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.443264 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44v4c\" (UniqueName: \"kubernetes.io/projected/7b3a754f-730e-4e58-a6d0-fac36125b7f2-kube-api-access-44v4c\") pod \"7b3a754f-730e-4e58-a6d0-fac36125b7f2\" (UID: \"7b3a754f-730e-4e58-a6d0-fac36125b7f2\") " Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.443310 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b3a754f-730e-4e58-a6d0-fac36125b7f2-catalog-content\") pod \"7b3a754f-730e-4e58-a6d0-fac36125b7f2\" (UID: \"7b3a754f-730e-4e58-a6d0-fac36125b7f2\") " Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.443341 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b3a754f-730e-4e58-a6d0-fac36125b7f2-utilities\") pod \"7b3a754f-730e-4e58-a6d0-fac36125b7f2\" (UID: \"7b3a754f-730e-4e58-a6d0-fac36125b7f2\") " Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.444642 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/7b3a754f-730e-4e58-a6d0-fac36125b7f2-utilities" (OuterVolumeSpecName: "utilities") pod "7b3a754f-730e-4e58-a6d0-fac36125b7f2" (UID: "7b3a754f-730e-4e58-a6d0-fac36125b7f2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.448303 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b3a754f-730e-4e58-a6d0-fac36125b7f2-kube-api-access-44v4c" (OuterVolumeSpecName: "kube-api-access-44v4c") pod "7b3a754f-730e-4e58-a6d0-fac36125b7f2" (UID: "7b3a754f-730e-4e58-a6d0-fac36125b7f2"). InnerVolumeSpecName "kube-api-access-44v4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.472376 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b3a754f-730e-4e58-a6d0-fac36125b7f2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7b3a754f-730e-4e58-a6d0-fac36125b7f2" (UID: "7b3a754f-730e-4e58-a6d0-fac36125b7f2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.482551 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nwsr4" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.490620 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4x565" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.493032 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.494351 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dfdp2" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.496002 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-99gxb" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.544120 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcv7z\" (UniqueName: \"kubernetes.io/projected/b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019-kube-api-access-dcv7z\") pod \"b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019\" (UID: \"b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019\") " Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.544158 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szq5j\" (UniqueName: \"kubernetes.io/projected/19a7ae66-8a05-4413-8085-8455f146e98c-kube-api-access-szq5j\") pod \"19a7ae66-8a05-4413-8085-8455f146e98c\" (UID: \"19a7ae66-8a05-4413-8085-8455f146e98c\") " Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.544190 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/83495f96-c3b0-4871-876c-07832519e1d8-marketplace-operator-metrics\") pod \"83495f96-c3b0-4871-876c-07832519e1d8\" (UID: \"83495f96-c3b0-4871-876c-07832519e1d8\") " Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.544242 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019-catalog-content\") pod \"b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019\" (UID: \"b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019\") " Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.544278 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkrq7\" (UniqueName: 
\"kubernetes.io/projected/83495f96-c3b0-4871-876c-07832519e1d8-kube-api-access-nkrq7\") pod \"83495f96-c3b0-4871-876c-07832519e1d8\" (UID: \"83495f96-c3b0-4871-876c-07832519e1d8\") " Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.544316 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019-utilities\") pod \"b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019\" (UID: \"b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019\") " Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.544341 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxf4r\" (UniqueName: \"kubernetes.io/projected/6aafddcd-6479-40b9-95a0-fa07713b5068-kube-api-access-gxf4r\") pod \"6aafddcd-6479-40b9-95a0-fa07713b5068\" (UID: \"6aafddcd-6479-40b9-95a0-fa07713b5068\") " Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.544367 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19a7ae66-8a05-4413-8085-8455f146e98c-utilities\") pod \"19a7ae66-8a05-4413-8085-8455f146e98c\" (UID: \"19a7ae66-8a05-4413-8085-8455f146e98c\") " Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.544395 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aafddcd-6479-40b9-95a0-fa07713b5068-catalog-content\") pod \"6aafddcd-6479-40b9-95a0-fa07713b5068\" (UID: \"6aafddcd-6479-40b9-95a0-fa07713b5068\") " Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.544427 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19a7ae66-8a05-4413-8085-8455f146e98c-catalog-content\") pod \"19a7ae66-8a05-4413-8085-8455f146e98c\" (UID: \"19a7ae66-8a05-4413-8085-8455f146e98c\") " Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 
20:13:27.544457 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/83495f96-c3b0-4871-876c-07832519e1d8-marketplace-trusted-ca\") pod \"83495f96-c3b0-4871-876c-07832519e1d8\" (UID: \"83495f96-c3b0-4871-876c-07832519e1d8\") " Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.544483 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aafddcd-6479-40b9-95a0-fa07713b5068-utilities\") pod \"6aafddcd-6479-40b9-95a0-fa07713b5068\" (UID: \"6aafddcd-6479-40b9-95a0-fa07713b5068\") " Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.544728 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44v4c\" (UniqueName: \"kubernetes.io/projected/7b3a754f-730e-4e58-a6d0-fac36125b7f2-kube-api-access-44v4c\") on node \"crc\" DevicePath \"\"" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.544750 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b3a754f-730e-4e58-a6d0-fac36125b7f2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.544762 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b3a754f-730e-4e58-a6d0-fac36125b7f2-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.545842 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019-utilities" (OuterVolumeSpecName: "utilities") pod "b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019" (UID: "b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.546509 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19a7ae66-8a05-4413-8085-8455f146e98c-utilities" (OuterVolumeSpecName: "utilities") pod "19a7ae66-8a05-4413-8085-8455f146e98c" (UID: "19a7ae66-8a05-4413-8085-8455f146e98c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.546615 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83495f96-c3b0-4871-876c-07832519e1d8-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "83495f96-c3b0-4871-876c-07832519e1d8" (UID: "83495f96-c3b0-4871-876c-07832519e1d8"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.547631 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6aafddcd-6479-40b9-95a0-fa07713b5068-utilities" (OuterVolumeSpecName: "utilities") pod "6aafddcd-6479-40b9-95a0-fa07713b5068" (UID: "6aafddcd-6479-40b9-95a0-fa07713b5068"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.551070 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83495f96-c3b0-4871-876c-07832519e1d8-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "83495f96-c3b0-4871-876c-07832519e1d8" (UID: "83495f96-c3b0-4871-876c-07832519e1d8"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.559398 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19a7ae66-8a05-4413-8085-8455f146e98c-kube-api-access-szq5j" (OuterVolumeSpecName: "kube-api-access-szq5j") pod "19a7ae66-8a05-4413-8085-8455f146e98c" (UID: "19a7ae66-8a05-4413-8085-8455f146e98c"). InnerVolumeSpecName "kube-api-access-szq5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.559518 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019-kube-api-access-dcv7z" (OuterVolumeSpecName: "kube-api-access-dcv7z") pod "b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019" (UID: "b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019"). InnerVolumeSpecName "kube-api-access-dcv7z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.560484 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83495f96-c3b0-4871-876c-07832519e1d8-kube-api-access-nkrq7" (OuterVolumeSpecName: "kube-api-access-nkrq7") pod "83495f96-c3b0-4871-876c-07832519e1d8" (UID: "83495f96-c3b0-4871-876c-07832519e1d8"). InnerVolumeSpecName "kube-api-access-nkrq7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.561753 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6aafddcd-6479-40b9-95a0-fa07713b5068-kube-api-access-gxf4r" (OuterVolumeSpecName: "kube-api-access-gxf4r") pod "6aafddcd-6479-40b9-95a0-fa07713b5068" (UID: "6aafddcd-6479-40b9-95a0-fa07713b5068"). InnerVolumeSpecName "kube-api-access-gxf4r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.562163 4793 generic.go:334] "Generic (PLEG): container finished" podID="7b3a754f-730e-4e58-a6d0-fac36125b7f2" containerID="312961e448761bf42929ae855312baf8554d75c0849c822720ba417a59b652fb" exitCode=0 Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.562289 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8fhps" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.563752 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8fhps" event={"ID":"7b3a754f-730e-4e58-a6d0-fac36125b7f2","Type":"ContainerDied","Data":"312961e448761bf42929ae855312baf8554d75c0849c822720ba417a59b652fb"} Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.564019 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8fhps" event={"ID":"7b3a754f-730e-4e58-a6d0-fac36125b7f2","Type":"ContainerDied","Data":"5d554ef1e45dd078fc98ecbf789c6892ad08fde6e1d2bbd6938630f63ee8d5c7"} Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.564167 4793 scope.go:117] "RemoveContainer" containerID="312961e448761bf42929ae855312baf8554d75c0849c822720ba417a59b652fb" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.568409 4793 generic.go:334] "Generic (PLEG): container finished" podID="83495f96-c3b0-4871-876c-07832519e1d8" containerID="a2fa9accca4cd0c455ad6dac3afe768ca61ca2dfd3f9b91aa601ebed84119280" exitCode=0 Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.568483 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-99gxb" event={"ID":"83495f96-c3b0-4871-876c-07832519e1d8","Type":"ContainerDied","Data":"a2fa9accca4cd0c455ad6dac3afe768ca61ca2dfd3f9b91aa601ebed84119280"} Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.568511 4793 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-99gxb" event={"ID":"83495f96-c3b0-4871-876c-07832519e1d8","Type":"ContainerDied","Data":"101d5b62c4a322c7fcd74022427e3ea204614680f9fde746530c578ebf07ee24"} Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.569114 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-99gxb" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.572202 4793 generic.go:334] "Generic (PLEG): container finished" podID="19a7ae66-8a05-4413-8085-8455f146e98c" containerID="4f5c578b0068b9613e0205b28693d728acb34ab224e175fb4d3511d506fb3d06" exitCode=0 Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.572467 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4x565" event={"ID":"19a7ae66-8a05-4413-8085-8455f146e98c","Type":"ContainerDied","Data":"4f5c578b0068b9613e0205b28693d728acb34ab224e175fb4d3511d506fb3d06"} Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.572631 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4x565" event={"ID":"19a7ae66-8a05-4413-8085-8455f146e98c","Type":"ContainerDied","Data":"1cde82889c8aa70d6bed748c312fc57e9cb2f18f83560014e3ba80a0b3dd1aff"} Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.572866 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4x565" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.580285 4793 generic.go:334] "Generic (PLEG): container finished" podID="b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019" containerID="c51acebcc4583310462d122fa5134a3d22e1cb9d5b17646ae26b20948a6e99cd" exitCode=0 Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.580389 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nwsr4" event={"ID":"b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019","Type":"ContainerDied","Data":"c51acebcc4583310462d122fa5134a3d22e1cb9d5b17646ae26b20948a6e99cd"} Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.581147 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nwsr4" event={"ID":"b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019","Type":"ContainerDied","Data":"0dd7e7d53db5828d546de29761bf1643868a9d97ed30155227613e2fc03eb779"} Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.580459 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nwsr4" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.582710 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.583224 4793 generic.go:334] "Generic (PLEG): container finished" podID="6aafddcd-6479-40b9-95a0-fa07713b5068" containerID="3ffaa3ed36725b199e92bb3d54d9c0be5cd7f81dbe087d18027e9df214e4b48b" exitCode=0 Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.584014 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dfdp2" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.584143 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dfdp2" event={"ID":"6aafddcd-6479-40b9-95a0-fa07713b5068","Type":"ContainerDied","Data":"3ffaa3ed36725b199e92bb3d54d9c0be5cd7f81dbe087d18027e9df214e4b48b"} Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.584166 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dfdp2" event={"ID":"6aafddcd-6479-40b9-95a0-fa07713b5068","Type":"ContainerDied","Data":"2e605f9d39725864f89910a07baaf388dd83bfa0450b93377d9c4aaba3f7ee97"} Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.586994 4793 scope.go:117] "RemoveContainer" containerID="ca32e3d30e0a18086c05d52846cde711d763d68f1f8d29a8f2bcce3677aeb8b9" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.593922 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8fhps"] Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.601104 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8fhps"] Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.604504 4793 scope.go:117] "RemoveContainer" containerID="2f696391f11f8b7ede78321554662b55a76ce7e9c71927d7a82c0ccaf4e32b76" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.605531 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-99gxb"] Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.610497 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-99gxb"] Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.615954 4793 scope.go:117] "RemoveContainer" containerID="312961e448761bf42929ae855312baf8554d75c0849c822720ba417a59b652fb" Feb 17 20:13:27 crc 
kubenswrapper[4793]: E0217 20:13:27.616416 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"312961e448761bf42929ae855312baf8554d75c0849c822720ba417a59b652fb\": container with ID starting with 312961e448761bf42929ae855312baf8554d75c0849c822720ba417a59b652fb not found: ID does not exist" containerID="312961e448761bf42929ae855312baf8554d75c0849c822720ba417a59b652fb" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.616472 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"312961e448761bf42929ae855312baf8554d75c0849c822720ba417a59b652fb"} err="failed to get container status \"312961e448761bf42929ae855312baf8554d75c0849c822720ba417a59b652fb\": rpc error: code = NotFound desc = could not find container \"312961e448761bf42929ae855312baf8554d75c0849c822720ba417a59b652fb\": container with ID starting with 312961e448761bf42929ae855312baf8554d75c0849c822720ba417a59b652fb not found: ID does not exist" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.616507 4793 scope.go:117] "RemoveContainer" containerID="ca32e3d30e0a18086c05d52846cde711d763d68f1f8d29a8f2bcce3677aeb8b9" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.617080 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019" (UID: "b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:13:27 crc kubenswrapper[4793]: E0217 20:13:27.617154 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca32e3d30e0a18086c05d52846cde711d763d68f1f8d29a8f2bcce3677aeb8b9\": container with ID starting with ca32e3d30e0a18086c05d52846cde711d763d68f1f8d29a8f2bcce3677aeb8b9 not found: ID does not exist" containerID="ca32e3d30e0a18086c05d52846cde711d763d68f1f8d29a8f2bcce3677aeb8b9" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.617191 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca32e3d30e0a18086c05d52846cde711d763d68f1f8d29a8f2bcce3677aeb8b9"} err="failed to get container status \"ca32e3d30e0a18086c05d52846cde711d763d68f1f8d29a8f2bcce3677aeb8b9\": rpc error: code = NotFound desc = could not find container \"ca32e3d30e0a18086c05d52846cde711d763d68f1f8d29a8f2bcce3677aeb8b9\": container with ID starting with ca32e3d30e0a18086c05d52846cde711d763d68f1f8d29a8f2bcce3677aeb8b9 not found: ID does not exist" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.617218 4793 scope.go:117] "RemoveContainer" containerID="2f696391f11f8b7ede78321554662b55a76ce7e9c71927d7a82c0ccaf4e32b76" Feb 17 20:13:27 crc kubenswrapper[4793]: E0217 20:13:27.617528 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f696391f11f8b7ede78321554662b55a76ce7e9c71927d7a82c0ccaf4e32b76\": container with ID starting with 2f696391f11f8b7ede78321554662b55a76ce7e9c71927d7a82c0ccaf4e32b76 not found: ID does not exist" containerID="2f696391f11f8b7ede78321554662b55a76ce7e9c71927d7a82c0ccaf4e32b76" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.617557 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f696391f11f8b7ede78321554662b55a76ce7e9c71927d7a82c0ccaf4e32b76"} 
err="failed to get container status \"2f696391f11f8b7ede78321554662b55a76ce7e9c71927d7a82c0ccaf4e32b76\": rpc error: code = NotFound desc = could not find container \"2f696391f11f8b7ede78321554662b55a76ce7e9c71927d7a82c0ccaf4e32b76\": container with ID starting with 2f696391f11f8b7ede78321554662b55a76ce7e9c71927d7a82c0ccaf4e32b76 not found: ID does not exist" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.617586 4793 scope.go:117] "RemoveContainer" containerID="a2fa9accca4cd0c455ad6dac3afe768ca61ca2dfd3f9b91aa601ebed84119280" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.634982 4793 scope.go:117] "RemoveContainer" containerID="a2fa9accca4cd0c455ad6dac3afe768ca61ca2dfd3f9b91aa601ebed84119280" Feb 17 20:13:27 crc kubenswrapper[4793]: E0217 20:13:27.635399 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2fa9accca4cd0c455ad6dac3afe768ca61ca2dfd3f9b91aa601ebed84119280\": container with ID starting with a2fa9accca4cd0c455ad6dac3afe768ca61ca2dfd3f9b91aa601ebed84119280 not found: ID does not exist" containerID="a2fa9accca4cd0c455ad6dac3afe768ca61ca2dfd3f9b91aa601ebed84119280" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.635485 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2fa9accca4cd0c455ad6dac3afe768ca61ca2dfd3f9b91aa601ebed84119280"} err="failed to get container status \"a2fa9accca4cd0c455ad6dac3afe768ca61ca2dfd3f9b91aa601ebed84119280\": rpc error: code = NotFound desc = could not find container \"a2fa9accca4cd0c455ad6dac3afe768ca61ca2dfd3f9b91aa601ebed84119280\": container with ID starting with a2fa9accca4cd0c455ad6dac3afe768ca61ca2dfd3f9b91aa601ebed84119280 not found: ID does not exist" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.635569 4793 scope.go:117] "RemoveContainer" containerID="4f5c578b0068b9613e0205b28693d728acb34ab224e175fb4d3511d506fb3d06" Feb 17 20:13:27 crc kubenswrapper[4793]: 
I0217 20:13:27.638056 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6aafddcd-6479-40b9-95a0-fa07713b5068-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6aafddcd-6479-40b9-95a0-fa07713b5068" (UID: "6aafddcd-6479-40b9-95a0-fa07713b5068"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.645604 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aafddcd-6479-40b9-95a0-fa07713b5068-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.645642 4793 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/83495f96-c3b0-4871-876c-07832519e1d8-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.645656 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aafddcd-6479-40b9-95a0-fa07713b5068-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.645676 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcv7z\" (UniqueName: \"kubernetes.io/projected/b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019-kube-api-access-dcv7z\") on node \"crc\" DevicePath \"\"" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.645716 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szq5j\" (UniqueName: \"kubernetes.io/projected/19a7ae66-8a05-4413-8085-8455f146e98c-kube-api-access-szq5j\") on node \"crc\" DevicePath \"\"" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.645733 4793 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/83495f96-c3b0-4871-876c-07832519e1d8-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.645749 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.645764 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkrq7\" (UniqueName: \"kubernetes.io/projected/83495f96-c3b0-4871-876c-07832519e1d8-kube-api-access-nkrq7\") on node \"crc\" DevicePath \"\"" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.645778 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.645792 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxf4r\" (UniqueName: \"kubernetes.io/projected/6aafddcd-6479-40b9-95a0-fa07713b5068-kube-api-access-gxf4r\") on node \"crc\" DevicePath \"\"" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.645806 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19a7ae66-8a05-4413-8085-8455f146e98c-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.648709 4793 scope.go:117] "RemoveContainer" containerID="0b4799b0db08fa6a42936320c9644050fec7f4afc829198648111bb9356b3ee8" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.666439 4793 scope.go:117] "RemoveContainer" containerID="40b8446060af37d4215966e0d25bf5fc894294e24cd7f12e15f00df8775d5802" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.679327 4793 scope.go:117] "RemoveContainer" 
containerID="4f5c578b0068b9613e0205b28693d728acb34ab224e175fb4d3511d506fb3d06" Feb 17 20:13:27 crc kubenswrapper[4793]: E0217 20:13:27.679942 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f5c578b0068b9613e0205b28693d728acb34ab224e175fb4d3511d506fb3d06\": container with ID starting with 4f5c578b0068b9613e0205b28693d728acb34ab224e175fb4d3511d506fb3d06 not found: ID does not exist" containerID="4f5c578b0068b9613e0205b28693d728acb34ab224e175fb4d3511d506fb3d06" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.679976 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f5c578b0068b9613e0205b28693d728acb34ab224e175fb4d3511d506fb3d06"} err="failed to get container status \"4f5c578b0068b9613e0205b28693d728acb34ab224e175fb4d3511d506fb3d06\": rpc error: code = NotFound desc = could not find container \"4f5c578b0068b9613e0205b28693d728acb34ab224e175fb4d3511d506fb3d06\": container with ID starting with 4f5c578b0068b9613e0205b28693d728acb34ab224e175fb4d3511d506fb3d06 not found: ID does not exist" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.680003 4793 scope.go:117] "RemoveContainer" containerID="0b4799b0db08fa6a42936320c9644050fec7f4afc829198648111bb9356b3ee8" Feb 17 20:13:27 crc kubenswrapper[4793]: E0217 20:13:27.680279 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b4799b0db08fa6a42936320c9644050fec7f4afc829198648111bb9356b3ee8\": container with ID starting with 0b4799b0db08fa6a42936320c9644050fec7f4afc829198648111bb9356b3ee8 not found: ID does not exist" containerID="0b4799b0db08fa6a42936320c9644050fec7f4afc829198648111bb9356b3ee8" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.680322 4793 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0b4799b0db08fa6a42936320c9644050fec7f4afc829198648111bb9356b3ee8"} err="failed to get container status \"0b4799b0db08fa6a42936320c9644050fec7f4afc829198648111bb9356b3ee8\": rpc error: code = NotFound desc = could not find container \"0b4799b0db08fa6a42936320c9644050fec7f4afc829198648111bb9356b3ee8\": container with ID starting with 0b4799b0db08fa6a42936320c9644050fec7f4afc829198648111bb9356b3ee8 not found: ID does not exist" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.680351 4793 scope.go:117] "RemoveContainer" containerID="40b8446060af37d4215966e0d25bf5fc894294e24cd7f12e15f00df8775d5802" Feb 17 20:13:27 crc kubenswrapper[4793]: E0217 20:13:27.680623 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40b8446060af37d4215966e0d25bf5fc894294e24cd7f12e15f00df8775d5802\": container with ID starting with 40b8446060af37d4215966e0d25bf5fc894294e24cd7f12e15f00df8775d5802 not found: ID does not exist" containerID="40b8446060af37d4215966e0d25bf5fc894294e24cd7f12e15f00df8775d5802" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.680653 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40b8446060af37d4215966e0d25bf5fc894294e24cd7f12e15f00df8775d5802"} err="failed to get container status \"40b8446060af37d4215966e0d25bf5fc894294e24cd7f12e15f00df8775d5802\": rpc error: code = NotFound desc = could not find container \"40b8446060af37d4215966e0d25bf5fc894294e24cd7f12e15f00df8775d5802\": container with ID starting with 40b8446060af37d4215966e0d25bf5fc894294e24cd7f12e15f00df8775d5802 not found: ID does not exist" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.680673 4793 scope.go:117] "RemoveContainer" containerID="c51acebcc4583310462d122fa5134a3d22e1cb9d5b17646ae26b20948a6e99cd" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.683134 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/marketplace-operator-79b997595-dd764"] Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.697913 4793 scope.go:117] "RemoveContainer" containerID="f5afc0c5854a5394e1d8ed6f038f9f009fc22e211c5e03eaa81aa1d29d203909" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.710443 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19a7ae66-8a05-4413-8085-8455f146e98c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "19a7ae66-8a05-4413-8085-8455f146e98c" (UID: "19a7ae66-8a05-4413-8085-8455f146e98c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.717099 4793 scope.go:117] "RemoveContainer" containerID="95624e64e3c2e2ecdeeca44476db690e798d981c6d13eb60f957553355c0b582" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.730495 4793 scope.go:117] "RemoveContainer" containerID="c51acebcc4583310462d122fa5134a3d22e1cb9d5b17646ae26b20948a6e99cd" Feb 17 20:13:27 crc kubenswrapper[4793]: E0217 20:13:27.730849 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c51acebcc4583310462d122fa5134a3d22e1cb9d5b17646ae26b20948a6e99cd\": container with ID starting with c51acebcc4583310462d122fa5134a3d22e1cb9d5b17646ae26b20948a6e99cd not found: ID does not exist" containerID="c51acebcc4583310462d122fa5134a3d22e1cb9d5b17646ae26b20948a6e99cd" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.730885 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c51acebcc4583310462d122fa5134a3d22e1cb9d5b17646ae26b20948a6e99cd"} err="failed to get container status \"c51acebcc4583310462d122fa5134a3d22e1cb9d5b17646ae26b20948a6e99cd\": rpc error: code = NotFound desc = could not find container \"c51acebcc4583310462d122fa5134a3d22e1cb9d5b17646ae26b20948a6e99cd\": container with ID 
starting with c51acebcc4583310462d122fa5134a3d22e1cb9d5b17646ae26b20948a6e99cd not found: ID does not exist" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.730906 4793 scope.go:117] "RemoveContainer" containerID="f5afc0c5854a5394e1d8ed6f038f9f009fc22e211c5e03eaa81aa1d29d203909" Feb 17 20:13:27 crc kubenswrapper[4793]: E0217 20:13:27.731135 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5afc0c5854a5394e1d8ed6f038f9f009fc22e211c5e03eaa81aa1d29d203909\": container with ID starting with f5afc0c5854a5394e1d8ed6f038f9f009fc22e211c5e03eaa81aa1d29d203909 not found: ID does not exist" containerID="f5afc0c5854a5394e1d8ed6f038f9f009fc22e211c5e03eaa81aa1d29d203909" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.731172 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5afc0c5854a5394e1d8ed6f038f9f009fc22e211c5e03eaa81aa1d29d203909"} err="failed to get container status \"f5afc0c5854a5394e1d8ed6f038f9f009fc22e211c5e03eaa81aa1d29d203909\": rpc error: code = NotFound desc = could not find container \"f5afc0c5854a5394e1d8ed6f038f9f009fc22e211c5e03eaa81aa1d29d203909\": container with ID starting with f5afc0c5854a5394e1d8ed6f038f9f009fc22e211c5e03eaa81aa1d29d203909 not found: ID does not exist" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.731200 4793 scope.go:117] "RemoveContainer" containerID="95624e64e3c2e2ecdeeca44476db690e798d981c6d13eb60f957553355c0b582" Feb 17 20:13:27 crc kubenswrapper[4793]: E0217 20:13:27.731413 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95624e64e3c2e2ecdeeca44476db690e798d981c6d13eb60f957553355c0b582\": container with ID starting with 95624e64e3c2e2ecdeeca44476db690e798d981c6d13eb60f957553355c0b582 not found: ID does not exist" containerID="95624e64e3c2e2ecdeeca44476db690e798d981c6d13eb60f957553355c0b582" Feb 17 
20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.731440 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95624e64e3c2e2ecdeeca44476db690e798d981c6d13eb60f957553355c0b582"} err="failed to get container status \"95624e64e3c2e2ecdeeca44476db690e798d981c6d13eb60f957553355c0b582\": rpc error: code = NotFound desc = could not find container \"95624e64e3c2e2ecdeeca44476db690e798d981c6d13eb60f957553355c0b582\": container with ID starting with 95624e64e3c2e2ecdeeca44476db690e798d981c6d13eb60f957553355c0b582 not found: ID does not exist" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.731455 4793 scope.go:117] "RemoveContainer" containerID="3ffaa3ed36725b199e92bb3d54d9c0be5cd7f81dbe087d18027e9df214e4b48b" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.746509 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19a7ae66-8a05-4413-8085-8455f146e98c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.752560 4793 scope.go:117] "RemoveContainer" containerID="a9f32bdf2ded29735bfc40310d86c966b163e0681a2eb5a3929245886abba61a" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.765666 4793 scope.go:117] "RemoveContainer" containerID="487da2a6f5d123218e4d22f7dbe00eb79bf2002ef418b909216a21a28250344b" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.778722 4793 scope.go:117] "RemoveContainer" containerID="3ffaa3ed36725b199e92bb3d54d9c0be5cd7f81dbe087d18027e9df214e4b48b" Feb 17 20:13:27 crc kubenswrapper[4793]: E0217 20:13:27.779222 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ffaa3ed36725b199e92bb3d54d9c0be5cd7f81dbe087d18027e9df214e4b48b\": container with ID starting with 3ffaa3ed36725b199e92bb3d54d9c0be5cd7f81dbe087d18027e9df214e4b48b not found: ID does not exist" 
containerID="3ffaa3ed36725b199e92bb3d54d9c0be5cd7f81dbe087d18027e9df214e4b48b" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.779247 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ffaa3ed36725b199e92bb3d54d9c0be5cd7f81dbe087d18027e9df214e4b48b"} err="failed to get container status \"3ffaa3ed36725b199e92bb3d54d9c0be5cd7f81dbe087d18027e9df214e4b48b\": rpc error: code = NotFound desc = could not find container \"3ffaa3ed36725b199e92bb3d54d9c0be5cd7f81dbe087d18027e9df214e4b48b\": container with ID starting with 3ffaa3ed36725b199e92bb3d54d9c0be5cd7f81dbe087d18027e9df214e4b48b not found: ID does not exist" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.779267 4793 scope.go:117] "RemoveContainer" containerID="a9f32bdf2ded29735bfc40310d86c966b163e0681a2eb5a3929245886abba61a" Feb 17 20:13:27 crc kubenswrapper[4793]: E0217 20:13:27.779528 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9f32bdf2ded29735bfc40310d86c966b163e0681a2eb5a3929245886abba61a\": container with ID starting with a9f32bdf2ded29735bfc40310d86c966b163e0681a2eb5a3929245886abba61a not found: ID does not exist" containerID="a9f32bdf2ded29735bfc40310d86c966b163e0681a2eb5a3929245886abba61a" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.779590 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9f32bdf2ded29735bfc40310d86c966b163e0681a2eb5a3929245886abba61a"} err="failed to get container status \"a9f32bdf2ded29735bfc40310d86c966b163e0681a2eb5a3929245886abba61a\": rpc error: code = NotFound desc = could not find container \"a9f32bdf2ded29735bfc40310d86c966b163e0681a2eb5a3929245886abba61a\": container with ID starting with a9f32bdf2ded29735bfc40310d86c966b163e0681a2eb5a3929245886abba61a not found: ID does not exist" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.779624 4793 scope.go:117] 
"RemoveContainer" containerID="487da2a6f5d123218e4d22f7dbe00eb79bf2002ef418b909216a21a28250344b" Feb 17 20:13:27 crc kubenswrapper[4793]: E0217 20:13:27.779986 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"487da2a6f5d123218e4d22f7dbe00eb79bf2002ef418b909216a21a28250344b\": container with ID starting with 487da2a6f5d123218e4d22f7dbe00eb79bf2002ef418b909216a21a28250344b not found: ID does not exist" containerID="487da2a6f5d123218e4d22f7dbe00eb79bf2002ef418b909216a21a28250344b" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.780008 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"487da2a6f5d123218e4d22f7dbe00eb79bf2002ef418b909216a21a28250344b"} err="failed to get container status \"487da2a6f5d123218e4d22f7dbe00eb79bf2002ef418b909216a21a28250344b\": rpc error: code = NotFound desc = could not find container \"487da2a6f5d123218e4d22f7dbe00eb79bf2002ef418b909216a21a28250344b\": container with ID starting with 487da2a6f5d123218e4d22f7dbe00eb79bf2002ef418b909216a21a28250344b not found: ID does not exist" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.868879 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.908913 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4x565"] Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.913523 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4x565"] Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.925360 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nwsr4"] Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.934778 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/certified-operators-nwsr4"] Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.941140 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dfdp2"] Feb 17 20:13:27 crc kubenswrapper[4793]: I0217 20:13:27.946832 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dfdp2"] Feb 17 20:13:28 crc kubenswrapper[4793]: I0217 20:13:28.058894 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 17 20:13:28 crc kubenswrapper[4793]: I0217 20:13:28.153359 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 17 20:13:28 crc kubenswrapper[4793]: I0217 20:13:28.175973 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 17 20:13:28 crc kubenswrapper[4793]: I0217 20:13:28.253712 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 17 20:13:28 crc kubenswrapper[4793]: I0217 20:13:28.592246 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dd764" event={"ID":"9e76cf5c-f36b-4e98-8e69-f40f066ba874","Type":"ContainerStarted","Data":"bff3016a8acba896884714c4b0f468d95a180fe3a55c73a6c8384c79f9a13aa3"} Feb 17 20:13:28 crc kubenswrapper[4793]: I0217 20:13:28.592313 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dd764" event={"ID":"9e76cf5c-f36b-4e98-8e69-f40f066ba874","Type":"ContainerStarted","Data":"7c8aeebe8ad53db54e594fa5e99066695250aa836504884e41bbeadcc5b5a8a3"} Feb 17 20:13:28 crc kubenswrapper[4793]: I0217 20:13:28.592344 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-dd764" Feb 17 
20:13:28 crc kubenswrapper[4793]: I0217 20:13:28.596389 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-dd764" Feb 17 20:13:28 crc kubenswrapper[4793]: I0217 20:13:28.612324 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-dd764" podStartSLOduration=4.612300985 podStartE2EDuration="4.612300985s" podCreationTimestamp="2026-02-17 20:13:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:13:28.606980296 +0000 UTC m=+283.898678677" watchObservedRunningTime="2026-02-17 20:13:28.612300985 +0000 UTC m=+283.903999296" Feb 17 20:13:28 crc kubenswrapper[4793]: I0217 20:13:28.829958 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 17 20:13:28 crc kubenswrapper[4793]: I0217 20:13:28.868142 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 17 20:13:28 crc kubenswrapper[4793]: I0217 20:13:28.947566 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 17 20:13:29 crc kubenswrapper[4793]: I0217 20:13:29.545334 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19a7ae66-8a05-4413-8085-8455f146e98c" path="/var/lib/kubelet/pods/19a7ae66-8a05-4413-8085-8455f146e98c/volumes" Feb 17 20:13:29 crc kubenswrapper[4793]: I0217 20:13:29.547115 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6aafddcd-6479-40b9-95a0-fa07713b5068" path="/var/lib/kubelet/pods/6aafddcd-6479-40b9-95a0-fa07713b5068/volumes" Feb 17 20:13:29 crc kubenswrapper[4793]: I0217 20:13:29.548423 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b3a754f-730e-4e58-a6d0-fac36125b7f2" 
path="/var/lib/kubelet/pods/7b3a754f-730e-4e58-a6d0-fac36125b7f2/volumes" Feb 17 20:13:29 crc kubenswrapper[4793]: I0217 20:13:29.550590 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83495f96-c3b0-4871-876c-07832519e1d8" path="/var/lib/kubelet/pods/83495f96-c3b0-4871-876c-07832519e1d8/volumes" Feb 17 20:13:29 crc kubenswrapper[4793]: I0217 20:13:29.551543 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019" path="/var/lib/kubelet/pods/b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019/volumes" Feb 17 20:13:29 crc kubenswrapper[4793]: I0217 20:13:29.792519 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 17 20:13:30 crc kubenswrapper[4793]: I0217 20:13:30.674757 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 17 20:13:36 crc kubenswrapper[4793]: I0217 20:13:36.193330 4793 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 17 20:13:36 crc kubenswrapper[4793]: I0217 20:13:36.194018 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://843a9a01792b19efc7c590bf1a2f301b1b51cec483dea81ebbe7d8b903a9ad35" gracePeriod=5 Feb 17 20:13:41 crc kubenswrapper[4793]: I0217 20:13:41.671196 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 17 20:13:41 crc kubenswrapper[4793]: I0217 20:13:41.671760 4793 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" 
containerID="843a9a01792b19efc7c590bf1a2f301b1b51cec483dea81ebbe7d8b903a9ad35" exitCode=137 Feb 17 20:13:41 crc kubenswrapper[4793]: I0217 20:13:41.783492 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 17 20:13:41 crc kubenswrapper[4793]: I0217 20:13:41.783590 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 20:13:41 crc kubenswrapper[4793]: I0217 20:13:41.813838 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 17 20:13:41 crc kubenswrapper[4793]: I0217 20:13:41.813904 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 17 20:13:41 crc kubenswrapper[4793]: I0217 20:13:41.813951 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 17 20:13:41 crc kubenswrapper[4793]: I0217 20:13:41.813997 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 17 20:13:41 crc kubenswrapper[4793]: I0217 20:13:41.813989 4793 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 20:13:41 crc kubenswrapper[4793]: I0217 20:13:41.813989 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 20:13:41 crc kubenswrapper[4793]: I0217 20:13:41.814045 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 20:13:41 crc kubenswrapper[4793]: I0217 20:13:41.814082 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 17 20:13:41 crc kubenswrapper[4793]: I0217 20:13:41.814138 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 20:13:41 crc kubenswrapper[4793]: I0217 20:13:41.814469 4793 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 17 20:13:41 crc kubenswrapper[4793]: I0217 20:13:41.814489 4793 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 17 20:13:41 crc kubenswrapper[4793]: I0217 20:13:41.814507 4793 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 17 20:13:41 crc kubenswrapper[4793]: I0217 20:13:41.814523 4793 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 17 20:13:41 crc kubenswrapper[4793]: I0217 20:13:41.821790 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 20:13:41 crc kubenswrapper[4793]: I0217 20:13:41.915259 4793 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 17 20:13:42 crc kubenswrapper[4793]: I0217 20:13:42.677751 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 17 20:13:42 crc kubenswrapper[4793]: I0217 20:13:42.678037 4793 scope.go:117] "RemoveContainer" containerID="843a9a01792b19efc7c590bf1a2f301b1b51cec483dea81ebbe7d8b903a9ad35"
Feb 17 20:13:42 crc kubenswrapper[4793]: I0217 20:13:42.678163 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 17 20:13:43 crc kubenswrapper[4793]: I0217 20:13:43.549762 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Feb 17 20:13:45 crc kubenswrapper[4793]: I0217 20:13:45.286303 4793 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Feb 17 20:14:04 crc kubenswrapper[4793]: I0217 20:14:04.903957 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-thhdr"]
Feb 17 20:14:04 crc kubenswrapper[4793]: E0217 20:14:04.904773 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b3a754f-730e-4e58-a6d0-fac36125b7f2" containerName="extract-utilities"
Feb 17 20:14:04 crc kubenswrapper[4793]: I0217 20:14:04.904790 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b3a754f-730e-4e58-a6d0-fac36125b7f2" containerName="extract-utilities"
Feb 17 20:14:04 crc kubenswrapper[4793]: E0217 20:14:04.904803 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019" containerName="registry-server"
Feb 17 20:14:04 crc kubenswrapper[4793]: I0217 20:14:04.904811 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019" containerName="registry-server"
Feb 17 20:14:04 crc kubenswrapper[4793]: E0217 20:14:04.904823 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aafddcd-6479-40b9-95a0-fa07713b5068" containerName="registry-server"
Feb 17 20:14:04 crc kubenswrapper[4793]: I0217 20:14:04.904831 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aafddcd-6479-40b9-95a0-fa07713b5068" containerName="registry-server"
Feb 17 20:14:04 crc kubenswrapper[4793]: E0217 20:14:04.904842 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aafddcd-6479-40b9-95a0-fa07713b5068" containerName="extract-content"
Feb 17 20:14:04 crc kubenswrapper[4793]: I0217 20:14:04.904850 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aafddcd-6479-40b9-95a0-fa07713b5068" containerName="extract-content"
Feb 17 20:14:04 crc kubenswrapper[4793]: E0217 20:14:04.904860 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019" containerName="extract-content"
Feb 17 20:14:04 crc kubenswrapper[4793]: I0217 20:14:04.904867 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019" containerName="extract-content"
Feb 17 20:14:04 crc kubenswrapper[4793]: E0217 20:14:04.904881 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019" containerName="extract-utilities"
Feb 17 20:14:04 crc kubenswrapper[4793]: I0217 20:14:04.904891 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019" containerName="extract-utilities"
Feb 17 20:14:04 crc kubenswrapper[4793]: E0217 20:14:04.904908 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83495f96-c3b0-4871-876c-07832519e1d8" containerName="marketplace-operator"
Feb 17 20:14:04 crc kubenswrapper[4793]: I0217 20:14:04.904918 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="83495f96-c3b0-4871-876c-07832519e1d8" containerName="marketplace-operator"
Feb 17 20:14:04 crc kubenswrapper[4793]: E0217 20:14:04.904930 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aafddcd-6479-40b9-95a0-fa07713b5068" containerName="extract-utilities"
Feb 17 20:14:04 crc kubenswrapper[4793]: I0217 20:14:04.904939 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aafddcd-6479-40b9-95a0-fa07713b5068" containerName="extract-utilities"
Feb 17 20:14:04 crc kubenswrapper[4793]: E0217 20:14:04.904949 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b3a754f-730e-4e58-a6d0-fac36125b7f2" containerName="registry-server"
Feb 17 20:14:04 crc kubenswrapper[4793]: I0217 20:14:04.904957 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b3a754f-730e-4e58-a6d0-fac36125b7f2" containerName="registry-server"
Feb 17 20:14:04 crc kubenswrapper[4793]: E0217 20:14:04.904968 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 17 20:14:04 crc kubenswrapper[4793]: I0217 20:14:04.904975 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 17 20:14:04 crc kubenswrapper[4793]: E0217 20:14:04.904989 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19a7ae66-8a05-4413-8085-8455f146e98c" containerName="extract-utilities"
Feb 17 20:14:04 crc kubenswrapper[4793]: I0217 20:14:04.904998 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="19a7ae66-8a05-4413-8085-8455f146e98c" containerName="extract-utilities"
Feb 17 20:14:04 crc kubenswrapper[4793]: E0217 20:14:04.905009 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19a7ae66-8a05-4413-8085-8455f146e98c" containerName="registry-server"
Feb 17 20:14:04 crc kubenswrapper[4793]: I0217 20:14:04.905017 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="19a7ae66-8a05-4413-8085-8455f146e98c" containerName="registry-server"
Feb 17 20:14:04 crc kubenswrapper[4793]: E0217 20:14:04.905033 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19a7ae66-8a05-4413-8085-8455f146e98c" containerName="extract-content"
Feb 17 20:14:04 crc kubenswrapper[4793]: I0217 20:14:04.905043 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="19a7ae66-8a05-4413-8085-8455f146e98c" containerName="extract-content"
Feb 17 20:14:04 crc kubenswrapper[4793]: E0217 20:14:04.905054 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b3a754f-730e-4e58-a6d0-fac36125b7f2" containerName="extract-content"
Feb 17 20:14:04 crc kubenswrapper[4793]: I0217 20:14:04.905062 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b3a754f-730e-4e58-a6d0-fac36125b7f2" containerName="extract-content"
Feb 17 20:14:04 crc kubenswrapper[4793]: I0217 20:14:04.905165 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="83495f96-c3b0-4871-876c-07832519e1d8" containerName="marketplace-operator"
Feb 17 20:14:04 crc kubenswrapper[4793]: I0217 20:14:04.905176 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b3a754f-730e-4e58-a6d0-fac36125b7f2" containerName="registry-server"
Feb 17 20:14:04 crc kubenswrapper[4793]: I0217 20:14:04.905189 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aafddcd-6479-40b9-95a0-fa07713b5068" containerName="registry-server"
Feb 17 20:14:04 crc kubenswrapper[4793]: I0217 20:14:04.905199 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2bb9d71-ebdf-4df2-a5c9-6e4389cd7019" containerName="registry-server"
Feb 17 20:14:04 crc kubenswrapper[4793]: I0217 20:14:04.905211 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="19a7ae66-8a05-4413-8085-8455f146e98c" containerName="registry-server"
Feb 17 20:14:04 crc kubenswrapper[4793]: I0217 20:14:04.905224 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 17 20:14:04 crc kubenswrapper[4793]: I0217 20:14:04.906116 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-thhdr"
Feb 17 20:14:04 crc kubenswrapper[4793]: I0217 20:14:04.908263 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 17 20:14:04 crc kubenswrapper[4793]: I0217 20:14:04.911372 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-thhdr"]
Feb 17 20:14:04 crc kubenswrapper[4793]: I0217 20:14:04.996588 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-898lk\" (UniqueName: \"kubernetes.io/projected/f5f7099c-f255-4171-b71b-9ad5864a4230-kube-api-access-898lk\") pod \"redhat-marketplace-thhdr\" (UID: \"f5f7099c-f255-4171-b71b-9ad5864a4230\") " pod="openshift-marketplace/redhat-marketplace-thhdr"
Feb 17 20:14:04 crc kubenswrapper[4793]: I0217 20:14:04.996701 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5f7099c-f255-4171-b71b-9ad5864a4230-catalog-content\") pod \"redhat-marketplace-thhdr\" (UID: \"f5f7099c-f255-4171-b71b-9ad5864a4230\") " pod="openshift-marketplace/redhat-marketplace-thhdr"
Feb 17 20:14:04 crc kubenswrapper[4793]: I0217 20:14:04.996735 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5f7099c-f255-4171-b71b-9ad5864a4230-utilities\") pod \"redhat-marketplace-thhdr\" (UID: \"f5f7099c-f255-4171-b71b-9ad5864a4230\") " pod="openshift-marketplace/redhat-marketplace-thhdr"
Feb 17 20:14:05 crc kubenswrapper[4793]: I0217 20:14:05.098304 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-898lk\" (UniqueName: \"kubernetes.io/projected/f5f7099c-f255-4171-b71b-9ad5864a4230-kube-api-access-898lk\") pod \"redhat-marketplace-thhdr\" (UID: \"f5f7099c-f255-4171-b71b-9ad5864a4230\") " pod="openshift-marketplace/redhat-marketplace-thhdr"
Feb 17 20:14:05 crc kubenswrapper[4793]: I0217 20:14:05.098543 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-flnm7"]
Feb 17 20:14:05 crc kubenswrapper[4793]: I0217 20:14:05.098664 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5f7099c-f255-4171-b71b-9ad5864a4230-catalog-content\") pod \"redhat-marketplace-thhdr\" (UID: \"f5f7099c-f255-4171-b71b-9ad5864a4230\") " pod="openshift-marketplace/redhat-marketplace-thhdr"
Feb 17 20:14:05 crc kubenswrapper[4793]: I0217 20:14:05.098785 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5f7099c-f255-4171-b71b-9ad5864a4230-utilities\") pod \"redhat-marketplace-thhdr\" (UID: \"f5f7099c-f255-4171-b71b-9ad5864a4230\") " pod="openshift-marketplace/redhat-marketplace-thhdr"
Feb 17 20:14:05 crc kubenswrapper[4793]: I0217 20:14:05.099110 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5f7099c-f255-4171-b71b-9ad5864a4230-catalog-content\") pod \"redhat-marketplace-thhdr\" (UID: \"f5f7099c-f255-4171-b71b-9ad5864a4230\") " pod="openshift-marketplace/redhat-marketplace-thhdr"
Feb 17 20:14:05 crc kubenswrapper[4793]: I0217 20:14:05.099190 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5f7099c-f255-4171-b71b-9ad5864a4230-utilities\") pod \"redhat-marketplace-thhdr\" (UID: \"f5f7099c-f255-4171-b71b-9ad5864a4230\") " pod="openshift-marketplace/redhat-marketplace-thhdr"
Feb 17 20:14:05 crc kubenswrapper[4793]: I0217 20:14:05.099422 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-flnm7"
Feb 17 20:14:05 crc kubenswrapper[4793]: I0217 20:14:05.101571 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 17 20:14:05 crc kubenswrapper[4793]: I0217 20:14:05.120460 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-flnm7"]
Feb 17 20:14:05 crc kubenswrapper[4793]: I0217 20:14:05.139169 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-898lk\" (UniqueName: \"kubernetes.io/projected/f5f7099c-f255-4171-b71b-9ad5864a4230-kube-api-access-898lk\") pod \"redhat-marketplace-thhdr\" (UID: \"f5f7099c-f255-4171-b71b-9ad5864a4230\") " pod="openshift-marketplace/redhat-marketplace-thhdr"
Feb 17 20:14:05 crc kubenswrapper[4793]: I0217 20:14:05.200283 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c15a296a-5abb-4392-b18b-b63e0a67a8c7-utilities\") pod \"redhat-operators-flnm7\" (UID: \"c15a296a-5abb-4392-b18b-b63e0a67a8c7\") " pod="openshift-marketplace/redhat-operators-flnm7"
Feb 17 20:14:05 crc kubenswrapper[4793]: I0217 20:14:05.200392 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c15a296a-5abb-4392-b18b-b63e0a67a8c7-catalog-content\") pod \"redhat-operators-flnm7\" (UID: \"c15a296a-5abb-4392-b18b-b63e0a67a8c7\") " pod="openshift-marketplace/redhat-operators-flnm7"
Feb 17 20:14:05 crc kubenswrapper[4793]: I0217 20:14:05.200439 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlnz5\" (UniqueName: \"kubernetes.io/projected/c15a296a-5abb-4392-b18b-b63e0a67a8c7-kube-api-access-nlnz5\") pod \"redhat-operators-flnm7\" (UID: \"c15a296a-5abb-4392-b18b-b63e0a67a8c7\") " pod="openshift-marketplace/redhat-operators-flnm7"
Feb 17 20:14:05 crc kubenswrapper[4793]: I0217 20:14:05.223112 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-thhdr"
Feb 17 20:14:05 crc kubenswrapper[4793]: I0217 20:14:05.301168 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c15a296a-5abb-4392-b18b-b63e0a67a8c7-utilities\") pod \"redhat-operators-flnm7\" (UID: \"c15a296a-5abb-4392-b18b-b63e0a67a8c7\") " pod="openshift-marketplace/redhat-operators-flnm7"
Feb 17 20:14:05 crc kubenswrapper[4793]: I0217 20:14:05.301205 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c15a296a-5abb-4392-b18b-b63e0a67a8c7-catalog-content\") pod \"redhat-operators-flnm7\" (UID: \"c15a296a-5abb-4392-b18b-b63e0a67a8c7\") " pod="openshift-marketplace/redhat-operators-flnm7"
Feb 17 20:14:05 crc kubenswrapper[4793]: I0217 20:14:05.301241 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlnz5\" (UniqueName: \"kubernetes.io/projected/c15a296a-5abb-4392-b18b-b63e0a67a8c7-kube-api-access-nlnz5\") pod \"redhat-operators-flnm7\" (UID: \"c15a296a-5abb-4392-b18b-b63e0a67a8c7\") " pod="openshift-marketplace/redhat-operators-flnm7"
Feb 17 20:14:05 crc kubenswrapper[4793]: I0217 20:14:05.302451 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c15a296a-5abb-4392-b18b-b63e0a67a8c7-utilities\") pod \"redhat-operators-flnm7\" (UID: \"c15a296a-5abb-4392-b18b-b63e0a67a8c7\") " pod="openshift-marketplace/redhat-operators-flnm7"
Feb 17 20:14:05 crc kubenswrapper[4793]: I0217 20:14:05.302550 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c15a296a-5abb-4392-b18b-b63e0a67a8c7-catalog-content\") pod \"redhat-operators-flnm7\" (UID: \"c15a296a-5abb-4392-b18b-b63e0a67a8c7\") " pod="openshift-marketplace/redhat-operators-flnm7"
Feb 17 20:14:05 crc kubenswrapper[4793]: I0217 20:14:05.334717 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlnz5\" (UniqueName: \"kubernetes.io/projected/c15a296a-5abb-4392-b18b-b63e0a67a8c7-kube-api-access-nlnz5\") pod \"redhat-operators-flnm7\" (UID: \"c15a296a-5abb-4392-b18b-b63e0a67a8c7\") " pod="openshift-marketplace/redhat-operators-flnm7"
Feb 17 20:14:05 crc kubenswrapper[4793]: I0217 20:14:05.427934 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-flnm7"
Feb 17 20:14:05 crc kubenswrapper[4793]: I0217 20:14:05.629907 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-thhdr"]
Feb 17 20:14:05 crc kubenswrapper[4793]: I0217 20:14:05.803523 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-thhdr" event={"ID":"f5f7099c-f255-4171-b71b-9ad5864a4230","Type":"ContainerStarted","Data":"d3e5985ec4ee8082cc5892ae92f5034a33ff23dde4e8219c2e92d14aacc13f4f"}
Feb 17 20:14:05 crc kubenswrapper[4793]: I0217 20:14:05.803847 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-thhdr" event={"ID":"f5f7099c-f255-4171-b71b-9ad5864a4230","Type":"ContainerStarted","Data":"5d3b9f2da97ea731f703822deb0a7d934ddb6d193f8b5362d52ef96539eec10b"}
Feb 17 20:14:05 crc kubenswrapper[4793]: I0217 20:14:05.820537 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-flnm7"]
Feb 17 20:14:05 crc kubenswrapper[4793]: W0217 20:14:05.858665 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc15a296a_5abb_4392_b18b_b63e0a67a8c7.slice/crio-1a8c214704b9adeb57c876c0751ea30a186b10402b74e4d4030bf62fc9ddebc9 WatchSource:0}: Error finding container 1a8c214704b9adeb57c876c0751ea30a186b10402b74e4d4030bf62fc9ddebc9: Status 404 returned error can't find the container with id 1a8c214704b9adeb57c876c0751ea30a186b10402b74e4d4030bf62fc9ddebc9
Feb 17 20:14:06 crc kubenswrapper[4793]: I0217 20:14:06.814118 4793 generic.go:334] "Generic (PLEG): container finished" podID="f5f7099c-f255-4171-b71b-9ad5864a4230" containerID="d3e5985ec4ee8082cc5892ae92f5034a33ff23dde4e8219c2e92d14aacc13f4f" exitCode=0
Feb 17 20:14:06 crc kubenswrapper[4793]: I0217 20:14:06.814179 4793 generic.go:334] "Generic (PLEG): container finished" podID="f5f7099c-f255-4171-b71b-9ad5864a4230" containerID="dbf0b551e4f7298c058c9fa65fa7ed3117b6165c2c45b95b4ddd10b3c6c598ca" exitCode=0
Feb 17 20:14:06 crc kubenswrapper[4793]: I0217 20:14:06.815661 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-thhdr" event={"ID":"f5f7099c-f255-4171-b71b-9ad5864a4230","Type":"ContainerDied","Data":"d3e5985ec4ee8082cc5892ae92f5034a33ff23dde4e8219c2e92d14aacc13f4f"}
Feb 17 20:14:06 crc kubenswrapper[4793]: I0217 20:14:06.815749 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-thhdr" event={"ID":"f5f7099c-f255-4171-b71b-9ad5864a4230","Type":"ContainerDied","Data":"dbf0b551e4f7298c058c9fa65fa7ed3117b6165c2c45b95b4ddd10b3c6c598ca"}
Feb 17 20:14:06 crc kubenswrapper[4793]: I0217 20:14:06.819091 4793 generic.go:334] "Generic (PLEG): container finished" podID="c15a296a-5abb-4392-b18b-b63e0a67a8c7" containerID="cea87eb1bc24391973ed44990838864f40e0c9689fbd6b4df7f7db8168ddfce2" exitCode=0
Feb 17 20:14:06 crc kubenswrapper[4793]: I0217 20:14:06.819166 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-flnm7" event={"ID":"c15a296a-5abb-4392-b18b-b63e0a67a8c7","Type":"ContainerDied","Data":"cea87eb1bc24391973ed44990838864f40e0c9689fbd6b4df7f7db8168ddfce2"}
Feb 17 20:14:06 crc kubenswrapper[4793]: I0217 20:14:06.819213 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-flnm7" event={"ID":"c15a296a-5abb-4392-b18b-b63e0a67a8c7","Type":"ContainerStarted","Data":"1a8c214704b9adeb57c876c0751ea30a186b10402b74e4d4030bf62fc9ddebc9"}
Feb 17 20:14:07 crc kubenswrapper[4793]: I0217 20:14:07.306664 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8qhhj"]
Feb 17 20:14:07 crc kubenswrapper[4793]: I0217 20:14:07.308091 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8qhhj"
Feb 17 20:14:07 crc kubenswrapper[4793]: I0217 20:14:07.316556 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 17 20:14:07 crc kubenswrapper[4793]: I0217 20:14:07.330479 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26n4c\" (UniqueName: \"kubernetes.io/projected/d4103f9c-b66f-4042-aa7f-044dc702eb2b-kube-api-access-26n4c\") pod \"certified-operators-8qhhj\" (UID: \"d4103f9c-b66f-4042-aa7f-044dc702eb2b\") " pod="openshift-marketplace/certified-operators-8qhhj"
Feb 17 20:14:07 crc kubenswrapper[4793]: I0217 20:14:07.330576 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4103f9c-b66f-4042-aa7f-044dc702eb2b-catalog-content\") pod \"certified-operators-8qhhj\" (UID: \"d4103f9c-b66f-4042-aa7f-044dc702eb2b\") " pod="openshift-marketplace/certified-operators-8qhhj"
Feb 17 20:14:07 crc kubenswrapper[4793]: I0217 20:14:07.330633 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4103f9c-b66f-4042-aa7f-044dc702eb2b-utilities\") pod \"certified-operators-8qhhj\" (UID: \"d4103f9c-b66f-4042-aa7f-044dc702eb2b\") " pod="openshift-marketplace/certified-operators-8qhhj"
Feb 17 20:14:07 crc kubenswrapper[4793]: I0217 20:14:07.333388 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8qhhj"]
Feb 17 20:14:07 crc kubenswrapper[4793]: I0217 20:14:07.431422 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4103f9c-b66f-4042-aa7f-044dc702eb2b-catalog-content\") pod \"certified-operators-8qhhj\" (UID: \"d4103f9c-b66f-4042-aa7f-044dc702eb2b\") " pod="openshift-marketplace/certified-operators-8qhhj"
Feb 17 20:14:07 crc kubenswrapper[4793]: I0217 20:14:07.431471 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4103f9c-b66f-4042-aa7f-044dc702eb2b-utilities\") pod \"certified-operators-8qhhj\" (UID: \"d4103f9c-b66f-4042-aa7f-044dc702eb2b\") " pod="openshift-marketplace/certified-operators-8qhhj"
Feb 17 20:14:07 crc kubenswrapper[4793]: I0217 20:14:07.431572 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26n4c\" (UniqueName: \"kubernetes.io/projected/d4103f9c-b66f-4042-aa7f-044dc702eb2b-kube-api-access-26n4c\") pod \"certified-operators-8qhhj\" (UID: \"d4103f9c-b66f-4042-aa7f-044dc702eb2b\") " pod="openshift-marketplace/certified-operators-8qhhj"
Feb 17 20:14:07 crc kubenswrapper[4793]: I0217 20:14:07.431961 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4103f9c-b66f-4042-aa7f-044dc702eb2b-catalog-content\") pod \"certified-operators-8qhhj\" (UID: \"d4103f9c-b66f-4042-aa7f-044dc702eb2b\") " pod="openshift-marketplace/certified-operators-8qhhj"
Feb 17 20:14:07 crc kubenswrapper[4793]: I0217 20:14:07.435342 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4103f9c-b66f-4042-aa7f-044dc702eb2b-utilities\") pod \"certified-operators-8qhhj\" (UID: \"d4103f9c-b66f-4042-aa7f-044dc702eb2b\") " pod="openshift-marketplace/certified-operators-8qhhj"
Feb 17 20:14:07 crc kubenswrapper[4793]: I0217 20:14:07.453548 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26n4c\" (UniqueName: \"kubernetes.io/projected/d4103f9c-b66f-4042-aa7f-044dc702eb2b-kube-api-access-26n4c\") pod \"certified-operators-8qhhj\" (UID: \"d4103f9c-b66f-4042-aa7f-044dc702eb2b\") " pod="openshift-marketplace/certified-operators-8qhhj"
Feb 17 20:14:07 crc kubenswrapper[4793]: I0217 20:14:07.499665 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zcw6z"]
Feb 17 20:14:07 crc kubenswrapper[4793]: I0217 20:14:07.500561 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zcw6z"
Feb 17 20:14:07 crc kubenswrapper[4793]: I0217 20:14:07.502273 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 17 20:14:07 crc kubenswrapper[4793]: I0217 20:14:07.517650 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zcw6z"]
Feb 17 20:14:07 crc kubenswrapper[4793]: I0217 20:14:07.532123 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2px5\" (UniqueName: \"kubernetes.io/projected/47b71db7-a3b0-4b59-97ed-5d99367b7241-kube-api-access-r2px5\") pod \"community-operators-zcw6z\" (UID: \"47b71db7-a3b0-4b59-97ed-5d99367b7241\") " pod="openshift-marketplace/community-operators-zcw6z"
Feb 17 20:14:07 crc kubenswrapper[4793]: I0217 20:14:07.532163 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47b71db7-a3b0-4b59-97ed-5d99367b7241-utilities\") pod \"community-operators-zcw6z\" (UID: \"47b71db7-a3b0-4b59-97ed-5d99367b7241\") " pod="openshift-marketplace/community-operators-zcw6z"
Feb 17 20:14:07 crc kubenswrapper[4793]: I0217 20:14:07.532249 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47b71db7-a3b0-4b59-97ed-5d99367b7241-catalog-content\") pod \"community-operators-zcw6z\" (UID: \"47b71db7-a3b0-4b59-97ed-5d99367b7241\") " pod="openshift-marketplace/community-operators-zcw6z"
Feb 17 20:14:07 crc kubenswrapper[4793]: I0217 20:14:07.633514 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47b71db7-a3b0-4b59-97ed-5d99367b7241-catalog-content\") pod \"community-operators-zcw6z\" (UID: \"47b71db7-a3b0-4b59-97ed-5d99367b7241\") " pod="openshift-marketplace/community-operators-zcw6z"
Feb 17 20:14:07 crc kubenswrapper[4793]: I0217 20:14:07.633595 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2px5\" (UniqueName: \"kubernetes.io/projected/47b71db7-a3b0-4b59-97ed-5d99367b7241-kube-api-access-r2px5\") pod \"community-operators-zcw6z\" (UID: \"47b71db7-a3b0-4b59-97ed-5d99367b7241\") " pod="openshift-marketplace/community-operators-zcw6z"
Feb 17 20:14:07 crc kubenswrapper[4793]: I0217 20:14:07.633624 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47b71db7-a3b0-4b59-97ed-5d99367b7241-utilities\") pod \"community-operators-zcw6z\" (UID: \"47b71db7-a3b0-4b59-97ed-5d99367b7241\") " pod="openshift-marketplace/community-operators-zcw6z"
Feb 17 20:14:07 crc kubenswrapper[4793]: I0217 20:14:07.634467 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47b71db7-a3b0-4b59-97ed-5d99367b7241-catalog-content\") pod \"community-operators-zcw6z\" (UID: \"47b71db7-a3b0-4b59-97ed-5d99367b7241\") " pod="openshift-marketplace/community-operators-zcw6z"
Feb 17 20:14:07 crc kubenswrapper[4793]: I0217 20:14:07.634951 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47b71db7-a3b0-4b59-97ed-5d99367b7241-utilities\") pod \"community-operators-zcw6z\" (UID: \"47b71db7-a3b0-4b59-97ed-5d99367b7241\") " pod="openshift-marketplace/community-operators-zcw6z"
Feb 17 20:14:07 crc kubenswrapper[4793]: I0217 20:14:07.651638 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8qhhj"
Feb 17 20:14:07 crc kubenswrapper[4793]: I0217 20:14:07.652774 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2px5\" (UniqueName: \"kubernetes.io/projected/47b71db7-a3b0-4b59-97ed-5d99367b7241-kube-api-access-r2px5\") pod \"community-operators-zcw6z\" (UID: \"47b71db7-a3b0-4b59-97ed-5d99367b7241\") " pod="openshift-marketplace/community-operators-zcw6z"
Feb 17 20:14:07 crc kubenswrapper[4793]: I0217 20:14:07.820720 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zcw6z"
Feb 17 20:14:07 crc kubenswrapper[4793]: I0217 20:14:07.825461 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-thhdr" event={"ID":"f5f7099c-f255-4171-b71b-9ad5864a4230","Type":"ContainerStarted","Data":"a5cc8e18e2af2f7810991bb72782d6044a85af5eddc2f3a3e25e48b1a1cbc673"}
Feb 17 20:14:07 crc kubenswrapper[4793]: I0217 20:14:07.827377 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-flnm7" event={"ID":"c15a296a-5abb-4392-b18b-b63e0a67a8c7","Type":"ContainerStarted","Data":"df2da086bec8e46a8bef5397c2ddf517d78ec08174917108daa11831b3256efd"}
Feb 17 20:14:07 crc kubenswrapper[4793]: I0217 20:14:07.853222 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-thhdr" podStartSLOduration=2.440952771 podStartE2EDuration="3.853207113s" podCreationTimestamp="2026-02-17 20:14:04 +0000 UTC" firstStartedPulling="2026-02-17 20:14:05.805328993 +0000 UTC m=+321.097027334" lastFinishedPulling="2026-02-17 20:14:07.217583375 +0000 UTC m=+322.509281676" observedRunningTime="2026-02-17 20:14:07.850182914 +0000 UTC m=+323.141881235" watchObservedRunningTime="2026-02-17 20:14:07.853207113 +0000 UTC m=+323.144905444"
Feb 17 20:14:08 crc kubenswrapper[4793]: I0217 20:14:08.057812 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zcw6z"]
Feb 17 20:14:08 crc kubenswrapper[4793]: W0217 20:14:08.063739 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47b71db7_a3b0_4b59_97ed_5d99367b7241.slice/crio-1cd4074d26b97805ca7b77c96731eb7ba6a563fb10d53a69add65502260f6c70 WatchSource:0}: Error finding container 1cd4074d26b97805ca7b77c96731eb7ba6a563fb10d53a69add65502260f6c70: Status 404 returned error can't find the container with id 1cd4074d26b97805ca7b77c96731eb7ba6a563fb10d53a69add65502260f6c70
Feb 17 20:14:08 crc kubenswrapper[4793]: I0217 20:14:08.095044 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8qhhj"]
Feb 17 20:14:08 crc kubenswrapper[4793]: I0217 20:14:08.833317 4793 generic.go:334] "Generic (PLEG): container finished" podID="47b71db7-a3b0-4b59-97ed-5d99367b7241" containerID="4778adaf06642cc3e6365c531210c15ede4d6be8aa029fc3f36c75967d85c96f" exitCode=0
Feb 17 20:14:08 crc kubenswrapper[4793]: I0217 20:14:08.833384 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zcw6z" event={"ID":"47b71db7-a3b0-4b59-97ed-5d99367b7241","Type":"ContainerDied","Data":"4778adaf06642cc3e6365c531210c15ede4d6be8aa029fc3f36c75967d85c96f"}
Feb 17 20:14:08 crc kubenswrapper[4793]: I0217 20:14:08.833411 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zcw6z" event={"ID":"47b71db7-a3b0-4b59-97ed-5d99367b7241","Type":"ContainerStarted","Data":"1cd4074d26b97805ca7b77c96731eb7ba6a563fb10d53a69add65502260f6c70"}
Feb 17 20:14:08 crc kubenswrapper[4793]: I0217 20:14:08.835124 4793 generic.go:334] "Generic (PLEG): container finished" podID="c15a296a-5abb-4392-b18b-b63e0a67a8c7" containerID="df2da086bec8e46a8bef5397c2ddf517d78ec08174917108daa11831b3256efd" exitCode=0
Feb 17 20:14:08 crc kubenswrapper[4793]: I0217 20:14:08.835220 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-flnm7" event={"ID":"c15a296a-5abb-4392-b18b-b63e0a67a8c7","Type":"ContainerDied","Data":"df2da086bec8e46a8bef5397c2ddf517d78ec08174917108daa11831b3256efd"}
Feb 17 20:14:08 crc kubenswrapper[4793]: I0217 20:14:08.837819 4793 generic.go:334] "Generic (PLEG): container finished" podID="d4103f9c-b66f-4042-aa7f-044dc702eb2b" containerID="51a11e5442f80b0187ec85136fd48d1c9aec77a5c99ff285f112b324b01395d3" exitCode=0
Feb 17 20:14:08 crc kubenswrapper[4793]: I0217 20:14:08.837889 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8qhhj" event={"ID":"d4103f9c-b66f-4042-aa7f-044dc702eb2b","Type":"ContainerDied","Data":"51a11e5442f80b0187ec85136fd48d1c9aec77a5c99ff285f112b324b01395d3"}
Feb 17 20:14:08 crc kubenswrapper[4793]: I0217 20:14:08.837919 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8qhhj" event={"ID":"d4103f9c-b66f-4042-aa7f-044dc702eb2b","Type":"ContainerStarted","Data":"bad4971bcef7a907156fb6d9018cb7e2721ef04f8250e806473c3e942d54e285"}
Feb 17 20:14:09 crc kubenswrapper[4793]: I0217 20:14:09.845729 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zcw6z" event={"ID":"47b71db7-a3b0-4b59-97ed-5d99367b7241","Type":"ContainerStarted","Data":"6476ac171f8f21ad85b8f1d54bf90ff3afc5e561e66bdda0415b89444a1ad8fa"}
Feb 17 20:14:09 crc kubenswrapper[4793]: I0217 20:14:09.848958 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-flnm7" event={"ID":"c15a296a-5abb-4392-b18b-b63e0a67a8c7","Type":"ContainerStarted","Data":"1a837f2a25fa6d2f6bcb4f241c05182f7bdb3ca94949b47dbf57fb324428a9ba"}
Feb 17 20:14:09 crc kubenswrapper[4793]: I0217 20:14:09.850328 4793 generic.go:334] "Generic (PLEG): container finished" podID="d4103f9c-b66f-4042-aa7f-044dc702eb2b" containerID="74bb6f313828153e3ceb5269d92fa0bb3a998e9d53ad787a3a1d82c2a0da62f9" exitCode=0
Feb 17 20:14:09 crc kubenswrapper[4793]: I0217 20:14:09.850360 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8qhhj" event={"ID":"d4103f9c-b66f-4042-aa7f-044dc702eb2b","Type":"ContainerDied","Data":"74bb6f313828153e3ceb5269d92fa0bb3a998e9d53ad787a3a1d82c2a0da62f9"}
Feb 17 20:14:09 crc kubenswrapper[4793]: I0217 20:14:09.904062 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-flnm7" podStartSLOduration=2.477557439 podStartE2EDuration="4.90404284s" podCreationTimestamp="2026-02-17 20:14:05 +0000 UTC" firstStartedPulling="2026-02-17 20:14:06.824629297 +0000 UTC m=+322.116327648" lastFinishedPulling="2026-02-17 20:14:09.251114748 +0000 UTC m=+324.542813049" observedRunningTime="2026-02-17 20:14:09.901922174 +0000 UTC m=+325.193620495" watchObservedRunningTime="2026-02-17 20:14:09.90404284 +0000 UTC m=+325.195741171"
Feb 17 20:14:10 crc kubenswrapper[4793]: I0217 20:14:10.858168 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8qhhj" event={"ID":"d4103f9c-b66f-4042-aa7f-044dc702eb2b","Type":"ContainerStarted","Data":"622345fa7bba9e6c190ee06caa30280ba24ec9a6757a1dd69612037c4907df75"}
Feb 17 20:14:10 crc kubenswrapper[4793]: I0217 20:14:10.860063 4793 generic.go:334] "Generic (PLEG): container finished" podID="47b71db7-a3b0-4b59-97ed-5d99367b7241" containerID="6476ac171f8f21ad85b8f1d54bf90ff3afc5e561e66bdda0415b89444a1ad8fa" exitCode=0
Feb 17 20:14:10 crc kubenswrapper[4793]: I0217 20:14:10.860121 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zcw6z" event={"ID":"47b71db7-a3b0-4b59-97ed-5d99367b7241","Type":"ContainerDied","Data":"6476ac171f8f21ad85b8f1d54bf90ff3afc5e561e66bdda0415b89444a1ad8fa"}
Feb 17 20:14:10 crc kubenswrapper[4793]: I0217 20:14:10.880788 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8qhhj" podStartSLOduration=2.464630914 podStartE2EDuration="3.880771518s" podCreationTimestamp="2026-02-17 20:14:07 +0000 UTC" firstStartedPulling="2026-02-17 20:14:08.839642584 +0000 UTC m=+324.131340895" lastFinishedPulling="2026-02-17 20:14:10.255783168 +0000 UTC m=+325.547481499" observedRunningTime="2026-02-17 20:14:10.879122184 +0000 UTC m=+326.170820495" watchObservedRunningTime="2026-02-17 20:14:10.880771518 +0000 UTC m=+326.172469829"
Feb 17 20:14:11 crc kubenswrapper[4793]: I0217 20:14:11.868783 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zcw6z" event={"ID":"47b71db7-a3b0-4b59-97ed-5d99367b7241","Type":"ContainerStarted","Data":"32d0e4dbc389d9b0b522ba7f8a1fe068cf54a94cf952ee18ac6a6006e1ec7d87"}
Feb 17 20:14:11 crc kubenswrapper[4793]: I0217 20:14:11.892604 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zcw6z" podStartSLOduration=2.503438566 podStartE2EDuration="4.892585902s" podCreationTimestamp="2026-02-17 20:14:07 +0000 UTC" firstStartedPulling="2026-02-17 20:14:08.834741487 +0000 UTC m=+324.126439798" lastFinishedPulling="2026-02-17 20:14:11.223888823 +0000 UTC m=+326.515587134" observedRunningTime="2026-02-17 20:14:11.891664418 +0000 UTC m=+327.183362739" watchObservedRunningTime="2026-02-17 20:14:11.892585902 +0000 UTC m=+327.184284213"
Feb 17 20:14:15 crc kubenswrapper[4793]: I0217 20:14:15.224292 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-thhdr"
Feb 17 20:14:15 crc kubenswrapper[4793]: I0217 20:14:15.224736 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-thhdr"
Feb 17 20:14:15 crc kubenswrapper[4793]: I0217 20:14:15.283345 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-thhdr"
Feb 17 20:14:15 crc kubenswrapper[4793]: I0217 20:14:15.428870 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-flnm7"
Feb 17 20:14:15 crc kubenswrapper[4793]: I0217 20:14:15.428956 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-flnm7"
Feb 17 20:14:15 crc kubenswrapper[4793]: I0217 20:14:15.942232 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-thhdr"
Feb 17 20:14:16 crc kubenswrapper[4793]: I0217 20:14:16.460807 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-flnm7" podUID="c15a296a-5abb-4392-b18b-b63e0a67a8c7" containerName="registry-server" probeResult="failure" output=<
Feb 17 20:14:16 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s
Feb 17 20:14:16 crc kubenswrapper[4793]: >
Feb 17 20:14:17 crc kubenswrapper[4793]: I0217 20:14:17.652893 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8qhhj"
Feb 17 20:14:17 crc kubenswrapper[4793]: I0217 20:14:17.653308 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8qhhj"
Feb 17 20:14:17 crc kubenswrapper[4793]: I0217 20:14:17.701783 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8qhhj"
Feb 17
crc kubenswrapper[4793]: I0217 20:14:17.821539 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zcw6z" Feb 17 20:14:17 crc kubenswrapper[4793]: I0217 20:14:17.822180 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zcw6z" Feb 17 20:14:17 crc kubenswrapper[4793]: I0217 20:14:17.862577 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zcw6z" Feb 17 20:14:17 crc kubenswrapper[4793]: I0217 20:14:17.949333 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zcw6z" Feb 17 20:14:17 crc kubenswrapper[4793]: I0217 20:14:17.963924 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8qhhj" Feb 17 20:14:20 crc kubenswrapper[4793]: I0217 20:14:20.102136 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 20:14:20 crc kubenswrapper[4793]: I0217 20:14:20.102212 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 20:14:25 crc kubenswrapper[4793]: I0217 20:14:25.471120 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-flnm7" Feb 17 20:14:25 crc kubenswrapper[4793]: I0217 20:14:25.510980 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/redhat-operators-flnm7" Feb 17 20:14:38 crc kubenswrapper[4793]: I0217 20:14:38.756355 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-qdr7v"] Feb 17 20:14:38 crc kubenswrapper[4793]: I0217 20:14:38.757965 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-qdr7v" Feb 17 20:14:38 crc kubenswrapper[4793]: I0217 20:14:38.822102 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-qdr7v"] Feb 17 20:14:38 crc kubenswrapper[4793]: I0217 20:14:38.866619 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/557c134a-166a-4cdc-9ae1-dffdeaac4640-installation-pull-secrets\") pod \"image-registry-66df7c8f76-qdr7v\" (UID: \"557c134a-166a-4cdc-9ae1-dffdeaac4640\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdr7v" Feb 17 20:14:38 crc kubenswrapper[4793]: I0217 20:14:38.866675 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/557c134a-166a-4cdc-9ae1-dffdeaac4640-trusted-ca\") pod \"image-registry-66df7c8f76-qdr7v\" (UID: \"557c134a-166a-4cdc-9ae1-dffdeaac4640\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdr7v" Feb 17 20:14:38 crc kubenswrapper[4793]: I0217 20:14:38.866722 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/557c134a-166a-4cdc-9ae1-dffdeaac4640-registry-tls\") pod \"image-registry-66df7c8f76-qdr7v\" (UID: \"557c134a-166a-4cdc-9ae1-dffdeaac4640\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdr7v" Feb 17 20:14:38 crc kubenswrapper[4793]: I0217 20:14:38.866889 4793 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/557c134a-166a-4cdc-9ae1-dffdeaac4640-ca-trust-extracted\") pod \"image-registry-66df7c8f76-qdr7v\" (UID: \"557c134a-166a-4cdc-9ae1-dffdeaac4640\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdr7v" Feb 17 20:14:38 crc kubenswrapper[4793]: I0217 20:14:38.866943 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwn7d\" (UniqueName: \"kubernetes.io/projected/557c134a-166a-4cdc-9ae1-dffdeaac4640-kube-api-access-bwn7d\") pod \"image-registry-66df7c8f76-qdr7v\" (UID: \"557c134a-166a-4cdc-9ae1-dffdeaac4640\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdr7v" Feb 17 20:14:38 crc kubenswrapper[4793]: I0217 20:14:38.867006 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/557c134a-166a-4cdc-9ae1-dffdeaac4640-bound-sa-token\") pod \"image-registry-66df7c8f76-qdr7v\" (UID: \"557c134a-166a-4cdc-9ae1-dffdeaac4640\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdr7v" Feb 17 20:14:38 crc kubenswrapper[4793]: I0217 20:14:38.867032 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/557c134a-166a-4cdc-9ae1-dffdeaac4640-registry-certificates\") pod \"image-registry-66df7c8f76-qdr7v\" (UID: \"557c134a-166a-4cdc-9ae1-dffdeaac4640\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdr7v" Feb 17 20:14:38 crc kubenswrapper[4793]: I0217 20:14:38.867093 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-66df7c8f76-qdr7v\" (UID: \"557c134a-166a-4cdc-9ae1-dffdeaac4640\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdr7v" Feb 17 20:14:38 crc kubenswrapper[4793]: I0217 20:14:38.905935 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-qdr7v\" (UID: \"557c134a-166a-4cdc-9ae1-dffdeaac4640\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdr7v" Feb 17 20:14:38 crc kubenswrapper[4793]: I0217 20:14:38.968499 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/557c134a-166a-4cdc-9ae1-dffdeaac4640-installation-pull-secrets\") pod \"image-registry-66df7c8f76-qdr7v\" (UID: \"557c134a-166a-4cdc-9ae1-dffdeaac4640\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdr7v" Feb 17 20:14:38 crc kubenswrapper[4793]: I0217 20:14:38.968569 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/557c134a-166a-4cdc-9ae1-dffdeaac4640-trusted-ca\") pod \"image-registry-66df7c8f76-qdr7v\" (UID: \"557c134a-166a-4cdc-9ae1-dffdeaac4640\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdr7v" Feb 17 20:14:38 crc kubenswrapper[4793]: I0217 20:14:38.968599 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/557c134a-166a-4cdc-9ae1-dffdeaac4640-registry-tls\") pod \"image-registry-66df7c8f76-qdr7v\" (UID: \"557c134a-166a-4cdc-9ae1-dffdeaac4640\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdr7v" Feb 17 20:14:38 crc kubenswrapper[4793]: I0217 20:14:38.968635 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" 
(UniqueName: \"kubernetes.io/empty-dir/557c134a-166a-4cdc-9ae1-dffdeaac4640-ca-trust-extracted\") pod \"image-registry-66df7c8f76-qdr7v\" (UID: \"557c134a-166a-4cdc-9ae1-dffdeaac4640\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdr7v" Feb 17 20:14:38 crc kubenswrapper[4793]: I0217 20:14:38.968653 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwn7d\" (UniqueName: \"kubernetes.io/projected/557c134a-166a-4cdc-9ae1-dffdeaac4640-kube-api-access-bwn7d\") pod \"image-registry-66df7c8f76-qdr7v\" (UID: \"557c134a-166a-4cdc-9ae1-dffdeaac4640\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdr7v" Feb 17 20:14:38 crc kubenswrapper[4793]: I0217 20:14:38.968677 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/557c134a-166a-4cdc-9ae1-dffdeaac4640-bound-sa-token\") pod \"image-registry-66df7c8f76-qdr7v\" (UID: \"557c134a-166a-4cdc-9ae1-dffdeaac4640\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdr7v" Feb 17 20:14:38 crc kubenswrapper[4793]: I0217 20:14:38.968755 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/557c134a-166a-4cdc-9ae1-dffdeaac4640-registry-certificates\") pod \"image-registry-66df7c8f76-qdr7v\" (UID: \"557c134a-166a-4cdc-9ae1-dffdeaac4640\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdr7v" Feb 17 20:14:38 crc kubenswrapper[4793]: I0217 20:14:38.970211 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/557c134a-166a-4cdc-9ae1-dffdeaac4640-registry-certificates\") pod \"image-registry-66df7c8f76-qdr7v\" (UID: \"557c134a-166a-4cdc-9ae1-dffdeaac4640\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdr7v" Feb 17 20:14:38 crc kubenswrapper[4793]: I0217 20:14:38.971229 4793 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/557c134a-166a-4cdc-9ae1-dffdeaac4640-ca-trust-extracted\") pod \"image-registry-66df7c8f76-qdr7v\" (UID: \"557c134a-166a-4cdc-9ae1-dffdeaac4640\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdr7v" Feb 17 20:14:38 crc kubenswrapper[4793]: I0217 20:14:38.972164 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/557c134a-166a-4cdc-9ae1-dffdeaac4640-trusted-ca\") pod \"image-registry-66df7c8f76-qdr7v\" (UID: \"557c134a-166a-4cdc-9ae1-dffdeaac4640\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdr7v" Feb 17 20:14:38 crc kubenswrapper[4793]: I0217 20:14:38.975337 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/557c134a-166a-4cdc-9ae1-dffdeaac4640-installation-pull-secrets\") pod \"image-registry-66df7c8f76-qdr7v\" (UID: \"557c134a-166a-4cdc-9ae1-dffdeaac4640\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdr7v" Feb 17 20:14:38 crc kubenswrapper[4793]: I0217 20:14:38.984969 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/557c134a-166a-4cdc-9ae1-dffdeaac4640-registry-tls\") pod \"image-registry-66df7c8f76-qdr7v\" (UID: \"557c134a-166a-4cdc-9ae1-dffdeaac4640\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdr7v" Feb 17 20:14:38 crc kubenswrapper[4793]: I0217 20:14:38.989832 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwn7d\" (UniqueName: \"kubernetes.io/projected/557c134a-166a-4cdc-9ae1-dffdeaac4640-kube-api-access-bwn7d\") pod \"image-registry-66df7c8f76-qdr7v\" (UID: \"557c134a-166a-4cdc-9ae1-dffdeaac4640\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdr7v" Feb 17 20:14:38 crc 
kubenswrapper[4793]: I0217 20:14:38.992864 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/557c134a-166a-4cdc-9ae1-dffdeaac4640-bound-sa-token\") pod \"image-registry-66df7c8f76-qdr7v\" (UID: \"557c134a-166a-4cdc-9ae1-dffdeaac4640\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdr7v" Feb 17 20:14:39 crc kubenswrapper[4793]: I0217 20:14:39.076198 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-qdr7v" Feb 17 20:14:39 crc kubenswrapper[4793]: I0217 20:14:39.271680 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-qdr7v"] Feb 17 20:14:40 crc kubenswrapper[4793]: I0217 20:14:40.039243 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-qdr7v" event={"ID":"557c134a-166a-4cdc-9ae1-dffdeaac4640","Type":"ContainerStarted","Data":"c6a7932ee5b410e96c635f67c03f4627d4f5a5f825ba6edd2dc0d49bc6dba470"} Feb 17 20:14:40 crc kubenswrapper[4793]: I0217 20:14:40.039583 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-qdr7v" event={"ID":"557c134a-166a-4cdc-9ae1-dffdeaac4640","Type":"ContainerStarted","Data":"27070ade1ca9914ad7b2c4aa1dcc9c46e175d4e1af6066e14ae66205c2c8b001"} Feb 17 20:14:40 crc kubenswrapper[4793]: I0217 20:14:40.039600 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-qdr7v" Feb 17 20:14:40 crc kubenswrapper[4793]: I0217 20:14:40.064783 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-qdr7v" podStartSLOduration=2.064754744 podStartE2EDuration="2.064754744s" podCreationTimestamp="2026-02-17 20:14:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:14:40.060943037 +0000 UTC m=+355.352641358" watchObservedRunningTime="2026-02-17 20:14:40.064754744 +0000 UTC m=+355.356453095" Feb 17 20:14:50 crc kubenswrapper[4793]: I0217 20:14:50.102109 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 20:14:50 crc kubenswrapper[4793]: I0217 20:14:50.102859 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 20:14:59 crc kubenswrapper[4793]: I0217 20:14:59.082475 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-qdr7v" Feb 17 20:14:59 crc kubenswrapper[4793]: I0217 20:14:59.136825 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8l7nb"] Feb 17 20:15:00 crc kubenswrapper[4793]: I0217 20:15:00.166106 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522655-vshxq"] Feb 17 20:15:00 crc kubenswrapper[4793]: I0217 20:15:00.167635 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522655-vshxq" Feb 17 20:15:00 crc kubenswrapper[4793]: I0217 20:15:00.170054 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522655-vshxq"] Feb 17 20:15:00 crc kubenswrapper[4793]: I0217 20:15:00.174104 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 20:15:00 crc kubenswrapper[4793]: I0217 20:15:00.174295 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 20:15:00 crc kubenswrapper[4793]: I0217 20:15:00.289999 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg4nb\" (UniqueName: \"kubernetes.io/projected/78ff4728-b246-4b6c-ae17-a509146ee214-kube-api-access-wg4nb\") pod \"collect-profiles-29522655-vshxq\" (UID: \"78ff4728-b246-4b6c-ae17-a509146ee214\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522655-vshxq" Feb 17 20:15:00 crc kubenswrapper[4793]: I0217 20:15:00.290083 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/78ff4728-b246-4b6c-ae17-a509146ee214-secret-volume\") pod \"collect-profiles-29522655-vshxq\" (UID: \"78ff4728-b246-4b6c-ae17-a509146ee214\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522655-vshxq" Feb 17 20:15:00 crc kubenswrapper[4793]: I0217 20:15:00.290121 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/78ff4728-b246-4b6c-ae17-a509146ee214-config-volume\") pod \"collect-profiles-29522655-vshxq\" (UID: \"78ff4728-b246-4b6c-ae17-a509146ee214\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29522655-vshxq" Feb 17 20:15:00 crc kubenswrapper[4793]: I0217 20:15:00.391181 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/78ff4728-b246-4b6c-ae17-a509146ee214-secret-volume\") pod \"collect-profiles-29522655-vshxq\" (UID: \"78ff4728-b246-4b6c-ae17-a509146ee214\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522655-vshxq" Feb 17 20:15:00 crc kubenswrapper[4793]: I0217 20:15:00.392500 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/78ff4728-b246-4b6c-ae17-a509146ee214-config-volume\") pod \"collect-profiles-29522655-vshxq\" (UID: \"78ff4728-b246-4b6c-ae17-a509146ee214\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522655-vshxq" Feb 17 20:15:00 crc kubenswrapper[4793]: I0217 20:15:00.392575 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wg4nb\" (UniqueName: \"kubernetes.io/projected/78ff4728-b246-4b6c-ae17-a509146ee214-kube-api-access-wg4nb\") pod \"collect-profiles-29522655-vshxq\" (UID: \"78ff4728-b246-4b6c-ae17-a509146ee214\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522655-vshxq" Feb 17 20:15:00 crc kubenswrapper[4793]: I0217 20:15:00.393588 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/78ff4728-b246-4b6c-ae17-a509146ee214-config-volume\") pod \"collect-profiles-29522655-vshxq\" (UID: \"78ff4728-b246-4b6c-ae17-a509146ee214\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522655-vshxq" Feb 17 20:15:00 crc kubenswrapper[4793]: I0217 20:15:00.397384 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/78ff4728-b246-4b6c-ae17-a509146ee214-secret-volume\") pod \"collect-profiles-29522655-vshxq\" (UID: \"78ff4728-b246-4b6c-ae17-a509146ee214\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522655-vshxq" Feb 17 20:15:00 crc kubenswrapper[4793]: I0217 20:15:00.407524 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg4nb\" (UniqueName: \"kubernetes.io/projected/78ff4728-b246-4b6c-ae17-a509146ee214-kube-api-access-wg4nb\") pod \"collect-profiles-29522655-vshxq\" (UID: \"78ff4728-b246-4b6c-ae17-a509146ee214\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522655-vshxq" Feb 17 20:15:00 crc kubenswrapper[4793]: I0217 20:15:00.490768 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522655-vshxq" Feb 17 20:15:00 crc kubenswrapper[4793]: I0217 20:15:00.899185 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522655-vshxq"] Feb 17 20:15:00 crc kubenswrapper[4793]: W0217 20:15:00.906856 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78ff4728_b246_4b6c_ae17_a509146ee214.slice/crio-d69f408de1cf85131f514b3b6c970dd4395f69cb3b4a098b93887976c38a3d2f WatchSource:0}: Error finding container d69f408de1cf85131f514b3b6c970dd4395f69cb3b4a098b93887976c38a3d2f: Status 404 returned error can't find the container with id d69f408de1cf85131f514b3b6c970dd4395f69cb3b4a098b93887976c38a3d2f Feb 17 20:15:01 crc kubenswrapper[4793]: I0217 20:15:01.195171 4793 generic.go:334] "Generic (PLEG): container finished" podID="78ff4728-b246-4b6c-ae17-a509146ee214" containerID="c274383356b493a5c808da1818e35b1a1248aeb58c36df86b11924e2d70cdfd7" exitCode=0 Feb 17 20:15:01 crc kubenswrapper[4793]: I0217 20:15:01.195243 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29522655-vshxq" event={"ID":"78ff4728-b246-4b6c-ae17-a509146ee214","Type":"ContainerDied","Data":"c274383356b493a5c808da1818e35b1a1248aeb58c36df86b11924e2d70cdfd7"} Feb 17 20:15:01 crc kubenswrapper[4793]: I0217 20:15:01.195302 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522655-vshxq" event={"ID":"78ff4728-b246-4b6c-ae17-a509146ee214","Type":"ContainerStarted","Data":"d69f408de1cf85131f514b3b6c970dd4395f69cb3b4a098b93887976c38a3d2f"} Feb 17 20:15:02 crc kubenswrapper[4793]: I0217 20:15:02.397010 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522655-vshxq" Feb 17 20:15:02 crc kubenswrapper[4793]: I0217 20:15:02.522954 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/78ff4728-b246-4b6c-ae17-a509146ee214-config-volume\") pod \"78ff4728-b246-4b6c-ae17-a509146ee214\" (UID: \"78ff4728-b246-4b6c-ae17-a509146ee214\") " Feb 17 20:15:02 crc kubenswrapper[4793]: I0217 20:15:02.523128 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wg4nb\" (UniqueName: \"kubernetes.io/projected/78ff4728-b246-4b6c-ae17-a509146ee214-kube-api-access-wg4nb\") pod \"78ff4728-b246-4b6c-ae17-a509146ee214\" (UID: \"78ff4728-b246-4b6c-ae17-a509146ee214\") " Feb 17 20:15:02 crc kubenswrapper[4793]: I0217 20:15:02.523155 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/78ff4728-b246-4b6c-ae17-a509146ee214-secret-volume\") pod \"78ff4728-b246-4b6c-ae17-a509146ee214\" (UID: \"78ff4728-b246-4b6c-ae17-a509146ee214\") " Feb 17 20:15:02 crc kubenswrapper[4793]: I0217 20:15:02.523560 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/78ff4728-b246-4b6c-ae17-a509146ee214-config-volume" (OuterVolumeSpecName: "config-volume") pod "78ff4728-b246-4b6c-ae17-a509146ee214" (UID: "78ff4728-b246-4b6c-ae17-a509146ee214"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:15:02 crc kubenswrapper[4793]: I0217 20:15:02.528358 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78ff4728-b246-4b6c-ae17-a509146ee214-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "78ff4728-b246-4b6c-ae17-a509146ee214" (UID: "78ff4728-b246-4b6c-ae17-a509146ee214"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:15:02 crc kubenswrapper[4793]: I0217 20:15:02.528536 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78ff4728-b246-4b6c-ae17-a509146ee214-kube-api-access-wg4nb" (OuterVolumeSpecName: "kube-api-access-wg4nb") pod "78ff4728-b246-4b6c-ae17-a509146ee214" (UID: "78ff4728-b246-4b6c-ae17-a509146ee214"). InnerVolumeSpecName "kube-api-access-wg4nb". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 20:15:02 crc kubenswrapper[4793]: I0217 20:15:02.626074 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wg4nb\" (UniqueName: \"kubernetes.io/projected/78ff4728-b246-4b6c-ae17-a509146ee214-kube-api-access-wg4nb\") on node \"crc\" DevicePath \"\""
Feb 17 20:15:02 crc kubenswrapper[4793]: I0217 20:15:02.626112 4793 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/78ff4728-b246-4b6c-ae17-a509146ee214-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 17 20:15:02 crc kubenswrapper[4793]: I0217 20:15:02.626121 4793 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/78ff4728-b246-4b6c-ae17-a509146ee214-config-volume\") on node \"crc\" DevicePath \"\""
Feb 17 20:15:03 crc kubenswrapper[4793]: I0217 20:15:03.207010 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522655-vshxq"
Feb 17 20:15:03 crc kubenswrapper[4793]: I0217 20:15:03.206998 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522655-vshxq" event={"ID":"78ff4728-b246-4b6c-ae17-a509146ee214","Type":"ContainerDied","Data":"d69f408de1cf85131f514b3b6c970dd4395f69cb3b4a098b93887976c38a3d2f"}
Feb 17 20:15:03 crc kubenswrapper[4793]: I0217 20:15:03.207306 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d69f408de1cf85131f514b3b6c970dd4395f69cb3b4a098b93887976c38a3d2f"
Feb 17 20:15:20 crc kubenswrapper[4793]: I0217 20:15:20.102679 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 20:15:20 crc kubenswrapper[4793]: I0217 20:15:20.103322 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 20:15:20 crc kubenswrapper[4793]: I0217 20:15:20.103397 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf"
Feb 17 20:15:20 crc kubenswrapper[4793]: I0217 20:15:20.104474 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3a474aac3a35cef45f7adaa960db8f39172d3e9c658e5e9a60808d964d835508"} pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 17 20:15:20 crc kubenswrapper[4793]: I0217 20:15:20.104597 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" containerID="cri-o://3a474aac3a35cef45f7adaa960db8f39172d3e9c658e5e9a60808d964d835508" gracePeriod=600
Feb 17 20:15:20 crc kubenswrapper[4793]: I0217 20:15:20.351269 4793 generic.go:334] "Generic (PLEG): container finished" podID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerID="3a474aac3a35cef45f7adaa960db8f39172d3e9c658e5e9a60808d964d835508" exitCode=0
Feb 17 20:15:20 crc kubenswrapper[4793]: I0217 20:15:20.351371 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerDied","Data":"3a474aac3a35cef45f7adaa960db8f39172d3e9c658e5e9a60808d964d835508"}
Feb 17 20:15:20 crc kubenswrapper[4793]: I0217 20:15:20.351793 4793 scope.go:117] "RemoveContainer" containerID="c883db680c5e015621e98e5b3c06351d9dd6d33ea28b502241344af8df0fd1e7"
Feb 17 20:15:21 crc kubenswrapper[4793]: I0217 20:15:21.360439 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerStarted","Data":"b2f41059818507a8af04391a12fd3b72c25fff60681b12b730e9f673d56662ed"}
Feb 17 20:15:24 crc kubenswrapper[4793]: I0217 20:15:24.207572 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb" podUID="bdfa45f5-3f15-4f20-823d-17b08bd674d7" containerName="registry" containerID="cri-o://56d1ae8a8a71ba2f8bbc211db7f4b84847bebd19b8f1abc293de91298c8f2a00" gracePeriod=30
Feb 17 20:15:24 crc kubenswrapper[4793]: I0217 20:15:24.381023 4793 generic.go:334] "Generic (PLEG): container finished" podID="bdfa45f5-3f15-4f20-823d-17b08bd674d7" containerID="56d1ae8a8a71ba2f8bbc211db7f4b84847bebd19b8f1abc293de91298c8f2a00" exitCode=0
Feb 17 20:15:24 crc kubenswrapper[4793]: I0217 20:15:24.381088 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb" event={"ID":"bdfa45f5-3f15-4f20-823d-17b08bd674d7","Type":"ContainerDied","Data":"56d1ae8a8a71ba2f8bbc211db7f4b84847bebd19b8f1abc293de91298c8f2a00"}
Feb 17 20:15:24 crc kubenswrapper[4793]: I0217 20:15:24.637441 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb"
Feb 17 20:15:24 crc kubenswrapper[4793]: I0217 20:15:24.741571 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bdfa45f5-3f15-4f20-823d-17b08bd674d7-bound-sa-token\") pod \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") "
Feb 17 20:15:24 crc kubenswrapper[4793]: I0217 20:15:24.741734 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bdfa45f5-3f15-4f20-823d-17b08bd674d7-registry-tls\") pod \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") "
Feb 17 20:15:24 crc kubenswrapper[4793]: I0217 20:15:24.742033 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") "
Feb 17 20:15:24 crc kubenswrapper[4793]: I0217 20:15:24.742124 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bdfa45f5-3f15-4f20-823d-17b08bd674d7-ca-trust-extracted\") pod \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") "
Feb 17 20:15:24 crc kubenswrapper[4793]: I0217 20:15:24.742205 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dt5x8\" (UniqueName: \"kubernetes.io/projected/bdfa45f5-3f15-4f20-823d-17b08bd674d7-kube-api-access-dt5x8\") pod \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") "
Feb 17 20:15:24 crc kubenswrapper[4793]: I0217 20:15:24.742269 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bdfa45f5-3f15-4f20-823d-17b08bd674d7-installation-pull-secrets\") pod \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") "
Feb 17 20:15:24 crc kubenswrapper[4793]: I0217 20:15:24.742310 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bdfa45f5-3f15-4f20-823d-17b08bd674d7-trusted-ca\") pod \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") "
Feb 17 20:15:24 crc kubenswrapper[4793]: I0217 20:15:24.742363 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bdfa45f5-3f15-4f20-823d-17b08bd674d7-registry-certificates\") pod \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\" (UID: \"bdfa45f5-3f15-4f20-823d-17b08bd674d7\") "
Feb 17 20:15:24 crc kubenswrapper[4793]: I0217 20:15:24.744173 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdfa45f5-3f15-4f20-823d-17b08bd674d7-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bdfa45f5-3f15-4f20-823d-17b08bd674d7" (UID: "bdfa45f5-3f15-4f20-823d-17b08bd674d7"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 20:15:24 crc kubenswrapper[4793]: I0217 20:15:24.744475 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdfa45f5-3f15-4f20-823d-17b08bd674d7-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "bdfa45f5-3f15-4f20-823d-17b08bd674d7" (UID: "bdfa45f5-3f15-4f20-823d-17b08bd674d7"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 20:15:24 crc kubenswrapper[4793]: I0217 20:15:24.750729 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdfa45f5-3f15-4f20-823d-17b08bd674d7-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "bdfa45f5-3f15-4f20-823d-17b08bd674d7" (UID: "bdfa45f5-3f15-4f20-823d-17b08bd674d7"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 20:15:24 crc kubenswrapper[4793]: I0217 20:15:24.750893 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdfa45f5-3f15-4f20-823d-17b08bd674d7-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "bdfa45f5-3f15-4f20-823d-17b08bd674d7" (UID: "bdfa45f5-3f15-4f20-823d-17b08bd674d7"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 20:15:24 crc kubenswrapper[4793]: I0217 20:15:24.752068 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdfa45f5-3f15-4f20-823d-17b08bd674d7-kube-api-access-dt5x8" (OuterVolumeSpecName: "kube-api-access-dt5x8") pod "bdfa45f5-3f15-4f20-823d-17b08bd674d7" (UID: "bdfa45f5-3f15-4f20-823d-17b08bd674d7"). InnerVolumeSpecName "kube-api-access-dt5x8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 20:15:24 crc kubenswrapper[4793]: I0217 20:15:24.752385 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdfa45f5-3f15-4f20-823d-17b08bd674d7-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bdfa45f5-3f15-4f20-823d-17b08bd674d7" (UID: "bdfa45f5-3f15-4f20-823d-17b08bd674d7"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 20:15:24 crc kubenswrapper[4793]: I0217 20:15:24.759284 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "bdfa45f5-3f15-4f20-823d-17b08bd674d7" (UID: "bdfa45f5-3f15-4f20-823d-17b08bd674d7"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 17 20:15:24 crc kubenswrapper[4793]: I0217 20:15:24.771528 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdfa45f5-3f15-4f20-823d-17b08bd674d7-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "bdfa45f5-3f15-4f20-823d-17b08bd674d7" (UID: "bdfa45f5-3f15-4f20-823d-17b08bd674d7"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 20:15:24 crc kubenswrapper[4793]: I0217 20:15:24.844233 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dt5x8\" (UniqueName: \"kubernetes.io/projected/bdfa45f5-3f15-4f20-823d-17b08bd674d7-kube-api-access-dt5x8\") on node \"crc\" DevicePath \"\""
Feb 17 20:15:24 crc kubenswrapper[4793]: I0217 20:15:24.844285 4793 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bdfa45f5-3f15-4f20-823d-17b08bd674d7-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Feb 17 20:15:24 crc kubenswrapper[4793]: I0217 20:15:24.844305 4793 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bdfa45f5-3f15-4f20-823d-17b08bd674d7-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 17 20:15:24 crc kubenswrapper[4793]: I0217 20:15:24.844325 4793 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bdfa45f5-3f15-4f20-823d-17b08bd674d7-registry-certificates\") on node \"crc\" DevicePath \"\""
Feb 17 20:15:24 crc kubenswrapper[4793]: I0217 20:15:24.844345 4793 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bdfa45f5-3f15-4f20-823d-17b08bd674d7-bound-sa-token\") on node \"crc\" DevicePath \"\""
Feb 17 20:15:24 crc kubenswrapper[4793]: I0217 20:15:24.844363 4793 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bdfa45f5-3f15-4f20-823d-17b08bd674d7-registry-tls\") on node \"crc\" DevicePath \"\""
Feb 17 20:15:24 crc kubenswrapper[4793]: I0217 20:15:24.844380 4793 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bdfa45f5-3f15-4f20-823d-17b08bd674d7-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Feb 17 20:15:25 crc kubenswrapper[4793]: I0217 20:15:25.388861 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb" event={"ID":"bdfa45f5-3f15-4f20-823d-17b08bd674d7","Type":"ContainerDied","Data":"625a889e00d216fe75df42f545fef37e0cb1e41adf8feb59a4d694b3580b7f11"}
Feb 17 20:15:25 crc kubenswrapper[4793]: I0217 20:15:25.388920 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-8l7nb"
Feb 17 20:15:25 crc kubenswrapper[4793]: I0217 20:15:25.388927 4793 scope.go:117] "RemoveContainer" containerID="56d1ae8a8a71ba2f8bbc211db7f4b84847bebd19b8f1abc293de91298c8f2a00"
Feb 17 20:15:25 crc kubenswrapper[4793]: I0217 20:15:25.431974 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8l7nb"]
Feb 17 20:15:25 crc kubenswrapper[4793]: I0217 20:15:25.442468 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8l7nb"]
Feb 17 20:15:25 crc kubenswrapper[4793]: I0217 20:15:25.547763 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdfa45f5-3f15-4f20-823d-17b08bd674d7" path="/var/lib/kubelet/pods/bdfa45f5-3f15-4f20-823d-17b08bd674d7/volumes"
Feb 17 20:17:20 crc kubenswrapper[4793]: I0217 20:17:20.102444 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 20:17:20 crc kubenswrapper[4793]: I0217 20:17:20.103280 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 20:17:50 crc kubenswrapper[4793]: I0217 20:17:50.101871 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 20:17:50 crc kubenswrapper[4793]: I0217 20:17:50.102569 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 20:18:20 crc kubenswrapper[4793]: I0217 20:18:20.101997 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 20:18:20 crc kubenswrapper[4793]: I0217 20:18:20.102597 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 20:18:20 crc kubenswrapper[4793]: I0217 20:18:20.102761 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf"
Feb 17 20:18:20 crc kubenswrapper[4793]: I0217 20:18:20.103731 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b2f41059818507a8af04391a12fd3b72c25fff60681b12b730e9f673d56662ed"} pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 17 20:18:20 crc kubenswrapper[4793]: I0217 20:18:20.103861 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" containerID="cri-o://b2f41059818507a8af04391a12fd3b72c25fff60681b12b730e9f673d56662ed" gracePeriod=600
Feb 17 20:18:20 crc kubenswrapper[4793]: I0217 20:18:20.557477 4793 generic.go:334] "Generic (PLEG): container finished" podID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerID="b2f41059818507a8af04391a12fd3b72c25fff60681b12b730e9f673d56662ed" exitCode=0
Feb 17 20:18:20 crc kubenswrapper[4793]: I0217 20:18:20.557658 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerDied","Data":"b2f41059818507a8af04391a12fd3b72c25fff60681b12b730e9f673d56662ed"}
Feb 17 20:18:20 crc kubenswrapper[4793]: I0217 20:18:20.558112 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerStarted","Data":"c6dd2a040783fcbbb25effeb44da963123b5a74b9553a93ac26a3ede471dffb7"}
Feb 17 20:18:20 crc kubenswrapper[4793]: I0217 20:18:20.558142 4793 scope.go:117] "RemoveContainer" containerID="3a474aac3a35cef45f7adaa960db8f39172d3e9c658e5e9a60808d964d835508"
Feb 17 20:18:23 crc kubenswrapper[4793]: I0217 20:18:23.262233 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-wd49b"]
Feb 17 20:18:23 crc kubenswrapper[4793]: E0217 20:18:23.262949 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdfa45f5-3f15-4f20-823d-17b08bd674d7" containerName="registry"
Feb 17 20:18:23 crc kubenswrapper[4793]: I0217 20:18:23.262961 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdfa45f5-3f15-4f20-823d-17b08bd674d7" containerName="registry"
Feb 17 20:18:23 crc kubenswrapper[4793]: E0217 20:18:23.262977 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78ff4728-b246-4b6c-ae17-a509146ee214" containerName="collect-profiles"
Feb 17 20:18:23 crc kubenswrapper[4793]: I0217 20:18:23.262983 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="78ff4728-b246-4b6c-ae17-a509146ee214" containerName="collect-profiles"
Feb 17 20:18:23 crc kubenswrapper[4793]: I0217 20:18:23.263073 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdfa45f5-3f15-4f20-823d-17b08bd674d7" containerName="registry"
Feb 17 20:18:23 crc kubenswrapper[4793]: I0217 20:18:23.263083 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="78ff4728-b246-4b6c-ae17-a509146ee214" containerName="collect-profiles"
Feb 17 20:18:23 crc kubenswrapper[4793]: I0217 20:18:23.263905 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-wd49b"
Feb 17 20:18:23 crc kubenswrapper[4793]: I0217 20:18:23.267352 4793 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-nsdsn"
Feb 17 20:18:23 crc kubenswrapper[4793]: I0217 20:18:23.268137 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Feb 17 20:18:23 crc kubenswrapper[4793]: I0217 20:18:23.268306 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-mdbrh"]
Feb 17 20:18:23 crc kubenswrapper[4793]: I0217 20:18:23.268846 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Feb 17 20:18:23 crc kubenswrapper[4793]: I0217 20:18:23.269074 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-mdbrh"
Feb 17 20:18:23 crc kubenswrapper[4793]: I0217 20:18:23.276169 4793 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-l2jqv"
Feb 17 20:18:23 crc kubenswrapper[4793]: I0217 20:18:23.278861 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8db5d\" (UniqueName: \"kubernetes.io/projected/bfcb2cee-b5a8-43c6-adf3-ef142c9d7427-kube-api-access-8db5d\") pod \"cert-manager-cainjector-cf98fcc89-wd49b\" (UID: \"bfcb2cee-b5a8-43c6-adf3-ef142c9d7427\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-wd49b"
Feb 17 20:18:23 crc kubenswrapper[4793]: I0217 20:18:23.280095 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-mdbrh"]
Feb 17 20:18:23 crc kubenswrapper[4793]: I0217 20:18:23.295144 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-6mcc4"]
Feb 17 20:18:23 crc kubenswrapper[4793]: I0217 20:18:23.296044 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-6mcc4"
Feb 17 20:18:23 crc kubenswrapper[4793]: I0217 20:18:23.299998 4793 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-df7t5"
Feb 17 20:18:23 crc kubenswrapper[4793]: I0217 20:18:23.310102 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-6mcc4"]
Feb 17 20:18:23 crc kubenswrapper[4793]: I0217 20:18:23.322274 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-wd49b"]
Feb 17 20:18:23 crc kubenswrapper[4793]: I0217 20:18:23.379637 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwr9x\" (UniqueName: \"kubernetes.io/projected/7e87d992-edd2-4e13-a457-63ee57c8db27-kube-api-access-bwr9x\") pod \"cert-manager-webhook-687f57d79b-6mcc4\" (UID: \"7e87d992-edd2-4e13-a457-63ee57c8db27\") " pod="cert-manager/cert-manager-webhook-687f57d79b-6mcc4"
Feb 17 20:18:23 crc kubenswrapper[4793]: I0217 20:18:23.379756 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8db5d\" (UniqueName: \"kubernetes.io/projected/bfcb2cee-b5a8-43c6-adf3-ef142c9d7427-kube-api-access-8db5d\") pod \"cert-manager-cainjector-cf98fcc89-wd49b\" (UID: \"bfcb2cee-b5a8-43c6-adf3-ef142c9d7427\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-wd49b"
Feb 17 20:18:23 crc kubenswrapper[4793]: I0217 20:18:23.379793 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz4cw\" (UniqueName: \"kubernetes.io/projected/7b6c2637-dc01-4efd-ab65-36f8b0b4ff6d-kube-api-access-sz4cw\") pod \"cert-manager-858654f9db-mdbrh\" (UID: \"7b6c2637-dc01-4efd-ab65-36f8b0b4ff6d\") " pod="cert-manager/cert-manager-858654f9db-mdbrh"
Feb 17 20:18:23 crc kubenswrapper[4793]: I0217 20:18:23.398758 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8db5d\" (UniqueName: \"kubernetes.io/projected/bfcb2cee-b5a8-43c6-adf3-ef142c9d7427-kube-api-access-8db5d\") pod \"cert-manager-cainjector-cf98fcc89-wd49b\" (UID: \"bfcb2cee-b5a8-43c6-adf3-ef142c9d7427\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-wd49b"
Feb 17 20:18:23 crc kubenswrapper[4793]: I0217 20:18:23.481060 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwr9x\" (UniqueName: \"kubernetes.io/projected/7e87d992-edd2-4e13-a457-63ee57c8db27-kube-api-access-bwr9x\") pod \"cert-manager-webhook-687f57d79b-6mcc4\" (UID: \"7e87d992-edd2-4e13-a457-63ee57c8db27\") " pod="cert-manager/cert-manager-webhook-687f57d79b-6mcc4"
Feb 17 20:18:23 crc kubenswrapper[4793]: I0217 20:18:23.481111 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz4cw\" (UniqueName: \"kubernetes.io/projected/7b6c2637-dc01-4efd-ab65-36f8b0b4ff6d-kube-api-access-sz4cw\") pod \"cert-manager-858654f9db-mdbrh\" (UID: \"7b6c2637-dc01-4efd-ab65-36f8b0b4ff6d\") " pod="cert-manager/cert-manager-858654f9db-mdbrh"
Feb 17 20:18:23 crc kubenswrapper[4793]: I0217 20:18:23.496129 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwr9x\" (UniqueName: \"kubernetes.io/projected/7e87d992-edd2-4e13-a457-63ee57c8db27-kube-api-access-bwr9x\") pod \"cert-manager-webhook-687f57d79b-6mcc4\" (UID: \"7e87d992-edd2-4e13-a457-63ee57c8db27\") " pod="cert-manager/cert-manager-webhook-687f57d79b-6mcc4"
Feb 17 20:18:23 crc kubenswrapper[4793]: I0217 20:18:23.496248 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz4cw\" (UniqueName: \"kubernetes.io/projected/7b6c2637-dc01-4efd-ab65-36f8b0b4ff6d-kube-api-access-sz4cw\") pod \"cert-manager-858654f9db-mdbrh\" (UID: \"7b6c2637-dc01-4efd-ab65-36f8b0b4ff6d\") " pod="cert-manager/cert-manager-858654f9db-mdbrh"
Feb 17 20:18:23 crc kubenswrapper[4793]: I0217 20:18:23.582237 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-wd49b"
Feb 17 20:18:23 crc kubenswrapper[4793]: I0217 20:18:23.593860 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-mdbrh"
Feb 17 20:18:23 crc kubenswrapper[4793]: I0217 20:18:23.612104 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-6mcc4"
Feb 17 20:18:23 crc kubenswrapper[4793]: I0217 20:18:23.797878 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-mdbrh"]
Feb 17 20:18:23 crc kubenswrapper[4793]: I0217 20:18:23.809540 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 17 20:18:24 crc kubenswrapper[4793]: I0217 20:18:24.059751 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-wd49b"]
Feb 17 20:18:24 crc kubenswrapper[4793]: W0217 20:18:24.062412 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfcb2cee_b5a8_43c6_adf3_ef142c9d7427.slice/crio-86b6aef76d64721deb89ac59be3c84bdb99ab333ec44b0e60a59a10dae677642 WatchSource:0}: Error finding container 86b6aef76d64721deb89ac59be3c84bdb99ab333ec44b0e60a59a10dae677642: Status 404 returned error can't find the container with id 86b6aef76d64721deb89ac59be3c84bdb99ab333ec44b0e60a59a10dae677642
Feb 17 20:18:24 crc kubenswrapper[4793]: I0217 20:18:24.072277 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-6mcc4"]
Feb 17 20:18:24 crc kubenswrapper[4793]: W0217 20:18:24.077393 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e87d992_edd2_4e13_a457_63ee57c8db27.slice/crio-839139a23366c7516dccaa8f8cf471278ec346dde560b5fb7a5af400ed9b8335 WatchSource:0}: Error finding container 839139a23366c7516dccaa8f8cf471278ec346dde560b5fb7a5af400ed9b8335: Status 404 returned error can't find the container with id 839139a23366c7516dccaa8f8cf471278ec346dde560b5fb7a5af400ed9b8335
Feb 17 20:18:24 crc kubenswrapper[4793]: I0217 20:18:24.585914 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-wd49b" event={"ID":"bfcb2cee-b5a8-43c6-adf3-ef142c9d7427","Type":"ContainerStarted","Data":"86b6aef76d64721deb89ac59be3c84bdb99ab333ec44b0e60a59a10dae677642"}
Feb 17 20:18:24 crc kubenswrapper[4793]: I0217 20:18:24.588018 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-mdbrh" event={"ID":"7b6c2637-dc01-4efd-ab65-36f8b0b4ff6d","Type":"ContainerStarted","Data":"f9686a73f9d50827d583bd7c585602962d4e48404ff3c58e8776f3687c5967e4"}
Feb 17 20:18:24 crc kubenswrapper[4793]: I0217 20:18:24.589206 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-6mcc4" event={"ID":"7e87d992-edd2-4e13-a457-63ee57c8db27","Type":"ContainerStarted","Data":"839139a23366c7516dccaa8f8cf471278ec346dde560b5fb7a5af400ed9b8335"}
Feb 17 20:18:28 crc kubenswrapper[4793]: I0217 20:18:28.614351 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-mdbrh" event={"ID":"7b6c2637-dc01-4efd-ab65-36f8b0b4ff6d","Type":"ContainerStarted","Data":"f7a4c9262e531bf7404d5f690bbf7f3fc0cc9dd83a23dd1ceb4ed45ab418318d"}
Feb 17 20:18:28 crc kubenswrapper[4793]: I0217 20:18:28.617184 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-wd49b" event={"ID":"bfcb2cee-b5a8-43c6-adf3-ef142c9d7427","Type":"ContainerStarted","Data":"78000bd6ac23c94f7831c99ed60fda8419f148933e571091f7c641a65d5ca3c2"}
Feb 17 20:18:28 crc kubenswrapper[4793]: I0217 20:18:28.618623 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-6mcc4" event={"ID":"7e87d992-edd2-4e13-a457-63ee57c8db27","Type":"ContainerStarted","Data":"57aae629b23bcbfbf06e68566f306b1fe90350d35046904bb5c869b1ad9b366a"}
Feb 17 20:18:28 crc kubenswrapper[4793]: I0217 20:18:28.619012 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-6mcc4"
Feb 17 20:18:28 crc kubenswrapper[4793]: I0217 20:18:28.631031 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-mdbrh" podStartSLOduration=1.88832492 podStartE2EDuration="5.630997838s" podCreationTimestamp="2026-02-17 20:18:23 +0000 UTC" firstStartedPulling="2026-02-17 20:18:23.809302709 +0000 UTC m=+579.101001020" lastFinishedPulling="2026-02-17 20:18:27.551975627 +0000 UTC m=+582.843673938" observedRunningTime="2026-02-17 20:18:28.629568224 +0000 UTC m=+583.921266535" watchObservedRunningTime="2026-02-17 20:18:28.630997838 +0000 UTC m=+583.922696199"
Feb 17 20:18:28 crc kubenswrapper[4793]: I0217 20:18:28.650788 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-6mcc4" podStartSLOduration=2.046240165 podStartE2EDuration="5.650771106s" podCreationTimestamp="2026-02-17 20:18:23 +0000 UTC" firstStartedPulling="2026-02-17 20:18:24.079449154 +0000 UTC m=+579.371147495" lastFinishedPulling="2026-02-17 20:18:27.683980125 +0000 UTC m=+582.975678436" observedRunningTime="2026-02-17 20:18:28.649936406 +0000 UTC m=+583.941634727" watchObservedRunningTime="2026-02-17 20:18:28.650771106 +0000 UTC m=+583.942469417"
Feb 17 20:18:28 crc kubenswrapper[4793]: I0217 20:18:28.663106 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-wd49b" podStartSLOduration=2.117587639 podStartE2EDuration="5.663091174s" podCreationTimestamp="2026-02-17 20:18:23 +0000 UTC" firstStartedPulling="2026-02-17 20:18:24.065703522 +0000 UTC m=+579.357401843" lastFinishedPulling="2026-02-17 20:18:27.611207067 +0000 UTC m=+582.902905378" observedRunningTime="2026-02-17 20:18:28.66254808 +0000 UTC m=+583.954246421" watchObservedRunningTime="2026-02-17 20:18:28.663091174 +0000 UTC m=+583.954789485"
Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.448499 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-n2fmv"]
Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.449842 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" podUID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerName="ovn-controller" containerID="cri-o://09885fdf835300247d4a89f7e2ded3ab350786be47130106f4cc679a5477c451" gracePeriod=30
Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.449892 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" podUID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerName="sbdb" containerID="cri-o://f3161d31a16ce66a75b2928487822f0c1e627deb1564e7f792e96420ebde1ae7" gracePeriod=30
Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.450025 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" podUID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerName="nbdb" containerID="cri-o://a66eb3dc7a2ab1a3e5e720718416aba285d92566aa472515bc983f0794f48314" gracePeriod=30
Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.450133 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" podUID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerName="northd" containerID="cri-o://61506b0fdf8d54ab45b280e5cf461ab69bc233211c90bc60b339eef643ebb058" gracePeriod=30
Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.450217 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" podUID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://32e4375a429735c20aa6093586974f36afdd51b82d65270a693d14aefe0b0ac9" gracePeriod=30
Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.450771 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" podUID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerName="ovn-acl-logging" containerID="cri-o://5b08b2c72ecc14b8e31ad5d7fe12851c01bf38ff98fe406dc26218d892ee9f0e" gracePeriod=30
Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.456653 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" podUID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerName="kube-rbac-proxy-node" containerID="cri-o://3403f83a9b35ceeb1a09a45b62e25e74a10d89d309f00819598790ba6c84a0a0" gracePeriod=30
Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.498952 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" podUID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerName="ovnkube-controller" containerID="cri-o://66f4df19967e2f69adbfc187552e434026f447271484b23db724c849c4805f65" gracePeriod=30
Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.615535 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-6mcc4"
Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.650398 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n2fmv_4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2/ovnkube-controller/3.log"
Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.652395 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n2fmv_4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2/ovn-acl-logging/0.log"
Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.652953 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n2fmv_4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2/ovn-controller/0.log"
Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.653267 4793 generic.go:334] "Generic (PLEG): container finished" podID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerID="66f4df19967e2f69adbfc187552e434026f447271484b23db724c849c4805f65" exitCode=0
Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.653296 4793 generic.go:334] "Generic (PLEG): container finished" podID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerID="32e4375a429735c20aa6093586974f36afdd51b82d65270a693d14aefe0b0ac9" exitCode=0
Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.653307 4793 generic.go:334] "Generic (PLEG): container finished" podID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerID="3403f83a9b35ceeb1a09a45b62e25e74a10d89d309f00819598790ba6c84a0a0" exitCode=0
Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.653316 4793 generic.go:334] "Generic (PLEG): container finished" podID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerID="5b08b2c72ecc14b8e31ad5d7fe12851c01bf38ff98fe406dc26218d892ee9f0e" exitCode=143
Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.653325 4793 generic.go:334] "Generic (PLEG): container finished" podID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerID="09885fdf835300247d4a89f7e2ded3ab350786be47130106f4cc679a5477c451" exitCode=143
Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.653373 4793 kubelet.go:2453] "SyncLoop (PLEG): event
for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" event={"ID":"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2","Type":"ContainerDied","Data":"66f4df19967e2f69adbfc187552e434026f447271484b23db724c849c4805f65"} Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.653453 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" event={"ID":"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2","Type":"ContainerDied","Data":"32e4375a429735c20aa6093586974f36afdd51b82d65270a693d14aefe0b0ac9"} Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.653474 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" event={"ID":"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2","Type":"ContainerDied","Data":"3403f83a9b35ceeb1a09a45b62e25e74a10d89d309f00819598790ba6c84a0a0"} Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.653486 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" event={"ID":"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2","Type":"ContainerDied","Data":"5b08b2c72ecc14b8e31ad5d7fe12851c01bf38ff98fe406dc26218d892ee9f0e"} Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.653498 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" event={"ID":"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2","Type":"ContainerDied","Data":"09885fdf835300247d4a89f7e2ded3ab350786be47130106f4cc679a5477c451"} Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.653519 4793 scope.go:117] "RemoveContainer" containerID="51026d5eb5b88801a1918c9ceb143b71de65a9e53b06f2f35ca80f9bc1b32ddd" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.655190 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ztwxl_b2b13cca-b775-4fc5-8ad8-41bfd70c857c/kube-multus/2.log" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.655656 4793 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-ztwxl_b2b13cca-b775-4fc5-8ad8-41bfd70c857c/kube-multus/1.log" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.655691 4793 generic.go:334] "Generic (PLEG): container finished" podID="b2b13cca-b775-4fc5-8ad8-41bfd70c857c" containerID="98e9297cdbc34f5f60848019a91aaf3e79513d8c064e8b32e68ac7bb740e80ad" exitCode=2 Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.655726 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ztwxl" event={"ID":"b2b13cca-b775-4fc5-8ad8-41bfd70c857c","Type":"ContainerDied","Data":"98e9297cdbc34f5f60848019a91aaf3e79513d8c064e8b32e68ac7bb740e80ad"} Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.656146 4793 scope.go:117] "RemoveContainer" containerID="98e9297cdbc34f5f60848019a91aaf3e79513d8c064e8b32e68ac7bb740e80ad" Feb 17 20:18:33 crc kubenswrapper[4793]: E0217 20:18:33.656297 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-ztwxl_openshift-multus(b2b13cca-b775-4fc5-8ad8-41bfd70c857c)\"" pod="openshift-multus/multus-ztwxl" podUID="b2b13cca-b775-4fc5-8ad8-41bfd70c857c" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.765343 4793 scope.go:117] "RemoveContainer" containerID="beba5ad7f813f286d6f37a7d6983827115e97d4dbcc5c17c23d9f62a2789faae" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.820398 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n2fmv_4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2/ovn-acl-logging/0.log" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.821145 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n2fmv_4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2/ovn-controller/0.log" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.822158 4793 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.875657 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-4kwsl"] Feb 17 20:18:33 crc kubenswrapper[4793]: E0217 20:18:33.876010 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerName="ovn-controller" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.876043 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerName="ovn-controller" Feb 17 20:18:33 crc kubenswrapper[4793]: E0217 20:18:33.876066 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerName="ovn-acl-logging" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.876081 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerName="ovn-acl-logging" Feb 17 20:18:33 crc kubenswrapper[4793]: E0217 20:18:33.876101 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerName="ovnkube-controller" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.876114 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerName="ovnkube-controller" Feb 17 20:18:33 crc kubenswrapper[4793]: E0217 20:18:33.876131 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerName="kube-rbac-proxy-node" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.876143 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerName="kube-rbac-proxy-node" Feb 17 20:18:33 crc kubenswrapper[4793]: E0217 20:18:33.876164 4793 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerName="kube-rbac-proxy-ovn-metrics" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.876178 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerName="kube-rbac-proxy-ovn-metrics" Feb 17 20:18:33 crc kubenswrapper[4793]: E0217 20:18:33.876198 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerName="ovnkube-controller" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.876211 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerName="ovnkube-controller" Feb 17 20:18:33 crc kubenswrapper[4793]: E0217 20:18:33.876226 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerName="ovnkube-controller" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.876239 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerName="ovnkube-controller" Feb 17 20:18:33 crc kubenswrapper[4793]: E0217 20:18:33.876260 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerName="sbdb" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.876274 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerName="sbdb" Feb 17 20:18:33 crc kubenswrapper[4793]: E0217 20:18:33.876293 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerName="nbdb" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.876306 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerName="nbdb" Feb 17 20:18:33 crc kubenswrapper[4793]: E0217 20:18:33.876322 4793 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerName="northd" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.876335 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerName="northd" Feb 17 20:18:33 crc kubenswrapper[4793]: E0217 20:18:33.876355 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerName="kubecfg-setup" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.876368 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerName="kubecfg-setup" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.876538 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerName="ovnkube-controller" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.876562 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerName="sbdb" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.876581 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerName="nbdb" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.876606 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerName="ovn-controller" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.876625 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerName="northd" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.876645 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerName="ovnkube-controller" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.876659 4793 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerName="kube-rbac-proxy-node" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.876678 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerName="ovnkube-controller" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.876698 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerName="ovn-acl-logging" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.876770 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerName="kube-rbac-proxy-ovn-metrics" Feb 17 20:18:33 crc kubenswrapper[4793]: E0217 20:18:33.876956 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerName="ovnkube-controller" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.876973 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerName="ovnkube-controller" Feb 17 20:18:33 crc kubenswrapper[4793]: E0217 20:18:33.876989 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerName="ovnkube-controller" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.877002 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerName="ovnkube-controller" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.877173 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerName="ovnkube-controller" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.877548 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerName="ovnkube-controller" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.881901 4793 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.928943 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.928987 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-systemd-units\") pod \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.929023 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-env-overrides\") pod \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.929050 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-host-kubelet\") pod \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.929066 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-host-run-netns\") pod \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.929086 
4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-host-run-ovn-kubernetes\") pod \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.929080 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" (UID: "4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.929110 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-ovn-node-metrics-cert\") pod \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.929219 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-log-socket\") pod \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.929250 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" (UID: "4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.929295 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" (UID: "4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.929300 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-ovnkube-script-lib\") pod \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.929351 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-node-log\") pod \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.929372 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" (UID: "4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.929405 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-log-socket" (OuterVolumeSpecName: "log-socket") pod "4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" (UID: "4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2"). 
InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.929408 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-node-log" (OuterVolumeSpecName: "node-log") pod "4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" (UID: "4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.929488 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" (UID: "4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.929552 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-host-slash\") pod \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.929666 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" (UID: "4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.929671 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-host-slash" (OuterVolumeSpecName: "host-slash") pod "4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" (UID: "4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.929812 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" (UID: "4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.929893 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-run-ovn\") pod \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.929967 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" (UID: "4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.930023 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-ovnkube-config\") pod \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.930449 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-host-cni-netd\") pod \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.930501 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4qp9\" (UniqueName: \"kubernetes.io/projected/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-kube-api-access-w4qp9\") pod \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.930519 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-var-lib-openvswitch\") pod \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.930547 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-run-systemd\") pod \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.930585 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-run-openvswitch\") pod \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.930613 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-etc-openvswitch\") pod \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.930626 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-host-cni-bin\") pod \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\" (UID: \"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2\") " Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.930626 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" (UID: "4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.930712 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" (UID: "4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.930755 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" (UID: "4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.930780 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-run-ovn\") pod \"ovnkube-node-4kwsl\" (UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.930802 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-systemd-units\") pod \"ovnkube-node-4kwsl\" (UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.930818 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-host-cni-bin\") pod \"ovnkube-node-4kwsl\" (UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.930834 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-host-run-netns\") pod \"ovnkube-node-4kwsl\" (UID: 
\"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.930855 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-run-openvswitch\") pod \"ovnkube-node-4kwsl\" (UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.930871 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-var-lib-openvswitch\") pod \"ovnkube-node-4kwsl\" (UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.930888 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4kwsl\" (UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.930905 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-ovnkube-config\") pod \"ovnkube-node-4kwsl\" (UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.930921 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k69qk\" (UniqueName: 
\"kubernetes.io/projected/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-kube-api-access-k69qk\") pod \"ovnkube-node-4kwsl\" (UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.930937 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-host-run-ovn-kubernetes\") pod \"ovnkube-node-4kwsl\" (UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.930954 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-log-socket\") pod \"ovnkube-node-4kwsl\" (UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.930976 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-ovn-node-metrics-cert\") pod \"ovnkube-node-4kwsl\" (UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.930994 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-ovnkube-script-lib\") pod \"ovnkube-node-4kwsl\" (UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.931007 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-host-kubelet\") pod \"ovnkube-node-4kwsl\" (UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.930822 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" (UID: "4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.931018 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" (UID: "4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.931023 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-host-slash\") pod \"ovnkube-node-4kwsl\" (UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.931057 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" (UID: "4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.931388 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-etc-openvswitch\") pod \"ovnkube-node-4kwsl\" (UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.931430 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-node-log\") pod \"ovnkube-node-4kwsl\" (UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.931489 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-run-systemd\") pod \"ovnkube-node-4kwsl\" (UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.931529 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-env-overrides\") pod \"ovnkube-node-4kwsl\" (UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.931573 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-host-cni-netd\") pod \"ovnkube-node-4kwsl\" (UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") 
" pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.931678 4793 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.931732 4793 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.931758 4793 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.931787 4793 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-log-socket\") on node \"crc\" DevicePath \"\"" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.931813 4793 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.931837 4793 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-node-log\") on node \"crc\" DevicePath \"\"" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.931861 4793 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-host-slash\") on node \"crc\" DevicePath \"\"" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.931885 4793 
reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.931903 4793 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.931924 4793 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.931942 4793 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.931962 4793 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.931982 4793 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.932001 4793 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.932022 4793 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.932041 4793 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.932060 4793 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.937108 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-kube-api-access-w4qp9" (OuterVolumeSpecName: "kube-api-access-w4qp9") pod "4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" (UID: "4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2"). InnerVolumeSpecName "kube-api-access-w4qp9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.937195 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" (UID: "4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:18:33 crc kubenswrapper[4793]: I0217 20:18:33.942765 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" (UID: "4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.032477 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-etc-openvswitch\") pod \"ovnkube-node-4kwsl\" (UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.032554 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-node-log\") pod \"ovnkube-node-4kwsl\" (UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.032606 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-run-systemd\") pod \"ovnkube-node-4kwsl\" (UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.032643 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-env-overrides\") pod \"ovnkube-node-4kwsl\" (UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.032644 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-etc-openvswitch\") pod \"ovnkube-node-4kwsl\" (UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 
20:18:34.032681 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-host-cni-netd\") pod \"ovnkube-node-4kwsl\" (UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.032737 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-run-systemd\") pod \"ovnkube-node-4kwsl\" (UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.032844 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-run-ovn\") pod \"ovnkube-node-4kwsl\" (UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.032887 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-node-log\") pod \"ovnkube-node-4kwsl\" (UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.032895 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-systemd-units\") pod \"ovnkube-node-4kwsl\" (UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.032950 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-systemd-units\") pod \"ovnkube-node-4kwsl\" (UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.032968 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-run-ovn\") pod \"ovnkube-node-4kwsl\" (UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.032858 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-host-cni-netd\") pod \"ovnkube-node-4kwsl\" (UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.032993 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-host-cni-bin\") pod \"ovnkube-node-4kwsl\" (UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.033052 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-host-run-netns\") pod \"ovnkube-node-4kwsl\" (UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.033038 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-host-cni-bin\") pod \"ovnkube-node-4kwsl\" (UID: 
\"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.033082 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-run-openvswitch\") pod \"ovnkube-node-4kwsl\" (UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.033110 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-run-openvswitch\") pod \"ovnkube-node-4kwsl\" (UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.033144 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-host-run-netns\") pod \"ovnkube-node-4kwsl\" (UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.033146 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-var-lib-openvswitch\") pod \"ovnkube-node-4kwsl\" (UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.033204 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-var-lib-openvswitch\") pod \"ovnkube-node-4kwsl\" (UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.033241 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4kwsl\" (UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.033293 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4kwsl\" (UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.033314 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-ovnkube-config\") pod \"ovnkube-node-4kwsl\" (UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.033362 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k69qk\" (UniqueName: \"kubernetes.io/projected/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-kube-api-access-k69qk\") pod \"ovnkube-node-4kwsl\" (UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.033403 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-host-run-ovn-kubernetes\") pod \"ovnkube-node-4kwsl\" 
(UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.033436 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-log-socket\") pod \"ovnkube-node-4kwsl\" (UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.033471 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-ovn-node-metrics-cert\") pod \"ovnkube-node-4kwsl\" (UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.033515 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-ovnkube-script-lib\") pod \"ovnkube-node-4kwsl\" (UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.033545 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-host-kubelet\") pod \"ovnkube-node-4kwsl\" (UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.033574 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-host-slash\") pod \"ovnkube-node-4kwsl\" (UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.033653 4793 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.033676 4793 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.033730 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4qp9\" (UniqueName: \"kubernetes.io/projected/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2-kube-api-access-w4qp9\") on node \"crc\" DevicePath \"\"" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.033801 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-host-slash\") pod \"ovnkube-node-4kwsl\" (UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.033808 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-env-overrides\") pod \"ovnkube-node-4kwsl\" (UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.033907 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-host-run-ovn-kubernetes\") pod \"ovnkube-node-4kwsl\" (UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.033944 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-host-kubelet\") pod \"ovnkube-node-4kwsl\" (UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.033911 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-log-socket\") pod \"ovnkube-node-4kwsl\" (UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.034253 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-ovnkube-config\") pod \"ovnkube-node-4kwsl\" (UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.034825 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-ovnkube-script-lib\") pod \"ovnkube-node-4kwsl\" (UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.038579 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-ovn-node-metrics-cert\") pod \"ovnkube-node-4kwsl\" (UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.062025 
4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k69qk\" (UniqueName: \"kubernetes.io/projected/2d7a013b-760d-4a4b-ac6a-2dd12f5bae41-kube-api-access-k69qk\") pod \"ovnkube-node-4kwsl\" (UID: \"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41\") " pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.199804 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:34 crc kubenswrapper[4793]: W0217 20:18:34.226983 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d7a013b_760d_4a4b_ac6a_2dd12f5bae41.slice/crio-5be4421f0a4ecbec6d77bc807fe139592ffd620d21ec12b639a6a013bd6285e3 WatchSource:0}: Error finding container 5be4421f0a4ecbec6d77bc807fe139592ffd620d21ec12b639a6a013bd6285e3: Status 404 returned error can't find the container with id 5be4421f0a4ecbec6d77bc807fe139592ffd620d21ec12b639a6a013bd6285e3 Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.662677 4793 generic.go:334] "Generic (PLEG): container finished" podID="2d7a013b-760d-4a4b-ac6a-2dd12f5bae41" containerID="3aaeadf6fd7bd72f0d08b52fdd63d2d377506e7c66ab8641065cfb7ee0d60bcc" exitCode=0 Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.662788 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" event={"ID":"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41","Type":"ContainerDied","Data":"3aaeadf6fd7bd72f0d08b52fdd63d2d377506e7c66ab8641065cfb7ee0d60bcc"} Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.662818 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" event={"ID":"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41","Type":"ContainerStarted","Data":"5be4421f0a4ecbec6d77bc807fe139592ffd620d21ec12b639a6a013bd6285e3"} Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 
20:18:34.664429 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ztwxl_b2b13cca-b775-4fc5-8ad8-41bfd70c857c/kube-multus/2.log" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.668698 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n2fmv_4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2/ovn-acl-logging/0.log" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.669158 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n2fmv_4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2/ovn-controller/0.log" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.669449 4793 generic.go:334] "Generic (PLEG): container finished" podID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerID="f3161d31a16ce66a75b2928487822f0c1e627deb1564e7f792e96420ebde1ae7" exitCode=0 Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.669478 4793 generic.go:334] "Generic (PLEG): container finished" podID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerID="a66eb3dc7a2ab1a3e5e720718416aba285d92566aa472515bc983f0794f48314" exitCode=0 Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.669490 4793 generic.go:334] "Generic (PLEG): container finished" podID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" containerID="61506b0fdf8d54ab45b280e5cf461ab69bc233211c90bc60b339eef643ebb058" exitCode=0 Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.669509 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.669518 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" event={"ID":"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2","Type":"ContainerDied","Data":"f3161d31a16ce66a75b2928487822f0c1e627deb1564e7f792e96420ebde1ae7"} Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.669549 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" event={"ID":"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2","Type":"ContainerDied","Data":"a66eb3dc7a2ab1a3e5e720718416aba285d92566aa472515bc983f0794f48314"} Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.669563 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" event={"ID":"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2","Type":"ContainerDied","Data":"61506b0fdf8d54ab45b280e5cf461ab69bc233211c90bc60b339eef643ebb058"} Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.669574 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fmv" event={"ID":"4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2","Type":"ContainerDied","Data":"f9b142c9d7d5d063f7f322fe857382f60a4c54c27c62ab30578b42d0418823d9"} Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.669592 4793 scope.go:117] "RemoveContainer" containerID="66f4df19967e2f69adbfc187552e434026f447271484b23db724c849c4805f65" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.687852 4793 scope.go:117] "RemoveContainer" containerID="f3161d31a16ce66a75b2928487822f0c1e627deb1564e7f792e96420ebde1ae7" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.716814 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-n2fmv"] Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.719909 4793 scope.go:117] "RemoveContainer" 
containerID="a66eb3dc7a2ab1a3e5e720718416aba285d92566aa472515bc983f0794f48314" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.722069 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-n2fmv"] Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.734517 4793 scope.go:117] "RemoveContainer" containerID="61506b0fdf8d54ab45b280e5cf461ab69bc233211c90bc60b339eef643ebb058" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.763309 4793 scope.go:117] "RemoveContainer" containerID="32e4375a429735c20aa6093586974f36afdd51b82d65270a693d14aefe0b0ac9" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.792638 4793 scope.go:117] "RemoveContainer" containerID="3403f83a9b35ceeb1a09a45b62e25e74a10d89d309f00819598790ba6c84a0a0" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.816730 4793 scope.go:117] "RemoveContainer" containerID="5b08b2c72ecc14b8e31ad5d7fe12851c01bf38ff98fe406dc26218d892ee9f0e" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.833250 4793 scope.go:117] "RemoveContainer" containerID="09885fdf835300247d4a89f7e2ded3ab350786be47130106f4cc679a5477c451" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.848121 4793 scope.go:117] "RemoveContainer" containerID="761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.866799 4793 scope.go:117] "RemoveContainer" containerID="66f4df19967e2f69adbfc187552e434026f447271484b23db724c849c4805f65" Feb 17 20:18:34 crc kubenswrapper[4793]: E0217 20:18:34.868573 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66f4df19967e2f69adbfc187552e434026f447271484b23db724c849c4805f65\": container with ID starting with 66f4df19967e2f69adbfc187552e434026f447271484b23db724c849c4805f65 not found: ID does not exist" containerID="66f4df19967e2f69adbfc187552e434026f447271484b23db724c849c4805f65" Feb 17 20:18:34 crc 
kubenswrapper[4793]: I0217 20:18:34.868618 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66f4df19967e2f69adbfc187552e434026f447271484b23db724c849c4805f65"} err="failed to get container status \"66f4df19967e2f69adbfc187552e434026f447271484b23db724c849c4805f65\": rpc error: code = NotFound desc = could not find container \"66f4df19967e2f69adbfc187552e434026f447271484b23db724c849c4805f65\": container with ID starting with 66f4df19967e2f69adbfc187552e434026f447271484b23db724c849c4805f65 not found: ID does not exist" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.868650 4793 scope.go:117] "RemoveContainer" containerID="f3161d31a16ce66a75b2928487822f0c1e627deb1564e7f792e96420ebde1ae7" Feb 17 20:18:34 crc kubenswrapper[4793]: E0217 20:18:34.868947 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3161d31a16ce66a75b2928487822f0c1e627deb1564e7f792e96420ebde1ae7\": container with ID starting with f3161d31a16ce66a75b2928487822f0c1e627deb1564e7f792e96420ebde1ae7 not found: ID does not exist" containerID="f3161d31a16ce66a75b2928487822f0c1e627deb1564e7f792e96420ebde1ae7" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.868973 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3161d31a16ce66a75b2928487822f0c1e627deb1564e7f792e96420ebde1ae7"} err="failed to get container status \"f3161d31a16ce66a75b2928487822f0c1e627deb1564e7f792e96420ebde1ae7\": rpc error: code = NotFound desc = could not find container \"f3161d31a16ce66a75b2928487822f0c1e627deb1564e7f792e96420ebde1ae7\": container with ID starting with f3161d31a16ce66a75b2928487822f0c1e627deb1564e7f792e96420ebde1ae7 not found: ID does not exist" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.868991 4793 scope.go:117] "RemoveContainer" containerID="a66eb3dc7a2ab1a3e5e720718416aba285d92566aa472515bc983f0794f48314" Feb 17 
20:18:34 crc kubenswrapper[4793]: E0217 20:18:34.869193 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a66eb3dc7a2ab1a3e5e720718416aba285d92566aa472515bc983f0794f48314\": container with ID starting with a66eb3dc7a2ab1a3e5e720718416aba285d92566aa472515bc983f0794f48314 not found: ID does not exist" containerID="a66eb3dc7a2ab1a3e5e720718416aba285d92566aa472515bc983f0794f48314" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.869217 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a66eb3dc7a2ab1a3e5e720718416aba285d92566aa472515bc983f0794f48314"} err="failed to get container status \"a66eb3dc7a2ab1a3e5e720718416aba285d92566aa472515bc983f0794f48314\": rpc error: code = NotFound desc = could not find container \"a66eb3dc7a2ab1a3e5e720718416aba285d92566aa472515bc983f0794f48314\": container with ID starting with a66eb3dc7a2ab1a3e5e720718416aba285d92566aa472515bc983f0794f48314 not found: ID does not exist" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.869231 4793 scope.go:117] "RemoveContainer" containerID="61506b0fdf8d54ab45b280e5cf461ab69bc233211c90bc60b339eef643ebb058" Feb 17 20:18:34 crc kubenswrapper[4793]: E0217 20:18:34.869415 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61506b0fdf8d54ab45b280e5cf461ab69bc233211c90bc60b339eef643ebb058\": container with ID starting with 61506b0fdf8d54ab45b280e5cf461ab69bc233211c90bc60b339eef643ebb058 not found: ID does not exist" containerID="61506b0fdf8d54ab45b280e5cf461ab69bc233211c90bc60b339eef643ebb058" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.869448 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61506b0fdf8d54ab45b280e5cf461ab69bc233211c90bc60b339eef643ebb058"} err="failed to get container status 
\"61506b0fdf8d54ab45b280e5cf461ab69bc233211c90bc60b339eef643ebb058\": rpc error: code = NotFound desc = could not find container \"61506b0fdf8d54ab45b280e5cf461ab69bc233211c90bc60b339eef643ebb058\": container with ID starting with 61506b0fdf8d54ab45b280e5cf461ab69bc233211c90bc60b339eef643ebb058 not found: ID does not exist" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.869466 4793 scope.go:117] "RemoveContainer" containerID="32e4375a429735c20aa6093586974f36afdd51b82d65270a693d14aefe0b0ac9" Feb 17 20:18:34 crc kubenswrapper[4793]: E0217 20:18:34.869793 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32e4375a429735c20aa6093586974f36afdd51b82d65270a693d14aefe0b0ac9\": container with ID starting with 32e4375a429735c20aa6093586974f36afdd51b82d65270a693d14aefe0b0ac9 not found: ID does not exist" containerID="32e4375a429735c20aa6093586974f36afdd51b82d65270a693d14aefe0b0ac9" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.869817 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32e4375a429735c20aa6093586974f36afdd51b82d65270a693d14aefe0b0ac9"} err="failed to get container status \"32e4375a429735c20aa6093586974f36afdd51b82d65270a693d14aefe0b0ac9\": rpc error: code = NotFound desc = could not find container \"32e4375a429735c20aa6093586974f36afdd51b82d65270a693d14aefe0b0ac9\": container with ID starting with 32e4375a429735c20aa6093586974f36afdd51b82d65270a693d14aefe0b0ac9 not found: ID does not exist" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.869834 4793 scope.go:117] "RemoveContainer" containerID="3403f83a9b35ceeb1a09a45b62e25e74a10d89d309f00819598790ba6c84a0a0" Feb 17 20:18:34 crc kubenswrapper[4793]: E0217 20:18:34.870071 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"3403f83a9b35ceeb1a09a45b62e25e74a10d89d309f00819598790ba6c84a0a0\": container with ID starting with 3403f83a9b35ceeb1a09a45b62e25e74a10d89d309f00819598790ba6c84a0a0 not found: ID does not exist" containerID="3403f83a9b35ceeb1a09a45b62e25e74a10d89d309f00819598790ba6c84a0a0" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.870098 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3403f83a9b35ceeb1a09a45b62e25e74a10d89d309f00819598790ba6c84a0a0"} err="failed to get container status \"3403f83a9b35ceeb1a09a45b62e25e74a10d89d309f00819598790ba6c84a0a0\": rpc error: code = NotFound desc = could not find container \"3403f83a9b35ceeb1a09a45b62e25e74a10d89d309f00819598790ba6c84a0a0\": container with ID starting with 3403f83a9b35ceeb1a09a45b62e25e74a10d89d309f00819598790ba6c84a0a0 not found: ID does not exist" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.870115 4793 scope.go:117] "RemoveContainer" containerID="5b08b2c72ecc14b8e31ad5d7fe12851c01bf38ff98fe406dc26218d892ee9f0e" Feb 17 20:18:34 crc kubenswrapper[4793]: E0217 20:18:34.870365 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b08b2c72ecc14b8e31ad5d7fe12851c01bf38ff98fe406dc26218d892ee9f0e\": container with ID starting with 5b08b2c72ecc14b8e31ad5d7fe12851c01bf38ff98fe406dc26218d892ee9f0e not found: ID does not exist" containerID="5b08b2c72ecc14b8e31ad5d7fe12851c01bf38ff98fe406dc26218d892ee9f0e" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.870402 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b08b2c72ecc14b8e31ad5d7fe12851c01bf38ff98fe406dc26218d892ee9f0e"} err="failed to get container status \"5b08b2c72ecc14b8e31ad5d7fe12851c01bf38ff98fe406dc26218d892ee9f0e\": rpc error: code = NotFound desc = could not find container \"5b08b2c72ecc14b8e31ad5d7fe12851c01bf38ff98fe406dc26218d892ee9f0e\": container with ID 
starting with 5b08b2c72ecc14b8e31ad5d7fe12851c01bf38ff98fe406dc26218d892ee9f0e not found: ID does not exist" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.870419 4793 scope.go:117] "RemoveContainer" containerID="09885fdf835300247d4a89f7e2ded3ab350786be47130106f4cc679a5477c451" Feb 17 20:18:34 crc kubenswrapper[4793]: E0217 20:18:34.870678 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09885fdf835300247d4a89f7e2ded3ab350786be47130106f4cc679a5477c451\": container with ID starting with 09885fdf835300247d4a89f7e2ded3ab350786be47130106f4cc679a5477c451 not found: ID does not exist" containerID="09885fdf835300247d4a89f7e2ded3ab350786be47130106f4cc679a5477c451" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.870713 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09885fdf835300247d4a89f7e2ded3ab350786be47130106f4cc679a5477c451"} err="failed to get container status \"09885fdf835300247d4a89f7e2ded3ab350786be47130106f4cc679a5477c451\": rpc error: code = NotFound desc = could not find container \"09885fdf835300247d4a89f7e2ded3ab350786be47130106f4cc679a5477c451\": container with ID starting with 09885fdf835300247d4a89f7e2ded3ab350786be47130106f4cc679a5477c451 not found: ID does not exist" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.870729 4793 scope.go:117] "RemoveContainer" containerID="761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b" Feb 17 20:18:34 crc kubenswrapper[4793]: E0217 20:18:34.870955 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\": container with ID starting with 761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b not found: ID does not exist" containerID="761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b" Feb 17 
20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.870977 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b"} err="failed to get container status \"761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\": rpc error: code = NotFound desc = could not find container \"761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\": container with ID starting with 761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b not found: ID does not exist" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.870991 4793 scope.go:117] "RemoveContainer" containerID="66f4df19967e2f69adbfc187552e434026f447271484b23db724c849c4805f65" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.871236 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66f4df19967e2f69adbfc187552e434026f447271484b23db724c849c4805f65"} err="failed to get container status \"66f4df19967e2f69adbfc187552e434026f447271484b23db724c849c4805f65\": rpc error: code = NotFound desc = could not find container \"66f4df19967e2f69adbfc187552e434026f447271484b23db724c849c4805f65\": container with ID starting with 66f4df19967e2f69adbfc187552e434026f447271484b23db724c849c4805f65 not found: ID does not exist" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.871254 4793 scope.go:117] "RemoveContainer" containerID="f3161d31a16ce66a75b2928487822f0c1e627deb1564e7f792e96420ebde1ae7" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.871688 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3161d31a16ce66a75b2928487822f0c1e627deb1564e7f792e96420ebde1ae7"} err="failed to get container status \"f3161d31a16ce66a75b2928487822f0c1e627deb1564e7f792e96420ebde1ae7\": rpc error: code = NotFound desc = could not find container 
\"f3161d31a16ce66a75b2928487822f0c1e627deb1564e7f792e96420ebde1ae7\": container with ID starting with f3161d31a16ce66a75b2928487822f0c1e627deb1564e7f792e96420ebde1ae7 not found: ID does not exist" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.871738 4793 scope.go:117] "RemoveContainer" containerID="a66eb3dc7a2ab1a3e5e720718416aba285d92566aa472515bc983f0794f48314" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.872001 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a66eb3dc7a2ab1a3e5e720718416aba285d92566aa472515bc983f0794f48314"} err="failed to get container status \"a66eb3dc7a2ab1a3e5e720718416aba285d92566aa472515bc983f0794f48314\": rpc error: code = NotFound desc = could not find container \"a66eb3dc7a2ab1a3e5e720718416aba285d92566aa472515bc983f0794f48314\": container with ID starting with a66eb3dc7a2ab1a3e5e720718416aba285d92566aa472515bc983f0794f48314 not found: ID does not exist" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.872024 4793 scope.go:117] "RemoveContainer" containerID="61506b0fdf8d54ab45b280e5cf461ab69bc233211c90bc60b339eef643ebb058" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.872244 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61506b0fdf8d54ab45b280e5cf461ab69bc233211c90bc60b339eef643ebb058"} err="failed to get container status \"61506b0fdf8d54ab45b280e5cf461ab69bc233211c90bc60b339eef643ebb058\": rpc error: code = NotFound desc = could not find container \"61506b0fdf8d54ab45b280e5cf461ab69bc233211c90bc60b339eef643ebb058\": container with ID starting with 61506b0fdf8d54ab45b280e5cf461ab69bc233211c90bc60b339eef643ebb058 not found: ID does not exist" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.872266 4793 scope.go:117] "RemoveContainer" containerID="32e4375a429735c20aa6093586974f36afdd51b82d65270a693d14aefe0b0ac9" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.872626 4793 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32e4375a429735c20aa6093586974f36afdd51b82d65270a693d14aefe0b0ac9"} err="failed to get container status \"32e4375a429735c20aa6093586974f36afdd51b82d65270a693d14aefe0b0ac9\": rpc error: code = NotFound desc = could not find container \"32e4375a429735c20aa6093586974f36afdd51b82d65270a693d14aefe0b0ac9\": container with ID starting with 32e4375a429735c20aa6093586974f36afdd51b82d65270a693d14aefe0b0ac9 not found: ID does not exist" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.872646 4793 scope.go:117] "RemoveContainer" containerID="3403f83a9b35ceeb1a09a45b62e25e74a10d89d309f00819598790ba6c84a0a0" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.873077 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3403f83a9b35ceeb1a09a45b62e25e74a10d89d309f00819598790ba6c84a0a0"} err="failed to get container status \"3403f83a9b35ceeb1a09a45b62e25e74a10d89d309f00819598790ba6c84a0a0\": rpc error: code = NotFound desc = could not find container \"3403f83a9b35ceeb1a09a45b62e25e74a10d89d309f00819598790ba6c84a0a0\": container with ID starting with 3403f83a9b35ceeb1a09a45b62e25e74a10d89d309f00819598790ba6c84a0a0 not found: ID does not exist" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.873094 4793 scope.go:117] "RemoveContainer" containerID="5b08b2c72ecc14b8e31ad5d7fe12851c01bf38ff98fe406dc26218d892ee9f0e" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.873392 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b08b2c72ecc14b8e31ad5d7fe12851c01bf38ff98fe406dc26218d892ee9f0e"} err="failed to get container status \"5b08b2c72ecc14b8e31ad5d7fe12851c01bf38ff98fe406dc26218d892ee9f0e\": rpc error: code = NotFound desc = could not find container \"5b08b2c72ecc14b8e31ad5d7fe12851c01bf38ff98fe406dc26218d892ee9f0e\": container with ID starting with 
5b08b2c72ecc14b8e31ad5d7fe12851c01bf38ff98fe406dc26218d892ee9f0e not found: ID does not exist" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.873412 4793 scope.go:117] "RemoveContainer" containerID="09885fdf835300247d4a89f7e2ded3ab350786be47130106f4cc679a5477c451" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.873634 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09885fdf835300247d4a89f7e2ded3ab350786be47130106f4cc679a5477c451"} err="failed to get container status \"09885fdf835300247d4a89f7e2ded3ab350786be47130106f4cc679a5477c451\": rpc error: code = NotFound desc = could not find container \"09885fdf835300247d4a89f7e2ded3ab350786be47130106f4cc679a5477c451\": container with ID starting with 09885fdf835300247d4a89f7e2ded3ab350786be47130106f4cc679a5477c451 not found: ID does not exist" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.873653 4793 scope.go:117] "RemoveContainer" containerID="761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.874045 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b"} err="failed to get container status \"761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\": rpc error: code = NotFound desc = could not find container \"761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\": container with ID starting with 761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b not found: ID does not exist" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.874066 4793 scope.go:117] "RemoveContainer" containerID="66f4df19967e2f69adbfc187552e434026f447271484b23db724c849c4805f65" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.874470 4793 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"66f4df19967e2f69adbfc187552e434026f447271484b23db724c849c4805f65"} err="failed to get container status \"66f4df19967e2f69adbfc187552e434026f447271484b23db724c849c4805f65\": rpc error: code = NotFound desc = could not find container \"66f4df19967e2f69adbfc187552e434026f447271484b23db724c849c4805f65\": container with ID starting with 66f4df19967e2f69adbfc187552e434026f447271484b23db724c849c4805f65 not found: ID does not exist" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.874489 4793 scope.go:117] "RemoveContainer" containerID="f3161d31a16ce66a75b2928487822f0c1e627deb1564e7f792e96420ebde1ae7" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.874680 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3161d31a16ce66a75b2928487822f0c1e627deb1564e7f792e96420ebde1ae7"} err="failed to get container status \"f3161d31a16ce66a75b2928487822f0c1e627deb1564e7f792e96420ebde1ae7\": rpc error: code = NotFound desc = could not find container \"f3161d31a16ce66a75b2928487822f0c1e627deb1564e7f792e96420ebde1ae7\": container with ID starting with f3161d31a16ce66a75b2928487822f0c1e627deb1564e7f792e96420ebde1ae7 not found: ID does not exist" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.874715 4793 scope.go:117] "RemoveContainer" containerID="a66eb3dc7a2ab1a3e5e720718416aba285d92566aa472515bc983f0794f48314" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.875105 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a66eb3dc7a2ab1a3e5e720718416aba285d92566aa472515bc983f0794f48314"} err="failed to get container status \"a66eb3dc7a2ab1a3e5e720718416aba285d92566aa472515bc983f0794f48314\": rpc error: code = NotFound desc = could not find container \"a66eb3dc7a2ab1a3e5e720718416aba285d92566aa472515bc983f0794f48314\": container with ID starting with a66eb3dc7a2ab1a3e5e720718416aba285d92566aa472515bc983f0794f48314 not found: ID does not 
exist" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.875125 4793 scope.go:117] "RemoveContainer" containerID="61506b0fdf8d54ab45b280e5cf461ab69bc233211c90bc60b339eef643ebb058" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.875355 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61506b0fdf8d54ab45b280e5cf461ab69bc233211c90bc60b339eef643ebb058"} err="failed to get container status \"61506b0fdf8d54ab45b280e5cf461ab69bc233211c90bc60b339eef643ebb058\": rpc error: code = NotFound desc = could not find container \"61506b0fdf8d54ab45b280e5cf461ab69bc233211c90bc60b339eef643ebb058\": container with ID starting with 61506b0fdf8d54ab45b280e5cf461ab69bc233211c90bc60b339eef643ebb058 not found: ID does not exist" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.875373 4793 scope.go:117] "RemoveContainer" containerID="32e4375a429735c20aa6093586974f36afdd51b82d65270a693d14aefe0b0ac9" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.875576 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32e4375a429735c20aa6093586974f36afdd51b82d65270a693d14aefe0b0ac9"} err="failed to get container status \"32e4375a429735c20aa6093586974f36afdd51b82d65270a693d14aefe0b0ac9\": rpc error: code = NotFound desc = could not find container \"32e4375a429735c20aa6093586974f36afdd51b82d65270a693d14aefe0b0ac9\": container with ID starting with 32e4375a429735c20aa6093586974f36afdd51b82d65270a693d14aefe0b0ac9 not found: ID does not exist" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.875595 4793 scope.go:117] "RemoveContainer" containerID="3403f83a9b35ceeb1a09a45b62e25e74a10d89d309f00819598790ba6c84a0a0" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.875829 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3403f83a9b35ceeb1a09a45b62e25e74a10d89d309f00819598790ba6c84a0a0"} err="failed to get container status 
\"3403f83a9b35ceeb1a09a45b62e25e74a10d89d309f00819598790ba6c84a0a0\": rpc error: code = NotFound desc = could not find container \"3403f83a9b35ceeb1a09a45b62e25e74a10d89d309f00819598790ba6c84a0a0\": container with ID starting with 3403f83a9b35ceeb1a09a45b62e25e74a10d89d309f00819598790ba6c84a0a0 not found: ID does not exist" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.875847 4793 scope.go:117] "RemoveContainer" containerID="5b08b2c72ecc14b8e31ad5d7fe12851c01bf38ff98fe406dc26218d892ee9f0e" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.876040 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b08b2c72ecc14b8e31ad5d7fe12851c01bf38ff98fe406dc26218d892ee9f0e"} err="failed to get container status \"5b08b2c72ecc14b8e31ad5d7fe12851c01bf38ff98fe406dc26218d892ee9f0e\": rpc error: code = NotFound desc = could not find container \"5b08b2c72ecc14b8e31ad5d7fe12851c01bf38ff98fe406dc26218d892ee9f0e\": container with ID starting with 5b08b2c72ecc14b8e31ad5d7fe12851c01bf38ff98fe406dc26218d892ee9f0e not found: ID does not exist" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.876061 4793 scope.go:117] "RemoveContainer" containerID="09885fdf835300247d4a89f7e2ded3ab350786be47130106f4cc679a5477c451" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.877167 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09885fdf835300247d4a89f7e2ded3ab350786be47130106f4cc679a5477c451"} err="failed to get container status \"09885fdf835300247d4a89f7e2ded3ab350786be47130106f4cc679a5477c451\": rpc error: code = NotFound desc = could not find container \"09885fdf835300247d4a89f7e2ded3ab350786be47130106f4cc679a5477c451\": container with ID starting with 09885fdf835300247d4a89f7e2ded3ab350786be47130106f4cc679a5477c451 not found: ID does not exist" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.877187 4793 scope.go:117] "RemoveContainer" 
containerID="761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b" Feb 17 20:18:34 crc kubenswrapper[4793]: I0217 20:18:34.880166 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b"} err="failed to get container status \"761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\": rpc error: code = NotFound desc = could not find container \"761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b\": container with ID starting with 761392c158ecd7b6afbcad4efd90a12bf10b1279148bcaf2f51505483aff300b not found: ID does not exist" Feb 17 20:18:35 crc kubenswrapper[4793]: I0217 20:18:35.551767 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2" path="/var/lib/kubelet/pods/4642df18-d6e0-4f03-a7f6-5ab2fd4bd8c2/volumes" Feb 17 20:18:35 crc kubenswrapper[4793]: I0217 20:18:35.676503 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" event={"ID":"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41","Type":"ContainerStarted","Data":"5a1727db02e6707f8acd9c91206c91ca3c048899dba6d8a509e883c4aaa8b664"} Feb 17 20:18:35 crc kubenswrapper[4793]: I0217 20:18:35.676547 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" event={"ID":"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41","Type":"ContainerStarted","Data":"42b609a90cd03d0a132d52f98d9a4553ded637a67b2e92de0a6e53f2f1a34e61"} Feb 17 20:18:35 crc kubenswrapper[4793]: I0217 20:18:35.676562 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" event={"ID":"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41","Type":"ContainerStarted","Data":"3f1db7ada3342376b6bebc5bb826996bdfb7290060cc48c2d53fde328ce3ff37"} Feb 17 20:18:35 crc kubenswrapper[4793]: I0217 20:18:35.676570 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" event={"ID":"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41","Type":"ContainerStarted","Data":"f65fe7e2a3901ff30a5355a87d9093e7a784cb73b31e0de0a2dddd01473cdac2"} Feb 17 20:18:35 crc kubenswrapper[4793]: I0217 20:18:35.676578 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" event={"ID":"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41","Type":"ContainerStarted","Data":"7990046c55408e99277bf3d5c7a166525ac53627ba7aa871266f904f68f95afb"} Feb 17 20:18:35 crc kubenswrapper[4793]: I0217 20:18:35.676585 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" event={"ID":"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41","Type":"ContainerStarted","Data":"eb8dfdb746fc56ca30ceab12329ba915006825c9d291c433033206038eb319f6"} Feb 17 20:18:38 crc kubenswrapper[4793]: I0217 20:18:38.705499 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" event={"ID":"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41","Type":"ContainerStarted","Data":"ce55261f9b71d64378f9477fed9385c22ff89fbf6e63289967d7ae594a6eb88f"} Feb 17 20:18:40 crc kubenswrapper[4793]: I0217 20:18:40.730903 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" event={"ID":"2d7a013b-760d-4a4b-ac6a-2dd12f5bae41","Type":"ContainerStarted","Data":"6288a57adc00bc687ccae23b32d90b9b026265d9b90601e42ed9881171610cdd"} Feb 17 20:18:40 crc kubenswrapper[4793]: I0217 20:18:40.731361 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:40 crc kubenswrapper[4793]: I0217 20:18:40.731373 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:40 crc kubenswrapper[4793]: I0217 20:18:40.731383 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:40 crc kubenswrapper[4793]: I0217 20:18:40.761279 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" podStartSLOduration=7.761261363 podStartE2EDuration="7.761261363s" podCreationTimestamp="2026-02-17 20:18:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:18:40.759739615 +0000 UTC m=+596.051437926" watchObservedRunningTime="2026-02-17 20:18:40.761261363 +0000 UTC m=+596.052959684" Feb 17 20:18:40 crc kubenswrapper[4793]: I0217 20:18:40.780914 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:40 crc kubenswrapper[4793]: I0217 20:18:40.781905 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl" Feb 17 20:18:44 crc kubenswrapper[4793]: I0217 20:18:44.538077 4793 scope.go:117] "RemoveContainer" containerID="98e9297cdbc34f5f60848019a91aaf3e79513d8c064e8b32e68ac7bb740e80ad" Feb 17 20:18:44 crc kubenswrapper[4793]: E0217 20:18:44.538552 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-ztwxl_openshift-multus(b2b13cca-b775-4fc5-8ad8-41bfd70c857c)\"" pod="openshift-multus/multus-ztwxl" podUID="b2b13cca-b775-4fc5-8ad8-41bfd70c857c" Feb 17 20:18:57 crc kubenswrapper[4793]: I0217 20:18:57.539449 4793 scope.go:117] "RemoveContainer" containerID="98e9297cdbc34f5f60848019a91aaf3e79513d8c064e8b32e68ac7bb740e80ad" Feb 17 20:18:57 crc kubenswrapper[4793]: I0217 20:18:57.941368 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ztwxl_b2b13cca-b775-4fc5-8ad8-41bfd70c857c/kube-multus/2.log" Feb 17 20:18:57 crc 
kubenswrapper[4793]: I0217 20:18:57.941820 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ztwxl" event={"ID":"b2b13cca-b775-4fc5-8ad8-41bfd70c857c","Type":"ContainerStarted","Data":"e07e3f48dd0f928ed0ae5a1ca8de469283f0e898fdefbbc6ae4b89afd47a00fe"}
Feb 17 20:19:00 crc kubenswrapper[4793]: I0217 20:19:00.771074 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l57fw"]
Feb 17 20:19:00 crc kubenswrapper[4793]: I0217 20:19:00.772634 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l57fw"
Feb 17 20:19:00 crc kubenswrapper[4793]: I0217 20:19:00.775546 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 17 20:19:00 crc kubenswrapper[4793]: I0217 20:19:00.787584 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l57fw"]
Feb 17 20:19:00 crc kubenswrapper[4793]: I0217 20:19:00.889851 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c2683489-1059-4357-b2c3-832f83aae83e-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l57fw\" (UID: \"c2683489-1059-4357-b2c3-832f83aae83e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l57fw"
Feb 17 20:19:00 crc kubenswrapper[4793]: I0217 20:19:00.889996 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c2683489-1059-4357-b2c3-832f83aae83e-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l57fw\" (UID: \"c2683489-1059-4357-b2c3-832f83aae83e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l57fw"
Feb 17 20:19:00 crc kubenswrapper[4793]: I0217 20:19:00.890042 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjrr9\" (UniqueName: \"kubernetes.io/projected/c2683489-1059-4357-b2c3-832f83aae83e-kube-api-access-qjrr9\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l57fw\" (UID: \"c2683489-1059-4357-b2c3-832f83aae83e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l57fw"
Feb 17 20:19:00 crc kubenswrapper[4793]: I0217 20:19:00.991344 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c2683489-1059-4357-b2c3-832f83aae83e-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l57fw\" (UID: \"c2683489-1059-4357-b2c3-832f83aae83e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l57fw"
Feb 17 20:19:00 crc kubenswrapper[4793]: I0217 20:19:00.991432 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjrr9\" (UniqueName: \"kubernetes.io/projected/c2683489-1059-4357-b2c3-832f83aae83e-kube-api-access-qjrr9\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l57fw\" (UID: \"c2683489-1059-4357-b2c3-832f83aae83e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l57fw"
Feb 17 20:19:00 crc kubenswrapper[4793]: I0217 20:19:00.991576 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c2683489-1059-4357-b2c3-832f83aae83e-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l57fw\" (UID: \"c2683489-1059-4357-b2c3-832f83aae83e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l57fw"
Feb 17 20:19:00 crc kubenswrapper[4793]: I0217 20:19:00.992182 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c2683489-1059-4357-b2c3-832f83aae83e-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l57fw\" (UID: \"c2683489-1059-4357-b2c3-832f83aae83e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l57fw"
Feb 17 20:19:00 crc kubenswrapper[4793]: I0217 20:19:00.992298 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c2683489-1059-4357-b2c3-832f83aae83e-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l57fw\" (UID: \"c2683489-1059-4357-b2c3-832f83aae83e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l57fw"
Feb 17 20:19:01 crc kubenswrapper[4793]: I0217 20:19:01.027061 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjrr9\" (UniqueName: \"kubernetes.io/projected/c2683489-1059-4357-b2c3-832f83aae83e-kube-api-access-qjrr9\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l57fw\" (UID: \"c2683489-1059-4357-b2c3-832f83aae83e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l57fw"
Feb 17 20:19:01 crc kubenswrapper[4793]: I0217 20:19:01.092506 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l57fw"
Feb 17 20:19:01 crc kubenswrapper[4793]: I0217 20:19:01.392564 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l57fw"]
Feb 17 20:19:01 crc kubenswrapper[4793]: W0217 20:19:01.400674 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2683489_1059_4357_b2c3_832f83aae83e.slice/crio-b218126beaa80caa1ecf2fe5a0b8ecfb8d7fa8b9a39c44c021e862cfbcee3327 WatchSource:0}: Error finding container b218126beaa80caa1ecf2fe5a0b8ecfb8d7fa8b9a39c44c021e862cfbcee3327: Status 404 returned error can't find the container with id b218126beaa80caa1ecf2fe5a0b8ecfb8d7fa8b9a39c44c021e862cfbcee3327
Feb 17 20:19:01 crc kubenswrapper[4793]: I0217 20:19:01.971130 4793 generic.go:334] "Generic (PLEG): container finished" podID="c2683489-1059-4357-b2c3-832f83aae83e" containerID="567b6446f291bc8e10b0dd9f8a2643fb3ea87a016d41439810a5ae78c6584b96" exitCode=0
Feb 17 20:19:01 crc kubenswrapper[4793]: I0217 20:19:01.971197 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l57fw" event={"ID":"c2683489-1059-4357-b2c3-832f83aae83e","Type":"ContainerDied","Data":"567b6446f291bc8e10b0dd9f8a2643fb3ea87a016d41439810a5ae78c6584b96"}
Feb 17 20:19:01 crc kubenswrapper[4793]: I0217 20:19:01.971241 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l57fw" event={"ID":"c2683489-1059-4357-b2c3-832f83aae83e","Type":"ContainerStarted","Data":"b218126beaa80caa1ecf2fe5a0b8ecfb8d7fa8b9a39c44c021e862cfbcee3327"}
Feb 17 20:19:03 crc kubenswrapper[4793]: I0217 20:19:03.981642 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l57fw" event={"ID":"c2683489-1059-4357-b2c3-832f83aae83e","Type":"ContainerStarted","Data":"613fe44eb5a2cec529f9534e403ab76ee52102bbbdbf8bc6620250dfa733800b"}
Feb 17 20:19:04 crc kubenswrapper[4793]: I0217 20:19:04.239349 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4kwsl"
Feb 17 20:19:04 crc kubenswrapper[4793]: I0217 20:19:04.990596 4793 generic.go:334] "Generic (PLEG): container finished" podID="c2683489-1059-4357-b2c3-832f83aae83e" containerID="613fe44eb5a2cec529f9534e403ab76ee52102bbbdbf8bc6620250dfa733800b" exitCode=0
Feb 17 20:19:04 crc kubenswrapper[4793]: I0217 20:19:04.990673 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l57fw" event={"ID":"c2683489-1059-4357-b2c3-832f83aae83e","Type":"ContainerDied","Data":"613fe44eb5a2cec529f9534e403ab76ee52102bbbdbf8bc6620250dfa733800b"}
Feb 17 20:19:06 crc kubenswrapper[4793]: I0217 20:19:06.004027 4793 generic.go:334] "Generic (PLEG): container finished" podID="c2683489-1059-4357-b2c3-832f83aae83e" containerID="c1fceadc462acd335ddfba7029dfca74c6fb81f8cd520a8f8f6c6bf282d7b028" exitCode=0
Feb 17 20:19:06 crc kubenswrapper[4793]: I0217 20:19:06.004146 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l57fw" event={"ID":"c2683489-1059-4357-b2c3-832f83aae83e","Type":"ContainerDied","Data":"c1fceadc462acd335ddfba7029dfca74c6fb81f8cd520a8f8f6c6bf282d7b028"}
Feb 17 20:19:07 crc kubenswrapper[4793]: I0217 20:19:07.341563 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l57fw"
Feb 17 20:19:07 crc kubenswrapper[4793]: I0217 20:19:07.475402 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c2683489-1059-4357-b2c3-832f83aae83e-util\") pod \"c2683489-1059-4357-b2c3-832f83aae83e\" (UID: \"c2683489-1059-4357-b2c3-832f83aae83e\") "
Feb 17 20:19:07 crc kubenswrapper[4793]: I0217 20:19:07.475533 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjrr9\" (UniqueName: \"kubernetes.io/projected/c2683489-1059-4357-b2c3-832f83aae83e-kube-api-access-qjrr9\") pod \"c2683489-1059-4357-b2c3-832f83aae83e\" (UID: \"c2683489-1059-4357-b2c3-832f83aae83e\") "
Feb 17 20:19:07 crc kubenswrapper[4793]: I0217 20:19:07.475621 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c2683489-1059-4357-b2c3-832f83aae83e-bundle\") pod \"c2683489-1059-4357-b2c3-832f83aae83e\" (UID: \"c2683489-1059-4357-b2c3-832f83aae83e\") "
Feb 17 20:19:07 crc kubenswrapper[4793]: I0217 20:19:07.480083 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2683489-1059-4357-b2c3-832f83aae83e-bundle" (OuterVolumeSpecName: "bundle") pod "c2683489-1059-4357-b2c3-832f83aae83e" (UID: "c2683489-1059-4357-b2c3-832f83aae83e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 20:19:07 crc kubenswrapper[4793]: I0217 20:19:07.484245 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2683489-1059-4357-b2c3-832f83aae83e-kube-api-access-qjrr9" (OuterVolumeSpecName: "kube-api-access-qjrr9") pod "c2683489-1059-4357-b2c3-832f83aae83e" (UID: "c2683489-1059-4357-b2c3-832f83aae83e"). InnerVolumeSpecName "kube-api-access-qjrr9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 20:19:07 crc kubenswrapper[4793]: I0217 20:19:07.492587 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2683489-1059-4357-b2c3-832f83aae83e-util" (OuterVolumeSpecName: "util") pod "c2683489-1059-4357-b2c3-832f83aae83e" (UID: "c2683489-1059-4357-b2c3-832f83aae83e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 20:19:07 crc kubenswrapper[4793]: I0217 20:19:07.579191 4793 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c2683489-1059-4357-b2c3-832f83aae83e-util\") on node \"crc\" DevicePath \"\""
Feb 17 20:19:07 crc kubenswrapper[4793]: I0217 20:19:07.579265 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjrr9\" (UniqueName: \"kubernetes.io/projected/c2683489-1059-4357-b2c3-832f83aae83e-kube-api-access-qjrr9\") on node \"crc\" DevicePath \"\""
Feb 17 20:19:07 crc kubenswrapper[4793]: I0217 20:19:07.579312 4793 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c2683489-1059-4357-b2c3-832f83aae83e-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 20:19:08 crc kubenswrapper[4793]: I0217 20:19:08.020715 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l57fw" event={"ID":"c2683489-1059-4357-b2c3-832f83aae83e","Type":"ContainerDied","Data":"b218126beaa80caa1ecf2fe5a0b8ecfb8d7fa8b9a39c44c021e862cfbcee3327"}
Feb 17 20:19:08 crc kubenswrapper[4793]: I0217 20:19:08.020753 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b218126beaa80caa1ecf2fe5a0b8ecfb8d7fa8b9a39c44c021e862cfbcee3327"
Feb 17 20:19:08 crc kubenswrapper[4793]: I0217 20:19:08.020794 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l57fw"
Feb 17 20:19:18 crc kubenswrapper[4793]: I0217 20:19:18.536863 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-stbxd"]
Feb 17 20:19:18 crc kubenswrapper[4793]: E0217 20:19:18.537751 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2683489-1059-4357-b2c3-832f83aae83e" containerName="pull"
Feb 17 20:19:18 crc kubenswrapper[4793]: I0217 20:19:18.537766 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2683489-1059-4357-b2c3-832f83aae83e" containerName="pull"
Feb 17 20:19:18 crc kubenswrapper[4793]: E0217 20:19:18.537784 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2683489-1059-4357-b2c3-832f83aae83e" containerName="util"
Feb 17 20:19:18 crc kubenswrapper[4793]: I0217 20:19:18.537791 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2683489-1059-4357-b2c3-832f83aae83e" containerName="util"
Feb 17 20:19:18 crc kubenswrapper[4793]: E0217 20:19:18.537817 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2683489-1059-4357-b2c3-832f83aae83e" containerName="extract"
Feb 17 20:19:18 crc kubenswrapper[4793]: I0217 20:19:18.537825 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2683489-1059-4357-b2c3-832f83aae83e" containerName="extract"
Feb 17 20:19:18 crc kubenswrapper[4793]: I0217 20:19:18.537943 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2683489-1059-4357-b2c3-832f83aae83e" containerName="extract"
Feb 17 20:19:18 crc kubenswrapper[4793]: I0217 20:19:18.538577 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-stbxd"
Feb 17 20:19:18 crc kubenswrapper[4793]: I0217 20:19:18.541894 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt"
Feb 17 20:19:18 crc kubenswrapper[4793]: I0217 20:19:18.542114 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt"
Feb 17 20:19:18 crc kubenswrapper[4793]: I0217 20:19:18.544995 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-l7nwc"
Feb 17 20:19:18 crc kubenswrapper[4793]: I0217 20:19:18.550746 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-stbxd"]
Feb 17 20:19:18 crc kubenswrapper[4793]: I0217 20:19:18.654524 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5c849c4c7f-l8dpt"]
Feb 17 20:19:18 crc kubenswrapper[4793]: I0217 20:19:18.655434 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c849c4c7f-l8dpt"
Feb 17 20:19:18 crc kubenswrapper[4793]: I0217 20:19:18.657739 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-zlrr7"
Feb 17 20:19:18 crc kubenswrapper[4793]: I0217 20:19:18.657970 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert"
Feb 17 20:19:18 crc kubenswrapper[4793]: I0217 20:19:18.661236 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5c849c4c7f-sd2r4"]
Feb 17 20:19:18 crc kubenswrapper[4793]: I0217 20:19:18.661881 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c849c4c7f-sd2r4"
Feb 17 20:19:18 crc kubenswrapper[4793]: I0217 20:19:18.671139 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5c849c4c7f-l8dpt"]
Feb 17 20:19:18 crc kubenswrapper[4793]: I0217 20:19:18.677901 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5c849c4c7f-sd2r4"]
Feb 17 20:19:18 crc kubenswrapper[4793]: I0217 20:19:18.714045 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm5xm\" (UniqueName: \"kubernetes.io/projected/ee3dbdac-0635-42f6-909e-3ff0ca3f48f7-kube-api-access-xm5xm\") pod \"obo-prometheus-operator-68bc856cb9-stbxd\" (UID: \"ee3dbdac-0635-42f6-909e-3ff0ca3f48f7\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-stbxd"
Feb 17 20:19:18 crc kubenswrapper[4793]: I0217 20:19:18.775979 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-zbtjj"]
Feb 17 20:19:18 crc kubenswrapper[4793]: I0217 20:19:18.776936 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-zbtjj"
Feb 17 20:19:18 crc kubenswrapper[4793]: I0217 20:19:18.778943 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-4kqv2"
Feb 17 20:19:18 crc kubenswrapper[4793]: I0217 20:19:18.778961 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls"
Feb 17 20:19:18 crc kubenswrapper[4793]: I0217 20:19:18.799885 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-zbtjj"]
Feb 17 20:19:18 crc kubenswrapper[4793]: I0217 20:19:18.815318 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ae7faf0a-a2b4-431b-8a63-4b890b5f5c73-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5c849c4c7f-l8dpt\" (UID: \"ae7faf0a-a2b4-431b-8a63-4b890b5f5c73\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c849c4c7f-l8dpt"
Feb 17 20:19:18 crc kubenswrapper[4793]: I0217 20:19:18.815567 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xm5xm\" (UniqueName: \"kubernetes.io/projected/ee3dbdac-0635-42f6-909e-3ff0ca3f48f7-kube-api-access-xm5xm\") pod \"obo-prometheus-operator-68bc856cb9-stbxd\" (UID: \"ee3dbdac-0635-42f6-909e-3ff0ca3f48f7\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-stbxd"
Feb 17 20:19:18 crc kubenswrapper[4793]: I0217 20:19:18.815709 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1321bdf1-43e6-45ce-8d74-332b6d81c908-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5c849c4c7f-sd2r4\" (UID: \"1321bdf1-43e6-45ce-8d74-332b6d81c908\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c849c4c7f-sd2r4"
Feb 17 20:19:18 crc kubenswrapper[4793]: I0217 20:19:18.815826 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ae7faf0a-a2b4-431b-8a63-4b890b5f5c73-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5c849c4c7f-l8dpt\" (UID: \"ae7faf0a-a2b4-431b-8a63-4b890b5f5c73\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c849c4c7f-l8dpt"
Feb 17 20:19:18 crc kubenswrapper[4793]: I0217 20:19:18.815949 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1321bdf1-43e6-45ce-8d74-332b6d81c908-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5c849c4c7f-sd2r4\" (UID: \"1321bdf1-43e6-45ce-8d74-332b6d81c908\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c849c4c7f-sd2r4"
Feb 17 20:19:18 crc kubenswrapper[4793]: I0217 20:19:18.853793 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xm5xm\" (UniqueName: \"kubernetes.io/projected/ee3dbdac-0635-42f6-909e-3ff0ca3f48f7-kube-api-access-xm5xm\") pod \"obo-prometheus-operator-68bc856cb9-stbxd\" (UID: \"ee3dbdac-0635-42f6-909e-3ff0ca3f48f7\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-stbxd"
Feb 17 20:19:18 crc kubenswrapper[4793]: I0217 20:19:18.854047 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-stbxd"
Feb 17 20:19:18 crc kubenswrapper[4793]: I0217 20:19:18.917917 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ae7faf0a-a2b4-431b-8a63-4b890b5f5c73-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5c849c4c7f-l8dpt\" (UID: \"ae7faf0a-a2b4-431b-8a63-4b890b5f5c73\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c849c4c7f-l8dpt"
Feb 17 20:19:18 crc kubenswrapper[4793]: I0217 20:19:18.918501 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/a2358cc0-c59d-4feb-9682-b6dbfc729cd8-observability-operator-tls\") pod \"observability-operator-59bdc8b94-zbtjj\" (UID: \"a2358cc0-c59d-4feb-9682-b6dbfc729cd8\") " pod="openshift-operators/observability-operator-59bdc8b94-zbtjj"
Feb 17 20:19:18 crc kubenswrapper[4793]: I0217 20:19:18.918602 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1321bdf1-43e6-45ce-8d74-332b6d81c908-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5c849c4c7f-sd2r4\" (UID: \"1321bdf1-43e6-45ce-8d74-332b6d81c908\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c849c4c7f-sd2r4"
Feb 17 20:19:18 crc kubenswrapper[4793]: I0217 20:19:18.918635 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d74xr\" (UniqueName: \"kubernetes.io/projected/a2358cc0-c59d-4feb-9682-b6dbfc729cd8-kube-api-access-d74xr\") pod \"observability-operator-59bdc8b94-zbtjj\" (UID: \"a2358cc0-c59d-4feb-9682-b6dbfc729cd8\") " pod="openshift-operators/observability-operator-59bdc8b94-zbtjj"
Feb 17 20:19:18 crc kubenswrapper[4793]: I0217 20:19:18.918714 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ae7faf0a-a2b4-431b-8a63-4b890b5f5c73-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5c849c4c7f-l8dpt\" (UID: \"ae7faf0a-a2b4-431b-8a63-4b890b5f5c73\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c849c4c7f-l8dpt"
Feb 17 20:19:18 crc kubenswrapper[4793]: I0217 20:19:18.918745 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1321bdf1-43e6-45ce-8d74-332b6d81c908-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5c849c4c7f-sd2r4\" (UID: \"1321bdf1-43e6-45ce-8d74-332b6d81c908\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c849c4c7f-sd2r4"
Feb 17 20:19:18 crc kubenswrapper[4793]: I0217 20:19:18.922006 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ae7faf0a-a2b4-431b-8a63-4b890b5f5c73-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5c849c4c7f-l8dpt\" (UID: \"ae7faf0a-a2b4-431b-8a63-4b890b5f5c73\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c849c4c7f-l8dpt"
Feb 17 20:19:18 crc kubenswrapper[4793]: I0217 20:19:18.922042 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1321bdf1-43e6-45ce-8d74-332b6d81c908-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5c849c4c7f-sd2r4\" (UID: \"1321bdf1-43e6-45ce-8d74-332b6d81c908\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c849c4c7f-sd2r4"
Feb 17 20:19:18 crc kubenswrapper[4793]: I0217 20:19:18.924164 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ae7faf0a-a2b4-431b-8a63-4b890b5f5c73-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5c849c4c7f-l8dpt\" (UID: \"ae7faf0a-a2b4-431b-8a63-4b890b5f5c73\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c849c4c7f-l8dpt"
Feb 17 20:19:18 crc kubenswrapper[4793]: I0217 20:19:18.929373 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1321bdf1-43e6-45ce-8d74-332b6d81c908-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5c849c4c7f-sd2r4\" (UID: \"1321bdf1-43e6-45ce-8d74-332b6d81c908\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c849c4c7f-sd2r4"
Feb 17 20:19:18 crc kubenswrapper[4793]: I0217 20:19:18.970704 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c849c4c7f-l8dpt"
Feb 17 20:19:18 crc kubenswrapper[4793]: I0217 20:19:18.979629 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c849c4c7f-sd2r4"
Feb 17 20:19:18 crc kubenswrapper[4793]: I0217 20:19:18.989210 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-jqgbz"]
Feb 17 20:19:18 crc kubenswrapper[4793]: I0217 20:19:18.990356 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-jqgbz"
Feb 17 20:19:19 crc kubenswrapper[4793]: I0217 20:19:19.001962 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-bzvb9"
Feb 17 20:19:19 crc kubenswrapper[4793]: I0217 20:19:19.017079 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-jqgbz"]
Feb 17 20:19:19 crc kubenswrapper[4793]: I0217 20:19:19.019789 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d74xr\" (UniqueName: \"kubernetes.io/projected/a2358cc0-c59d-4feb-9682-b6dbfc729cd8-kube-api-access-d74xr\") pod \"observability-operator-59bdc8b94-zbtjj\" (UID: \"a2358cc0-c59d-4feb-9682-b6dbfc729cd8\") " pod="openshift-operators/observability-operator-59bdc8b94-zbtjj"
Feb 17 20:19:19 crc kubenswrapper[4793]: I0217 20:19:19.019849 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/a2358cc0-c59d-4feb-9682-b6dbfc729cd8-observability-operator-tls\") pod \"observability-operator-59bdc8b94-zbtjj\" (UID: \"a2358cc0-c59d-4feb-9682-b6dbfc729cd8\") " pod="openshift-operators/observability-operator-59bdc8b94-zbtjj"
Feb 17 20:19:19 crc kubenswrapper[4793]: I0217 20:19:19.025876 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/a2358cc0-c59d-4feb-9682-b6dbfc729cd8-observability-operator-tls\") pod \"observability-operator-59bdc8b94-zbtjj\" (UID: \"a2358cc0-c59d-4feb-9682-b6dbfc729cd8\") " pod="openshift-operators/observability-operator-59bdc8b94-zbtjj"
Feb 17 20:19:19 crc kubenswrapper[4793]: I0217 20:19:19.053159 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d74xr\" (UniqueName: \"kubernetes.io/projected/a2358cc0-c59d-4feb-9682-b6dbfc729cd8-kube-api-access-d74xr\") pod \"observability-operator-59bdc8b94-zbtjj\" (UID: \"a2358cc0-c59d-4feb-9682-b6dbfc729cd8\") " pod="openshift-operators/observability-operator-59bdc8b94-zbtjj"
Feb 17 20:19:19 crc kubenswrapper[4793]: I0217 20:19:19.094920 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-zbtjj"
Feb 17 20:19:19 crc kubenswrapper[4793]: I0217 20:19:19.122283 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/b19f5d08-b87f-4168-b29b-b28619987367-openshift-service-ca\") pod \"perses-operator-5bf474d74f-jqgbz\" (UID: \"b19f5d08-b87f-4168-b29b-b28619987367\") " pod="openshift-operators/perses-operator-5bf474d74f-jqgbz"
Feb 17 20:19:19 crc kubenswrapper[4793]: I0217 20:19:19.123062 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbjmk\" (UniqueName: \"kubernetes.io/projected/b19f5d08-b87f-4168-b29b-b28619987367-kube-api-access-tbjmk\") pod \"perses-operator-5bf474d74f-jqgbz\" (UID: \"b19f5d08-b87f-4168-b29b-b28619987367\") " pod="openshift-operators/perses-operator-5bf474d74f-jqgbz"
Feb 17 20:19:19 crc kubenswrapper[4793]: I0217 20:19:19.153094 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-stbxd"]
Feb 17 20:19:19 crc kubenswrapper[4793]: I0217 20:19:19.224235 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbjmk\" (UniqueName: \"kubernetes.io/projected/b19f5d08-b87f-4168-b29b-b28619987367-kube-api-access-tbjmk\") pod \"perses-operator-5bf474d74f-jqgbz\" (UID: \"b19f5d08-b87f-4168-b29b-b28619987367\") " pod="openshift-operators/perses-operator-5bf474d74f-jqgbz"
Feb 17 20:19:19 crc kubenswrapper[4793]: I0217 20:19:19.224313 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/b19f5d08-b87f-4168-b29b-b28619987367-openshift-service-ca\") pod \"perses-operator-5bf474d74f-jqgbz\" (UID: \"b19f5d08-b87f-4168-b29b-b28619987367\") " pod="openshift-operators/perses-operator-5bf474d74f-jqgbz"
Feb 17 20:19:19 crc kubenswrapper[4793]: I0217 20:19:19.225318 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/b19f5d08-b87f-4168-b29b-b28619987367-openshift-service-ca\") pod \"perses-operator-5bf474d74f-jqgbz\" (UID: \"b19f5d08-b87f-4168-b29b-b28619987367\") " pod="openshift-operators/perses-operator-5bf474d74f-jqgbz"
Feb 17 20:19:19 crc kubenswrapper[4793]: I0217 20:19:19.244562 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbjmk\" (UniqueName: \"kubernetes.io/projected/b19f5d08-b87f-4168-b29b-b28619987367-kube-api-access-tbjmk\") pod \"perses-operator-5bf474d74f-jqgbz\" (UID: \"b19f5d08-b87f-4168-b29b-b28619987367\") " pod="openshift-operators/perses-operator-5bf474d74f-jqgbz"
Feb 17 20:19:19 crc kubenswrapper[4793]: I0217 20:19:19.261906 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5c849c4c7f-l8dpt"]
Feb 17 20:19:19 crc kubenswrapper[4793]: I0217 20:19:19.319888 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-jqgbz"
Feb 17 20:19:19 crc kubenswrapper[4793]: I0217 20:19:19.321359 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5c849c4c7f-sd2r4"]
Feb 17 20:19:19 crc kubenswrapper[4793]: W0217 20:19:19.325099 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1321bdf1_43e6_45ce_8d74_332b6d81c908.slice/crio-b6f32ef46e9a557aaac0cf000d05176be7ff916499668255411d74de43f7ef39 WatchSource:0}: Error finding container b6f32ef46e9a557aaac0cf000d05176be7ff916499668255411d74de43f7ef39: Status 404 returned error can't find the container with id b6f32ef46e9a557aaac0cf000d05176be7ff916499668255411d74de43f7ef39
Feb 17 20:19:19 crc kubenswrapper[4793]: I0217 20:19:19.596344 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-zbtjj"]
Feb 17 20:19:19 crc kubenswrapper[4793]: W0217 20:19:19.605931 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2358cc0_c59d_4feb_9682_b6dbfc729cd8.slice/crio-ac9dd03384589ca06e80272cc8b9d0fae6ed51a1a92588f3b14381244eacdae6 WatchSource:0}: Error finding container ac9dd03384589ca06e80272cc8b9d0fae6ed51a1a92588f3b14381244eacdae6: Status 404 returned error can't find the container with id ac9dd03384589ca06e80272cc8b9d0fae6ed51a1a92588f3b14381244eacdae6
Feb 17 20:19:19 crc kubenswrapper[4793]: I0217 20:19:19.769888 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-jqgbz"]
Feb 17 20:19:19 crc kubenswrapper[4793]: W0217 20:19:19.779349 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb19f5d08_b87f_4168_b29b_b28619987367.slice/crio-d776ac9ad55e09da8869c91dbf52f9fdec449f64c2193a3595c9ebae20917ce9 WatchSource:0}: Error finding container d776ac9ad55e09da8869c91dbf52f9fdec449f64c2193a3595c9ebae20917ce9: Status 404 returned error can't find the container with id d776ac9ad55e09da8869c91dbf52f9fdec449f64c2193a3595c9ebae20917ce9
Feb 17 20:19:20 crc kubenswrapper[4793]: I0217 20:19:20.093159 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-stbxd" event={"ID":"ee3dbdac-0635-42f6-909e-3ff0ca3f48f7","Type":"ContainerStarted","Data":"526ce7e769153a7633aa327f94c8d042ce81da5084bc1bbd06ce570f3c1e3945"}
Feb 17 20:19:20 crc kubenswrapper[4793]: I0217 20:19:20.095132 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-jqgbz" event={"ID":"b19f5d08-b87f-4168-b29b-b28619987367","Type":"ContainerStarted","Data":"d776ac9ad55e09da8869c91dbf52f9fdec449f64c2193a3595c9ebae20917ce9"}
Feb 17 20:19:20 crc kubenswrapper[4793]: I0217 20:19:20.096342 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-zbtjj" event={"ID":"a2358cc0-c59d-4feb-9682-b6dbfc729cd8","Type":"ContainerStarted","Data":"ac9dd03384589ca06e80272cc8b9d0fae6ed51a1a92588f3b14381244eacdae6"}
Feb 17 20:19:20 crc kubenswrapper[4793]: I0217 20:19:20.097572 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c849c4c7f-sd2r4" event={"ID":"1321bdf1-43e6-45ce-8d74-332b6d81c908","Type":"ContainerStarted","Data":"b6f32ef46e9a557aaac0cf000d05176be7ff916499668255411d74de43f7ef39"}
Feb 17 20:19:20 crc kubenswrapper[4793]: I0217 20:19:20.099041 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c849c4c7f-l8dpt" event={"ID":"ae7faf0a-a2b4-431b-8a63-4b890b5f5c73","Type":"ContainerStarted","Data":"b1778d9666a8bf420cbd3663f4ccc3c359e22e1414b78dade52e77cf359e6485"}
Feb 17 20:19:29 crc kubenswrapper[4793]: I0217 20:19:29.167320 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c849c4c7f-sd2r4" event={"ID":"1321bdf1-43e6-45ce-8d74-332b6d81c908","Type":"ContainerStarted","Data":"18fcf92f61c497f686d78d357e39b25f71223febfbe078f840105fb95604d61c"}
Feb 17 20:19:29 crc kubenswrapper[4793]: I0217 20:19:29.168828 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c849c4c7f-l8dpt" event={"ID":"ae7faf0a-a2b4-431b-8a63-4b890b5f5c73","Type":"ContainerStarted","Data":"aa7255ebf47c231e11131b81c709aa5e5e56e7ff54c1c92736a2daa310cc53be"}
Feb 17 20:19:29 crc kubenswrapper[4793]: I0217 20:19:29.170383 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-stbxd" event={"ID":"ee3dbdac-0635-42f6-909e-3ff0ca3f48f7","Type":"ContainerStarted","Data":"e60eaf6b7295bb0fbaf4c0497edb97e8252b9071cbb6ceea67c08f72f0681573"}
Feb 17 20:19:29 crc kubenswrapper[4793]: I0217 20:19:29.171971 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-jqgbz" event={"ID":"b19f5d08-b87f-4168-b29b-b28619987367","Type":"ContainerStarted","Data":"ca1aea76ee57a9f497748e3929bf1c6a4a3dfab59a6022334808732e08ea9749"}
Feb 17 20:19:29 crc kubenswrapper[4793]: I0217 20:19:29.172091 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-jqgbz"
Feb 17 20:19:29 crc kubenswrapper[4793]: I0217 20:19:29.173877 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-zbtjj" event={"ID":"a2358cc0-c59d-4feb-9682-b6dbfc729cd8","Type":"ContainerStarted","Data":"8f88e5047a66a019f2fe52e04d78ea60d8da2aa088549a82f2435e2f667654e3"}
Feb 17 20:19:29 crc kubenswrapper[4793]: I0217 20:19:29.174082 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-zbtjj"
Feb 17 20:19:29 crc kubenswrapper[4793]: I0217 20:19:29.176357 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-zbtjj"
Feb 17 20:19:29 crc kubenswrapper[4793]: I0217 20:19:29.195025 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c849c4c7f-sd2r4" podStartSLOduration=2.160954792 podStartE2EDuration="11.194997983s" podCreationTimestamp="2026-02-17 20:19:18 +0000 UTC" firstStartedPulling="2026-02-17 20:19:19.336381925 +0000 UTC m=+634.628080236" lastFinishedPulling="2026-02-17 20:19:28.370425116 +0000 UTC m=+643.662123427" observedRunningTime="2026-02-17 20:19:29.189171259 +0000 UTC m=+644.480869590" watchObservedRunningTime="2026-02-17 20:19:29.194997983 +0000 UTC m=+644.486696304"
Feb 17 20:19:29 crc kubenswrapper[4793]: I0217 20:19:29.227728 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-jqgbz" podStartSLOduration=2.638080962 podStartE2EDuration="11.227710803s" podCreationTimestamp="2026-02-17 20:19:18 +0000 UTC" firstStartedPulling="2026-02-17 20:19:19.781711377 +0000 UTC m=+635.073409688" lastFinishedPulling="2026-02-17 20:19:28.371341218 +0000 UTC m=+643.663039529" observedRunningTime="2026-02-17 20:19:29.224254228 +0000 UTC m=+644.515952549" watchObservedRunningTime="2026-02-17 20:19:29.227710803 +0000 UTC m=+644.519409114"
Feb 17 20:19:29 crc kubenswrapper[4793]: I0217 20:19:29.256028 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration"
pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c849c4c7f-l8dpt" podStartSLOduration=2.081509413 podStartE2EDuration="11.256007954s" podCreationTimestamp="2026-02-17 20:19:18 +0000 UTC" firstStartedPulling="2026-02-17 20:19:19.274396049 +0000 UTC m=+634.566094360" lastFinishedPulling="2026-02-17 20:19:28.4488946 +0000 UTC m=+643.740592901" observedRunningTime="2026-02-17 20:19:29.250356884 +0000 UTC m=+644.542055195" watchObservedRunningTime="2026-02-17 20:19:29.256007954 +0000 UTC m=+644.547706275" Feb 17 20:19:29 crc kubenswrapper[4793]: I0217 20:19:29.281083 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-zbtjj" podStartSLOduration=2.423702061 podStartE2EDuration="11.281068665s" podCreationTimestamp="2026-02-17 20:19:18 +0000 UTC" firstStartedPulling="2026-02-17 20:19:19.60825406 +0000 UTC m=+634.899952371" lastFinishedPulling="2026-02-17 20:19:28.465620664 +0000 UTC m=+643.757318975" observedRunningTime="2026-02-17 20:19:29.278780259 +0000 UTC m=+644.570478580" watchObservedRunningTime="2026-02-17 20:19:29.281068665 +0000 UTC m=+644.572766976" Feb 17 20:19:29 crc kubenswrapper[4793]: I0217 20:19:29.295162 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-stbxd" podStartSLOduration=2.109004205 podStartE2EDuration="11.295143384s" podCreationTimestamp="2026-02-17 20:19:18 +0000 UTC" firstStartedPulling="2026-02-17 20:19:19.18440378 +0000 UTC m=+634.476102091" lastFinishedPulling="2026-02-17 20:19:28.370542959 +0000 UTC m=+643.662241270" observedRunningTime="2026-02-17 20:19:29.293525744 +0000 UTC m=+644.585224065" watchObservedRunningTime="2026-02-17 20:19:29.295143384 +0000 UTC m=+644.586841705" Feb 17 20:19:39 crc kubenswrapper[4793]: I0217 20:19:39.322858 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operators/perses-operator-5bf474d74f-jqgbz" Feb 17 20:19:55 crc kubenswrapper[4793]: I0217 20:19:55.738154 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca558pk"] Feb 17 20:19:55 crc kubenswrapper[4793]: I0217 20:19:55.740561 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca558pk" Feb 17 20:19:55 crc kubenswrapper[4793]: I0217 20:19:55.743416 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 17 20:19:55 crc kubenswrapper[4793]: I0217 20:19:55.758390 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca558pk"] Feb 17 20:19:55 crc kubenswrapper[4793]: I0217 20:19:55.831849 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7sxz\" (UniqueName: \"kubernetes.io/projected/2db28c86-e6a9-42bd-a454-848e8460fd0c-kube-api-access-z7sxz\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca558pk\" (UID: \"2db28c86-e6a9-42bd-a454-848e8460fd0c\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca558pk" Feb 17 20:19:55 crc kubenswrapper[4793]: I0217 20:19:55.831902 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2db28c86-e6a9-42bd-a454-848e8460fd0c-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca558pk\" (UID: \"2db28c86-e6a9-42bd-a454-848e8460fd0c\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca558pk" Feb 17 20:19:55 crc kubenswrapper[4793]: I0217 20:19:55.831927 4793 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2db28c86-e6a9-42bd-a454-848e8460fd0c-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca558pk\" (UID: \"2db28c86-e6a9-42bd-a454-848e8460fd0c\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca558pk" Feb 17 20:19:55 crc kubenswrapper[4793]: I0217 20:19:55.940567 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7sxz\" (UniqueName: \"kubernetes.io/projected/2db28c86-e6a9-42bd-a454-848e8460fd0c-kube-api-access-z7sxz\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca558pk\" (UID: \"2db28c86-e6a9-42bd-a454-848e8460fd0c\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca558pk" Feb 17 20:19:55 crc kubenswrapper[4793]: I0217 20:19:55.940629 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2db28c86-e6a9-42bd-a454-848e8460fd0c-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca558pk\" (UID: \"2db28c86-e6a9-42bd-a454-848e8460fd0c\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca558pk" Feb 17 20:19:55 crc kubenswrapper[4793]: I0217 20:19:55.940662 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2db28c86-e6a9-42bd-a454-848e8460fd0c-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca558pk\" (UID: \"2db28c86-e6a9-42bd-a454-848e8460fd0c\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca558pk" Feb 17 20:19:55 crc kubenswrapper[4793]: I0217 20:19:55.941440 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/2db28c86-e6a9-42bd-a454-848e8460fd0c-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca558pk\" (UID: \"2db28c86-e6a9-42bd-a454-848e8460fd0c\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca558pk" Feb 17 20:19:55 crc kubenswrapper[4793]: I0217 20:19:55.941440 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2db28c86-e6a9-42bd-a454-848e8460fd0c-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca558pk\" (UID: \"2db28c86-e6a9-42bd-a454-848e8460fd0c\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca558pk" Feb 17 20:19:55 crc kubenswrapper[4793]: I0217 20:19:55.972117 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7sxz\" (UniqueName: \"kubernetes.io/projected/2db28c86-e6a9-42bd-a454-848e8460fd0c-kube-api-access-z7sxz\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca558pk\" (UID: \"2db28c86-e6a9-42bd-a454-848e8460fd0c\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca558pk" Feb 17 20:19:56 crc kubenswrapper[4793]: I0217 20:19:56.063837 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca558pk" Feb 17 20:19:56 crc kubenswrapper[4793]: I0217 20:19:56.253262 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca558pk"] Feb 17 20:19:56 crc kubenswrapper[4793]: I0217 20:19:56.319025 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca558pk" event={"ID":"2db28c86-e6a9-42bd-a454-848e8460fd0c","Type":"ContainerStarted","Data":"ddbc28b58b1f0d86e4ca95f1009451b21160dfed34e33bf3396719a9f30a07b9"} Feb 17 20:19:57 crc kubenswrapper[4793]: I0217 20:19:57.328645 4793 generic.go:334] "Generic (PLEG): container finished" podID="2db28c86-e6a9-42bd-a454-848e8460fd0c" containerID="3c20cd084cee290813f039e2a67a33012ce0c320ce72858e1fa35fc729b20d3a" exitCode=0 Feb 17 20:19:57 crc kubenswrapper[4793]: I0217 20:19:57.328713 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca558pk" event={"ID":"2db28c86-e6a9-42bd-a454-848e8460fd0c","Type":"ContainerDied","Data":"3c20cd084cee290813f039e2a67a33012ce0c320ce72858e1fa35fc729b20d3a"} Feb 17 20:19:59 crc kubenswrapper[4793]: I0217 20:19:59.342075 4793 generic.go:334] "Generic (PLEG): container finished" podID="2db28c86-e6a9-42bd-a454-848e8460fd0c" containerID="cb3d2c0080c4ed7d4e0aabd923bb21f7561eb11e5867908aaaaa603217abf26a" exitCode=0 Feb 17 20:19:59 crc kubenswrapper[4793]: I0217 20:19:59.342147 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca558pk" event={"ID":"2db28c86-e6a9-42bd-a454-848e8460fd0c","Type":"ContainerDied","Data":"cb3d2c0080c4ed7d4e0aabd923bb21f7561eb11e5867908aaaaa603217abf26a"} Feb 17 20:20:00 crc kubenswrapper[4793]: I0217 20:20:00.351372 4793 
generic.go:334] "Generic (PLEG): container finished" podID="2db28c86-e6a9-42bd-a454-848e8460fd0c" containerID="5bae25c8350a5553221162e476fc43f88b7051ce80d6d5075d8b71beaa483865" exitCode=0 Feb 17 20:20:00 crc kubenswrapper[4793]: I0217 20:20:00.351449 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca558pk" event={"ID":"2db28c86-e6a9-42bd-a454-848e8460fd0c","Type":"ContainerDied","Data":"5bae25c8350a5553221162e476fc43f88b7051ce80d6d5075d8b71beaa483865"} Feb 17 20:20:01 crc kubenswrapper[4793]: I0217 20:20:01.666278 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca558pk" Feb 17 20:20:01 crc kubenswrapper[4793]: I0217 20:20:01.813177 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2db28c86-e6a9-42bd-a454-848e8460fd0c-util\") pod \"2db28c86-e6a9-42bd-a454-848e8460fd0c\" (UID: \"2db28c86-e6a9-42bd-a454-848e8460fd0c\") " Feb 17 20:20:01 crc kubenswrapper[4793]: I0217 20:20:01.813288 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7sxz\" (UniqueName: \"kubernetes.io/projected/2db28c86-e6a9-42bd-a454-848e8460fd0c-kube-api-access-z7sxz\") pod \"2db28c86-e6a9-42bd-a454-848e8460fd0c\" (UID: \"2db28c86-e6a9-42bd-a454-848e8460fd0c\") " Feb 17 20:20:01 crc kubenswrapper[4793]: I0217 20:20:01.813320 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2db28c86-e6a9-42bd-a454-848e8460fd0c-bundle\") pod \"2db28c86-e6a9-42bd-a454-848e8460fd0c\" (UID: \"2db28c86-e6a9-42bd-a454-848e8460fd0c\") " Feb 17 20:20:01 crc kubenswrapper[4793]: I0217 20:20:01.814127 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/2db28c86-e6a9-42bd-a454-848e8460fd0c-bundle" (OuterVolumeSpecName: "bundle") pod "2db28c86-e6a9-42bd-a454-848e8460fd0c" (UID: "2db28c86-e6a9-42bd-a454-848e8460fd0c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:20:01 crc kubenswrapper[4793]: I0217 20:20:01.820554 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2db28c86-e6a9-42bd-a454-848e8460fd0c-kube-api-access-z7sxz" (OuterVolumeSpecName: "kube-api-access-z7sxz") pod "2db28c86-e6a9-42bd-a454-848e8460fd0c" (UID: "2db28c86-e6a9-42bd-a454-848e8460fd0c"). InnerVolumeSpecName "kube-api-access-z7sxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:20:01 crc kubenswrapper[4793]: I0217 20:20:01.915280 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7sxz\" (UniqueName: \"kubernetes.io/projected/2db28c86-e6a9-42bd-a454-848e8460fd0c-kube-api-access-z7sxz\") on node \"crc\" DevicePath \"\"" Feb 17 20:20:01 crc kubenswrapper[4793]: I0217 20:20:01.915319 4793 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2db28c86-e6a9-42bd-a454-848e8460fd0c-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 20:20:02 crc kubenswrapper[4793]: I0217 20:20:02.099609 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2db28c86-e6a9-42bd-a454-848e8460fd0c-util" (OuterVolumeSpecName: "util") pod "2db28c86-e6a9-42bd-a454-848e8460fd0c" (UID: "2db28c86-e6a9-42bd-a454-848e8460fd0c"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:20:02 crc kubenswrapper[4793]: I0217 20:20:02.118325 4793 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2db28c86-e6a9-42bd-a454-848e8460fd0c-util\") on node \"crc\" DevicePath \"\"" Feb 17 20:20:02 crc kubenswrapper[4793]: I0217 20:20:02.369452 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca558pk" event={"ID":"2db28c86-e6a9-42bd-a454-848e8460fd0c","Type":"ContainerDied","Data":"ddbc28b58b1f0d86e4ca95f1009451b21160dfed34e33bf3396719a9f30a07b9"} Feb 17 20:20:02 crc kubenswrapper[4793]: I0217 20:20:02.369502 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddbc28b58b1f0d86e4ca95f1009451b21160dfed34e33bf3396719a9f30a07b9" Feb 17 20:20:02 crc kubenswrapper[4793]: I0217 20:20:02.369564 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca558pk" Feb 17 20:20:07 crc kubenswrapper[4793]: I0217 20:20:07.320280 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-7v452"] Feb 17 20:20:07 crc kubenswrapper[4793]: E0217 20:20:07.320532 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2db28c86-e6a9-42bd-a454-848e8460fd0c" containerName="util" Feb 17 20:20:07 crc kubenswrapper[4793]: I0217 20:20:07.320547 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="2db28c86-e6a9-42bd-a454-848e8460fd0c" containerName="util" Feb 17 20:20:07 crc kubenswrapper[4793]: E0217 20:20:07.320559 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2db28c86-e6a9-42bd-a454-848e8460fd0c" containerName="extract" Feb 17 20:20:07 crc kubenswrapper[4793]: I0217 20:20:07.320565 4793 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2db28c86-e6a9-42bd-a454-848e8460fd0c" containerName="extract" Feb 17 20:20:07 crc kubenswrapper[4793]: E0217 20:20:07.320587 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2db28c86-e6a9-42bd-a454-848e8460fd0c" containerName="pull" Feb 17 20:20:07 crc kubenswrapper[4793]: I0217 20:20:07.320596 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="2db28c86-e6a9-42bd-a454-848e8460fd0c" containerName="pull" Feb 17 20:20:07 crc kubenswrapper[4793]: I0217 20:20:07.320724 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="2db28c86-e6a9-42bd-a454-848e8460fd0c" containerName="extract" Feb 17 20:20:07 crc kubenswrapper[4793]: I0217 20:20:07.321160 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-7v452" Feb 17 20:20:07 crc kubenswrapper[4793]: I0217 20:20:07.324815 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 17 20:20:07 crc kubenswrapper[4793]: I0217 20:20:07.324809 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 17 20:20:07 crc kubenswrapper[4793]: I0217 20:20:07.335062 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-lj2v5" Feb 17 20:20:07 crc kubenswrapper[4793]: I0217 20:20:07.340336 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-7v452"] Feb 17 20:20:07 crc kubenswrapper[4793]: I0217 20:20:07.384330 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pl5sm\" (UniqueName: \"kubernetes.io/projected/43cb2714-f3fe-442d-8a64-37f7c77bdb3b-kube-api-access-pl5sm\") pod \"nmstate-operator-694c9596b7-7v452\" (UID: \"43cb2714-f3fe-442d-8a64-37f7c77bdb3b\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-7v452" Feb 17 
20:20:07 crc kubenswrapper[4793]: I0217 20:20:07.485336 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pl5sm\" (UniqueName: \"kubernetes.io/projected/43cb2714-f3fe-442d-8a64-37f7c77bdb3b-kube-api-access-pl5sm\") pod \"nmstate-operator-694c9596b7-7v452\" (UID: \"43cb2714-f3fe-442d-8a64-37f7c77bdb3b\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-7v452" Feb 17 20:20:07 crc kubenswrapper[4793]: I0217 20:20:07.508553 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pl5sm\" (UniqueName: \"kubernetes.io/projected/43cb2714-f3fe-442d-8a64-37f7c77bdb3b-kube-api-access-pl5sm\") pod \"nmstate-operator-694c9596b7-7v452\" (UID: \"43cb2714-f3fe-442d-8a64-37f7c77bdb3b\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-7v452" Feb 17 20:20:07 crc kubenswrapper[4793]: I0217 20:20:07.648532 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-7v452" Feb 17 20:20:07 crc kubenswrapper[4793]: I0217 20:20:07.862271 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-7v452"] Feb 17 20:20:08 crc kubenswrapper[4793]: I0217 20:20:08.402133 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-7v452" event={"ID":"43cb2714-f3fe-442d-8a64-37f7c77bdb3b","Type":"ContainerStarted","Data":"3692b77f25e193bb2b3b713df97ca7d8dd43435fe1de25b3a339641bfc234854"} Feb 17 20:20:10 crc kubenswrapper[4793]: I0217 20:20:10.420133 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-7v452" event={"ID":"43cb2714-f3fe-442d-8a64-37f7c77bdb3b","Type":"ContainerStarted","Data":"3df055f9e7157c80a3891f9e50274510fc27cd86719a11312f1272a36c543e66"} Feb 17 20:20:10 crc kubenswrapper[4793]: I0217 20:20:10.443007 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-nmstate/nmstate-operator-694c9596b7-7v452" podStartSLOduration=1.492257795 podStartE2EDuration="3.442982963s" podCreationTimestamp="2026-02-17 20:20:07 +0000 UTC" firstStartedPulling="2026-02-17 20:20:07.873314963 +0000 UTC m=+683.165013274" lastFinishedPulling="2026-02-17 20:20:09.824040121 +0000 UTC m=+685.115738442" observedRunningTime="2026-02-17 20:20:10.43834277 +0000 UTC m=+685.730041121" watchObservedRunningTime="2026-02-17 20:20:10.442982963 +0000 UTC m=+685.734681314" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.017145 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-84b4b"] Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.019580 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-84b4b" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.022320 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-qfmb7" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.027315 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-84b4b"] Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.032924 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzbn9\" (UniqueName: \"kubernetes.io/projected/3b8a3433-034d-44e7-bb2d-347120fd762a-kube-api-access-hzbn9\") pod \"nmstate-metrics-58c85c668d-84b4b\" (UID: \"3b8a3433-034d-44e7-bb2d-347120fd762a\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-84b4b" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.052970 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-smppl"] Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.053750 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-smppl" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.056140 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.069565 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-pjstr"] Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.070497 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-pjstr" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.086445 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-smppl"] Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.140590 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/6316cf0c-93bc-4219-9db0-0e81b81b8add-nmstate-lock\") pod \"nmstate-handler-pjstr\" (UID: \"6316cf0c-93bc-4219-9db0-0e81b81b8add\") " pod="openshift-nmstate/nmstate-handler-pjstr" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.140651 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h7hn\" (UniqueName: \"kubernetes.io/projected/2db5cb32-7490-4b5f-b6c8-11db2f5c7d04-kube-api-access-6h7hn\") pod \"nmstate-webhook-866bcb46dc-smppl\" (UID: \"2db5cb32-7490-4b5f-b6c8-11db2f5c7d04\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-smppl" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.140715 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/6316cf0c-93bc-4219-9db0-0e81b81b8add-dbus-socket\") pod \"nmstate-handler-pjstr\" (UID: \"6316cf0c-93bc-4219-9db0-0e81b81b8add\") " 
pod="openshift-nmstate/nmstate-handler-pjstr" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.140799 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wswn2\" (UniqueName: \"kubernetes.io/projected/6316cf0c-93bc-4219-9db0-0e81b81b8add-kube-api-access-wswn2\") pod \"nmstate-handler-pjstr\" (UID: \"6316cf0c-93bc-4219-9db0-0e81b81b8add\") " pod="openshift-nmstate/nmstate-handler-pjstr" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.140845 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzbn9\" (UniqueName: \"kubernetes.io/projected/3b8a3433-034d-44e7-bb2d-347120fd762a-kube-api-access-hzbn9\") pod \"nmstate-metrics-58c85c668d-84b4b\" (UID: \"3b8a3433-034d-44e7-bb2d-347120fd762a\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-84b4b" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.140943 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/6316cf0c-93bc-4219-9db0-0e81b81b8add-ovs-socket\") pod \"nmstate-handler-pjstr\" (UID: \"6316cf0c-93bc-4219-9db0-0e81b81b8add\") " pod="openshift-nmstate/nmstate-handler-pjstr" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.141004 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/2db5cb32-7490-4b5f-b6c8-11db2f5c7d04-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-smppl\" (UID: \"2db5cb32-7490-4b5f-b6c8-11db2f5c7d04\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-smppl" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.166210 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-n9dsm"] Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.166938 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-n9dsm" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.179276 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.179447 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.179588 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-sqmst" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.184903 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzbn9\" (UniqueName: \"kubernetes.io/projected/3b8a3433-034d-44e7-bb2d-347120fd762a-kube-api-access-hzbn9\") pod \"nmstate-metrics-58c85c668d-84b4b\" (UID: \"3b8a3433-034d-44e7-bb2d-347120fd762a\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-84b4b" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.191291 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-n9dsm"] Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.244041 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfg6f\" (UniqueName: \"kubernetes.io/projected/790088e3-c7ca-480c-883d-c3b9e2b4c8c8-kube-api-access-pfg6f\") pod \"nmstate-console-plugin-5c78fc5d65-n9dsm\" (UID: \"790088e3-c7ca-480c-883d-c3b9e2b4c8c8\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-n9dsm" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.244101 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/6316cf0c-93bc-4219-9db0-0e81b81b8add-nmstate-lock\") pod \"nmstate-handler-pjstr\" (UID: \"6316cf0c-93bc-4219-9db0-0e81b81b8add\") " 
pod="openshift-nmstate/nmstate-handler-pjstr" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.244132 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6h7hn\" (UniqueName: \"kubernetes.io/projected/2db5cb32-7490-4b5f-b6c8-11db2f5c7d04-kube-api-access-6h7hn\") pod \"nmstate-webhook-866bcb46dc-smppl\" (UID: \"2db5cb32-7490-4b5f-b6c8-11db2f5c7d04\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-smppl" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.244177 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/6316cf0c-93bc-4219-9db0-0e81b81b8add-dbus-socket\") pod \"nmstate-handler-pjstr\" (UID: \"6316cf0c-93bc-4219-9db0-0e81b81b8add\") " pod="openshift-nmstate/nmstate-handler-pjstr" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.244198 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wswn2\" (UniqueName: \"kubernetes.io/projected/6316cf0c-93bc-4219-9db0-0e81b81b8add-kube-api-access-wswn2\") pod \"nmstate-handler-pjstr\" (UID: \"6316cf0c-93bc-4219-9db0-0e81b81b8add\") " pod="openshift-nmstate/nmstate-handler-pjstr" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.244222 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/790088e3-c7ca-480c-883d-c3b9e2b4c8c8-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-n9dsm\" (UID: \"790088e3-c7ca-480c-883d-c3b9e2b4c8c8\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-n9dsm" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.244253 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/6316cf0c-93bc-4219-9db0-0e81b81b8add-ovs-socket\") pod \"nmstate-handler-pjstr\" (UID: \"6316cf0c-93bc-4219-9db0-0e81b81b8add\") " 
pod="openshift-nmstate/nmstate-handler-pjstr" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.244304 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/2db5cb32-7490-4b5f-b6c8-11db2f5c7d04-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-smppl\" (UID: \"2db5cb32-7490-4b5f-b6c8-11db2f5c7d04\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-smppl" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.244335 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/790088e3-c7ca-480c-883d-c3b9e2b4c8c8-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-n9dsm\" (UID: \"790088e3-c7ca-480c-883d-c3b9e2b4c8c8\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-n9dsm" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.244430 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/6316cf0c-93bc-4219-9db0-0e81b81b8add-nmstate-lock\") pod \"nmstate-handler-pjstr\" (UID: \"6316cf0c-93bc-4219-9db0-0e81b81b8add\") " pod="openshift-nmstate/nmstate-handler-pjstr" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.244923 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/6316cf0c-93bc-4219-9db0-0e81b81b8add-dbus-socket\") pod \"nmstate-handler-pjstr\" (UID: \"6316cf0c-93bc-4219-9db0-0e81b81b8add\") " pod="openshift-nmstate/nmstate-handler-pjstr" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.245111 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/6316cf0c-93bc-4219-9db0-0e81b81b8add-ovs-socket\") pod \"nmstate-handler-pjstr\" (UID: \"6316cf0c-93bc-4219-9db0-0e81b81b8add\") " pod="openshift-nmstate/nmstate-handler-pjstr" Feb 
17 20:20:17 crc kubenswrapper[4793]: E0217 20:20:17.245184 4793 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 17 20:20:17 crc kubenswrapper[4793]: E0217 20:20:17.245233 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2db5cb32-7490-4b5f-b6c8-11db2f5c7d04-tls-key-pair podName:2db5cb32-7490-4b5f-b6c8-11db2f5c7d04 nodeName:}" failed. No retries permitted until 2026-02-17 20:20:17.745216969 +0000 UTC m=+693.036915280 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/2db5cb32-7490-4b5f-b6c8-11db2f5c7d04-tls-key-pair") pod "nmstate-webhook-866bcb46dc-smppl" (UID: "2db5cb32-7490-4b5f-b6c8-11db2f5c7d04") : secret "openshift-nmstate-webhook" not found Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.259376 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wswn2\" (UniqueName: \"kubernetes.io/projected/6316cf0c-93bc-4219-9db0-0e81b81b8add-kube-api-access-wswn2\") pod \"nmstate-handler-pjstr\" (UID: \"6316cf0c-93bc-4219-9db0-0e81b81b8add\") " pod="openshift-nmstate/nmstate-handler-pjstr" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.262508 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6h7hn\" (UniqueName: \"kubernetes.io/projected/2db5cb32-7490-4b5f-b6c8-11db2f5c7d04-kube-api-access-6h7hn\") pod \"nmstate-webhook-866bcb46dc-smppl\" (UID: \"2db5cb32-7490-4b5f-b6c8-11db2f5c7d04\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-smppl" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.344793 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-84b4b" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.345308 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/790088e3-c7ca-480c-883d-c3b9e2b4c8c8-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-n9dsm\" (UID: \"790088e3-c7ca-480c-883d-c3b9e2b4c8c8\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-n9dsm" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.345376 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfg6f\" (UniqueName: \"kubernetes.io/projected/790088e3-c7ca-480c-883d-c3b9e2b4c8c8-kube-api-access-pfg6f\") pod \"nmstate-console-plugin-5c78fc5d65-n9dsm\" (UID: \"790088e3-c7ca-480c-883d-c3b9e2b4c8c8\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-n9dsm" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.345432 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/790088e3-c7ca-480c-883d-c3b9e2b4c8c8-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-n9dsm\" (UID: \"790088e3-c7ca-480c-883d-c3b9e2b4c8c8\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-n9dsm" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.346323 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/790088e3-c7ca-480c-883d-c3b9e2b4c8c8-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-n9dsm\" (UID: \"790088e3-c7ca-480c-883d-c3b9e2b4c8c8\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-n9dsm" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.360128 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/790088e3-c7ca-480c-883d-c3b9e2b4c8c8-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-n9dsm\" (UID: \"790088e3-c7ca-480c-883d-c3b9e2b4c8c8\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-n9dsm" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.366810 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5dfdb86557-khhpp"] Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.367543 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5dfdb86557-khhpp" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.378725 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5dfdb86557-khhpp"] Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.386481 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfg6f\" (UniqueName: \"kubernetes.io/projected/790088e3-c7ca-480c-883d-c3b9e2b4c8c8-kube-api-access-pfg6f\") pod \"nmstate-console-plugin-5c78fc5d65-n9dsm\" (UID: \"790088e3-c7ca-480c-883d-c3b9e2b4c8c8\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-n9dsm" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.389282 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-pjstr" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.446449 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0e2a3b53-57d0-459c-9dc0-f00f58bfdaad-service-ca\") pod \"console-5dfdb86557-khhpp\" (UID: \"0e2a3b53-57d0-459c-9dc0-f00f58bfdaad\") " pod="openshift-console/console-5dfdb86557-khhpp" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.446489 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e2a3b53-57d0-459c-9dc0-f00f58bfdaad-trusted-ca-bundle\") pod \"console-5dfdb86557-khhpp\" (UID: \"0e2a3b53-57d0-459c-9dc0-f00f58bfdaad\") " pod="openshift-console/console-5dfdb86557-khhpp" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.446511 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0e2a3b53-57d0-459c-9dc0-f00f58bfdaad-console-config\") pod \"console-5dfdb86557-khhpp\" (UID: \"0e2a3b53-57d0-459c-9dc0-f00f58bfdaad\") " pod="openshift-console/console-5dfdb86557-khhpp" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.446576 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56v9s\" (UniqueName: \"kubernetes.io/projected/0e2a3b53-57d0-459c-9dc0-f00f58bfdaad-kube-api-access-56v9s\") pod \"console-5dfdb86557-khhpp\" (UID: \"0e2a3b53-57d0-459c-9dc0-f00f58bfdaad\") " pod="openshift-console/console-5dfdb86557-khhpp" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.446703 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0e2a3b53-57d0-459c-9dc0-f00f58bfdaad-console-oauth-config\") 
pod \"console-5dfdb86557-khhpp\" (UID: \"0e2a3b53-57d0-459c-9dc0-f00f58bfdaad\") " pod="openshift-console/console-5dfdb86557-khhpp" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.446781 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0e2a3b53-57d0-459c-9dc0-f00f58bfdaad-console-serving-cert\") pod \"console-5dfdb86557-khhpp\" (UID: \"0e2a3b53-57d0-459c-9dc0-f00f58bfdaad\") " pod="openshift-console/console-5dfdb86557-khhpp" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.446836 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0e2a3b53-57d0-459c-9dc0-f00f58bfdaad-oauth-serving-cert\") pod \"console-5dfdb86557-khhpp\" (UID: \"0e2a3b53-57d0-459c-9dc0-f00f58bfdaad\") " pod="openshift-console/console-5dfdb86557-khhpp" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.468987 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-pjstr" event={"ID":"6316cf0c-93bc-4219-9db0-0e81b81b8add","Type":"ContainerStarted","Data":"3a73cf7657c14ee64ca09c04e8d01e812bcd55b956cfa101f56d1e5a2ab312ac"} Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.499159 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-n9dsm" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.548495 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56v9s\" (UniqueName: \"kubernetes.io/projected/0e2a3b53-57d0-459c-9dc0-f00f58bfdaad-kube-api-access-56v9s\") pod \"console-5dfdb86557-khhpp\" (UID: \"0e2a3b53-57d0-459c-9dc0-f00f58bfdaad\") " pod="openshift-console/console-5dfdb86557-khhpp" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.548545 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0e2a3b53-57d0-459c-9dc0-f00f58bfdaad-console-oauth-config\") pod \"console-5dfdb86557-khhpp\" (UID: \"0e2a3b53-57d0-459c-9dc0-f00f58bfdaad\") " pod="openshift-console/console-5dfdb86557-khhpp" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.548570 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0e2a3b53-57d0-459c-9dc0-f00f58bfdaad-console-serving-cert\") pod \"console-5dfdb86557-khhpp\" (UID: \"0e2a3b53-57d0-459c-9dc0-f00f58bfdaad\") " pod="openshift-console/console-5dfdb86557-khhpp" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.548595 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0e2a3b53-57d0-459c-9dc0-f00f58bfdaad-oauth-serving-cert\") pod \"console-5dfdb86557-khhpp\" (UID: \"0e2a3b53-57d0-459c-9dc0-f00f58bfdaad\") " pod="openshift-console/console-5dfdb86557-khhpp" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.548639 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0e2a3b53-57d0-459c-9dc0-f00f58bfdaad-service-ca\") pod \"console-5dfdb86557-khhpp\" (UID: 
\"0e2a3b53-57d0-459c-9dc0-f00f58bfdaad\") " pod="openshift-console/console-5dfdb86557-khhpp" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.548659 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e2a3b53-57d0-459c-9dc0-f00f58bfdaad-trusted-ca-bundle\") pod \"console-5dfdb86557-khhpp\" (UID: \"0e2a3b53-57d0-459c-9dc0-f00f58bfdaad\") " pod="openshift-console/console-5dfdb86557-khhpp" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.548675 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0e2a3b53-57d0-459c-9dc0-f00f58bfdaad-console-config\") pod \"console-5dfdb86557-khhpp\" (UID: \"0e2a3b53-57d0-459c-9dc0-f00f58bfdaad\") " pod="openshift-console/console-5dfdb86557-khhpp" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.551083 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0e2a3b53-57d0-459c-9dc0-f00f58bfdaad-console-config\") pod \"console-5dfdb86557-khhpp\" (UID: \"0e2a3b53-57d0-459c-9dc0-f00f58bfdaad\") " pod="openshift-console/console-5dfdb86557-khhpp" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.553502 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0e2a3b53-57d0-459c-9dc0-f00f58bfdaad-service-ca\") pod \"console-5dfdb86557-khhpp\" (UID: \"0e2a3b53-57d0-459c-9dc0-f00f58bfdaad\") " pod="openshift-console/console-5dfdb86557-khhpp" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.554992 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e2a3b53-57d0-459c-9dc0-f00f58bfdaad-trusted-ca-bundle\") pod \"console-5dfdb86557-khhpp\" (UID: \"0e2a3b53-57d0-459c-9dc0-f00f58bfdaad\") " 
pod="openshift-console/console-5dfdb86557-khhpp" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.555200 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0e2a3b53-57d0-459c-9dc0-f00f58bfdaad-oauth-serving-cert\") pod \"console-5dfdb86557-khhpp\" (UID: \"0e2a3b53-57d0-459c-9dc0-f00f58bfdaad\") " pod="openshift-console/console-5dfdb86557-khhpp" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.556467 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0e2a3b53-57d0-459c-9dc0-f00f58bfdaad-console-serving-cert\") pod \"console-5dfdb86557-khhpp\" (UID: \"0e2a3b53-57d0-459c-9dc0-f00f58bfdaad\") " pod="openshift-console/console-5dfdb86557-khhpp" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.561149 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0e2a3b53-57d0-459c-9dc0-f00f58bfdaad-console-oauth-config\") pod \"console-5dfdb86557-khhpp\" (UID: \"0e2a3b53-57d0-459c-9dc0-f00f58bfdaad\") " pod="openshift-console/console-5dfdb86557-khhpp" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.568584 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56v9s\" (UniqueName: \"kubernetes.io/projected/0e2a3b53-57d0-459c-9dc0-f00f58bfdaad-kube-api-access-56v9s\") pod \"console-5dfdb86557-khhpp\" (UID: \"0e2a3b53-57d0-459c-9dc0-f00f58bfdaad\") " pod="openshift-console/console-5dfdb86557-khhpp" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.697975 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-n9dsm"] Feb 17 20:20:17 crc kubenswrapper[4793]: W0217 20:20:17.703012 4793 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod790088e3_c7ca_480c_883d_c3b9e2b4c8c8.slice/crio-c4481e5cd58e48e6936d2aa6a4efbb4ed206d8151616c379c48fe9385b867bd8 WatchSource:0}: Error finding container c4481e5cd58e48e6936d2aa6a4efbb4ed206d8151616c379c48fe9385b867bd8: Status 404 returned error can't find the container with id c4481e5cd58e48e6936d2aa6a4efbb4ed206d8151616c379c48fe9385b867bd8 Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.728645 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5dfdb86557-khhpp" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.751280 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/2db5cb32-7490-4b5f-b6c8-11db2f5c7d04-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-smppl\" (UID: \"2db5cb32-7490-4b5f-b6c8-11db2f5c7d04\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-smppl" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.759300 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/2db5cb32-7490-4b5f-b6c8-11db2f5c7d04-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-smppl\" (UID: \"2db5cb32-7490-4b5f-b6c8-11db2f5c7d04\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-smppl" Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.786587 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-84b4b"] Feb 17 20:20:17 crc kubenswrapper[4793]: W0217 20:20:17.797441 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b8a3433_034d_44e7_bb2d_347120fd762a.slice/crio-72ac05c714661094c0fd92abf9ac6027dbea2b70c59a0130afcf03e68bc4cb43 WatchSource:0}: Error finding container 72ac05c714661094c0fd92abf9ac6027dbea2b70c59a0130afcf03e68bc4cb43: Status 404 returned 
error can't find the container with id 72ac05c714661094c0fd92abf9ac6027dbea2b70c59a0130afcf03e68bc4cb43 Feb 17 20:20:17 crc kubenswrapper[4793]: I0217 20:20:17.967982 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-smppl" Feb 17 20:20:18 crc kubenswrapper[4793]: I0217 20:20:18.132760 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5dfdb86557-khhpp"] Feb 17 20:20:18 crc kubenswrapper[4793]: W0217 20:20:18.154742 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e2a3b53_57d0_459c_9dc0_f00f58bfdaad.slice/crio-bebabd0fd0cf25aede9a1f727f4486ee6aad47ba49ad6f53cb5b21c8d0382599 WatchSource:0}: Error finding container bebabd0fd0cf25aede9a1f727f4486ee6aad47ba49ad6f53cb5b21c8d0382599: Status 404 returned error can't find the container with id bebabd0fd0cf25aede9a1f727f4486ee6aad47ba49ad6f53cb5b21c8d0382599 Feb 17 20:20:18 crc kubenswrapper[4793]: I0217 20:20:18.174474 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-smppl"] Feb 17 20:20:18 crc kubenswrapper[4793]: W0217 20:20:18.183413 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2db5cb32_7490_4b5f_b6c8_11db2f5c7d04.slice/crio-9c98d813d345513f90808a425a512e747933b65ea1e35c43bd5046303c56b03c WatchSource:0}: Error finding container 9c98d813d345513f90808a425a512e747933b65ea1e35c43bd5046303c56b03c: Status 404 returned error can't find the container with id 9c98d813d345513f90808a425a512e747933b65ea1e35c43bd5046303c56b03c Feb 17 20:20:18 crc kubenswrapper[4793]: I0217 20:20:18.477529 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-n9dsm" 
event={"ID":"790088e3-c7ca-480c-883d-c3b9e2b4c8c8","Type":"ContainerStarted","Data":"c4481e5cd58e48e6936d2aa6a4efbb4ed206d8151616c379c48fe9385b867bd8"} Feb 17 20:20:18 crc kubenswrapper[4793]: I0217 20:20:18.478858 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-84b4b" event={"ID":"3b8a3433-034d-44e7-bb2d-347120fd762a","Type":"ContainerStarted","Data":"72ac05c714661094c0fd92abf9ac6027dbea2b70c59a0130afcf03e68bc4cb43"} Feb 17 20:20:18 crc kubenswrapper[4793]: I0217 20:20:18.479959 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-smppl" event={"ID":"2db5cb32-7490-4b5f-b6c8-11db2f5c7d04","Type":"ContainerStarted","Data":"9c98d813d345513f90808a425a512e747933b65ea1e35c43bd5046303c56b03c"} Feb 17 20:20:18 crc kubenswrapper[4793]: I0217 20:20:18.481730 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5dfdb86557-khhpp" event={"ID":"0e2a3b53-57d0-459c-9dc0-f00f58bfdaad","Type":"ContainerStarted","Data":"a34aec797ad8967a58100be5626eef8771a38b4b5a8b26eaec3e556a6fec1871"} Feb 17 20:20:18 crc kubenswrapper[4793]: I0217 20:20:18.481775 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5dfdb86557-khhpp" event={"ID":"0e2a3b53-57d0-459c-9dc0-f00f58bfdaad","Type":"ContainerStarted","Data":"bebabd0fd0cf25aede9a1f727f4486ee6aad47ba49ad6f53cb5b21c8d0382599"} Feb 17 20:20:20 crc kubenswrapper[4793]: I0217 20:20:20.102076 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 20:20:20 crc kubenswrapper[4793]: I0217 20:20:20.102476 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" 
podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 20:20:20 crc kubenswrapper[4793]: I0217 20:20:20.501126 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-84b4b" event={"ID":"3b8a3433-034d-44e7-bb2d-347120fd762a","Type":"ContainerStarted","Data":"9303aa6a85a7a9778b6a211b9121a5ebe0d5d49cedf152645629303501c59208"} Feb 17 20:20:20 crc kubenswrapper[4793]: I0217 20:20:20.504147 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-smppl" event={"ID":"2db5cb32-7490-4b5f-b6c8-11db2f5c7d04","Type":"ContainerStarted","Data":"ddb56ba57a5a5bc388869353c688bf81ebcb539c8ee063d68c354df30fa6e303"} Feb 17 20:20:20 crc kubenswrapper[4793]: I0217 20:20:20.504331 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-smppl" Feb 17 20:20:20 crc kubenswrapper[4793]: I0217 20:20:20.525109 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-smppl" podStartSLOduration=1.4092888399999999 podStartE2EDuration="3.525087816s" podCreationTimestamp="2026-02-17 20:20:17 +0000 UTC" firstStartedPulling="2026-02-17 20:20:18.186024317 +0000 UTC m=+693.477722628" lastFinishedPulling="2026-02-17 20:20:20.301766212 +0000 UTC m=+695.593521604" observedRunningTime="2026-02-17 20:20:20.520729149 +0000 UTC m=+695.812427460" watchObservedRunningTime="2026-02-17 20:20:20.525087816 +0000 UTC m=+695.816786127" Feb 17 20:20:20 crc kubenswrapper[4793]: I0217 20:20:20.525916 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5dfdb86557-khhpp" podStartSLOduration=3.525909936 podStartE2EDuration="3.525909936s" podCreationTimestamp="2026-02-17 20:20:17 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:20:18.503234718 +0000 UTC m=+693.794933079" watchObservedRunningTime="2026-02-17 20:20:20.525909936 +0000 UTC m=+695.817608247" Feb 17 20:20:21 crc kubenswrapper[4793]: I0217 20:20:21.513718 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-n9dsm" event={"ID":"790088e3-c7ca-480c-883d-c3b9e2b4c8c8","Type":"ContainerStarted","Data":"e24dad113d03151955afd63c16c912b5ea7c93173b49bb3ca903a4deaa76d454"} Feb 17 20:20:21 crc kubenswrapper[4793]: I0217 20:20:21.515999 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-pjstr" event={"ID":"6316cf0c-93bc-4219-9db0-0e81b81b8add","Type":"ContainerStarted","Data":"4339db852e6df83acc7670f59de95330041dd237f275b090a52f8ec792cd7f08"} Feb 17 20:20:21 crc kubenswrapper[4793]: I0217 20:20:21.516255 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-pjstr" Feb 17 20:20:21 crc kubenswrapper[4793]: I0217 20:20:21.530722 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-n9dsm" podStartSLOduration=0.989299164 podStartE2EDuration="4.530703659s" podCreationTimestamp="2026-02-17 20:20:17 +0000 UTC" firstStartedPulling="2026-02-17 20:20:17.704519446 +0000 UTC m=+692.996217757" lastFinishedPulling="2026-02-17 20:20:21.245923941 +0000 UTC m=+696.537622252" observedRunningTime="2026-02-17 20:20:21.52788167 +0000 UTC m=+696.819579991" watchObservedRunningTime="2026-02-17 20:20:21.530703659 +0000 UTC m=+696.822401970" Feb 17 20:20:23 crc kubenswrapper[4793]: I0217 20:20:23.533433 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-84b4b" 
event={"ID":"3b8a3433-034d-44e7-bb2d-347120fd762a","Type":"ContainerStarted","Data":"3e4239d05cdef6e108ddeaa3a26662edbb8059358146c0064548ec9e329c733b"} Feb 17 20:20:23 crc kubenswrapper[4793]: I0217 20:20:23.555116 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-pjstr" podStartSLOduration=3.7223709339999997 podStartE2EDuration="6.55508923s" podCreationTimestamp="2026-02-17 20:20:17 +0000 UTC" firstStartedPulling="2026-02-17 20:20:17.415992928 +0000 UTC m=+692.707691229" lastFinishedPulling="2026-02-17 20:20:20.248711214 +0000 UTC m=+695.540409525" observedRunningTime="2026-02-17 20:20:21.551731063 +0000 UTC m=+696.843429364" watchObservedRunningTime="2026-02-17 20:20:23.55508923 +0000 UTC m=+698.846787561" Feb 17 20:20:23 crc kubenswrapper[4793]: I0217 20:20:23.558831 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-84b4b" podStartSLOduration=2.987264751 podStartE2EDuration="7.558806271s" podCreationTimestamp="2026-02-17 20:20:16 +0000 UTC" firstStartedPulling="2026-02-17 20:20:17.799453619 +0000 UTC m=+693.091151920" lastFinishedPulling="2026-02-17 20:20:22.370995129 +0000 UTC m=+697.662693440" observedRunningTime="2026-02-17 20:20:23.552972608 +0000 UTC m=+698.844670959" watchObservedRunningTime="2026-02-17 20:20:23.558806271 +0000 UTC m=+698.850504592" Feb 17 20:20:27 crc kubenswrapper[4793]: I0217 20:20:27.427036 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-pjstr" Feb 17 20:20:27 crc kubenswrapper[4793]: I0217 20:20:27.729737 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5dfdb86557-khhpp" Feb 17 20:20:27 crc kubenswrapper[4793]: I0217 20:20:27.729965 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5dfdb86557-khhpp" Feb 17 20:20:27 crc 
kubenswrapper[4793]: I0217 20:20:27.735851 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5dfdb86557-khhpp" Feb 17 20:20:28 crc kubenswrapper[4793]: I0217 20:20:28.576953 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5dfdb86557-khhpp" Feb 17 20:20:28 crc kubenswrapper[4793]: I0217 20:20:28.658767 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-br8vj"] Feb 17 20:20:37 crc kubenswrapper[4793]: I0217 20:20:37.978659 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-smppl" Feb 17 20:20:50 crc kubenswrapper[4793]: I0217 20:20:50.102276 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 20:20:50 crc kubenswrapper[4793]: I0217 20:20:50.103286 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 20:20:53 crc kubenswrapper[4793]: I0217 20:20:53.708056 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-br8vj" podUID="4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c" containerName="console" containerID="cri-o://329cb1efb2dc0b613b420bc0c18994a58200e3c9f4839c70829125dabbceef6b" gracePeriod=15 Feb 17 20:20:53 crc kubenswrapper[4793]: I0217 20:20:53.964372 4793 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213wfllp"]
Feb 17 20:20:53 crc kubenswrapper[4793]: I0217 20:20:53.965886 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213wfllp"
Feb 17 20:20:53 crc kubenswrapper[4793]: I0217 20:20:53.971596 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 17 20:20:53 crc kubenswrapper[4793]: I0217 20:20:53.980981 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213wfllp"]
Feb 17 20:20:54 crc kubenswrapper[4793]: I0217 20:20:54.080577 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/68641a11-d218-43ed-8060-f78590d33051-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213wfllp\" (UID: \"68641a11-d218-43ed-8060-f78590d33051\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213wfllp"
Feb 17 20:20:54 crc kubenswrapper[4793]: I0217 20:20:54.080844 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfcbt\" (UniqueName: \"kubernetes.io/projected/68641a11-d218-43ed-8060-f78590d33051-kube-api-access-xfcbt\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213wfllp\" (UID: \"68641a11-d218-43ed-8060-f78590d33051\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213wfllp"
Feb 17 20:20:54 crc kubenswrapper[4793]: I0217 20:20:54.080918 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/68641a11-d218-43ed-8060-f78590d33051-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213wfllp\" (UID: \"68641a11-d218-43ed-8060-f78590d33051\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213wfllp"
Feb 17 20:20:54 crc kubenswrapper[4793]: I0217 20:20:54.182187 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/68641a11-d218-43ed-8060-f78590d33051-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213wfllp\" (UID: \"68641a11-d218-43ed-8060-f78590d33051\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213wfllp"
Feb 17 20:20:54 crc kubenswrapper[4793]: I0217 20:20:54.182248 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfcbt\" (UniqueName: \"kubernetes.io/projected/68641a11-d218-43ed-8060-f78590d33051-kube-api-access-xfcbt\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213wfllp\" (UID: \"68641a11-d218-43ed-8060-f78590d33051\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213wfllp"
Feb 17 20:20:54 crc kubenswrapper[4793]: I0217 20:20:54.182281 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/68641a11-d218-43ed-8060-f78590d33051-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213wfllp\" (UID: \"68641a11-d218-43ed-8060-f78590d33051\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213wfllp"
Feb 17 20:20:54 crc kubenswrapper[4793]: I0217 20:20:54.182944 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/68641a11-d218-43ed-8060-f78590d33051-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213wfllp\" (UID: \"68641a11-d218-43ed-8060-f78590d33051\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213wfllp"
Feb 17 20:20:54 crc kubenswrapper[4793]: I0217 20:20:54.182942 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/68641a11-d218-43ed-8060-f78590d33051-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213wfllp\" (UID: \"68641a11-d218-43ed-8060-f78590d33051\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213wfllp"
Feb 17 20:20:54 crc kubenswrapper[4793]: I0217 20:20:54.206574 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-br8vj_4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c/console/0.log"
Feb 17 20:20:54 crc kubenswrapper[4793]: I0217 20:20:54.206647 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-br8vj"
Feb 17 20:20:54 crc kubenswrapper[4793]: I0217 20:20:54.207845 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfcbt\" (UniqueName: \"kubernetes.io/projected/68641a11-d218-43ed-8060-f78590d33051-kube-api-access-xfcbt\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213wfllp\" (UID: \"68641a11-d218-43ed-8060-f78590d33051\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213wfllp"
Feb 17 20:20:54 crc kubenswrapper[4793]: I0217 20:20:54.283194 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c-console-config\") pod \"4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c\" (UID: \"4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c\") "
Feb 17 20:20:54 crc kubenswrapper[4793]: I0217 20:20:54.283235 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c-oauth-serving-cert\") pod \"4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c\" (UID: \"4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c\") "
Feb 17 20:20:54 crc kubenswrapper[4793]: I0217 20:20:54.283270 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c-trusted-ca-bundle\") pod \"4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c\" (UID: \"4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c\") "
Feb 17 20:20:54 crc kubenswrapper[4793]: I0217 20:20:54.283308 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjwws\" (UniqueName: \"kubernetes.io/projected/4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c-kube-api-access-xjwws\") pod \"4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c\" (UID: \"4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c\") "
Feb 17 20:20:54 crc kubenswrapper[4793]: I0217 20:20:54.283331 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c-console-serving-cert\") pod \"4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c\" (UID: \"4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c\") "
Feb 17 20:20:54 crc kubenswrapper[4793]: I0217 20:20:54.283347 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c-console-oauth-config\") pod \"4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c\" (UID: \"4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c\") "
Feb 17 20:20:54 crc kubenswrapper[4793]: I0217 20:20:54.283371 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c-service-ca\") pod \"4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c\" (UID: \"4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c\") "
Feb 17 20:20:54 crc kubenswrapper[4793]: I0217 20:20:54.284293 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c-service-ca" (OuterVolumeSpecName: "service-ca") pod "4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c" (UID: "4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 20:20:54 crc kubenswrapper[4793]: I0217 20:20:54.284515 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c-console-config" (OuterVolumeSpecName: "console-config") pod "4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c" (UID: "4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 20:20:54 crc kubenswrapper[4793]: I0217 20:20:54.284720 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c" (UID: "4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 20:20:54 crc kubenswrapper[4793]: I0217 20:20:54.284937 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c" (UID: "4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 20:20:54 crc kubenswrapper[4793]: I0217 20:20:54.289456 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213wfllp"
Feb 17 20:20:54 crc kubenswrapper[4793]: I0217 20:20:54.290658 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c-kube-api-access-xjwws" (OuterVolumeSpecName: "kube-api-access-xjwws") pod "4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c" (UID: "4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c"). InnerVolumeSpecName "kube-api-access-xjwws". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 20:20:54 crc kubenswrapper[4793]: I0217 20:20:54.294314 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c" (UID: "4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 20:20:54 crc kubenswrapper[4793]: I0217 20:20:54.294840 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c" (UID: "4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 20:20:54 crc kubenswrapper[4793]: I0217 20:20:54.384544 4793 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 20:20:54 crc kubenswrapper[4793]: I0217 20:20:54.384575 4793 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 20:20:54 crc kubenswrapper[4793]: I0217 20:20:54.384588 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjwws\" (UniqueName: \"kubernetes.io/projected/4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c-kube-api-access-xjwws\") on node \"crc\" DevicePath \"\""
Feb 17 20:20:54 crc kubenswrapper[4793]: I0217 20:20:54.384600 4793 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c-console-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 20:20:54 crc kubenswrapper[4793]: I0217 20:20:54.384612 4793 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c-console-oauth-config\") on node \"crc\" DevicePath \"\""
Feb 17 20:20:54 crc kubenswrapper[4793]: I0217 20:20:54.384623 4793 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c-service-ca\") on node \"crc\" DevicePath \"\""
Feb 17 20:20:54 crc kubenswrapper[4793]: I0217 20:20:54.384636 4793 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c-console-config\") on node \"crc\" DevicePath \"\""
Feb 17 20:20:54 crc kubenswrapper[4793]: I0217 20:20:54.721805 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213wfllp"]
Feb 17 20:20:54 crc kubenswrapper[4793]: I0217 20:20:54.752052 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213wfllp" event={"ID":"68641a11-d218-43ed-8060-f78590d33051","Type":"ContainerStarted","Data":"c6ee325bbc66c029592aeaee8b139ea2acb73a81a69901522193fd31f76fa3f6"}
Feb 17 20:20:54 crc kubenswrapper[4793]: I0217 20:20:54.753617 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-br8vj_4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c/console/0.log"
Feb 17 20:20:54 crc kubenswrapper[4793]: I0217 20:20:54.753937 4793 generic.go:334] "Generic (PLEG): container finished" podID="4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c" containerID="329cb1efb2dc0b613b420bc0c18994a58200e3c9f4839c70829125dabbceef6b" exitCode=2
Feb 17 20:20:54 crc kubenswrapper[4793]: I0217 20:20:54.753972 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-br8vj" event={"ID":"4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c","Type":"ContainerDied","Data":"329cb1efb2dc0b613b420bc0c18994a58200e3c9f4839c70829125dabbceef6b"}
Feb 17 20:20:54 crc kubenswrapper[4793]: I0217 20:20:54.754003 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-br8vj" event={"ID":"4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c","Type":"ContainerDied","Data":"6780f18ba40a8c3066c5a351a7776d6ae3ecd302d022c6f2564b79dc77ea9a86"}
Feb 17 20:20:54 crc kubenswrapper[4793]: I0217 20:20:54.754023 4793 scope.go:117] "RemoveContainer" containerID="329cb1efb2dc0b613b420bc0c18994a58200e3c9f4839c70829125dabbceef6b"
Feb 17 20:20:54 crc kubenswrapper[4793]: I0217 20:20:54.754156 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-br8vj"
Feb 17 20:20:54 crc kubenswrapper[4793]: I0217 20:20:54.800457 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-br8vj"]
Feb 17 20:20:54 crc kubenswrapper[4793]: I0217 20:20:54.805945 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-br8vj"]
Feb 17 20:20:54 crc kubenswrapper[4793]: I0217 20:20:54.809536 4793 scope.go:117] "RemoveContainer" containerID="329cb1efb2dc0b613b420bc0c18994a58200e3c9f4839c70829125dabbceef6b"
Feb 17 20:20:54 crc kubenswrapper[4793]: E0217 20:20:54.810121 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"329cb1efb2dc0b613b420bc0c18994a58200e3c9f4839c70829125dabbceef6b\": container with ID starting with 329cb1efb2dc0b613b420bc0c18994a58200e3c9f4839c70829125dabbceef6b not found: ID does not exist" containerID="329cb1efb2dc0b613b420bc0c18994a58200e3c9f4839c70829125dabbceef6b"
Feb 17 20:20:54 crc kubenswrapper[4793]: I0217 20:20:54.810171 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"329cb1efb2dc0b613b420bc0c18994a58200e3c9f4839c70829125dabbceef6b"} err="failed to get container status \"329cb1efb2dc0b613b420bc0c18994a58200e3c9f4839c70829125dabbceef6b\": rpc error: code = NotFound desc = could not find container \"329cb1efb2dc0b613b420bc0c18994a58200e3c9f4839c70829125dabbceef6b\": container with ID starting with 329cb1efb2dc0b613b420bc0c18994a58200e3c9f4839c70829125dabbceef6b not found: ID does not exist"
Feb 17 20:20:55 crc kubenswrapper[4793]: I0217 20:20:55.547217 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c" path="/var/lib/kubelet/pods/4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c/volumes"
Feb 17 20:20:55 crc kubenswrapper[4793]: I0217 20:20:55.761408 4793 generic.go:334] "Generic (PLEG): container finished" podID="68641a11-d218-43ed-8060-f78590d33051" containerID="5e569fba03166957ca4caf1b072e205fa52898e5a3c376e5ae1639592a48a99b" exitCode=0
Feb 17 20:20:55 crc kubenswrapper[4793]: I0217 20:20:55.761454 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213wfllp" event={"ID":"68641a11-d218-43ed-8060-f78590d33051","Type":"ContainerDied","Data":"5e569fba03166957ca4caf1b072e205fa52898e5a3c376e5ae1639592a48a99b"}
Feb 17 20:20:57 crc kubenswrapper[4793]: I0217 20:20:57.784067 4793 generic.go:334] "Generic (PLEG): container finished" podID="68641a11-d218-43ed-8060-f78590d33051" containerID="360127bc4b3ec616dba9624012913012870dc8a1f3416d0059f577b376d23e80" exitCode=0
Feb 17 20:20:57 crc kubenswrapper[4793]: I0217 20:20:57.784141 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213wfllp" event={"ID":"68641a11-d218-43ed-8060-f78590d33051","Type":"ContainerDied","Data":"360127bc4b3ec616dba9624012913012870dc8a1f3416d0059f577b376d23e80"}
Feb 17 20:20:58 crc kubenswrapper[4793]: I0217 20:20:58.796505 4793 generic.go:334] "Generic (PLEG): container finished" podID="68641a11-d218-43ed-8060-f78590d33051" containerID="0185a66d0a4ef4de3fc8864a83094df7d1271edb9b47dcdd5744d76a6cab85bf" exitCode=0
Feb 17 20:20:58 crc kubenswrapper[4793]: I0217 20:20:58.796579 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213wfllp" event={"ID":"68641a11-d218-43ed-8060-f78590d33051","Type":"ContainerDied","Data":"0185a66d0a4ef4de3fc8864a83094df7d1271edb9b47dcdd5744d76a6cab85bf"}
Feb 17 20:21:00 crc kubenswrapper[4793]: I0217 20:21:00.084516 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213wfllp"
Feb 17 20:21:00 crc kubenswrapper[4793]: I0217 20:21:00.163210 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfcbt\" (UniqueName: \"kubernetes.io/projected/68641a11-d218-43ed-8060-f78590d33051-kube-api-access-xfcbt\") pod \"68641a11-d218-43ed-8060-f78590d33051\" (UID: \"68641a11-d218-43ed-8060-f78590d33051\") "
Feb 17 20:21:00 crc kubenswrapper[4793]: I0217 20:21:00.163306 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/68641a11-d218-43ed-8060-f78590d33051-bundle\") pod \"68641a11-d218-43ed-8060-f78590d33051\" (UID: \"68641a11-d218-43ed-8060-f78590d33051\") "
Feb 17 20:21:00 crc kubenswrapper[4793]: I0217 20:21:00.163432 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/68641a11-d218-43ed-8060-f78590d33051-util\") pod \"68641a11-d218-43ed-8060-f78590d33051\" (UID: \"68641a11-d218-43ed-8060-f78590d33051\") "
Feb 17 20:21:00 crc kubenswrapper[4793]: I0217 20:21:00.164657 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68641a11-d218-43ed-8060-f78590d33051-bundle" (OuterVolumeSpecName: "bundle") pod "68641a11-d218-43ed-8060-f78590d33051" (UID: "68641a11-d218-43ed-8060-f78590d33051"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 20:21:00 crc kubenswrapper[4793]: I0217 20:21:00.174981 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68641a11-d218-43ed-8060-f78590d33051-kube-api-access-xfcbt" (OuterVolumeSpecName: "kube-api-access-xfcbt") pod "68641a11-d218-43ed-8060-f78590d33051" (UID: "68641a11-d218-43ed-8060-f78590d33051"). InnerVolumeSpecName "kube-api-access-xfcbt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 20:21:00 crc kubenswrapper[4793]: I0217 20:21:00.265101 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfcbt\" (UniqueName: \"kubernetes.io/projected/68641a11-d218-43ed-8060-f78590d33051-kube-api-access-xfcbt\") on node \"crc\" DevicePath \"\""
Feb 17 20:21:00 crc kubenswrapper[4793]: I0217 20:21:00.265152 4793 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/68641a11-d218-43ed-8060-f78590d33051-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 20:21:00 crc kubenswrapper[4793]: I0217 20:21:00.445059 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68641a11-d218-43ed-8060-f78590d33051-util" (OuterVolumeSpecName: "util") pod "68641a11-d218-43ed-8060-f78590d33051" (UID: "68641a11-d218-43ed-8060-f78590d33051"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 20:21:00 crc kubenswrapper[4793]: I0217 20:21:00.468433 4793 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/68641a11-d218-43ed-8060-f78590d33051-util\") on node \"crc\" DevicePath \"\""
Feb 17 20:21:00 crc kubenswrapper[4793]: I0217 20:21:00.818050 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213wfllp" event={"ID":"68641a11-d218-43ed-8060-f78590d33051","Type":"ContainerDied","Data":"c6ee325bbc66c029592aeaee8b139ea2acb73a81a69901522193fd31f76fa3f6"}
Feb 17 20:21:00 crc kubenswrapper[4793]: I0217 20:21:00.818114 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6ee325bbc66c029592aeaee8b139ea2acb73a81a69901522193fd31f76fa3f6"
Feb 17 20:21:00 crc kubenswrapper[4793]: I0217 20:21:00.818152 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213wfllp"
Feb 17 20:21:10 crc kubenswrapper[4793]: I0217 20:21:10.215777 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-5468c6fb5b-mh429"]
Feb 17 20:21:10 crc kubenswrapper[4793]: E0217 20:21:10.216575 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c" containerName="console"
Feb 17 20:21:10 crc kubenswrapper[4793]: I0217 20:21:10.216590 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c" containerName="console"
Feb 17 20:21:10 crc kubenswrapper[4793]: E0217 20:21:10.216610 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68641a11-d218-43ed-8060-f78590d33051" containerName="util"
Feb 17 20:21:10 crc kubenswrapper[4793]: I0217 20:21:10.216620 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="68641a11-d218-43ed-8060-f78590d33051" containerName="util"
Feb 17 20:21:10 crc kubenswrapper[4793]: E0217 20:21:10.216630 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68641a11-d218-43ed-8060-f78590d33051" containerName="extract"
Feb 17 20:21:10 crc kubenswrapper[4793]: I0217 20:21:10.216639 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="68641a11-d218-43ed-8060-f78590d33051" containerName="extract"
Feb 17 20:21:10 crc kubenswrapper[4793]: E0217 20:21:10.216658 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68641a11-d218-43ed-8060-f78590d33051" containerName="pull"
Feb 17 20:21:10 crc kubenswrapper[4793]: I0217 20:21:10.216665 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="68641a11-d218-43ed-8060-f78590d33051" containerName="pull"
Feb 17 20:21:10 crc kubenswrapper[4793]: I0217 20:21:10.216816 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c3c90c1-51f5-49d1-afc4-3e2227ca1f6c" containerName="console"
Feb 17 20:21:10 crc kubenswrapper[4793]: I0217 20:21:10.216834 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="68641a11-d218-43ed-8060-f78590d33051" containerName="extract"
Feb 17 20:21:10 crc kubenswrapper[4793]: I0217 20:21:10.217311 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5468c6fb5b-mh429"
Feb 17 20:21:10 crc kubenswrapper[4793]: I0217 20:21:10.219115 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Feb 17 20:21:10 crc kubenswrapper[4793]: I0217 20:21:10.219392 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Feb 17 20:21:10 crc kubenswrapper[4793]: I0217 20:21:10.219411 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Feb 17 20:21:10 crc kubenswrapper[4793]: I0217 20:21:10.219839 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-dls96"
Feb 17 20:21:10 crc kubenswrapper[4793]: I0217 20:21:10.221307 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Feb 17 20:21:10 crc kubenswrapper[4793]: I0217 20:21:10.243018 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5468c6fb5b-mh429"]
Feb 17 20:21:10 crc kubenswrapper[4793]: I0217 20:21:10.295897 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9c30e939-99b4-4e6a-88d5-1d4149921d6d-webhook-cert\") pod \"metallb-operator-controller-manager-5468c6fb5b-mh429\" (UID: \"9c30e939-99b4-4e6a-88d5-1d4149921d6d\") " pod="metallb-system/metallb-operator-controller-manager-5468c6fb5b-mh429"
Feb 17 20:21:10 crc kubenswrapper[4793]: I0217 20:21:10.296132 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9h4w\" (UniqueName: \"kubernetes.io/projected/9c30e939-99b4-4e6a-88d5-1d4149921d6d-kube-api-access-k9h4w\") pod \"metallb-operator-controller-manager-5468c6fb5b-mh429\" (UID: \"9c30e939-99b4-4e6a-88d5-1d4149921d6d\") " pod="metallb-system/metallb-operator-controller-manager-5468c6fb5b-mh429"
Feb 17 20:21:10 crc kubenswrapper[4793]: I0217 20:21:10.296208 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9c30e939-99b4-4e6a-88d5-1d4149921d6d-apiservice-cert\") pod \"metallb-operator-controller-manager-5468c6fb5b-mh429\" (UID: \"9c30e939-99b4-4e6a-88d5-1d4149921d6d\") " pod="metallb-system/metallb-operator-controller-manager-5468c6fb5b-mh429"
Feb 17 20:21:10 crc kubenswrapper[4793]: I0217 20:21:10.397369 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9h4w\" (UniqueName: \"kubernetes.io/projected/9c30e939-99b4-4e6a-88d5-1d4149921d6d-kube-api-access-k9h4w\") pod \"metallb-operator-controller-manager-5468c6fb5b-mh429\" (UID: \"9c30e939-99b4-4e6a-88d5-1d4149921d6d\") " pod="metallb-system/metallb-operator-controller-manager-5468c6fb5b-mh429"
Feb 17 20:21:10 crc kubenswrapper[4793]: I0217 20:21:10.397418 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9c30e939-99b4-4e6a-88d5-1d4149921d6d-apiservice-cert\") pod \"metallb-operator-controller-manager-5468c6fb5b-mh429\" (UID: \"9c30e939-99b4-4e6a-88d5-1d4149921d6d\") " pod="metallb-system/metallb-operator-controller-manager-5468c6fb5b-mh429"
Feb 17 20:21:10 crc kubenswrapper[4793]: I0217 20:21:10.397940 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9c30e939-99b4-4e6a-88d5-1d4149921d6d-webhook-cert\") pod \"metallb-operator-controller-manager-5468c6fb5b-mh429\" (UID: \"9c30e939-99b4-4e6a-88d5-1d4149921d6d\") " pod="metallb-system/metallb-operator-controller-manager-5468c6fb5b-mh429"
Feb 17 20:21:10 crc kubenswrapper[4793]: I0217 20:21:10.405592 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9c30e939-99b4-4e6a-88d5-1d4149921d6d-webhook-cert\") pod \"metallb-operator-controller-manager-5468c6fb5b-mh429\" (UID: \"9c30e939-99b4-4e6a-88d5-1d4149921d6d\") " pod="metallb-system/metallb-operator-controller-manager-5468c6fb5b-mh429"
Feb 17 20:21:10 crc kubenswrapper[4793]: I0217 20:21:10.405638 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9c30e939-99b4-4e6a-88d5-1d4149921d6d-apiservice-cert\") pod \"metallb-operator-controller-manager-5468c6fb5b-mh429\" (UID: \"9c30e939-99b4-4e6a-88d5-1d4149921d6d\") " pod="metallb-system/metallb-operator-controller-manager-5468c6fb5b-mh429"
Feb 17 20:21:10 crc kubenswrapper[4793]: I0217 20:21:10.417165 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9h4w\" (UniqueName: \"kubernetes.io/projected/9c30e939-99b4-4e6a-88d5-1d4149921d6d-kube-api-access-k9h4w\") pod \"metallb-operator-controller-manager-5468c6fb5b-mh429\" (UID: \"9c30e939-99b4-4e6a-88d5-1d4149921d6d\") " pod="metallb-system/metallb-operator-controller-manager-5468c6fb5b-mh429"
Feb 17 20:21:10 crc kubenswrapper[4793]: I0217 20:21:10.437596 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-789f66767d-5rsvq"]
Feb 17 20:21:10 crc kubenswrapper[4793]: I0217 20:21:10.438584 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-789f66767d-5rsvq"
Feb 17 20:21:10 crc kubenswrapper[4793]: I0217 20:21:10.441242 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Feb 17 20:21:10 crc kubenswrapper[4793]: I0217 20:21:10.441308 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-wsljs"
Feb 17 20:21:10 crc kubenswrapper[4793]: I0217 20:21:10.441424 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Feb 17 20:21:10 crc kubenswrapper[4793]: I0217 20:21:10.460041 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-789f66767d-5rsvq"]
Feb 17 20:21:10 crc kubenswrapper[4793]: I0217 20:21:10.498749 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7c340308-e5a2-40db-855c-4dab55f7f028-apiservice-cert\") pod \"metallb-operator-webhook-server-789f66767d-5rsvq\" (UID: \"7c340308-e5a2-40db-855c-4dab55f7f028\") " pod="metallb-system/metallb-operator-webhook-server-789f66767d-5rsvq"
Feb 17 20:21:10 crc kubenswrapper[4793]: I0217 20:21:10.498869 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzp4x\" (UniqueName: \"kubernetes.io/projected/7c340308-e5a2-40db-855c-4dab55f7f028-kube-api-access-lzp4x\") pod \"metallb-operator-webhook-server-789f66767d-5rsvq\" (UID: \"7c340308-e5a2-40db-855c-4dab55f7f028\") " pod="metallb-system/metallb-operator-webhook-server-789f66767d-5rsvq"
Feb 17 20:21:10 crc kubenswrapper[4793]: I0217 20:21:10.498947 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7c340308-e5a2-40db-855c-4dab55f7f028-webhook-cert\") pod \"metallb-operator-webhook-server-789f66767d-5rsvq\" (UID: \"7c340308-e5a2-40db-855c-4dab55f7f028\") " pod="metallb-system/metallb-operator-webhook-server-789f66767d-5rsvq"
Feb 17 20:21:10 crc kubenswrapper[4793]: I0217 20:21:10.538674 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5468c6fb5b-mh429"
Feb 17 20:21:10 crc kubenswrapper[4793]: I0217 20:21:10.600356 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7c340308-e5a2-40db-855c-4dab55f7f028-webhook-cert\") pod \"metallb-operator-webhook-server-789f66767d-5rsvq\" (UID: \"7c340308-e5a2-40db-855c-4dab55f7f028\") " pod="metallb-system/metallb-operator-webhook-server-789f66767d-5rsvq"
Feb 17 20:21:10 crc kubenswrapper[4793]: I0217 20:21:10.600421 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7c340308-e5a2-40db-855c-4dab55f7f028-apiservice-cert\") pod \"metallb-operator-webhook-server-789f66767d-5rsvq\" (UID: \"7c340308-e5a2-40db-855c-4dab55f7f028\") " pod="metallb-system/metallb-operator-webhook-server-789f66767d-5rsvq"
Feb 17 20:21:10 crc kubenswrapper[4793]: I0217 20:21:10.600472 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzp4x\" (UniqueName: \"kubernetes.io/projected/7c340308-e5a2-40db-855c-4dab55f7f028-kube-api-access-lzp4x\") pod \"metallb-operator-webhook-server-789f66767d-5rsvq\" (UID: \"7c340308-e5a2-40db-855c-4dab55f7f028\") " pod="metallb-system/metallb-operator-webhook-server-789f66767d-5rsvq"
Feb 17 20:21:10 crc kubenswrapper[4793]: I0217 20:21:10.604031 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7c340308-e5a2-40db-855c-4dab55f7f028-webhook-cert\") pod \"metallb-operator-webhook-server-789f66767d-5rsvq\" (UID: \"7c340308-e5a2-40db-855c-4dab55f7f028\") " pod="metallb-system/metallb-operator-webhook-server-789f66767d-5rsvq"
Feb 17 20:21:10 crc kubenswrapper[4793]: I0217 20:21:10.608271 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7c340308-e5a2-40db-855c-4dab55f7f028-apiservice-cert\") pod \"metallb-operator-webhook-server-789f66767d-5rsvq\" (UID: \"7c340308-e5a2-40db-855c-4dab55f7f028\") " pod="metallb-system/metallb-operator-webhook-server-789f66767d-5rsvq"
Feb 17 20:21:10 crc kubenswrapper[4793]: I0217 20:21:10.631259 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzp4x\" (UniqueName: \"kubernetes.io/projected/7c340308-e5a2-40db-855c-4dab55f7f028-kube-api-access-lzp4x\") pod \"metallb-operator-webhook-server-789f66767d-5rsvq\" (UID: \"7c340308-e5a2-40db-855c-4dab55f7f028\") " pod="metallb-system/metallb-operator-webhook-server-789f66767d-5rsvq"
Feb 17 20:21:10 crc kubenswrapper[4793]: I0217 20:21:10.768812 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-789f66767d-5rsvq"
Feb 17 20:21:11 crc kubenswrapper[4793]: I0217 20:21:11.036842 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-789f66767d-5rsvq"]
Feb 17 20:21:11 crc kubenswrapper[4793]: W0217 20:21:11.046359 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c340308_e5a2_40db_855c_4dab55f7f028.slice/crio-e94ea3016a39ce47e5e8e70d1f87630b6900a53cf6f7b6936144d1ae85bf20db WatchSource:0}: Error finding container e94ea3016a39ce47e5e8e70d1f87630b6900a53cf6f7b6936144d1ae85bf20db: Status 404 returned error can't find the container with id e94ea3016a39ce47e5e8e70d1f87630b6900a53cf6f7b6936144d1ae85bf20db
Feb 17 20:21:11 crc kubenswrapper[4793]: I0217 20:21:11.047915 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5468c6fb5b-mh429"]
Feb 17 20:21:11 crc kubenswrapper[4793]: W0217 20:21:11.065844 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c30e939_99b4_4e6a_88d5_1d4149921d6d.slice/crio-a614745f7e31f118531a784a5e18e78eff05219515ad1788139ccf81ee3b7574 WatchSource:0}: Error finding container a614745f7e31f118531a784a5e18e78eff05219515ad1788139ccf81ee3b7574: Status 404 returned error can't find the container with id a614745f7e31f118531a784a5e18e78eff05219515ad1788139ccf81ee3b7574
Feb 17 20:21:11 crc kubenswrapper[4793]: I0217 20:21:11.880638 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-789f66767d-5rsvq" event={"ID":"7c340308-e5a2-40db-855c-4dab55f7f028","Type":"ContainerStarted","Data":"e94ea3016a39ce47e5e8e70d1f87630b6900a53cf6f7b6936144d1ae85bf20db"}
Feb 17 20:21:11 crc kubenswrapper[4793]: I0217 20:21:11.882591 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5468c6fb5b-mh429" event={"ID":"9c30e939-99b4-4e6a-88d5-1d4149921d6d","Type":"ContainerStarted","Data":"a614745f7e31f118531a784a5e18e78eff05219515ad1788139ccf81ee3b7574"}
Feb 17 20:21:15 crc kubenswrapper[4793]: I0217 20:21:15.905351 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5468c6fb5b-mh429" event={"ID":"9c30e939-99b4-4e6a-88d5-1d4149921d6d","Type":"ContainerStarted","Data":"b1ddf028f7fbc439165d4c0606067ae636e1285d8ff5db9184a9983d1f3d7946"}
Feb 17 20:21:15 crc kubenswrapper[4793]: I0217 20:21:15.906950 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-789f66767d-5rsvq" event={"ID":"7c340308-e5a2-40db-855c-4dab55f7f028","Type":"ContainerStarted","Data":"1c9a5f2a9a02120b35627d0c30a014b4eb40511d2e73d6bf769e426437823b9e"}
Feb 17 20:21:15 crc kubenswrapper[4793]: I0217 20:21:15.907065 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5468c6fb5b-mh429"
Feb 17 20:21:15 crc kubenswrapper[4793]: I0217 20:21:15.907083 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-789f66767d-5rsvq"
Feb 17 20:21:15 crc kubenswrapper[4793]: I0217 20:21:15.929318 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-5468c6fb5b-mh429" podStartSLOduration=1.727754509 podStartE2EDuration="5.929299615s" podCreationTimestamp="2026-02-17 20:21:10 +0000 UTC" firstStartedPulling="2026-02-17 20:21:11.069815257 +0000 UTC m=+746.361513568" lastFinishedPulling="2026-02-17 20:21:15.271360363 +0000 UTC m=+750.563058674" observedRunningTime="2026-02-17 20:21:15.920511324 +0000 UTC m=+751.212209635" watchObservedRunningTime="2026-02-17 20:21:15.929299615 +0000 UTC m=+751.220997926"
Feb 17 20:21:15
crc kubenswrapper[4793]: I0217 20:21:15.940347 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-789f66767d-5rsvq" podStartSLOduration=1.703065416 podStartE2EDuration="5.940327249s" podCreationTimestamp="2026-02-17 20:21:10 +0000 UTC" firstStartedPulling="2026-02-17 20:21:11.05329169 +0000 UTC m=+746.344990001" lastFinishedPulling="2026-02-17 20:21:15.290553523 +0000 UTC m=+750.582251834" observedRunningTime="2026-02-17 20:21:15.9370232 +0000 UTC m=+751.228721511" watchObservedRunningTime="2026-02-17 20:21:15.940327249 +0000 UTC m=+751.232025560" Feb 17 20:21:19 crc kubenswrapper[4793]: I0217 20:21:19.765025 4793 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 17 20:21:20 crc kubenswrapper[4793]: I0217 20:21:20.102080 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 20:21:20 crc kubenswrapper[4793]: I0217 20:21:20.102134 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 20:21:20 crc kubenswrapper[4793]: I0217 20:21:20.102172 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" Feb 17 20:21:20 crc kubenswrapper[4793]: I0217 20:21:20.102724 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"c6dd2a040783fcbbb25effeb44da963123b5a74b9553a93ac26a3ede471dffb7"} pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 20:21:20 crc kubenswrapper[4793]: I0217 20:21:20.102788 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" containerID="cri-o://c6dd2a040783fcbbb25effeb44da963123b5a74b9553a93ac26a3ede471dffb7" gracePeriod=600 Feb 17 20:21:20 crc kubenswrapper[4793]: I0217 20:21:20.937078 4793 generic.go:334] "Generic (PLEG): container finished" podID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerID="c6dd2a040783fcbbb25effeb44da963123b5a74b9553a93ac26a3ede471dffb7" exitCode=0 Feb 17 20:21:20 crc kubenswrapper[4793]: I0217 20:21:20.937155 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerDied","Data":"c6dd2a040783fcbbb25effeb44da963123b5a74b9553a93ac26a3ede471dffb7"} Feb 17 20:21:20 crc kubenswrapper[4793]: I0217 20:21:20.937380 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerStarted","Data":"8233173cc6f0085dcde5889ba083171fcabff8c918ff96d6ffa28816106b888f"} Feb 17 20:21:20 crc kubenswrapper[4793]: I0217 20:21:20.937401 4793 scope.go:117] "RemoveContainer" containerID="b2f41059818507a8af04391a12fd3b72c25fff60681b12b730e9f673d56662ed" Feb 17 20:21:30 crc kubenswrapper[4793]: I0217 20:21:30.774769 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-789f66767d-5rsvq" Feb 17 20:21:50 crc 
kubenswrapper[4793]: I0217 20:21:50.542736 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5468c6fb5b-mh429" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.371636 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-hs2fp"] Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.372540 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-hs2fp" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.378791 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-s8srb" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.383929 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.384822 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-2qxkk"] Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.387743 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-2qxkk" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.389068 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.389388 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.390613 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-hs2fp"] Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.454994 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-5qqnc"] Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.455869 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-5qqnc" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.457623 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-pzjj4" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.457902 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.459925 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.460150 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.469624 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-jxxkv"] Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.470822 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-jxxkv" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.472837 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.490640 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-jxxkv"] Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.554678 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e40cbcec-a6c5-40c2-9e5b-651065d296fc-metrics-certs\") pod \"frr-k8s-2qxkk\" (UID: \"e40cbcec-a6c5-40c2-9e5b-651065d296fc\") " pod="metallb-system/frr-k8s-2qxkk" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.554727 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e40cbcec-a6c5-40c2-9e5b-651065d296fc-reloader\") pod \"frr-k8s-2qxkk\" (UID: \"e40cbcec-a6c5-40c2-9e5b-651065d296fc\") " pod="metallb-system/frr-k8s-2qxkk" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.554746 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2t5l\" (UniqueName: \"kubernetes.io/projected/5c5f62ed-1a04-409e-93ca-5c8631fe51f4-kube-api-access-l2t5l\") pod \"frr-k8s-webhook-server-78b44bf5bb-hs2fp\" (UID: \"5c5f62ed-1a04-409e-93ca-5c8631fe51f4\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-hs2fp" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.554878 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e40cbcec-a6c5-40c2-9e5b-651065d296fc-frr-startup\") pod \"frr-k8s-2qxkk\" (UID: \"e40cbcec-a6c5-40c2-9e5b-651065d296fc\") " pod="metallb-system/frr-k8s-2qxkk" Feb 17 
20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.555194 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e40cbcec-a6c5-40c2-9e5b-651065d296fc-frr-conf\") pod \"frr-k8s-2qxkk\" (UID: \"e40cbcec-a6c5-40c2-9e5b-651065d296fc\") " pod="metallb-system/frr-k8s-2qxkk" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.555214 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e40cbcec-a6c5-40c2-9e5b-651065d296fc-metrics\") pod \"frr-k8s-2qxkk\" (UID: \"e40cbcec-a6c5-40c2-9e5b-651065d296fc\") " pod="metallb-system/frr-k8s-2qxkk" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.555229 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e40cbcec-a6c5-40c2-9e5b-651065d296fc-frr-sockets\") pod \"frr-k8s-2qxkk\" (UID: \"e40cbcec-a6c5-40c2-9e5b-651065d296fc\") " pod="metallb-system/frr-k8s-2qxkk" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.555244 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4sj5\" (UniqueName: \"kubernetes.io/projected/e40cbcec-a6c5-40c2-9e5b-651065d296fc-kube-api-access-r4sj5\") pod \"frr-k8s-2qxkk\" (UID: \"e40cbcec-a6c5-40c2-9e5b-651065d296fc\") " pod="metallb-system/frr-k8s-2qxkk" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.555270 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5c5f62ed-1a04-409e-93ca-5c8631fe51f4-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-hs2fp\" (UID: \"5c5f62ed-1a04-409e-93ca-5c8631fe51f4\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-hs2fp" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.656334 4793 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e40cbcec-a6c5-40c2-9e5b-651065d296fc-frr-startup\") pod \"frr-k8s-2qxkk\" (UID: \"e40cbcec-a6c5-40c2-9e5b-651065d296fc\") " pod="metallb-system/frr-k8s-2qxkk" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.656416 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a9138cd5-387e-43a3-bea6-17eb931827ef-metrics-certs\") pod \"speaker-5qqnc\" (UID: \"a9138cd5-387e-43a3-bea6-17eb931827ef\") " pod="metallb-system/speaker-5qqnc" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.656453 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1d17845a-89d6-405a-8125-0b326ec8894b-cert\") pod \"controller-69bbfbf88f-jxxkv\" (UID: \"1d17845a-89d6-405a-8125-0b326ec8894b\") " pod="metallb-system/controller-69bbfbf88f-jxxkv" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.656491 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a9138cd5-387e-43a3-bea6-17eb931827ef-memberlist\") pod \"speaker-5qqnc\" (UID: \"a9138cd5-387e-43a3-bea6-17eb931827ef\") " pod="metallb-system/speaker-5qqnc" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.656514 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e40cbcec-a6c5-40c2-9e5b-651065d296fc-frr-conf\") pod \"frr-k8s-2qxkk\" (UID: \"e40cbcec-a6c5-40c2-9e5b-651065d296fc\") " pod="metallb-system/frr-k8s-2qxkk" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.656542 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: 
\"kubernetes.io/empty-dir/e40cbcec-a6c5-40c2-9e5b-651065d296fc-metrics\") pod \"frr-k8s-2qxkk\" (UID: \"e40cbcec-a6c5-40c2-9e5b-651065d296fc\") " pod="metallb-system/frr-k8s-2qxkk" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.656565 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e40cbcec-a6c5-40c2-9e5b-651065d296fc-frr-sockets\") pod \"frr-k8s-2qxkk\" (UID: \"e40cbcec-a6c5-40c2-9e5b-651065d296fc\") " pod="metallb-system/frr-k8s-2qxkk" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.656586 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4sj5\" (UniqueName: \"kubernetes.io/projected/e40cbcec-a6c5-40c2-9e5b-651065d296fc-kube-api-access-r4sj5\") pod \"frr-k8s-2qxkk\" (UID: \"e40cbcec-a6c5-40c2-9e5b-651065d296fc\") " pod="metallb-system/frr-k8s-2qxkk" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.656615 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9phl\" (UniqueName: \"kubernetes.io/projected/a9138cd5-387e-43a3-bea6-17eb931827ef-kube-api-access-h9phl\") pod \"speaker-5qqnc\" (UID: \"a9138cd5-387e-43a3-bea6-17eb931827ef\") " pod="metallb-system/speaker-5qqnc" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.656650 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5c5f62ed-1a04-409e-93ca-5c8631fe51f4-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-hs2fp\" (UID: \"5c5f62ed-1a04-409e-93ca-5c8631fe51f4\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-hs2fp" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.656672 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d17845a-89d6-405a-8125-0b326ec8894b-metrics-certs\") pod 
\"controller-69bbfbf88f-jxxkv\" (UID: \"1d17845a-89d6-405a-8125-0b326ec8894b\") " pod="metallb-system/controller-69bbfbf88f-jxxkv" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.656710 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e40cbcec-a6c5-40c2-9e5b-651065d296fc-metrics-certs\") pod \"frr-k8s-2qxkk\" (UID: \"e40cbcec-a6c5-40c2-9e5b-651065d296fc\") " pod="metallb-system/frr-k8s-2qxkk" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.656730 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e40cbcec-a6c5-40c2-9e5b-651065d296fc-reloader\") pod \"frr-k8s-2qxkk\" (UID: \"e40cbcec-a6c5-40c2-9e5b-651065d296fc\") " pod="metallb-system/frr-k8s-2qxkk" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.656749 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2t5l\" (UniqueName: \"kubernetes.io/projected/5c5f62ed-1a04-409e-93ca-5c8631fe51f4-kube-api-access-l2t5l\") pod \"frr-k8s-webhook-server-78b44bf5bb-hs2fp\" (UID: \"5c5f62ed-1a04-409e-93ca-5c8631fe51f4\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-hs2fp" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.656782 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djhlj\" (UniqueName: \"kubernetes.io/projected/1d17845a-89d6-405a-8125-0b326ec8894b-kube-api-access-djhlj\") pod \"controller-69bbfbf88f-jxxkv\" (UID: \"1d17845a-89d6-405a-8125-0b326ec8894b\") " pod="metallb-system/controller-69bbfbf88f-jxxkv" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.656870 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a9138cd5-387e-43a3-bea6-17eb931827ef-metallb-excludel2\") pod 
\"speaker-5qqnc\" (UID: \"a9138cd5-387e-43a3-bea6-17eb931827ef\") " pod="metallb-system/speaker-5qqnc" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.657110 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e40cbcec-a6c5-40c2-9e5b-651065d296fc-frr-sockets\") pod \"frr-k8s-2qxkk\" (UID: \"e40cbcec-a6c5-40c2-9e5b-651065d296fc\") " pod="metallb-system/frr-k8s-2qxkk" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.657298 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e40cbcec-a6c5-40c2-9e5b-651065d296fc-metrics\") pod \"frr-k8s-2qxkk\" (UID: \"e40cbcec-a6c5-40c2-9e5b-651065d296fc\") " pod="metallb-system/frr-k8s-2qxkk" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.657620 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e40cbcec-a6c5-40c2-9e5b-651065d296fc-reloader\") pod \"frr-k8s-2qxkk\" (UID: \"e40cbcec-a6c5-40c2-9e5b-651065d296fc\") " pod="metallb-system/frr-k8s-2qxkk" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.657951 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e40cbcec-a6c5-40c2-9e5b-651065d296fc-frr-conf\") pod \"frr-k8s-2qxkk\" (UID: \"e40cbcec-a6c5-40c2-9e5b-651065d296fc\") " pod="metallb-system/frr-k8s-2qxkk" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.658122 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e40cbcec-a6c5-40c2-9e5b-651065d296fc-frr-startup\") pod \"frr-k8s-2qxkk\" (UID: \"e40cbcec-a6c5-40c2-9e5b-651065d296fc\") " pod="metallb-system/frr-k8s-2qxkk" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.664327 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/e40cbcec-a6c5-40c2-9e5b-651065d296fc-metrics-certs\") pod \"frr-k8s-2qxkk\" (UID: \"e40cbcec-a6c5-40c2-9e5b-651065d296fc\") " pod="metallb-system/frr-k8s-2qxkk" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.665223 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5c5f62ed-1a04-409e-93ca-5c8631fe51f4-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-hs2fp\" (UID: \"5c5f62ed-1a04-409e-93ca-5c8631fe51f4\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-hs2fp" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.673082 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4sj5\" (UniqueName: \"kubernetes.io/projected/e40cbcec-a6c5-40c2-9e5b-651065d296fc-kube-api-access-r4sj5\") pod \"frr-k8s-2qxkk\" (UID: \"e40cbcec-a6c5-40c2-9e5b-651065d296fc\") " pod="metallb-system/frr-k8s-2qxkk" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.673518 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2t5l\" (UniqueName: \"kubernetes.io/projected/5c5f62ed-1a04-409e-93ca-5c8631fe51f4-kube-api-access-l2t5l\") pod \"frr-k8s-webhook-server-78b44bf5bb-hs2fp\" (UID: \"5c5f62ed-1a04-409e-93ca-5c8631fe51f4\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-hs2fp" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.703855 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-hs2fp" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.713165 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-2qxkk" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.757863 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a9138cd5-387e-43a3-bea6-17eb931827ef-metrics-certs\") pod \"speaker-5qqnc\" (UID: \"a9138cd5-387e-43a3-bea6-17eb931827ef\") " pod="metallb-system/speaker-5qqnc" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.757927 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1d17845a-89d6-405a-8125-0b326ec8894b-cert\") pod \"controller-69bbfbf88f-jxxkv\" (UID: \"1d17845a-89d6-405a-8125-0b326ec8894b\") " pod="metallb-system/controller-69bbfbf88f-jxxkv" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.757967 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a9138cd5-387e-43a3-bea6-17eb931827ef-memberlist\") pod \"speaker-5qqnc\" (UID: \"a9138cd5-387e-43a3-bea6-17eb931827ef\") " pod="metallb-system/speaker-5qqnc" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.758003 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9phl\" (UniqueName: \"kubernetes.io/projected/a9138cd5-387e-43a3-bea6-17eb931827ef-kube-api-access-h9phl\") pod \"speaker-5qqnc\" (UID: \"a9138cd5-387e-43a3-bea6-17eb931827ef\") " pod="metallb-system/speaker-5qqnc" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.758041 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d17845a-89d6-405a-8125-0b326ec8894b-metrics-certs\") pod \"controller-69bbfbf88f-jxxkv\" (UID: \"1d17845a-89d6-405a-8125-0b326ec8894b\") " pod="metallb-system/controller-69bbfbf88f-jxxkv" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.758077 4793 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-djhlj\" (UniqueName: \"kubernetes.io/projected/1d17845a-89d6-405a-8125-0b326ec8894b-kube-api-access-djhlj\") pod \"controller-69bbfbf88f-jxxkv\" (UID: \"1d17845a-89d6-405a-8125-0b326ec8894b\") " pod="metallb-system/controller-69bbfbf88f-jxxkv" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.758126 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a9138cd5-387e-43a3-bea6-17eb931827ef-metallb-excludel2\") pod \"speaker-5qqnc\" (UID: \"a9138cd5-387e-43a3-bea6-17eb931827ef\") " pod="metallb-system/speaker-5qqnc" Feb 17 20:21:51 crc kubenswrapper[4793]: E0217 20:21:51.758222 4793 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Feb 17 20:21:51 crc kubenswrapper[4793]: E0217 20:21:51.758285 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d17845a-89d6-405a-8125-0b326ec8894b-metrics-certs podName:1d17845a-89d6-405a-8125-0b326ec8894b nodeName:}" failed. No retries permitted until 2026-02-17 20:21:52.258267423 +0000 UTC m=+787.549965734 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1d17845a-89d6-405a-8125-0b326ec8894b-metrics-certs") pod "controller-69bbfbf88f-jxxkv" (UID: "1d17845a-89d6-405a-8125-0b326ec8894b") : secret "controller-certs-secret" not found Feb 17 20:21:51 crc kubenswrapper[4793]: E0217 20:21:51.758407 4793 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 17 20:21:51 crc kubenswrapper[4793]: E0217 20:21:51.758496 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a9138cd5-387e-43a3-bea6-17eb931827ef-memberlist podName:a9138cd5-387e-43a3-bea6-17eb931827ef nodeName:}" failed. 
No retries permitted until 2026-02-17 20:21:52.258480518 +0000 UTC m=+787.550178829 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/a9138cd5-387e-43a3-bea6-17eb931827ef-memberlist") pod "speaker-5qqnc" (UID: "a9138cd5-387e-43a3-bea6-17eb931827ef") : secret "metallb-memberlist" not found Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.759301 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a9138cd5-387e-43a3-bea6-17eb931827ef-metallb-excludel2\") pod \"speaker-5qqnc\" (UID: \"a9138cd5-387e-43a3-bea6-17eb931827ef\") " pod="metallb-system/speaker-5qqnc" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.762481 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a9138cd5-387e-43a3-bea6-17eb931827ef-metrics-certs\") pod \"speaker-5qqnc\" (UID: \"a9138cd5-387e-43a3-bea6-17eb931827ef\") " pod="metallb-system/speaker-5qqnc" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.763837 4793 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.772052 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1d17845a-89d6-405a-8125-0b326ec8894b-cert\") pod \"controller-69bbfbf88f-jxxkv\" (UID: \"1d17845a-89d6-405a-8125-0b326ec8894b\") " pod="metallb-system/controller-69bbfbf88f-jxxkv" Feb 17 20:21:51 crc kubenswrapper[4793]: I0217 20:21:51.773539 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9phl\" (UniqueName: \"kubernetes.io/projected/a9138cd5-387e-43a3-bea6-17eb931827ef-kube-api-access-h9phl\") pod \"speaker-5qqnc\" (UID: \"a9138cd5-387e-43a3-bea6-17eb931827ef\") " pod="metallb-system/speaker-5qqnc" Feb 17 20:21:51 crc 
kubenswrapper[4793]: I0217 20:21:51.774050 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djhlj\" (UniqueName: \"kubernetes.io/projected/1d17845a-89d6-405a-8125-0b326ec8894b-kube-api-access-djhlj\") pod \"controller-69bbfbf88f-jxxkv\" (UID: \"1d17845a-89d6-405a-8125-0b326ec8894b\") " pod="metallb-system/controller-69bbfbf88f-jxxkv" Feb 17 20:21:52 crc kubenswrapper[4793]: I0217 20:21:52.155349 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2qxkk" event={"ID":"e40cbcec-a6c5-40c2-9e5b-651065d296fc","Type":"ContainerStarted","Data":"a3ba968288a66d132a00c63b8b21550de6e92f61114e8108210c92ec2c69d43a"} Feb 17 20:21:52 crc kubenswrapper[4793]: I0217 20:21:52.170960 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-hs2fp"] Feb 17 20:21:52 crc kubenswrapper[4793]: W0217 20:21:52.173624 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c5f62ed_1a04_409e_93ca_5c8631fe51f4.slice/crio-d632fc631fb4412dcf99bdecedd584d68b41f2829574107e3683cb81574bdc79 WatchSource:0}: Error finding container d632fc631fb4412dcf99bdecedd584d68b41f2829574107e3683cb81574bdc79: Status 404 returned error can't find the container with id d632fc631fb4412dcf99bdecedd584d68b41f2829574107e3683cb81574bdc79 Feb 17 20:21:52 crc kubenswrapper[4793]: I0217 20:21:52.264740 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a9138cd5-387e-43a3-bea6-17eb931827ef-memberlist\") pod \"speaker-5qqnc\" (UID: \"a9138cd5-387e-43a3-bea6-17eb931827ef\") " pod="metallb-system/speaker-5qqnc" Feb 17 20:21:52 crc kubenswrapper[4793]: I0217 20:21:52.264817 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/1d17845a-89d6-405a-8125-0b326ec8894b-metrics-certs\") pod \"controller-69bbfbf88f-jxxkv\" (UID: \"1d17845a-89d6-405a-8125-0b326ec8894b\") " pod="metallb-system/controller-69bbfbf88f-jxxkv" Feb 17 20:21:52 crc kubenswrapper[4793]: E0217 20:21:52.264964 4793 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 17 20:21:52 crc kubenswrapper[4793]: E0217 20:21:52.265108 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a9138cd5-387e-43a3-bea6-17eb931827ef-memberlist podName:a9138cd5-387e-43a3-bea6-17eb931827ef nodeName:}" failed. No retries permitted until 2026-02-17 20:21:53.265073827 +0000 UTC m=+788.556772188 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/a9138cd5-387e-43a3-bea6-17eb931827ef-memberlist") pod "speaker-5qqnc" (UID: "a9138cd5-387e-43a3-bea6-17eb931827ef") : secret "metallb-memberlist" not found Feb 17 20:21:52 crc kubenswrapper[4793]: I0217 20:21:52.271152 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d17845a-89d6-405a-8125-0b326ec8894b-metrics-certs\") pod \"controller-69bbfbf88f-jxxkv\" (UID: \"1d17845a-89d6-405a-8125-0b326ec8894b\") " pod="metallb-system/controller-69bbfbf88f-jxxkv" Feb 17 20:21:52 crc kubenswrapper[4793]: I0217 20:21:52.382741 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-jxxkv" Feb 17 20:21:52 crc kubenswrapper[4793]: W0217 20:21:52.650547 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d17845a_89d6_405a_8125_0b326ec8894b.slice/crio-12b32d65727ed08cc826e697c937e7b9fb99cf03b1b433edcd65fd68224afa94 WatchSource:0}: Error finding container 12b32d65727ed08cc826e697c937e7b9fb99cf03b1b433edcd65fd68224afa94: Status 404 returned error can't find the container with id 12b32d65727ed08cc826e697c937e7b9fb99cf03b1b433edcd65fd68224afa94 Feb 17 20:21:52 crc kubenswrapper[4793]: I0217 20:21:52.654831 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-jxxkv"] Feb 17 20:21:53 crc kubenswrapper[4793]: I0217 20:21:53.165163 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-jxxkv" event={"ID":"1d17845a-89d6-405a-8125-0b326ec8894b","Type":"ContainerStarted","Data":"659b2330503dc1ae13ecb3a0775f01aaabacb8adbd6f539ec477b8ff5aea6433"} Feb 17 20:21:53 crc kubenswrapper[4793]: I0217 20:21:53.165507 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-jxxkv" Feb 17 20:21:53 crc kubenswrapper[4793]: I0217 20:21:53.165524 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-jxxkv" event={"ID":"1d17845a-89d6-405a-8125-0b326ec8894b","Type":"ContainerStarted","Data":"c9228be567fbc33c0513e87d9e0e22dff97ee7f02f8e0b10dc4233e71edde74a"} Feb 17 20:21:53 crc kubenswrapper[4793]: I0217 20:21:53.165536 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-jxxkv" event={"ID":"1d17845a-89d6-405a-8125-0b326ec8894b","Type":"ContainerStarted","Data":"12b32d65727ed08cc826e697c937e7b9fb99cf03b1b433edcd65fd68224afa94"} Feb 17 20:21:53 crc kubenswrapper[4793]: I0217 20:21:53.166823 4793 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-hs2fp" event={"ID":"5c5f62ed-1a04-409e-93ca-5c8631fe51f4","Type":"ContainerStarted","Data":"d632fc631fb4412dcf99bdecedd584d68b41f2829574107e3683cb81574bdc79"} Feb 17 20:21:53 crc kubenswrapper[4793]: I0217 20:21:53.278355 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a9138cd5-387e-43a3-bea6-17eb931827ef-memberlist\") pod \"speaker-5qqnc\" (UID: \"a9138cd5-387e-43a3-bea6-17eb931827ef\") " pod="metallb-system/speaker-5qqnc" Feb 17 20:21:53 crc kubenswrapper[4793]: I0217 20:21:53.302593 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a9138cd5-387e-43a3-bea6-17eb931827ef-memberlist\") pod \"speaker-5qqnc\" (UID: \"a9138cd5-387e-43a3-bea6-17eb931827ef\") " pod="metallb-system/speaker-5qqnc" Feb 17 20:21:53 crc kubenswrapper[4793]: I0217 20:21:53.569769 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-5qqnc" Feb 17 20:21:53 crc kubenswrapper[4793]: W0217 20:21:53.656946 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9138cd5_387e_43a3_bea6_17eb931827ef.slice/crio-fae21092e11f39f5dfb9bdced879eea5b5671d0fa13820a7e1c86d857a6f1e26 WatchSource:0}: Error finding container fae21092e11f39f5dfb9bdced879eea5b5671d0fa13820a7e1c86d857a6f1e26: Status 404 returned error can't find the container with id fae21092e11f39f5dfb9bdced879eea5b5671d0fa13820a7e1c86d857a6f1e26 Feb 17 20:21:54 crc kubenswrapper[4793]: I0217 20:21:54.173629 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-5qqnc" event={"ID":"a9138cd5-387e-43a3-bea6-17eb931827ef","Type":"ContainerStarted","Data":"c5d58183364f70b22596133781e83717af29a5cc5d434377698cda5a464849a6"} Feb 17 20:21:54 crc kubenswrapper[4793]: I0217 20:21:54.173924 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-5qqnc" event={"ID":"a9138cd5-387e-43a3-bea6-17eb931827ef","Type":"ContainerStarted","Data":"fae21092e11f39f5dfb9bdced879eea5b5671d0fa13820a7e1c86d857a6f1e26"} Feb 17 20:21:55 crc kubenswrapper[4793]: I0217 20:21:55.181253 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-5qqnc" event={"ID":"a9138cd5-387e-43a3-bea6-17eb931827ef","Type":"ContainerStarted","Data":"c2430c1221ad2203491e9e93b43c1167f2b1c183cc1fbe58f6e2fed66425c1bc"} Feb 17 20:21:55 crc kubenswrapper[4793]: I0217 20:21:55.181419 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-5qqnc" Feb 17 20:21:55 crc kubenswrapper[4793]: I0217 20:21:55.200879 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-5qqnc" podStartSLOduration=4.200858933 podStartE2EDuration="4.200858933s" podCreationTimestamp="2026-02-17 20:21:51 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:21:55.197858911 +0000 UTC m=+790.489557222" watchObservedRunningTime="2026-02-17 20:21:55.200858933 +0000 UTC m=+790.492557244" Feb 17 20:21:55 crc kubenswrapper[4793]: I0217 20:21:55.201222 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-jxxkv" podStartSLOduration=4.201214881 podStartE2EDuration="4.201214881s" podCreationTimestamp="2026-02-17 20:21:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:21:53.190474909 +0000 UTC m=+788.482173220" watchObservedRunningTime="2026-02-17 20:21:55.201214881 +0000 UTC m=+790.492913212" Feb 17 20:22:00 crc kubenswrapper[4793]: I0217 20:22:00.214591 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-hs2fp" event={"ID":"5c5f62ed-1a04-409e-93ca-5c8631fe51f4","Type":"ContainerStarted","Data":"5e33431166fd2b7a00b5472afd6d57b8ee180ae63bf1534598655c3da2f2f680"} Feb 17 20:22:00 crc kubenswrapper[4793]: I0217 20:22:00.215398 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-hs2fp" Feb 17 20:22:00 crc kubenswrapper[4793]: I0217 20:22:00.216756 4793 generic.go:334] "Generic (PLEG): container finished" podID="e40cbcec-a6c5-40c2-9e5b-651065d296fc" containerID="4bdeecaf37d8bea98fab245bed13d08301899a88f0e323ad09556e1e5a15cc87" exitCode=0 Feb 17 20:22:00 crc kubenswrapper[4793]: I0217 20:22:00.216812 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2qxkk" event={"ID":"e40cbcec-a6c5-40c2-9e5b-651065d296fc","Type":"ContainerDied","Data":"4bdeecaf37d8bea98fab245bed13d08301899a88f0e323ad09556e1e5a15cc87"} Feb 17 20:22:00 crc kubenswrapper[4793]: I0217 20:22:00.238613 4793 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-hs2fp" podStartSLOduration=2.343160964 podStartE2EDuration="9.23859349s" podCreationTimestamp="2026-02-17 20:21:51 +0000 UTC" firstStartedPulling="2026-02-17 20:21:52.176218154 +0000 UTC m=+787.467916505" lastFinishedPulling="2026-02-17 20:21:59.07165072 +0000 UTC m=+794.363349031" observedRunningTime="2026-02-17 20:22:00.231835778 +0000 UTC m=+795.523534119" watchObservedRunningTime="2026-02-17 20:22:00.23859349 +0000 UTC m=+795.530291811" Feb 17 20:22:01 crc kubenswrapper[4793]: I0217 20:22:01.224348 4793 generic.go:334] "Generic (PLEG): container finished" podID="e40cbcec-a6c5-40c2-9e5b-651065d296fc" containerID="6feff15879eb2917096df277ea0bd25a623c0c2765de8f07464a19205c2292a8" exitCode=0 Feb 17 20:22:01 crc kubenswrapper[4793]: I0217 20:22:01.224443 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2qxkk" event={"ID":"e40cbcec-a6c5-40c2-9e5b-651065d296fc","Type":"ContainerDied","Data":"6feff15879eb2917096df277ea0bd25a623c0c2765de8f07464a19205c2292a8"} Feb 17 20:22:02 crc kubenswrapper[4793]: I0217 20:22:02.232244 4793 generic.go:334] "Generic (PLEG): container finished" podID="e40cbcec-a6c5-40c2-9e5b-651065d296fc" containerID="815399fcb9674824d6558383856683a90fc73c4bc47af7a5c6373bc4926abd1e" exitCode=0 Feb 17 20:22:02 crc kubenswrapper[4793]: I0217 20:22:02.232342 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2qxkk" event={"ID":"e40cbcec-a6c5-40c2-9e5b-651065d296fc","Type":"ContainerDied","Data":"815399fcb9674824d6558383856683a90fc73c4bc47af7a5c6373bc4926abd1e"} Feb 17 20:22:02 crc kubenswrapper[4793]: I0217 20:22:02.386849 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-jxxkv" Feb 17 20:22:03 crc kubenswrapper[4793]: I0217 20:22:03.240914 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/frr-k8s-2qxkk" event={"ID":"e40cbcec-a6c5-40c2-9e5b-651065d296fc","Type":"ContainerStarted","Data":"ed63cbe8c18fad571e3804e0a60cf54f36af3332f778cb37adfe132ce02ccd73"} Feb 17 20:22:03 crc kubenswrapper[4793]: I0217 20:22:03.241239 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-2qxkk" Feb 17 20:22:03 crc kubenswrapper[4793]: I0217 20:22:03.241250 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2qxkk" event={"ID":"e40cbcec-a6c5-40c2-9e5b-651065d296fc","Type":"ContainerStarted","Data":"d40e7c4731c79b20a402bdd2aeab725fb9c01d5b3f844d0899b8041591ee2c9b"} Feb 17 20:22:03 crc kubenswrapper[4793]: I0217 20:22:03.241259 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2qxkk" event={"ID":"e40cbcec-a6c5-40c2-9e5b-651065d296fc","Type":"ContainerStarted","Data":"8a3667662750fe4b62beeba980e3df03293f3163dfb4aa8e76a7c5c13e250bda"} Feb 17 20:22:03 crc kubenswrapper[4793]: I0217 20:22:03.241268 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2qxkk" event={"ID":"e40cbcec-a6c5-40c2-9e5b-651065d296fc","Type":"ContainerStarted","Data":"c9da991a82eb8842dd5d3d41c50c2f3d50923067a0a6785fffd2ed0bf857520f"} Feb 17 20:22:03 crc kubenswrapper[4793]: I0217 20:22:03.241276 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2qxkk" event={"ID":"e40cbcec-a6c5-40c2-9e5b-651065d296fc","Type":"ContainerStarted","Data":"c81835c4176b32f916f72b61d305eb0db0f823b4a071d078d8698e0146342305"} Feb 17 20:22:03 crc kubenswrapper[4793]: I0217 20:22:03.241285 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2qxkk" event={"ID":"e40cbcec-a6c5-40c2-9e5b-651065d296fc","Type":"ContainerStarted","Data":"0cf757dbcb78281db5e0f0e34022ef6be32b660cb30597c77b9f0be26370fdba"} Feb 17 20:22:03 crc kubenswrapper[4793]: I0217 20:22:03.262956 4793 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="metallb-system/frr-k8s-2qxkk" podStartSLOduration=5.047784281 podStartE2EDuration="12.262940931s" podCreationTimestamp="2026-02-17 20:21:51 +0000 UTC" firstStartedPulling="2026-02-17 20:21:51.873615951 +0000 UTC m=+787.165314262" lastFinishedPulling="2026-02-17 20:21:59.088772591 +0000 UTC m=+794.380470912" observedRunningTime="2026-02-17 20:22:03.262582803 +0000 UTC m=+798.554281124" watchObservedRunningTime="2026-02-17 20:22:03.262940931 +0000 UTC m=+798.554639252" Feb 17 20:22:06 crc kubenswrapper[4793]: I0217 20:22:06.713960 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-2qxkk" Feb 17 20:22:06 crc kubenswrapper[4793]: I0217 20:22:06.765849 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-2qxkk" Feb 17 20:22:11 crc kubenswrapper[4793]: I0217 20:22:11.710873 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-hs2fp" Feb 17 20:22:13 crc kubenswrapper[4793]: I0217 20:22:13.574774 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-5qqnc" Feb 17 20:22:19 crc kubenswrapper[4793]: I0217 20:22:19.580418 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-f62m6"] Feb 17 20:22:19 crc kubenswrapper[4793]: I0217 20:22:19.581940 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-f62m6" Feb 17 20:22:19 crc kubenswrapper[4793]: I0217 20:22:19.584516 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 17 20:22:19 crc kubenswrapper[4793]: I0217 20:22:19.584556 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 17 20:22:19 crc kubenswrapper[4793]: I0217 20:22:19.585267 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-f62m6"] Feb 17 20:22:19 crc kubenswrapper[4793]: I0217 20:22:19.586787 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-s5mqw" Feb 17 20:22:19 crc kubenswrapper[4793]: I0217 20:22:19.710550 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbv4t\" (UniqueName: \"kubernetes.io/projected/670203de-1f33-4e1a-8b20-db77984be713-kube-api-access-hbv4t\") pod \"openstack-operator-index-f62m6\" (UID: \"670203de-1f33-4e1a-8b20-db77984be713\") " pod="openstack-operators/openstack-operator-index-f62m6" Feb 17 20:22:19 crc kubenswrapper[4793]: I0217 20:22:19.811644 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbv4t\" (UniqueName: \"kubernetes.io/projected/670203de-1f33-4e1a-8b20-db77984be713-kube-api-access-hbv4t\") pod \"openstack-operator-index-f62m6\" (UID: \"670203de-1f33-4e1a-8b20-db77984be713\") " pod="openstack-operators/openstack-operator-index-f62m6" Feb 17 20:22:19 crc kubenswrapper[4793]: I0217 20:22:19.834575 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbv4t\" (UniqueName: \"kubernetes.io/projected/670203de-1f33-4e1a-8b20-db77984be713-kube-api-access-hbv4t\") pod \"openstack-operator-index-f62m6\" (UID: 
\"670203de-1f33-4e1a-8b20-db77984be713\") " pod="openstack-operators/openstack-operator-index-f62m6" Feb 17 20:22:19 crc kubenswrapper[4793]: I0217 20:22:19.931548 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-f62m6" Feb 17 20:22:20 crc kubenswrapper[4793]: I0217 20:22:20.409622 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-f62m6"] Feb 17 20:22:20 crc kubenswrapper[4793]: W0217 20:22:20.424074 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod670203de_1f33_4e1a_8b20_db77984be713.slice/crio-adf085319195bf2ad68d7d2b9070655272f11c02bcb5cf80667e54f8dd49145b WatchSource:0}: Error finding container adf085319195bf2ad68d7d2b9070655272f11c02bcb5cf80667e54f8dd49145b: Status 404 returned error can't find the container with id adf085319195bf2ad68d7d2b9070655272f11c02bcb5cf80667e54f8dd49145b Feb 17 20:22:21 crc kubenswrapper[4793]: I0217 20:22:21.401883 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-f62m6" event={"ID":"670203de-1f33-4e1a-8b20-db77984be713","Type":"ContainerStarted","Data":"adf085319195bf2ad68d7d2b9070655272f11c02bcb5cf80667e54f8dd49145b"} Feb 17 20:22:21 crc kubenswrapper[4793]: I0217 20:22:21.716154 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-2qxkk" Feb 17 20:22:23 crc kubenswrapper[4793]: I0217 20:22:23.416473 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-f62m6" event={"ID":"670203de-1f33-4e1a-8b20-db77984be713","Type":"ContainerStarted","Data":"193325646db9206bf2a2a7cb2ed7c7765d1667e4d036cf688d4e1b2ee6d9cbc8"} Feb 17 20:22:23 crc kubenswrapper[4793]: I0217 20:22:23.433290 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/openstack-operator-index-f62m6" podStartSLOduration=2.438867193 podStartE2EDuration="4.433260874s" podCreationTimestamp="2026-02-17 20:22:19 +0000 UTC" firstStartedPulling="2026-02-17 20:22:20.427020597 +0000 UTC m=+815.718718918" lastFinishedPulling="2026-02-17 20:22:22.421414288 +0000 UTC m=+817.713112599" observedRunningTime="2026-02-17 20:22:23.430664831 +0000 UTC m=+818.722363142" watchObservedRunningTime="2026-02-17 20:22:23.433260874 +0000 UTC m=+818.724959225" Feb 17 20:22:29 crc kubenswrapper[4793]: I0217 20:22:29.931652 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-f62m6" Feb 17 20:22:29 crc kubenswrapper[4793]: I0217 20:22:29.932066 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-f62m6" Feb 17 20:22:29 crc kubenswrapper[4793]: I0217 20:22:29.967560 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-f62m6" Feb 17 20:22:30 crc kubenswrapper[4793]: I0217 20:22:30.501924 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-f62m6" Feb 17 20:22:37 crc kubenswrapper[4793]: I0217 20:22:37.006941 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/182d2f8e8540ed2eb09d52fdb622813ecdf564ed0068dbd852370ac6c3zcm5s"] Feb 17 20:22:37 crc kubenswrapper[4793]: I0217 20:22:37.008983 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/182d2f8e8540ed2eb09d52fdb622813ecdf564ed0068dbd852370ac6c3zcm5s" Feb 17 20:22:37 crc kubenswrapper[4793]: I0217 20:22:37.011578 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-2pvnh" Feb 17 20:22:37 crc kubenswrapper[4793]: I0217 20:22:37.021598 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/182d2f8e8540ed2eb09d52fdb622813ecdf564ed0068dbd852370ac6c3zcm5s"] Feb 17 20:22:37 crc kubenswrapper[4793]: I0217 20:22:37.196872 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74zg5\" (UniqueName: \"kubernetes.io/projected/0604f563-d59c-490e-8c54-749fb46ae122-kube-api-access-74zg5\") pod \"182d2f8e8540ed2eb09d52fdb622813ecdf564ed0068dbd852370ac6c3zcm5s\" (UID: \"0604f563-d59c-490e-8c54-749fb46ae122\") " pod="openstack-operators/182d2f8e8540ed2eb09d52fdb622813ecdf564ed0068dbd852370ac6c3zcm5s" Feb 17 20:22:37 crc kubenswrapper[4793]: I0217 20:22:37.196932 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0604f563-d59c-490e-8c54-749fb46ae122-bundle\") pod \"182d2f8e8540ed2eb09d52fdb622813ecdf564ed0068dbd852370ac6c3zcm5s\" (UID: \"0604f563-d59c-490e-8c54-749fb46ae122\") " pod="openstack-operators/182d2f8e8540ed2eb09d52fdb622813ecdf564ed0068dbd852370ac6c3zcm5s" Feb 17 20:22:37 crc kubenswrapper[4793]: I0217 20:22:37.196959 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0604f563-d59c-490e-8c54-749fb46ae122-util\") pod \"182d2f8e8540ed2eb09d52fdb622813ecdf564ed0068dbd852370ac6c3zcm5s\" (UID: \"0604f563-d59c-490e-8c54-749fb46ae122\") " pod="openstack-operators/182d2f8e8540ed2eb09d52fdb622813ecdf564ed0068dbd852370ac6c3zcm5s" Feb 17 20:22:37 crc kubenswrapper[4793]: I0217 
20:22:37.298528 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0604f563-d59c-490e-8c54-749fb46ae122-bundle\") pod \"182d2f8e8540ed2eb09d52fdb622813ecdf564ed0068dbd852370ac6c3zcm5s\" (UID: \"0604f563-d59c-490e-8c54-749fb46ae122\") " pod="openstack-operators/182d2f8e8540ed2eb09d52fdb622813ecdf564ed0068dbd852370ac6c3zcm5s" Feb 17 20:22:37 crc kubenswrapper[4793]: I0217 20:22:37.298901 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0604f563-d59c-490e-8c54-749fb46ae122-util\") pod \"182d2f8e8540ed2eb09d52fdb622813ecdf564ed0068dbd852370ac6c3zcm5s\" (UID: \"0604f563-d59c-490e-8c54-749fb46ae122\") " pod="openstack-operators/182d2f8e8540ed2eb09d52fdb622813ecdf564ed0068dbd852370ac6c3zcm5s" Feb 17 20:22:37 crc kubenswrapper[4793]: I0217 20:22:37.299047 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0604f563-d59c-490e-8c54-749fb46ae122-bundle\") pod \"182d2f8e8540ed2eb09d52fdb622813ecdf564ed0068dbd852370ac6c3zcm5s\" (UID: \"0604f563-d59c-490e-8c54-749fb46ae122\") " pod="openstack-operators/182d2f8e8540ed2eb09d52fdb622813ecdf564ed0068dbd852370ac6c3zcm5s" Feb 17 20:22:37 crc kubenswrapper[4793]: I0217 20:22:37.299261 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74zg5\" (UniqueName: \"kubernetes.io/projected/0604f563-d59c-490e-8c54-749fb46ae122-kube-api-access-74zg5\") pod \"182d2f8e8540ed2eb09d52fdb622813ecdf564ed0068dbd852370ac6c3zcm5s\" (UID: \"0604f563-d59c-490e-8c54-749fb46ae122\") " pod="openstack-operators/182d2f8e8540ed2eb09d52fdb622813ecdf564ed0068dbd852370ac6c3zcm5s" Feb 17 20:22:37 crc kubenswrapper[4793]: I0217 20:22:37.299396 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/0604f563-d59c-490e-8c54-749fb46ae122-util\") pod \"182d2f8e8540ed2eb09d52fdb622813ecdf564ed0068dbd852370ac6c3zcm5s\" (UID: \"0604f563-d59c-490e-8c54-749fb46ae122\") " pod="openstack-operators/182d2f8e8540ed2eb09d52fdb622813ecdf564ed0068dbd852370ac6c3zcm5s" Feb 17 20:22:37 crc kubenswrapper[4793]: I0217 20:22:37.326410 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74zg5\" (UniqueName: \"kubernetes.io/projected/0604f563-d59c-490e-8c54-749fb46ae122-kube-api-access-74zg5\") pod \"182d2f8e8540ed2eb09d52fdb622813ecdf564ed0068dbd852370ac6c3zcm5s\" (UID: \"0604f563-d59c-490e-8c54-749fb46ae122\") " pod="openstack-operators/182d2f8e8540ed2eb09d52fdb622813ecdf564ed0068dbd852370ac6c3zcm5s" Feb 17 20:22:37 crc kubenswrapper[4793]: I0217 20:22:37.331138 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/182d2f8e8540ed2eb09d52fdb622813ecdf564ed0068dbd852370ac6c3zcm5s" Feb 17 20:22:37 crc kubenswrapper[4793]: I0217 20:22:37.798931 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/182d2f8e8540ed2eb09d52fdb622813ecdf564ed0068dbd852370ac6c3zcm5s"] Feb 17 20:22:38 crc kubenswrapper[4793]: I0217 20:22:38.524007 4793 generic.go:334] "Generic (PLEG): container finished" podID="0604f563-d59c-490e-8c54-749fb46ae122" containerID="3ba2dda5e80c02d4b1fef9c580cf15fe0c356110112c591db9331953126ce94f" exitCode=0 Feb 17 20:22:38 crc kubenswrapper[4793]: I0217 20:22:38.524072 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/182d2f8e8540ed2eb09d52fdb622813ecdf564ed0068dbd852370ac6c3zcm5s" event={"ID":"0604f563-d59c-490e-8c54-749fb46ae122","Type":"ContainerDied","Data":"3ba2dda5e80c02d4b1fef9c580cf15fe0c356110112c591db9331953126ce94f"} Feb 17 20:22:38 crc kubenswrapper[4793]: I0217 20:22:38.524249 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/182d2f8e8540ed2eb09d52fdb622813ecdf564ed0068dbd852370ac6c3zcm5s" event={"ID":"0604f563-d59c-490e-8c54-749fb46ae122","Type":"ContainerStarted","Data":"2dbe776671b482772d328c7b0723e1233cb44182d232b9f52d8cfc4d58b4d3bb"} Feb 17 20:22:39 crc kubenswrapper[4793]: I0217 20:22:39.534575 4793 generic.go:334] "Generic (PLEG): container finished" podID="0604f563-d59c-490e-8c54-749fb46ae122" containerID="aaf98924f9aa9a9a9c616c11be4daefce536bca7d203406f93d10b46289874d9" exitCode=0 Feb 17 20:22:39 crc kubenswrapper[4793]: I0217 20:22:39.534768 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/182d2f8e8540ed2eb09d52fdb622813ecdf564ed0068dbd852370ac6c3zcm5s" event={"ID":"0604f563-d59c-490e-8c54-749fb46ae122","Type":"ContainerDied","Data":"aaf98924f9aa9a9a9c616c11be4daefce536bca7d203406f93d10b46289874d9"} Feb 17 20:22:40 crc kubenswrapper[4793]: I0217 20:22:40.543973 4793 generic.go:334] "Generic (PLEG): container finished" podID="0604f563-d59c-490e-8c54-749fb46ae122" containerID="26c905703d1df7804e5bcaf11e5127880d6826f2531132de8826f75851aeb1a4" exitCode=0 Feb 17 20:22:40 crc kubenswrapper[4793]: I0217 20:22:40.544008 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/182d2f8e8540ed2eb09d52fdb622813ecdf564ed0068dbd852370ac6c3zcm5s" event={"ID":"0604f563-d59c-490e-8c54-749fb46ae122","Type":"ContainerDied","Data":"26c905703d1df7804e5bcaf11e5127880d6826f2531132de8826f75851aeb1a4"} Feb 17 20:22:41 crc kubenswrapper[4793]: I0217 20:22:41.797977 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/182d2f8e8540ed2eb09d52fdb622813ecdf564ed0068dbd852370ac6c3zcm5s" Feb 17 20:22:41 crc kubenswrapper[4793]: I0217 20:22:41.967936 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0604f563-d59c-490e-8c54-749fb46ae122-util\") pod \"0604f563-d59c-490e-8c54-749fb46ae122\" (UID: \"0604f563-d59c-490e-8c54-749fb46ae122\") " Feb 17 20:22:41 crc kubenswrapper[4793]: I0217 20:22:41.968012 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0604f563-d59c-490e-8c54-749fb46ae122-bundle\") pod \"0604f563-d59c-490e-8c54-749fb46ae122\" (UID: \"0604f563-d59c-490e-8c54-749fb46ae122\") " Feb 17 20:22:41 crc kubenswrapper[4793]: I0217 20:22:41.968164 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74zg5\" (UniqueName: \"kubernetes.io/projected/0604f563-d59c-490e-8c54-749fb46ae122-kube-api-access-74zg5\") pod \"0604f563-d59c-490e-8c54-749fb46ae122\" (UID: \"0604f563-d59c-490e-8c54-749fb46ae122\") " Feb 17 20:22:41 crc kubenswrapper[4793]: I0217 20:22:41.969196 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0604f563-d59c-490e-8c54-749fb46ae122-bundle" (OuterVolumeSpecName: "bundle") pod "0604f563-d59c-490e-8c54-749fb46ae122" (UID: "0604f563-d59c-490e-8c54-749fb46ae122"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:22:41 crc kubenswrapper[4793]: I0217 20:22:41.973450 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0604f563-d59c-490e-8c54-749fb46ae122-kube-api-access-74zg5" (OuterVolumeSpecName: "kube-api-access-74zg5") pod "0604f563-d59c-490e-8c54-749fb46ae122" (UID: "0604f563-d59c-490e-8c54-749fb46ae122"). InnerVolumeSpecName "kube-api-access-74zg5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:22:41 crc kubenswrapper[4793]: I0217 20:22:41.980534 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0604f563-d59c-490e-8c54-749fb46ae122-util" (OuterVolumeSpecName: "util") pod "0604f563-d59c-490e-8c54-749fb46ae122" (UID: "0604f563-d59c-490e-8c54-749fb46ae122"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:22:42 crc kubenswrapper[4793]: I0217 20:22:42.070110 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74zg5\" (UniqueName: \"kubernetes.io/projected/0604f563-d59c-490e-8c54-749fb46ae122-kube-api-access-74zg5\") on node \"crc\" DevicePath \"\"" Feb 17 20:22:42 crc kubenswrapper[4793]: I0217 20:22:42.070171 4793 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0604f563-d59c-490e-8c54-749fb46ae122-util\") on node \"crc\" DevicePath \"\"" Feb 17 20:22:42 crc kubenswrapper[4793]: I0217 20:22:42.070200 4793 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0604f563-d59c-490e-8c54-749fb46ae122-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 20:22:42 crc kubenswrapper[4793]: I0217 20:22:42.563651 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/182d2f8e8540ed2eb09d52fdb622813ecdf564ed0068dbd852370ac6c3zcm5s" event={"ID":"0604f563-d59c-490e-8c54-749fb46ae122","Type":"ContainerDied","Data":"2dbe776671b482772d328c7b0723e1233cb44182d232b9f52d8cfc4d58b4d3bb"} Feb 17 20:22:42 crc kubenswrapper[4793]: I0217 20:22:42.563729 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2dbe776671b482772d328c7b0723e1233cb44182d232b9f52d8cfc4d58b4d3bb" Feb 17 20:22:42 crc kubenswrapper[4793]: I0217 20:22:42.563830 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/182d2f8e8540ed2eb09d52fdb622813ecdf564ed0068dbd852370ac6c3zcm5s" Feb 17 20:22:44 crc kubenswrapper[4793]: I0217 20:22:44.434622 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-5dcc78df94-w8mdz"] Feb 17 20:22:44 crc kubenswrapper[4793]: E0217 20:22:44.435199 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0604f563-d59c-490e-8c54-749fb46ae122" containerName="extract" Feb 17 20:22:44 crc kubenswrapper[4793]: I0217 20:22:44.435214 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="0604f563-d59c-490e-8c54-749fb46ae122" containerName="extract" Feb 17 20:22:44 crc kubenswrapper[4793]: E0217 20:22:44.435227 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0604f563-d59c-490e-8c54-749fb46ae122" containerName="util" Feb 17 20:22:44 crc kubenswrapper[4793]: I0217 20:22:44.435238 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="0604f563-d59c-490e-8c54-749fb46ae122" containerName="util" Feb 17 20:22:44 crc kubenswrapper[4793]: E0217 20:22:44.435258 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0604f563-d59c-490e-8c54-749fb46ae122" containerName="pull" Feb 17 20:22:44 crc kubenswrapper[4793]: I0217 20:22:44.435270 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="0604f563-d59c-490e-8c54-749fb46ae122" containerName="pull" Feb 17 20:22:44 crc kubenswrapper[4793]: I0217 20:22:44.435416 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="0604f563-d59c-490e-8c54-749fb46ae122" containerName="extract" Feb 17 20:22:44 crc kubenswrapper[4793]: I0217 20:22:44.435931 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5dcc78df94-w8mdz" Feb 17 20:22:44 crc kubenswrapper[4793]: W0217 20:22:44.437715 4793 reflector.go:561] object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-4jm5c": failed to list *v1.Secret: secrets "openstack-operator-controller-init-dockercfg-4jm5c" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack-operators": no relationship found between node 'crc' and this object Feb 17 20:22:44 crc kubenswrapper[4793]: E0217 20:22:44.437769 4793 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"openstack-operator-controller-init-dockercfg-4jm5c\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openstack-operator-controller-init-dockercfg-4jm5c\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack-operators\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 20:22:44 crc kubenswrapper[4793]: I0217 20:22:44.455023 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5dcc78df94-w8mdz"] Feb 17 20:22:44 crc kubenswrapper[4793]: I0217 20:22:44.503991 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxzq7\" (UniqueName: \"kubernetes.io/projected/4f6459df-98f3-4dc2-92ef-6cc4b7739fb3-kube-api-access-sxzq7\") pod \"openstack-operator-controller-init-5dcc78df94-w8mdz\" (UID: \"4f6459df-98f3-4dc2-92ef-6cc4b7739fb3\") " pod="openstack-operators/openstack-operator-controller-init-5dcc78df94-w8mdz" Feb 17 20:22:44 crc kubenswrapper[4793]: I0217 20:22:44.605044 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxzq7\" (UniqueName: 
\"kubernetes.io/projected/4f6459df-98f3-4dc2-92ef-6cc4b7739fb3-kube-api-access-sxzq7\") pod \"openstack-operator-controller-init-5dcc78df94-w8mdz\" (UID: \"4f6459df-98f3-4dc2-92ef-6cc4b7739fb3\") " pod="openstack-operators/openstack-operator-controller-init-5dcc78df94-w8mdz" Feb 17 20:22:44 crc kubenswrapper[4793]: I0217 20:22:44.627617 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxzq7\" (UniqueName: \"kubernetes.io/projected/4f6459df-98f3-4dc2-92ef-6cc4b7739fb3-kube-api-access-sxzq7\") pod \"openstack-operator-controller-init-5dcc78df94-w8mdz\" (UID: \"4f6459df-98f3-4dc2-92ef-6cc4b7739fb3\") " pod="openstack-operators/openstack-operator-controller-init-5dcc78df94-w8mdz" Feb 17 20:22:45 crc kubenswrapper[4793]: I0217 20:22:45.748585 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-4jm5c" Feb 17 20:22:45 crc kubenswrapper[4793]: I0217 20:22:45.758629 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5dcc78df94-w8mdz" Feb 17 20:22:46 crc kubenswrapper[4793]: I0217 20:22:46.030630 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5dcc78df94-w8mdz"] Feb 17 20:22:46 crc kubenswrapper[4793]: I0217 20:22:46.605465 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5dcc78df94-w8mdz" event={"ID":"4f6459df-98f3-4dc2-92ef-6cc4b7739fb3","Type":"ContainerStarted","Data":"15a5a3f62cf5921354e5604e091c842c3a67242c0208a827ac29de8074406f24"} Feb 17 20:22:49 crc kubenswrapper[4793]: I0217 20:22:49.625721 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5dcc78df94-w8mdz" event={"ID":"4f6459df-98f3-4dc2-92ef-6cc4b7739fb3","Type":"ContainerStarted","Data":"1606993e45695f613a6c508e03a1ed1a1dfbdd674634214f4fbc74188ef0ba4a"} Feb 17 20:22:49 crc kubenswrapper[4793]: I0217 20:22:49.626219 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-5dcc78df94-w8mdz" Feb 17 20:22:49 crc kubenswrapper[4793]: I0217 20:22:49.665162 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-5dcc78df94-w8mdz" podStartSLOduration=2.247826611 podStartE2EDuration="5.66513725s" podCreationTimestamp="2026-02-17 20:22:44 +0000 UTC" firstStartedPulling="2026-02-17 20:22:46.044634732 +0000 UTC m=+841.336333043" lastFinishedPulling="2026-02-17 20:22:49.461945361 +0000 UTC m=+844.753643682" observedRunningTime="2026-02-17 20:22:49.658259499 +0000 UTC m=+844.949957820" watchObservedRunningTime="2026-02-17 20:22:49.66513725 +0000 UTC m=+844.956835601" Feb 17 20:22:55 crc kubenswrapper[4793]: I0217 20:22:55.763538 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/openstack-operator-controller-init-5dcc78df94-w8mdz" Feb 17 20:23:16 crc kubenswrapper[4793]: I0217 20:23:16.671101 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-h4k6c"] Feb 17 20:23:16 crc kubenswrapper[4793]: I0217 20:23:16.672569 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-h4k6c" Feb 17 20:23:16 crc kubenswrapper[4793]: I0217 20:23:16.674788 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-wkld8" Feb 17 20:23:16 crc kubenswrapper[4793]: I0217 20:23:16.684098 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-stsjq"] Feb 17 20:23:16 crc kubenswrapper[4793]: I0217 20:23:16.685069 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-stsjq" Feb 17 20:23:16 crc kubenswrapper[4793]: I0217 20:23:16.695043 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-h4k6c"] Feb 17 20:23:16 crc kubenswrapper[4793]: I0217 20:23:16.695459 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-qcjsf" Feb 17 20:23:16 crc kubenswrapper[4793]: I0217 20:23:16.705956 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-stsjq"] Feb 17 20:23:16 crc kubenswrapper[4793]: I0217 20:23:16.715743 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-nvq46"] Feb 17 20:23:16 crc kubenswrapper[4793]: I0217 20:23:16.716487 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-nvq46" Feb 17 20:23:16 crc kubenswrapper[4793]: I0217 20:23:16.721452 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twfwk\" (UniqueName: \"kubernetes.io/projected/c5356f69-6595-4df6-804c-0bdc507e635a-kube-api-access-twfwk\") pod \"designate-operator-controller-manager-6d8bf5c495-stsjq\" (UID: \"c5356f69-6595-4df6-804c-0bdc507e635a\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-stsjq" Feb 17 20:23:16 crc kubenswrapper[4793]: I0217 20:23:16.721550 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8m9l\" (UniqueName: \"kubernetes.io/projected/5805376c-5192-4fc2-a3b0-64eae2eaf7a1-kube-api-access-j8m9l\") pod \"barbican-operator-controller-manager-868647ff47-h4k6c\" (UID: \"5805376c-5192-4fc2-a3b0-64eae2eaf7a1\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-h4k6c" Feb 17 20:23:16 crc kubenswrapper[4793]: I0217 20:23:16.724850 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-b57rm" Feb 17 20:23:16 crc kubenswrapper[4793]: I0217 20:23:16.736049 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-drgj7"] Feb 17 20:23:16 crc kubenswrapper[4793]: I0217 20:23:16.736874 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-drgj7" Feb 17 20:23:16 crc kubenswrapper[4793]: I0217 20:23:16.740943 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-2nzfq" Feb 17 20:23:16 crc kubenswrapper[4793]: I0217 20:23:16.748448 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-nvq46"] Feb 17 20:23:16 crc kubenswrapper[4793]: I0217 20:23:16.762547 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-n5b9b"] Feb 17 20:23:16 crc kubenswrapper[4793]: I0217 20:23:16.763347 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-n5b9b" Feb 17 20:23:16 crc kubenswrapper[4793]: I0217 20:23:16.766952 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-r4d69" Feb 17 20:23:16 crc kubenswrapper[4793]: I0217 20:23:16.770617 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-drgj7"] Feb 17 20:23:16 crc kubenswrapper[4793]: I0217 20:23:16.787711 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-n5b9b"] Feb 17 20:23:16 crc kubenswrapper[4793]: I0217 20:23:16.792800 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-6spqn"] Feb 17 20:23:16 crc kubenswrapper[4793]: I0217 20:23:16.793665 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-6spqn" Feb 17 20:23:16 crc kubenswrapper[4793]: I0217 20:23:16.797575 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-28rcq" Feb 17 20:23:16 crc kubenswrapper[4793]: I0217 20:23:16.797967 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-6spqn"] Feb 17 20:23:16 crc kubenswrapper[4793]: I0217 20:23:16.808479 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-mnpjn"] Feb 17 20:23:16 crc kubenswrapper[4793]: I0217 20:23:16.809430 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-mnpjn" Feb 17 20:23:16 crc kubenswrapper[4793]: I0217 20:23:16.817057 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-lvwj2" Feb 17 20:23:16 crc kubenswrapper[4793]: I0217 20:23:16.817258 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-wwhj6"] Feb 17 20:23:16 crc kubenswrapper[4793]: I0217 20:23:16.817275 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 17 20:23:16 crc kubenswrapper[4793]: I0217 20:23:16.818153 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-wwhj6" Feb 17 20:23:16 crc kubenswrapper[4793]: I0217 20:23:16.820324 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-lwwmd" Feb 17 20:23:16 crc kubenswrapper[4793]: I0217 20:23:16.822717 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twfwk\" (UniqueName: \"kubernetes.io/projected/c5356f69-6595-4df6-804c-0bdc507e635a-kube-api-access-twfwk\") pod \"designate-operator-controller-manager-6d8bf5c495-stsjq\" (UID: \"c5356f69-6595-4df6-804c-0bdc507e635a\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-stsjq" Feb 17 20:23:16 crc kubenswrapper[4793]: I0217 20:23:16.822845 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plnmw\" (UniqueName: \"kubernetes.io/projected/a0c27d33-e835-4fe9-92e4-7846bb169b1a-kube-api-access-plnmw\") pod \"horizon-operator-controller-manager-5b9b8895d5-6spqn\" (UID: \"a0c27d33-e835-4fe9-92e4-7846bb169b1a\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-6spqn" Feb 17 20:23:16 crc kubenswrapper[4793]: I0217 20:23:16.823853 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vmqx\" (UniqueName: \"kubernetes.io/projected/419d7c26-e4a2-44ef-8d24-4dc5c6a2db7f-kube-api-access-7vmqx\") pod \"cinder-operator-controller-manager-5d946d989d-drgj7\" (UID: \"419d7c26-e4a2-44ef-8d24-4dc5c6a2db7f\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-drgj7" Feb 17 20:23:16 crc kubenswrapper[4793]: I0217 20:23:16.823910 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8m9l\" (UniqueName: 
\"kubernetes.io/projected/5805376c-5192-4fc2-a3b0-64eae2eaf7a1-kube-api-access-j8m9l\") pod \"barbican-operator-controller-manager-868647ff47-h4k6c\" (UID: \"5805376c-5192-4fc2-a3b0-64eae2eaf7a1\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-h4k6c" Feb 17 20:23:16 crc kubenswrapper[4793]: I0217 20:23:16.823940 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmmsl\" (UniqueName: \"kubernetes.io/projected/6dfe3dba-f71d-4de0-9e38-8d0a38c7f272-kube-api-access-vmmsl\") pod \"heat-operator-controller-manager-69f49c598c-n5b9b\" (UID: \"6dfe3dba-f71d-4de0-9e38-8d0a38c7f272\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-n5b9b" Feb 17 20:23:16 crc kubenswrapper[4793]: I0217 20:23:16.823968 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfvt6\" (UniqueName: \"kubernetes.io/projected/94f226e8-038f-42f0-8ead-687af3df6d2b-kube-api-access-gfvt6\") pod \"glance-operator-controller-manager-77987464f4-nvq46\" (UID: \"94f226e8-038f-42f0-8ead-687af3df6d2b\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-nvq46" Feb 17 20:23:16 crc kubenswrapper[4793]: I0217 20:23:16.825742 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-mnpjn"] Feb 17 20:23:16 crc kubenswrapper[4793]: I0217 20:23:16.877756 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-wwhj6"] Feb 17 20:23:16 crc kubenswrapper[4793]: I0217 20:23:16.904361 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8m9l\" (UniqueName: \"kubernetes.io/projected/5805376c-5192-4fc2-a3b0-64eae2eaf7a1-kube-api-access-j8m9l\") pod \"barbican-operator-controller-manager-868647ff47-h4k6c\" (UID: \"5805376c-5192-4fc2-a3b0-64eae2eaf7a1\") " 
pod="openstack-operators/barbican-operator-controller-manager-868647ff47-h4k6c" Feb 17 20:23:16 crc kubenswrapper[4793]: I0217 20:23:16.930549 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbxj8\" (UniqueName: \"kubernetes.io/projected/85645037-549b-483a-a0db-76649c2e9d0f-kube-api-access-fbxj8\") pod \"ironic-operator-controller-manager-554564d7fc-wwhj6\" (UID: \"85645037-549b-483a-a0db-76649c2e9d0f\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-wwhj6" Feb 17 20:23:16 crc kubenswrapper[4793]: I0217 20:23:16.930621 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plnmw\" (UniqueName: \"kubernetes.io/projected/a0c27d33-e835-4fe9-92e4-7846bb169b1a-kube-api-access-plnmw\") pod \"horizon-operator-controller-manager-5b9b8895d5-6spqn\" (UID: \"a0c27d33-e835-4fe9-92e4-7846bb169b1a\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-6spqn" Feb 17 20:23:16 crc kubenswrapper[4793]: I0217 20:23:16.930673 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vmqx\" (UniqueName: \"kubernetes.io/projected/419d7c26-e4a2-44ef-8d24-4dc5c6a2db7f-kube-api-access-7vmqx\") pod \"cinder-operator-controller-manager-5d946d989d-drgj7\" (UID: \"419d7c26-e4a2-44ef-8d24-4dc5c6a2db7f\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-drgj7" Feb 17 20:23:16 crc kubenswrapper[4793]: I0217 20:23:16.930723 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68r66\" (UniqueName: \"kubernetes.io/projected/50be19a7-d8db-4c5c-8966-825c6d3310c1-kube-api-access-68r66\") pod \"infra-operator-controller-manager-79d975b745-mnpjn\" (UID: \"50be19a7-d8db-4c5c-8966-825c6d3310c1\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-mnpjn" Feb 17 20:23:16 crc kubenswrapper[4793]: 
I0217 20:23:16.930763 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/50be19a7-d8db-4c5c-8966-825c6d3310c1-cert\") pod \"infra-operator-controller-manager-79d975b745-mnpjn\" (UID: \"50be19a7-d8db-4c5c-8966-825c6d3310c1\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-mnpjn" Feb 17 20:23:16 crc kubenswrapper[4793]: I0217 20:23:16.930792 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmmsl\" (UniqueName: \"kubernetes.io/projected/6dfe3dba-f71d-4de0-9e38-8d0a38c7f272-kube-api-access-vmmsl\") pod \"heat-operator-controller-manager-69f49c598c-n5b9b\" (UID: \"6dfe3dba-f71d-4de0-9e38-8d0a38c7f272\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-n5b9b" Feb 17 20:23:16 crc kubenswrapper[4793]: I0217 20:23:16.930825 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfvt6\" (UniqueName: \"kubernetes.io/projected/94f226e8-038f-42f0-8ead-687af3df6d2b-kube-api-access-gfvt6\") pod \"glance-operator-controller-manager-77987464f4-nvq46\" (UID: \"94f226e8-038f-42f0-8ead-687af3df6d2b\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-nvq46" Feb 17 20:23:16 crc kubenswrapper[4793]: I0217 20:23:16.936432 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twfwk\" (UniqueName: \"kubernetes.io/projected/c5356f69-6595-4df6-804c-0bdc507e635a-kube-api-access-twfwk\") pod \"designate-operator-controller-manager-6d8bf5c495-stsjq\" (UID: \"c5356f69-6595-4df6-804c-0bdc507e635a\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-stsjq" Feb 17 20:23:16 crc kubenswrapper[4793]: I0217 20:23:16.963329 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-qjv29"] Feb 17 20:23:16 crc 
kubenswrapper[4793]: I0217 20:23:16.964755 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-qjv29" Feb 17 20:23:16 crc kubenswrapper[4793]: I0217 20:23:16.965345 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmmsl\" (UniqueName: \"kubernetes.io/projected/6dfe3dba-f71d-4de0-9e38-8d0a38c7f272-kube-api-access-vmmsl\") pod \"heat-operator-controller-manager-69f49c598c-n5b9b\" (UID: \"6dfe3dba-f71d-4de0-9e38-8d0a38c7f272\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-n5b9b" Feb 17 20:23:16 crc kubenswrapper[4793]: I0217 20:23:16.968308 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-9m72f" Feb 17 20:23:16 crc kubenswrapper[4793]: I0217 20:23:16.978297 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfvt6\" (UniqueName: \"kubernetes.io/projected/94f226e8-038f-42f0-8ead-687af3df6d2b-kube-api-access-gfvt6\") pod \"glance-operator-controller-manager-77987464f4-nvq46\" (UID: \"94f226e8-038f-42f0-8ead-687af3df6d2b\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-nvq46" Feb 17 20:23:16 crc kubenswrapper[4793]: I0217 20:23:16.980368 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plnmw\" (UniqueName: \"kubernetes.io/projected/a0c27d33-e835-4fe9-92e4-7846bb169b1a-kube-api-access-plnmw\") pod \"horizon-operator-controller-manager-5b9b8895d5-6spqn\" (UID: \"a0c27d33-e835-4fe9-92e4-7846bb169b1a\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-6spqn" Feb 17 20:23:16 crc kubenswrapper[4793]: I0217 20:23:16.993579 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-wxb6w"] Feb 17 20:23:16 crc kubenswrapper[4793]: I0217 
20:23:16.994473 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-wxb6w" Feb 17 20:23:16 crc kubenswrapper[4793]: I0217 20:23:16.996327 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vmqx\" (UniqueName: \"kubernetes.io/projected/419d7c26-e4a2-44ef-8d24-4dc5c6a2db7f-kube-api-access-7vmqx\") pod \"cinder-operator-controller-manager-5d946d989d-drgj7\" (UID: \"419d7c26-e4a2-44ef-8d24-4dc5c6a2db7f\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-drgj7" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.003417 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-h4k6c" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.009733 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-59sfg" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.017046 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-wxb6w"] Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.019826 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-qjv29"] Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.028557 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-stsjq" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.040234 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr6fb\" (UniqueName: \"kubernetes.io/projected/df82a1fe-a387-45c4-bad1-d2fca982baaa-kube-api-access-lr6fb\") pod \"manila-operator-controller-manager-54f6768c69-qjv29\" (UID: \"df82a1fe-a387-45c4-bad1-d2fca982baaa\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-qjv29" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.040400 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbxj8\" (UniqueName: \"kubernetes.io/projected/85645037-549b-483a-a0db-76649c2e9d0f-kube-api-access-fbxj8\") pod \"ironic-operator-controller-manager-554564d7fc-wwhj6\" (UID: \"85645037-549b-483a-a0db-76649c2e9d0f\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-wwhj6" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.040439 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55nsb\" (UniqueName: \"kubernetes.io/projected/d0cd31cb-0cbc-414c-a20b-f6c38256f347-kube-api-access-55nsb\") pod \"keystone-operator-controller-manager-b4d948c87-wxb6w\" (UID: \"d0cd31cb-0cbc-414c-a20b-f6c38256f347\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-wxb6w" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.040471 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68r66\" (UniqueName: \"kubernetes.io/projected/50be19a7-d8db-4c5c-8966-825c6d3310c1-kube-api-access-68r66\") pod \"infra-operator-controller-manager-79d975b745-mnpjn\" (UID: \"50be19a7-d8db-4c5c-8966-825c6d3310c1\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-mnpjn" Feb 17 20:23:17 crc 
kubenswrapper[4793]: I0217 20:23:17.040498 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/50be19a7-d8db-4c5c-8966-825c6d3310c1-cert\") pod \"infra-operator-controller-manager-79d975b745-mnpjn\" (UID: \"50be19a7-d8db-4c5c-8966-825c6d3310c1\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-mnpjn" Feb 17 20:23:17 crc kubenswrapper[4793]: E0217 20:23:17.040633 4793 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 17 20:23:17 crc kubenswrapper[4793]: E0217 20:23:17.040681 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/50be19a7-d8db-4c5c-8966-825c6d3310c1-cert podName:50be19a7-d8db-4c5c-8966-825c6d3310c1 nodeName:}" failed. No retries permitted until 2026-02-17 20:23:17.540664377 +0000 UTC m=+872.832362688 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/50be19a7-d8db-4c5c-8966-825c6d3310c1-cert") pod "infra-operator-controller-manager-79d975b745-mnpjn" (UID: "50be19a7-d8db-4c5c-8966-825c6d3310c1") : secret "infra-operator-webhook-server-cert" not found Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.046666 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-kb62n"] Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.047499 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-kb62n" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.051000 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-rt4vx" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.052130 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-nvq46" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.062465 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-drgj7" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.066524 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68r66\" (UniqueName: \"kubernetes.io/projected/50be19a7-d8db-4c5c-8966-825c6d3310c1-kube-api-access-68r66\") pod \"infra-operator-controller-manager-79d975b745-mnpjn\" (UID: \"50be19a7-d8db-4c5c-8966-825c6d3310c1\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-mnpjn" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.066791 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-lnvx2"] Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.067177 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbxj8\" (UniqueName: \"kubernetes.io/projected/85645037-549b-483a-a0db-76649c2e9d0f-kube-api-access-fbxj8\") pod \"ironic-operator-controller-manager-554564d7fc-wwhj6\" (UID: \"85645037-549b-483a-a0db-76649c2e9d0f\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-wwhj6" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.067507 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-lnvx2" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.069644 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-jnjlk" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.080749 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-kb62n"] Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.082118 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-n5b9b" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.095309 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m46dz"] Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.096298 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m46dz" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.099330 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-9k4zs" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.107916 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m46dz"] Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.108996 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-6spqn" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.119981 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-lnvx2"] Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.134076 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-jrjzp"] Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.134986 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-jrjzp" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.137351 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-9w8mg" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.141420 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lr6fb\" (UniqueName: \"kubernetes.io/projected/df82a1fe-a387-45c4-bad1-d2fca982baaa-kube-api-access-lr6fb\") pod \"manila-operator-controller-manager-54f6768c69-qjv29\" (UID: \"df82a1fe-a387-45c4-bad1-d2fca982baaa\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-qjv29" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.141453 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvn2r\" (UniqueName: \"kubernetes.io/projected/0b1ec5e9-161a-403b-bedd-b6c5823180db-kube-api-access-wvn2r\") pod \"neutron-operator-controller-manager-64ddbf8bb-m46dz\" (UID: \"0b1ec5e9-161a-403b-bedd-b6c5823180db\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m46dz" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.141496 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-55nsb\" (UniqueName: \"kubernetes.io/projected/d0cd31cb-0cbc-414c-a20b-f6c38256f347-kube-api-access-55nsb\") pod \"keystone-operator-controller-manager-b4d948c87-wxb6w\" (UID: \"d0cd31cb-0cbc-414c-a20b-f6c38256f347\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-wxb6w" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.141521 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s8pr\" (UniqueName: \"kubernetes.io/projected/2d46ef99-7efb-4cb1-8970-10cc188a3bb2-kube-api-access-2s8pr\") pod \"nova-operator-controller-manager-567668f5cf-lnvx2\" (UID: \"2d46ef99-7efb-4cb1-8970-10cc188a3bb2\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-lnvx2" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.141561 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzlfd\" (UniqueName: \"kubernetes.io/projected/8eeed758-99d2-46c6-be0f-381d8eea293a-kube-api-access-tzlfd\") pod \"mariadb-operator-controller-manager-6994f66f48-kb62n\" (UID: \"8eeed758-99d2-46c6-be0f-381d8eea293a\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-kb62n" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.148995 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-jrjzp"] Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.165378 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-k6mbs"] Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.168233 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lr6fb\" (UniqueName: \"kubernetes.io/projected/df82a1fe-a387-45c4-bad1-d2fca982baaa-kube-api-access-lr6fb\") pod \"manila-operator-controller-manager-54f6768c69-qjv29\" (UID: 
\"df82a1fe-a387-45c4-bad1-d2fca982baaa\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-qjv29" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.171195 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-wwhj6" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.171402 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-k6mbs" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.173342 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-h5qzv" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.175740 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55nsb\" (UniqueName: \"kubernetes.io/projected/d0cd31cb-0cbc-414c-a20b-f6c38256f347-kube-api-access-55nsb\") pod \"keystone-operator-controller-manager-b4d948c87-wxb6w\" (UID: \"d0cd31cb-0cbc-414c-a20b-f6c38256f347\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-wxb6w" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.179500 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpvd9d"] Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.180818 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpvd9d" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.182313 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.182681 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-ncwmg" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.189588 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-k6mbs"] Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.195235 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-n6jbv"] Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.196098 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-n6jbv" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.197935 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-j5txw" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.205860 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpvd9d"] Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.214056 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-4qdc2"] Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.230398 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-n6jbv"] Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.230441 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-4qdc2"] Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.230510 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-4qdc2" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.233791 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-9fnwg" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.242422 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq7d2\" (UniqueName: \"kubernetes.io/projected/6ae6b399-39bc-446a-af92-4d4b7fc18361-kube-api-access-fq7d2\") pod \"octavia-operator-controller-manager-69f8888797-jrjzp\" (UID: \"6ae6b399-39bc-446a-af92-4d4b7fc18361\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-jrjzp" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.242460 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvn2r\" (UniqueName: \"kubernetes.io/projected/0b1ec5e9-161a-403b-bedd-b6c5823180db-kube-api-access-wvn2r\") pod \"neutron-operator-controller-manager-64ddbf8bb-m46dz\" (UID: \"0b1ec5e9-161a-403b-bedd-b6c5823180db\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m46dz" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.242498 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkqqp\" (UniqueName: \"kubernetes.io/projected/95cbe9b0-bd61-493b-a8b2-f5d70b515ed7-kube-api-access-zkqqp\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cpvd9d\" (UID: \"95cbe9b0-bd61-493b-a8b2-f5d70b515ed7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpvd9d" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.242537 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2s8pr\" (UniqueName: 
\"kubernetes.io/projected/2d46ef99-7efb-4cb1-8970-10cc188a3bb2-kube-api-access-2s8pr\") pod \"nova-operator-controller-manager-567668f5cf-lnvx2\" (UID: \"2d46ef99-7efb-4cb1-8970-10cc188a3bb2\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-lnvx2" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.242554 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrm9b\" (UniqueName: \"kubernetes.io/projected/c0c6fc5a-a96e-4165-a9b9-a4ae7b099216-kube-api-access-jrm9b\") pod \"placement-operator-controller-manager-8497b45c89-n6jbv\" (UID: \"c0c6fc5a-a96e-4165-a9b9-a4ae7b099216\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-n6jbv" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.242572 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s9s7\" (UniqueName: \"kubernetes.io/projected/f641768c-418b-46cd-83be-16acad24aa35-kube-api-access-7s9s7\") pod \"ovn-operator-controller-manager-d44cf6b75-k6mbs\" (UID: \"f641768c-418b-46cd-83be-16acad24aa35\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-k6mbs" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.242596 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/95cbe9b0-bd61-493b-a8b2-f5d70b515ed7-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cpvd9d\" (UID: \"95cbe9b0-bd61-493b-a8b2-f5d70b515ed7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpvd9d" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.242634 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzlfd\" (UniqueName: \"kubernetes.io/projected/8eeed758-99d2-46c6-be0f-381d8eea293a-kube-api-access-tzlfd\") pod 
\"mariadb-operator-controller-manager-6994f66f48-kb62n\" (UID: \"8eeed758-99d2-46c6-be0f-381d8eea293a\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-kb62n" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.256616 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-4v2ch"] Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.257964 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-4v2ch" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.264994 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-jjgf8" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.265968 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-4v2ch"] Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.283003 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2s8pr\" (UniqueName: \"kubernetes.io/projected/2d46ef99-7efb-4cb1-8970-10cc188a3bb2-kube-api-access-2s8pr\") pod \"nova-operator-controller-manager-567668f5cf-lnvx2\" (UID: \"2d46ef99-7efb-4cb1-8970-10cc188a3bb2\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-lnvx2" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.287658 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvn2r\" (UniqueName: \"kubernetes.io/projected/0b1ec5e9-161a-403b-bedd-b6c5823180db-kube-api-access-wvn2r\") pod \"neutron-operator-controller-manager-64ddbf8bb-m46dz\" (UID: \"0b1ec5e9-161a-403b-bedd-b6c5823180db\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m46dz" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.293832 4793 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzlfd\" (UniqueName: \"kubernetes.io/projected/8eeed758-99d2-46c6-be0f-381d8eea293a-kube-api-access-tzlfd\") pod \"mariadb-operator-controller-manager-6994f66f48-kb62n\" (UID: \"8eeed758-99d2-46c6-be0f-381d8eea293a\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-kb62n" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.301460 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-9kgb7"] Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.302219 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-9kgb7" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.304631 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-6szzp" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.312497 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-9kgb7"] Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.346114 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crzfd\" (UniqueName: \"kubernetes.io/projected/00b83896-fe8e-49bc-b762-6dfb14777fd7-kube-api-access-crzfd\") pod \"telemetry-operator-controller-manager-7f45b4ff68-4v2ch\" (UID: \"00b83896-fe8e-49bc-b762-6dfb14777fd7\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-4v2ch" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.346176 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fq7d2\" (UniqueName: \"kubernetes.io/projected/6ae6b399-39bc-446a-af92-4d4b7fc18361-kube-api-access-fq7d2\") pod 
\"octavia-operator-controller-manager-69f8888797-jrjzp\" (UID: \"6ae6b399-39bc-446a-af92-4d4b7fc18361\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-jrjzp" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.346230 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkqqp\" (UniqueName: \"kubernetes.io/projected/95cbe9b0-bd61-493b-a8b2-f5d70b515ed7-kube-api-access-zkqqp\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cpvd9d\" (UID: \"95cbe9b0-bd61-493b-a8b2-f5d70b515ed7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpvd9d" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.346254 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4gvv\" (UniqueName: \"kubernetes.io/projected/130aaa99-ec87-4c6d-8eaa-e553872e6df8-kube-api-access-r4gvv\") pod \"test-operator-controller-manager-7866795846-9kgb7\" (UID: \"130aaa99-ec87-4c6d-8eaa-e553872e6df8\") " pod="openstack-operators/test-operator-controller-manager-7866795846-9kgb7" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.346296 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wgff\" (UniqueName: \"kubernetes.io/projected/53c19b0b-735c-46ca-99f1-4be322dddb16-kube-api-access-5wgff\") pod \"swift-operator-controller-manager-68f46476f-4qdc2\" (UID: \"53c19b0b-735c-46ca-99f1-4be322dddb16\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-4qdc2" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.346337 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s9s7\" (UniqueName: \"kubernetes.io/projected/f641768c-418b-46cd-83be-16acad24aa35-kube-api-access-7s9s7\") pod \"ovn-operator-controller-manager-d44cf6b75-k6mbs\" (UID: \"f641768c-418b-46cd-83be-16acad24aa35\") " 
pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-k6mbs" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.346363 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrm9b\" (UniqueName: \"kubernetes.io/projected/c0c6fc5a-a96e-4165-a9b9-a4ae7b099216-kube-api-access-jrm9b\") pod \"placement-operator-controller-manager-8497b45c89-n6jbv\" (UID: \"c0c6fc5a-a96e-4165-a9b9-a4ae7b099216\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-n6jbv" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.346388 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/95cbe9b0-bd61-493b-a8b2-f5d70b515ed7-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cpvd9d\" (UID: \"95cbe9b0-bd61-493b-a8b2-f5d70b515ed7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpvd9d" Feb 17 20:23:17 crc kubenswrapper[4793]: E0217 20:23:17.357910 4793 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 20:23:17 crc kubenswrapper[4793]: E0217 20:23:17.358010 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/95cbe9b0-bd61-493b-a8b2-f5d70b515ed7-cert podName:95cbe9b0-bd61-493b-a8b2-f5d70b515ed7 nodeName:}" failed. No retries permitted until 2026-02-17 20:23:17.857991073 +0000 UTC m=+873.149689384 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/95cbe9b0-bd61-493b-a8b2-f5d70b515ed7-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cpvd9d" (UID: "95cbe9b0-bd61-493b-a8b2-f5d70b515ed7") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.365335 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-qjv29" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.372539 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fq7d2\" (UniqueName: \"kubernetes.io/projected/6ae6b399-39bc-446a-af92-4d4b7fc18361-kube-api-access-fq7d2\") pod \"octavia-operator-controller-manager-69f8888797-jrjzp\" (UID: \"6ae6b399-39bc-446a-af92-4d4b7fc18361\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-jrjzp" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.377099 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-wxb6w" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.394241 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s9s7\" (UniqueName: \"kubernetes.io/projected/f641768c-418b-46cd-83be-16acad24aa35-kube-api-access-7s9s7\") pod \"ovn-operator-controller-manager-d44cf6b75-k6mbs\" (UID: \"f641768c-418b-46cd-83be-16acad24aa35\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-k6mbs" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.428055 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrm9b\" (UniqueName: \"kubernetes.io/projected/c0c6fc5a-a96e-4165-a9b9-a4ae7b099216-kube-api-access-jrm9b\") pod \"placement-operator-controller-manager-8497b45c89-n6jbv\" (UID: \"c0c6fc5a-a96e-4165-a9b9-a4ae7b099216\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-n6jbv" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.432525 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-lnvx2" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.432966 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-95859678f-wk66s"] Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.433238 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkqqp\" (UniqueName: \"kubernetes.io/projected/95cbe9b0-bd61-493b-a8b2-f5d70b515ed7-kube-api-access-zkqqp\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cpvd9d\" (UID: \"95cbe9b0-bd61-493b-a8b2-f5d70b515ed7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpvd9d" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.433845 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-95859678f-wk66s" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.434379 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-kb62n" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.435264 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m46dz" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.436818 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-qt28d" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.453121 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crzfd\" (UniqueName: \"kubernetes.io/projected/00b83896-fe8e-49bc-b762-6dfb14777fd7-kube-api-access-crzfd\") pod \"telemetry-operator-controller-manager-7f45b4ff68-4v2ch\" (UID: \"00b83896-fe8e-49bc-b762-6dfb14777fd7\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-4v2ch" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.453211 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4gvv\" (UniqueName: \"kubernetes.io/projected/130aaa99-ec87-4c6d-8eaa-e553872e6df8-kube-api-access-r4gvv\") pod \"test-operator-controller-manager-7866795846-9kgb7\" (UID: \"130aaa99-ec87-4c6d-8eaa-e553872e6df8\") " pod="openstack-operators/test-operator-controller-manager-7866795846-9kgb7" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.453251 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wgff\" (UniqueName: \"kubernetes.io/projected/53c19b0b-735c-46ca-99f1-4be322dddb16-kube-api-access-5wgff\") pod \"swift-operator-controller-manager-68f46476f-4qdc2\" (UID: \"53c19b0b-735c-46ca-99f1-4be322dddb16\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-4qdc2" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.453957 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-jrjzp" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.484422 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-95859678f-wk66s"] Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.490786 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crzfd\" (UniqueName: \"kubernetes.io/projected/00b83896-fe8e-49bc-b762-6dfb14777fd7-kube-api-access-crzfd\") pod \"telemetry-operator-controller-manager-7f45b4ff68-4v2ch\" (UID: \"00b83896-fe8e-49bc-b762-6dfb14777fd7\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-4v2ch" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.494040 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-k6mbs" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.504152 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4gvv\" (UniqueName: \"kubernetes.io/projected/130aaa99-ec87-4c6d-8eaa-e553872e6df8-kube-api-access-r4gvv\") pod \"test-operator-controller-manager-7866795846-9kgb7\" (UID: \"130aaa99-ec87-4c6d-8eaa-e553872e6df8\") " pod="openstack-operators/test-operator-controller-manager-7866795846-9kgb7" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.536092 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-n6jbv" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.558593 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzhr4\" (UniqueName: \"kubernetes.io/projected/10ee5096-1369-4d4f-90ed-d4160f7a2aa4-kube-api-access-hzhr4\") pod \"watcher-operator-controller-manager-95859678f-wk66s\" (UID: \"10ee5096-1369-4d4f-90ed-d4160f7a2aa4\") " pod="openstack-operators/watcher-operator-controller-manager-95859678f-wk66s" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.558857 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/50be19a7-d8db-4c5c-8966-825c6d3310c1-cert\") pod \"infra-operator-controller-manager-79d975b745-mnpjn\" (UID: \"50be19a7-d8db-4c5c-8966-825c6d3310c1\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-mnpjn" Feb 17 20:23:17 crc kubenswrapper[4793]: E0217 20:23:17.558988 4793 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 17 20:23:17 crc kubenswrapper[4793]: E0217 20:23:17.559033 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/50be19a7-d8db-4c5c-8966-825c6d3310c1-cert podName:50be19a7-d8db-4c5c-8966-825c6d3310c1 nodeName:}" failed. No retries permitted until 2026-02-17 20:23:18.559018298 +0000 UTC m=+873.850716609 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/50be19a7-d8db-4c5c-8966-825c6d3310c1-cert") pod "infra-operator-controller-manager-79d975b745-mnpjn" (UID: "50be19a7-d8db-4c5c-8966-825c6d3310c1") : secret "infra-operator-webhook-server-cert" not found Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.575024 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wgff\" (UniqueName: \"kubernetes.io/projected/53c19b0b-735c-46ca-99f1-4be322dddb16-kube-api-access-5wgff\") pod \"swift-operator-controller-manager-68f46476f-4qdc2\" (UID: \"53c19b0b-735c-46ca-99f1-4be322dddb16\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-4qdc2" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.633135 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-4v2ch" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.663632 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzhr4\" (UniqueName: \"kubernetes.io/projected/10ee5096-1369-4d4f-90ed-d4160f7a2aa4-kube-api-access-hzhr4\") pod \"watcher-operator-controller-manager-95859678f-wk66s\" (UID: \"10ee5096-1369-4d4f-90ed-d4160f7a2aa4\") " pod="openstack-operators/watcher-operator-controller-manager-95859678f-wk66s" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.680913 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzhr4\" (UniqueName: \"kubernetes.io/projected/10ee5096-1369-4d4f-90ed-d4160f7a2aa4-kube-api-access-hzhr4\") pod \"watcher-operator-controller-manager-95859678f-wk66s\" (UID: \"10ee5096-1369-4d4f-90ed-d4160f7a2aa4\") " pod="openstack-operators/watcher-operator-controller-manager-95859678f-wk66s" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.688778 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-9kgb7" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.732225 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-h4k6c"] Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.732258 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-ccd97cd7c-w9x9r"] Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.732983 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-ccd97cd7c-w9x9r"] Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.733003 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7gmkz"] Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.733524 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7gmkz"] Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.733590 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7gmkz" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.734061 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-ccd97cd7c-w9x9r" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.736184 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.736391 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-pxhjl" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.736635 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-wg9rs" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.736754 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.765564 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/db37e414-596f-4c82-9b3b-f7c08820df82-webhook-certs\") pod \"openstack-operator-controller-manager-ccd97cd7c-w9x9r\" (UID: \"db37e414-596f-4c82-9b3b-f7c08820df82\") " pod="openstack-operators/openstack-operator-controller-manager-ccd97cd7c-w9x9r" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.765622 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ldh4\" (UniqueName: \"kubernetes.io/projected/db37e414-596f-4c82-9b3b-f7c08820df82-kube-api-access-8ldh4\") pod \"openstack-operator-controller-manager-ccd97cd7c-w9x9r\" (UID: \"db37e414-596f-4c82-9b3b-f7c08820df82\") " pod="openstack-operators/openstack-operator-controller-manager-ccd97cd7c-w9x9r" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.765752 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-gzgbb\" (UniqueName: \"kubernetes.io/projected/a6f00eb1-a1d7-48a6-ab40-e2dfc33769e0-kube-api-access-gzgbb\") pod \"rabbitmq-cluster-operator-manager-668c99d594-7gmkz\" (UID: \"a6f00eb1-a1d7-48a6-ab40-e2dfc33769e0\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7gmkz" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.765797 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/db37e414-596f-4c82-9b3b-f7c08820df82-metrics-certs\") pod \"openstack-operator-controller-manager-ccd97cd7c-w9x9r\" (UID: \"db37e414-596f-4c82-9b3b-f7c08820df82\") " pod="openstack-operators/openstack-operator-controller-manager-ccd97cd7c-w9x9r" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.802732 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-95859678f-wk66s" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.840211 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-drgj7"] Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.852068 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-stsjq"] Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.852345 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-4qdc2" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.871788 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzgbb\" (UniqueName: \"kubernetes.io/projected/a6f00eb1-a1d7-48a6-ab40-e2dfc33769e0-kube-api-access-gzgbb\") pod \"rabbitmq-cluster-operator-manager-668c99d594-7gmkz\" (UID: \"a6f00eb1-a1d7-48a6-ab40-e2dfc33769e0\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7gmkz" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.871857 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/db37e414-596f-4c82-9b3b-f7c08820df82-metrics-certs\") pod \"openstack-operator-controller-manager-ccd97cd7c-w9x9r\" (UID: \"db37e414-596f-4c82-9b3b-f7c08820df82\") " pod="openstack-operators/openstack-operator-controller-manager-ccd97cd7c-w9x9r" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.871895 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/db37e414-596f-4c82-9b3b-f7c08820df82-webhook-certs\") pod \"openstack-operator-controller-manager-ccd97cd7c-w9x9r\" (UID: \"db37e414-596f-4c82-9b3b-f7c08820df82\") " pod="openstack-operators/openstack-operator-controller-manager-ccd97cd7c-w9x9r" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.871931 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ldh4\" (UniqueName: \"kubernetes.io/projected/db37e414-596f-4c82-9b3b-f7c08820df82-kube-api-access-8ldh4\") pod \"openstack-operator-controller-manager-ccd97cd7c-w9x9r\" (UID: \"db37e414-596f-4c82-9b3b-f7c08820df82\") " pod="openstack-operators/openstack-operator-controller-manager-ccd97cd7c-w9x9r" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.872008 4793 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/95cbe9b0-bd61-493b-a8b2-f5d70b515ed7-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cpvd9d\" (UID: \"95cbe9b0-bd61-493b-a8b2-f5d70b515ed7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpvd9d" Feb 17 20:23:17 crc kubenswrapper[4793]: E0217 20:23:17.872166 4793 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 20:23:17 crc kubenswrapper[4793]: E0217 20:23:17.872225 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/95cbe9b0-bd61-493b-a8b2-f5d70b515ed7-cert podName:95cbe9b0-bd61-493b-a8b2-f5d70b515ed7 nodeName:}" failed. No retries permitted until 2026-02-17 20:23:18.872206381 +0000 UTC m=+874.163904692 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/95cbe9b0-bd61-493b-a8b2-f5d70b515ed7-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cpvd9d" (UID: "95cbe9b0-bd61-493b-a8b2-f5d70b515ed7") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 20:23:17 crc kubenswrapper[4793]: E0217 20:23:17.872955 4793 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 20:23:17 crc kubenswrapper[4793]: E0217 20:23:17.873032 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db37e414-596f-4c82-9b3b-f7c08820df82-webhook-certs podName:db37e414-596f-4c82-9b3b-f7c08820df82 nodeName:}" failed. No retries permitted until 2026-02-17 20:23:18.373014171 +0000 UTC m=+873.664712482 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/db37e414-596f-4c82-9b3b-f7c08820df82-webhook-certs") pod "openstack-operator-controller-manager-ccd97cd7c-w9x9r" (UID: "db37e414-596f-4c82-9b3b-f7c08820df82") : secret "webhook-server-cert" not found Feb 17 20:23:17 crc kubenswrapper[4793]: E0217 20:23:17.873078 4793 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 20:23:17 crc kubenswrapper[4793]: E0217 20:23:17.873099 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db37e414-596f-4c82-9b3b-f7c08820df82-metrics-certs podName:db37e414-596f-4c82-9b3b-f7c08820df82 nodeName:}" failed. No retries permitted until 2026-02-17 20:23:18.373093613 +0000 UTC m=+873.664791924 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/db37e414-596f-4c82-9b3b-f7c08820df82-metrics-certs") pod "openstack-operator-controller-manager-ccd97cd7c-w9x9r" (UID: "db37e414-596f-4c82-9b3b-f7c08820df82") : secret "metrics-server-cert" not found Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.876775 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-h4k6c" event={"ID":"5805376c-5192-4fc2-a3b0-64eae2eaf7a1","Type":"ContainerStarted","Data":"9db7c6b3229278b56807718b36d37fba2d86a69bd519add33d7543e6d930967c"} Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 20:23:17.938048 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzgbb\" (UniqueName: \"kubernetes.io/projected/a6f00eb1-a1d7-48a6-ab40-e2dfc33769e0-kube-api-access-gzgbb\") pod \"rabbitmq-cluster-operator-manager-668c99d594-7gmkz\" (UID: \"a6f00eb1-a1d7-48a6-ab40-e2dfc33769e0\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7gmkz" Feb 17 20:23:17 crc kubenswrapper[4793]: I0217 
20:23:17.947655 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ldh4\" (UniqueName: \"kubernetes.io/projected/db37e414-596f-4c82-9b3b-f7c08820df82-kube-api-access-8ldh4\") pod \"openstack-operator-controller-manager-ccd97cd7c-w9x9r\" (UID: \"db37e414-596f-4c82-9b3b-f7c08820df82\") " pod="openstack-operators/openstack-operator-controller-manager-ccd97cd7c-w9x9r" Feb 17 20:23:18 crc kubenswrapper[4793]: I0217 20:23:18.066950 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7gmkz" Feb 17 20:23:18 crc kubenswrapper[4793]: I0217 20:23:18.388486 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/db37e414-596f-4c82-9b3b-f7c08820df82-metrics-certs\") pod \"openstack-operator-controller-manager-ccd97cd7c-w9x9r\" (UID: \"db37e414-596f-4c82-9b3b-f7c08820df82\") " pod="openstack-operators/openstack-operator-controller-manager-ccd97cd7c-w9x9r" Feb 17 20:23:18 crc kubenswrapper[4793]: E0217 20:23:18.388861 4793 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 20:23:18 crc kubenswrapper[4793]: I0217 20:23:18.388889 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/db37e414-596f-4c82-9b3b-f7c08820df82-webhook-certs\") pod \"openstack-operator-controller-manager-ccd97cd7c-w9x9r\" (UID: \"db37e414-596f-4c82-9b3b-f7c08820df82\") " pod="openstack-operators/openstack-operator-controller-manager-ccd97cd7c-w9x9r" Feb 17 20:23:18 crc kubenswrapper[4793]: E0217 20:23:18.388938 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db37e414-596f-4c82-9b3b-f7c08820df82-metrics-certs podName:db37e414-596f-4c82-9b3b-f7c08820df82 nodeName:}" failed. 
No retries permitted until 2026-02-17 20:23:19.388918111 +0000 UTC m=+874.680616422 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/db37e414-596f-4c82-9b3b-f7c08820df82-metrics-certs") pod "openstack-operator-controller-manager-ccd97cd7c-w9x9r" (UID: "db37e414-596f-4c82-9b3b-f7c08820df82") : secret "metrics-server-cert" not found Feb 17 20:23:18 crc kubenswrapper[4793]: E0217 20:23:18.389120 4793 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 20:23:18 crc kubenswrapper[4793]: E0217 20:23:18.389203 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db37e414-596f-4c82-9b3b-f7c08820df82-webhook-certs podName:db37e414-596f-4c82-9b3b-f7c08820df82 nodeName:}" failed. No retries permitted until 2026-02-17 20:23:19.389179107 +0000 UTC m=+874.680877458 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/db37e414-596f-4c82-9b3b-f7c08820df82-webhook-certs") pod "openstack-operator-controller-manager-ccd97cd7c-w9x9r" (UID: "db37e414-596f-4c82-9b3b-f7c08820df82") : secret "webhook-server-cert" not found Feb 17 20:23:18 crc kubenswrapper[4793]: I0217 20:23:18.520582 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-nvq46"] Feb 17 20:23:18 crc kubenswrapper[4793]: W0217 20:23:18.559023 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85645037_549b_483a_a0db_76649c2e9d0f.slice/crio-f3ac61a6f569b213d6f5ea5787d950a90be40513222d0ac1c3dfe121dfd8a22e WatchSource:0}: Error finding container f3ac61a6f569b213d6f5ea5787d950a90be40513222d0ac1c3dfe121dfd8a22e: Status 404 returned error can't find the container with id f3ac61a6f569b213d6f5ea5787d950a90be40513222d0ac1c3dfe121dfd8a22e Feb 17 
20:23:18 crc kubenswrapper[4793]: I0217 20:23:18.565646 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-wwhj6"] Feb 17 20:23:18 crc kubenswrapper[4793]: I0217 20:23:18.581484 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-n5b9b"] Feb 17 20:23:18 crc kubenswrapper[4793]: I0217 20:23:18.592423 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/50be19a7-d8db-4c5c-8966-825c6d3310c1-cert\") pod \"infra-operator-controller-manager-79d975b745-mnpjn\" (UID: \"50be19a7-d8db-4c5c-8966-825c6d3310c1\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-mnpjn" Feb 17 20:23:18 crc kubenswrapper[4793]: E0217 20:23:18.592544 4793 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 17 20:23:18 crc kubenswrapper[4793]: E0217 20:23:18.592591 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/50be19a7-d8db-4c5c-8966-825c6d3310c1-cert podName:50be19a7-d8db-4c5c-8966-825c6d3310c1 nodeName:}" failed. No retries permitted until 2026-02-17 20:23:20.592577261 +0000 UTC m=+875.884275572 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/50be19a7-d8db-4c5c-8966-825c6d3310c1-cert") pod "infra-operator-controller-manager-79d975b745-mnpjn" (UID: "50be19a7-d8db-4c5c-8966-825c6d3310c1") : secret "infra-operator-webhook-server-cert" not found Feb 17 20:23:18 crc kubenswrapper[4793]: I0217 20:23:18.831723 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-kb62n"] Feb 17 20:23:18 crc kubenswrapper[4793]: I0217 20:23:18.870317 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-95859678f-wk66s"] Feb 17 20:23:18 crc kubenswrapper[4793]: I0217 20:23:18.895078 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/95cbe9b0-bd61-493b-a8b2-f5d70b515ed7-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cpvd9d\" (UID: \"95cbe9b0-bd61-493b-a8b2-f5d70b515ed7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpvd9d" Feb 17 20:23:18 crc kubenswrapper[4793]: E0217 20:23:18.895350 4793 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 20:23:18 crc kubenswrapper[4793]: E0217 20:23:18.895397 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/95cbe9b0-bd61-493b-a8b2-f5d70b515ed7-cert podName:95cbe9b0-bd61-493b-a8b2-f5d70b515ed7 nodeName:}" failed. No retries permitted until 2026-02-17 20:23:20.895381096 +0000 UTC m=+876.187079407 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/95cbe9b0-bd61-493b-a8b2-f5d70b515ed7-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cpvd9d" (UID: "95cbe9b0-bd61-493b-a8b2-f5d70b515ed7") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 20:23:18 crc kubenswrapper[4793]: I0217 20:23:18.895653 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-n6jbv"] Feb 17 20:23:18 crc kubenswrapper[4793]: I0217 20:23:18.897030 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-stsjq" event={"ID":"c5356f69-6595-4df6-804c-0bdc507e635a","Type":"ContainerStarted","Data":"e3ee63f4d2a7f31a1276669114e7c0ac2613237ec96bdf3246d9306caa16de3d"} Feb 17 20:23:18 crc kubenswrapper[4793]: I0217 20:23:18.900327 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-drgj7" event={"ID":"419d7c26-e4a2-44ef-8d24-4dc5c6a2db7f","Type":"ContainerStarted","Data":"6c74a899e059935842a2f3a477367b56d7f21b17b2a152ed671f8c80f557a3bc"} Feb 17 20:23:18 crc kubenswrapper[4793]: W0217 20:23:18.905471 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf641768c_418b_46cd_83be_16acad24aa35.slice/crio-37f5674b13a4133ce6868d5fdc7c37b49e0371046c752a5a68ec6cd479ab7bf6 WatchSource:0}: Error finding container 37f5674b13a4133ce6868d5fdc7c37b49e0371046c752a5a68ec6cd479ab7bf6: Status 404 returned error can't find the container with id 37f5674b13a4133ce6868d5fdc7c37b49e0371046c752a5a68ec6cd479ab7bf6 Feb 17 20:23:18 crc kubenswrapper[4793]: I0217 20:23:18.905946 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-n5b9b" 
event={"ID":"6dfe3dba-f71d-4de0-9e38-8d0a38c7f272","Type":"ContainerStarted","Data":"dc89d05252e7babb04d25cbbf598ac9874e22405ce3f5a381d07fdf902bc74c5"} Feb 17 20:23:18 crc kubenswrapper[4793]: W0217 20:23:18.910371 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod130aaa99_ec87_4c6d_8eaa_e553872e6df8.slice/crio-5784cc39007edae2d525831d6171ac67da78ac6504e3d73c01c69315f2c1b8dc WatchSource:0}: Error finding container 5784cc39007edae2d525831d6171ac67da78ac6504e3d73c01c69315f2c1b8dc: Status 404 returned error can't find the container with id 5784cc39007edae2d525831d6171ac67da78ac6504e3d73c01c69315f2c1b8dc Feb 17 20:23:18 crc kubenswrapper[4793]: I0217 20:23:18.910725 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-4v2ch"] Feb 17 20:23:18 crc kubenswrapper[4793]: I0217 20:23:18.911027 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-nvq46" event={"ID":"94f226e8-038f-42f0-8ead-687af3df6d2b","Type":"ContainerStarted","Data":"2f0e98f1775c69fbf4e92f13c5157ae5ed10bfbc95ae96ff70dae3d71c4c6757"} Feb 17 20:23:18 crc kubenswrapper[4793]: I0217 20:23:18.912608 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-kb62n" event={"ID":"8eeed758-99d2-46c6-be0f-381d8eea293a","Type":"ContainerStarted","Data":"06d397cbe02148edbd5ed6a92ba3d8112bdb0727cdce428db901d7e12521f89b"} Feb 17 20:23:18 crc kubenswrapper[4793]: I0217 20:23:18.913520 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-wwhj6" event={"ID":"85645037-549b-483a-a0db-76649c2e9d0f","Type":"ContainerStarted","Data":"f3ac61a6f569b213d6f5ea5787d950a90be40513222d0ac1c3dfe121dfd8a22e"} Feb 17 20:23:18 crc kubenswrapper[4793]: I0217 20:23:18.932277 
4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-6spqn"] Feb 17 20:23:18 crc kubenswrapper[4793]: E0217 20:23:18.932112 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lr6fb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-54f6768c69-qjv29_openstack-operators(df82a1fe-a387-45c4-bad1-d2fca982baaa): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 20:23:18 crc kubenswrapper[4793]: E0217 20:23:18.933999 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-qjv29" podUID="df82a1fe-a387-45c4-bad1-d2fca982baaa" Feb 17 20:23:18 crc kubenswrapper[4793]: E0217 20:23:18.934325 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5wgff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68f46476f-4qdc2_openstack-operators(53c19b0b-735c-46ca-99f1-4be322dddb16): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 20:23:18 crc kubenswrapper[4793]: E0217 20:23:18.935412 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-4qdc2" podUID="53c19b0b-735c-46ca-99f1-4be322dddb16" Feb 17 20:23:18 crc kubenswrapper[4793]: W0217 20:23:18.946031 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6f00eb1_a1d7_48a6_ab40_e2dfc33769e0.slice/crio-28b1c006369f0138059f5a04c6398788c00945b817b360ac949d288dd82e6071 WatchSource:0}: Error finding container 28b1c006369f0138059f5a04c6398788c00945b817b360ac949d288dd82e6071: Status 404 returned error can't find the container with id 28b1c006369f0138059f5a04c6398788c00945b817b360ac949d288dd82e6071 Feb 17 20:23:18 crc kubenswrapper[4793]: E0217 20:23:18.951957 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wvn2r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-64ddbf8bb-m46dz_openstack-operators(0b1ec5e9-161a-403b-bedd-b6c5823180db): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 20:23:18 crc kubenswrapper[4793]: I0217 20:23:18.952833 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-wxb6w"] Feb 17 20:23:18 crc kubenswrapper[4793]: E0217 20:23:18.953263 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m46dz" podUID="0b1ec5e9-161a-403b-bedd-b6c5823180db" Feb 17 20:23:18 crc kubenswrapper[4793]: E0217 20:23:18.954648 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fq7d2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-69f8888797-jrjzp_openstack-operators(6ae6b399-39bc-446a-af92-4d4b7fc18361): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 20:23:18 crc kubenswrapper[4793]: E0217 20:23:18.955748 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-jrjzp" podUID="6ae6b399-39bc-446a-af92-4d4b7fc18361" Feb 17 20:23:18 crc kubenswrapper[4793]: E0217 20:23:18.956843 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-crzfd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-7f45b4ff68-4v2ch_openstack-operators(00b83896-fe8e-49bc-b762-6dfb14777fd7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 20:23:18 crc kubenswrapper[4793]: E0217 20:23:18.958208 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-4v2ch" podUID="00b83896-fe8e-49bc-b762-6dfb14777fd7" Feb 17 20:23:18 crc kubenswrapper[4793]: I0217 20:23:18.966564 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-lnvx2"] Feb 17 20:23:18 crc kubenswrapper[4793]: I0217 20:23:18.975797 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-k6mbs"] Feb 17 20:23:18 crc kubenswrapper[4793]: I0217 20:23:18.986246 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m46dz"] Feb 17 20:23:18 crc kubenswrapper[4793]: I0217 20:23:18.995132 4793 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-jrjzp"]
Feb 17 20:23:19 crc kubenswrapper[4793]: I0217 20:23:19.001977 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-qjv29"]
Feb 17 20:23:19 crc kubenswrapper[4793]: I0217 20:23:19.009262 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-9kgb7"]
Feb 17 20:23:19 crc kubenswrapper[4793]: I0217 20:23:19.016355 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-4qdc2"]
Feb 17 20:23:19 crc kubenswrapper[4793]: I0217 20:23:19.022224 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7gmkz"]
Feb 17 20:23:19 crc kubenswrapper[4793]: I0217 20:23:19.401509 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/db37e414-596f-4c82-9b3b-f7c08820df82-metrics-certs\") pod \"openstack-operator-controller-manager-ccd97cd7c-w9x9r\" (UID: \"db37e414-596f-4c82-9b3b-f7c08820df82\") " pod="openstack-operators/openstack-operator-controller-manager-ccd97cd7c-w9x9r"
Feb 17 20:23:19 crc kubenswrapper[4793]: I0217 20:23:19.401577 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/db37e414-596f-4c82-9b3b-f7c08820df82-webhook-certs\") pod \"openstack-operator-controller-manager-ccd97cd7c-w9x9r\" (UID: \"db37e414-596f-4c82-9b3b-f7c08820df82\") " pod="openstack-operators/openstack-operator-controller-manager-ccd97cd7c-w9x9r"
Feb 17 20:23:19 crc kubenswrapper[4793]: E0217 20:23:19.401898 4793 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 17 20:23:19 crc kubenswrapper[4793]: E0217 20:23:19.401966 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db37e414-596f-4c82-9b3b-f7c08820df82-webhook-certs podName:db37e414-596f-4c82-9b3b-f7c08820df82 nodeName:}" failed. No retries permitted until 2026-02-17 20:23:21.401948514 +0000 UTC m=+876.693646825 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/db37e414-596f-4c82-9b3b-f7c08820df82-webhook-certs") pod "openstack-operator-controller-manager-ccd97cd7c-w9x9r" (UID: "db37e414-596f-4c82-9b3b-f7c08820df82") : secret "webhook-server-cert" not found
Feb 17 20:23:19 crc kubenswrapper[4793]: E0217 20:23:19.402003 4793 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 17 20:23:19 crc kubenswrapper[4793]: E0217 20:23:19.402087 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db37e414-596f-4c82-9b3b-f7c08820df82-metrics-certs podName:db37e414-596f-4c82-9b3b-f7c08820df82 nodeName:}" failed. No retries permitted until 2026-02-17 20:23:21.402066177 +0000 UTC m=+876.693764578 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/db37e414-596f-4c82-9b3b-f7c08820df82-metrics-certs") pod "openstack-operator-controller-manager-ccd97cd7c-w9x9r" (UID: "db37e414-596f-4c82-9b3b-f7c08820df82") : secret "metrics-server-cert" not found
Feb 17 20:23:19 crc kubenswrapper[4793]: I0217 20:23:19.923585 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-jrjzp" event={"ID":"6ae6b399-39bc-446a-af92-4d4b7fc18361","Type":"ContainerStarted","Data":"842189c382d23c397755abd7352b854a3720b412a6f59f837e2ab39109e628b1"}
Feb 17 20:23:19 crc kubenswrapper[4793]: E0217 20:23:19.925492 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-jrjzp" podUID="6ae6b399-39bc-446a-af92-4d4b7fc18361"
Feb 17 20:23:19 crc kubenswrapper[4793]: I0217 20:23:19.925949 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-wxb6w" event={"ID":"d0cd31cb-0cbc-414c-a20b-f6c38256f347","Type":"ContainerStarted","Data":"27c256671180b957c73aab55287066ab29378595a1350477f05d8f7f1644a9bf"}
Feb 17 20:23:19 crc kubenswrapper[4793]: I0217 20:23:19.929394 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-n6jbv" event={"ID":"c0c6fc5a-a96e-4165-a9b9-a4ae7b099216","Type":"ContainerStarted","Data":"2104c8d07075930d43b741d4a853d945977a76902777b7d4470f83937c1a37f3"}
Feb 17 20:23:19 crc kubenswrapper[4793]: I0217 20:23:19.931755 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-4v2ch" event={"ID":"00b83896-fe8e-49bc-b762-6dfb14777fd7","Type":"ContainerStarted","Data":"9b5d9a695dc1e62c625ed5d28a47c907bf89a4f1e247a9481d8949b49ddada93"}
Feb 17 20:23:19 crc kubenswrapper[4793]: I0217 20:23:19.935256 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-95859678f-wk66s" event={"ID":"10ee5096-1369-4d4f-90ed-d4160f7a2aa4","Type":"ContainerStarted","Data":"b6e46e2371334555c2b815e238263089abbb3161edf2e613d98922e68b8e356e"}
Feb 17 20:23:19 crc kubenswrapper[4793]: E0217 20:23:19.937051 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-4v2ch" podUID="00b83896-fe8e-49bc-b762-6dfb14777fd7"
Feb 17 20:23:19 crc kubenswrapper[4793]: I0217 20:23:19.941817 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-qjv29" event={"ID":"df82a1fe-a387-45c4-bad1-d2fca982baaa","Type":"ContainerStarted","Data":"cce89ff2162a8456f2e71de65a7420f29459250e1ab36a03c7755a43a34a2750"}
Feb 17 20:23:19 crc kubenswrapper[4793]: E0217 20:23:19.943063 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c\\\"\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-qjv29" podUID="df82a1fe-a387-45c4-bad1-d2fca982baaa"
Feb 17 20:23:19 crc kubenswrapper[4793]: I0217 20:23:19.944208 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m46dz" event={"ID":"0b1ec5e9-161a-403b-bedd-b6c5823180db","Type":"ContainerStarted","Data":"0fcd7bc39bdc99e9009d0b60377db5cc9f41354c08abae9c6769e142814a93d3"}
Feb 17 20:23:19 crc kubenswrapper[4793]: E0217 20:23:19.953091 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m46dz" podUID="0b1ec5e9-161a-403b-bedd-b6c5823180db"
Feb 17 20:23:19 crc kubenswrapper[4793]: I0217 20:23:19.953886 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-9kgb7" event={"ID":"130aaa99-ec87-4c6d-8eaa-e553872e6df8","Type":"ContainerStarted","Data":"5784cc39007edae2d525831d6171ac67da78ac6504e3d73c01c69315f2c1b8dc"}
Feb 17 20:23:19 crc kubenswrapper[4793]: I0217 20:23:19.964582 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7gmkz" event={"ID":"a6f00eb1-a1d7-48a6-ab40-e2dfc33769e0","Type":"ContainerStarted","Data":"28b1c006369f0138059f5a04c6398788c00945b817b360ac949d288dd82e6071"}
Feb 17 20:23:19 crc kubenswrapper[4793]: I0217 20:23:19.971907 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-lnvx2" event={"ID":"2d46ef99-7efb-4cb1-8970-10cc188a3bb2","Type":"ContainerStarted","Data":"c2274f9732f98b3b78e3f4140e80a32553006ae5af4adbf1e43135d126ce64ae"}
Feb 17 20:23:19 crc kubenswrapper[4793]: I0217 20:23:19.978332 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-6spqn" event={"ID":"a0c27d33-e835-4fe9-92e4-7846bb169b1a","Type":"ContainerStarted","Data":"b4ace0f0522d60a4d80cd18013043e1ea9c8e4149020e3f5fc0ee131bfc806ad"}
Feb 17 20:23:19 crc kubenswrapper[4793]: I0217 20:23:19.980155 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-k6mbs" event={"ID":"f641768c-418b-46cd-83be-16acad24aa35","Type":"ContainerStarted","Data":"37f5674b13a4133ce6868d5fdc7c37b49e0371046c752a5a68ec6cd479ab7bf6"}
Feb 17 20:23:19 crc kubenswrapper[4793]: I0217 20:23:19.982169 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-4qdc2" event={"ID":"53c19b0b-735c-46ca-99f1-4be322dddb16","Type":"ContainerStarted","Data":"2f658f843e630d850d01e33b0fc5742cb611d9e6202a8f9f97c1a0edfc13e748"}
Feb 17 20:23:19 crc kubenswrapper[4793]: E0217 20:23:19.984287 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-4qdc2" podUID="53c19b0b-735c-46ca-99f1-4be322dddb16"
Feb 17 20:23:20 crc kubenswrapper[4793]: I0217 20:23:20.102156 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 20:23:20 crc kubenswrapper[4793]: I0217 20:23:20.102227 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 20:23:20 crc kubenswrapper[4793]: I0217 20:23:20.621207 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/50be19a7-d8db-4c5c-8966-825c6d3310c1-cert\") pod \"infra-operator-controller-manager-79d975b745-mnpjn\" (UID: \"50be19a7-d8db-4c5c-8966-825c6d3310c1\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-mnpjn"
Feb 17 20:23:20 crc kubenswrapper[4793]: E0217 20:23:20.621407 4793 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 17 20:23:20 crc kubenswrapper[4793]: E0217 20:23:20.621457 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/50be19a7-d8db-4c5c-8966-825c6d3310c1-cert podName:50be19a7-d8db-4c5c-8966-825c6d3310c1 nodeName:}" failed. No retries permitted until 2026-02-17 20:23:24.621443466 +0000 UTC m=+879.913141777 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/50be19a7-d8db-4c5c-8966-825c6d3310c1-cert") pod "infra-operator-controller-manager-79d975b745-mnpjn" (UID: "50be19a7-d8db-4c5c-8966-825c6d3310c1") : secret "infra-operator-webhook-server-cert" not found
Feb 17 20:23:20 crc kubenswrapper[4793]: I0217 20:23:20.925479 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/95cbe9b0-bd61-493b-a8b2-f5d70b515ed7-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cpvd9d\" (UID: \"95cbe9b0-bd61-493b-a8b2-f5d70b515ed7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpvd9d"
Feb 17 20:23:20 crc kubenswrapper[4793]: E0217 20:23:20.925673 4793 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 17 20:23:20 crc kubenswrapper[4793]: E0217 20:23:20.925731 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/95cbe9b0-bd61-493b-a8b2-f5d70b515ed7-cert podName:95cbe9b0-bd61-493b-a8b2-f5d70b515ed7 nodeName:}" failed. No retries permitted until 2026-02-17 20:23:24.925718467 +0000 UTC m=+880.217416778 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/95cbe9b0-bd61-493b-a8b2-f5d70b515ed7-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cpvd9d" (UID: "95cbe9b0-bd61-493b-a8b2-f5d70b515ed7") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 17 20:23:20 crc kubenswrapper[4793]: E0217 20:23:20.992137 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m46dz" podUID="0b1ec5e9-161a-403b-bedd-b6c5823180db"
Feb 17 20:23:21 crc kubenswrapper[4793]: E0217 20:23:20.992722 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c\\\"\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-qjv29" podUID="df82a1fe-a387-45c4-bad1-d2fca982baaa"
Feb 17 20:23:21 crc kubenswrapper[4793]: E0217 20:23:20.992821 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-4v2ch" podUID="00b83896-fe8e-49bc-b762-6dfb14777fd7"
Feb 17 20:23:21 crc kubenswrapper[4793]: E0217 20:23:20.992899 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-jrjzp" podUID="6ae6b399-39bc-446a-af92-4d4b7fc18361"
Feb 17 20:23:21 crc kubenswrapper[4793]: E0217 20:23:21.001089 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-4qdc2" podUID="53c19b0b-735c-46ca-99f1-4be322dddb16"
Feb 17 20:23:21 crc kubenswrapper[4793]: I0217 20:23:21.452540 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/db37e414-596f-4c82-9b3b-f7c08820df82-metrics-certs\") pod \"openstack-operator-controller-manager-ccd97cd7c-w9x9r\" (UID: \"db37e414-596f-4c82-9b3b-f7c08820df82\") " pod="openstack-operators/openstack-operator-controller-manager-ccd97cd7c-w9x9r"
Feb 17 20:23:21 crc kubenswrapper[4793]: E0217 20:23:21.452761 4793 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 17 20:23:21 crc kubenswrapper[4793]: I0217 20:23:21.453930 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/db37e414-596f-4c82-9b3b-f7c08820df82-webhook-certs\") pod \"openstack-operator-controller-manager-ccd97cd7c-w9x9r\" (UID: \"db37e414-596f-4c82-9b3b-f7c08820df82\") " pod="openstack-operators/openstack-operator-controller-manager-ccd97cd7c-w9x9r"
Feb 17 20:23:21 crc kubenswrapper[4793]: E0217 20:23:21.453963 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db37e414-596f-4c82-9b3b-f7c08820df82-metrics-certs podName:db37e414-596f-4c82-9b3b-f7c08820df82 nodeName:}" failed. No retries permitted until 2026-02-17 20:23:25.453939474 +0000 UTC m=+880.745637865 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/db37e414-596f-4c82-9b3b-f7c08820df82-metrics-certs") pod "openstack-operator-controller-manager-ccd97cd7c-w9x9r" (UID: "db37e414-596f-4c82-9b3b-f7c08820df82") : secret "metrics-server-cert" not found
Feb 17 20:23:21 crc kubenswrapper[4793]: E0217 20:23:21.454282 4793 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 17 20:23:21 crc kubenswrapper[4793]: E0217 20:23:21.455252 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db37e414-596f-4c82-9b3b-f7c08820df82-webhook-certs podName:db37e414-596f-4c82-9b3b-f7c08820df82 nodeName:}" failed. No retries permitted until 2026-02-17 20:23:25.455229036 +0000 UTC m=+880.746927377 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/db37e414-596f-4c82-9b3b-f7c08820df82-webhook-certs") pod "openstack-operator-controller-manager-ccd97cd7c-w9x9r" (UID: "db37e414-596f-4c82-9b3b-f7c08820df82") : secret "webhook-server-cert" not found
Feb 17 20:23:24 crc kubenswrapper[4793]: I0217 20:23:24.715835 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/50be19a7-d8db-4c5c-8966-825c6d3310c1-cert\") pod \"infra-operator-controller-manager-79d975b745-mnpjn\" (UID: \"50be19a7-d8db-4c5c-8966-825c6d3310c1\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-mnpjn"
Feb 17 20:23:24 crc kubenswrapper[4793]: E0217 20:23:24.716396 4793 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 17 20:23:24 crc kubenswrapper[4793]: E0217 20:23:24.716461 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/50be19a7-d8db-4c5c-8966-825c6d3310c1-cert podName:50be19a7-d8db-4c5c-8966-825c6d3310c1 nodeName:}" failed. No retries permitted until 2026-02-17 20:23:32.716442584 +0000 UTC m=+888.008140895 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/50be19a7-d8db-4c5c-8966-825c6d3310c1-cert") pod "infra-operator-controller-manager-79d975b745-mnpjn" (UID: "50be19a7-d8db-4c5c-8966-825c6d3310c1") : secret "infra-operator-webhook-server-cert" not found
Feb 17 20:23:25 crc kubenswrapper[4793]: I0217 20:23:25.019445 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/95cbe9b0-bd61-493b-a8b2-f5d70b515ed7-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cpvd9d\" (UID: \"95cbe9b0-bd61-493b-a8b2-f5d70b515ed7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpvd9d"
Feb 17 20:23:25 crc kubenswrapper[4793]: E0217 20:23:25.019609 4793 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 17 20:23:25 crc kubenswrapper[4793]: E0217 20:23:25.019671 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/95cbe9b0-bd61-493b-a8b2-f5d70b515ed7-cert podName:95cbe9b0-bd61-493b-a8b2-f5d70b515ed7 nodeName:}" failed. No retries permitted until 2026-02-17 20:23:33.019654918 +0000 UTC m=+888.311353229 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/95cbe9b0-bd61-493b-a8b2-f5d70b515ed7-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cpvd9d" (UID: "95cbe9b0-bd61-493b-a8b2-f5d70b515ed7") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 17 20:23:25 crc kubenswrapper[4793]: I0217 20:23:25.527603 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/db37e414-596f-4c82-9b3b-f7c08820df82-webhook-certs\") pod \"openstack-operator-controller-manager-ccd97cd7c-w9x9r\" (UID: \"db37e414-596f-4c82-9b3b-f7c08820df82\") " pod="openstack-operators/openstack-operator-controller-manager-ccd97cd7c-w9x9r"
Feb 17 20:23:25 crc kubenswrapper[4793]: I0217 20:23:25.527777 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/db37e414-596f-4c82-9b3b-f7c08820df82-metrics-certs\") pod \"openstack-operator-controller-manager-ccd97cd7c-w9x9r\" (UID: \"db37e414-596f-4c82-9b3b-f7c08820df82\") " pod="openstack-operators/openstack-operator-controller-manager-ccd97cd7c-w9x9r"
Feb 17 20:23:25 crc kubenswrapper[4793]: E0217 20:23:25.527920 4793 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 17 20:23:25 crc kubenswrapper[4793]: E0217 20:23:25.527988 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db37e414-596f-4c82-9b3b-f7c08820df82-metrics-certs podName:db37e414-596f-4c82-9b3b-f7c08820df82 nodeName:}" failed. No retries permitted until 2026-02-17 20:23:33.527958689 +0000 UTC m=+888.819657000 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/db37e414-596f-4c82-9b3b-f7c08820df82-metrics-certs") pod "openstack-operator-controller-manager-ccd97cd7c-w9x9r" (UID: "db37e414-596f-4c82-9b3b-f7c08820df82") : secret "metrics-server-cert" not found
Feb 17 20:23:25 crc kubenswrapper[4793]: E0217 20:23:25.528001 4793 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 17 20:23:25 crc kubenswrapper[4793]: E0217 20:23:25.528148 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db37e414-596f-4c82-9b3b-f7c08820df82-webhook-certs podName:db37e414-596f-4c82-9b3b-f7c08820df82 nodeName:}" failed. No retries permitted until 2026-02-17 20:23:33.528111533 +0000 UTC m=+888.819809914 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/db37e414-596f-4c82-9b3b-f7c08820df82-webhook-certs") pod "openstack-operator-controller-manager-ccd97cd7c-w9x9r" (UID: "db37e414-596f-4c82-9b3b-f7c08820df82") : secret "webhook-server-cert" not found
Feb 17 20:23:27 crc kubenswrapper[4793]: I0217 20:23:27.439324 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dsctq"]
Feb 17 20:23:27 crc kubenswrapper[4793]: I0217 20:23:27.461637 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dsctq"]
Feb 17 20:23:27 crc kubenswrapper[4793]: I0217 20:23:27.461822 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dsctq"
Feb 17 20:23:27 crc kubenswrapper[4793]: I0217 20:23:27.563184 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65c0df33-e50d-4c5d-ad6e-cb9280c6ae96-catalog-content\") pod \"redhat-operators-dsctq\" (UID: \"65c0df33-e50d-4c5d-ad6e-cb9280c6ae96\") " pod="openshift-marketplace/redhat-operators-dsctq"
Feb 17 20:23:27 crc kubenswrapper[4793]: I0217 20:23:27.563458 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj6ll\" (UniqueName: \"kubernetes.io/projected/65c0df33-e50d-4c5d-ad6e-cb9280c6ae96-kube-api-access-rj6ll\") pod \"redhat-operators-dsctq\" (UID: \"65c0df33-e50d-4c5d-ad6e-cb9280c6ae96\") " pod="openshift-marketplace/redhat-operators-dsctq"
Feb 17 20:23:27 crc kubenswrapper[4793]: I0217 20:23:27.563625 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65c0df33-e50d-4c5d-ad6e-cb9280c6ae96-utilities\") pod \"redhat-operators-dsctq\" (UID: \"65c0df33-e50d-4c5d-ad6e-cb9280c6ae96\") " pod="openshift-marketplace/redhat-operators-dsctq"
Feb 17 20:23:27 crc kubenswrapper[4793]: I0217 20:23:27.664792 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65c0df33-e50d-4c5d-ad6e-cb9280c6ae96-catalog-content\") pod \"redhat-operators-dsctq\" (UID: \"65c0df33-e50d-4c5d-ad6e-cb9280c6ae96\") " pod="openshift-marketplace/redhat-operators-dsctq"
Feb 17 20:23:27 crc kubenswrapper[4793]: I0217 20:23:27.664979 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj6ll\" (UniqueName: \"kubernetes.io/projected/65c0df33-e50d-4c5d-ad6e-cb9280c6ae96-kube-api-access-rj6ll\") pod \"redhat-operators-dsctq\" (UID: \"65c0df33-e50d-4c5d-ad6e-cb9280c6ae96\") " pod="openshift-marketplace/redhat-operators-dsctq"
Feb 17 20:23:27 crc kubenswrapper[4793]: I0217 20:23:27.665067 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65c0df33-e50d-4c5d-ad6e-cb9280c6ae96-utilities\") pod \"redhat-operators-dsctq\" (UID: \"65c0df33-e50d-4c5d-ad6e-cb9280c6ae96\") " pod="openshift-marketplace/redhat-operators-dsctq"
Feb 17 20:23:27 crc kubenswrapper[4793]: I0217 20:23:27.665504 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65c0df33-e50d-4c5d-ad6e-cb9280c6ae96-utilities\") pod \"redhat-operators-dsctq\" (UID: \"65c0df33-e50d-4c5d-ad6e-cb9280c6ae96\") " pod="openshift-marketplace/redhat-operators-dsctq"
Feb 17 20:23:27 crc kubenswrapper[4793]: I0217 20:23:27.666083 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65c0df33-e50d-4c5d-ad6e-cb9280c6ae96-catalog-content\") pod \"redhat-operators-dsctq\" (UID: \"65c0df33-e50d-4c5d-ad6e-cb9280c6ae96\") " pod="openshift-marketplace/redhat-operators-dsctq"
Feb 17 20:23:27 crc kubenswrapper[4793]: I0217 20:23:27.699625 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj6ll\" (UniqueName: \"kubernetes.io/projected/65c0df33-e50d-4c5d-ad6e-cb9280c6ae96-kube-api-access-rj6ll\") pod \"redhat-operators-dsctq\" (UID: \"65c0df33-e50d-4c5d-ad6e-cb9280c6ae96\") " pod="openshift-marketplace/redhat-operators-dsctq"
Feb 17 20:23:27 crc kubenswrapper[4793]: I0217 20:23:27.820801 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dsctq"
Feb 17 20:23:31 crc kubenswrapper[4793]: I0217 20:23:31.552339 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 17 20:23:31 crc kubenswrapper[4793]: E0217 20:23:31.617589 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1"
Feb 17 20:23:31 crc kubenswrapper[4793]: E0217 20:23:31.617808 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-55nsb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b4d948c87-wxb6w_openstack-operators(d0cd31cb-0cbc-414c-a20b-f6c38256f347): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 17 20:23:31 crc kubenswrapper[4793]: E0217 20:23:31.619150 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-wxb6w" podUID="d0cd31cb-0cbc-414c-a20b-f6c38256f347"
Feb 17 20:23:31 crc kubenswrapper[4793]: I0217 20:23:31.723136 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zhgpn"]
Feb 17 20:23:31 crc kubenswrapper[4793]: I0217 20:23:31.726875 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zhgpn"
Feb 17 20:23:31 crc kubenswrapper[4793]: I0217 20:23:31.734124 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zhgpn"]
Feb 17 20:23:31 crc kubenswrapper[4793]: I0217 20:23:31.826337 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52552d30-68bc-4e2c-82d0-2900e4834680-catalog-content\") pod \"redhat-marketplace-zhgpn\" (UID: \"52552d30-68bc-4e2c-82d0-2900e4834680\") " pod="openshift-marketplace/redhat-marketplace-zhgpn"
Feb 17 20:23:31 crc kubenswrapper[4793]: I0217 20:23:31.826436 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52552d30-68bc-4e2c-82d0-2900e4834680-utilities\") pod \"redhat-marketplace-zhgpn\" (UID: \"52552d30-68bc-4e2c-82d0-2900e4834680\") " pod="openshift-marketplace/redhat-marketplace-zhgpn"
Feb 17 20:23:31 crc kubenswrapper[4793]: I0217 20:23:31.826533 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9kgj\" (UniqueName: \"kubernetes.io/projected/52552d30-68bc-4e2c-82d0-2900e4834680-kube-api-access-n9kgj\") pod \"redhat-marketplace-zhgpn\" (UID: \"52552d30-68bc-4e2c-82d0-2900e4834680\") " pod="openshift-marketplace/redhat-marketplace-zhgpn"
Feb 17 20:23:31 crc kubenswrapper[4793]: I0217 20:23:31.927974 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52552d30-68bc-4e2c-82d0-2900e4834680-utilities\") pod \"redhat-marketplace-zhgpn\" (UID: \"52552d30-68bc-4e2c-82d0-2900e4834680\") " pod="openshift-marketplace/redhat-marketplace-zhgpn"
Feb 17 20:23:31 crc kubenswrapper[4793]: I0217 20:23:31.928065 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9kgj\" (UniqueName: \"kubernetes.io/projected/52552d30-68bc-4e2c-82d0-2900e4834680-kube-api-access-n9kgj\") pod \"redhat-marketplace-zhgpn\" (UID: \"52552d30-68bc-4e2c-82d0-2900e4834680\") " pod="openshift-marketplace/redhat-marketplace-zhgpn"
Feb 17 20:23:31 crc kubenswrapper[4793]: I0217 20:23:31.928505 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52552d30-68bc-4e2c-82d0-2900e4834680-utilities\") pod \"redhat-marketplace-zhgpn\" (UID: \"52552d30-68bc-4e2c-82d0-2900e4834680\") " pod="openshift-marketplace/redhat-marketplace-zhgpn"
Feb 17 20:23:31 crc kubenswrapper[4793]: I0217 20:23:31.928584 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52552d30-68bc-4e2c-82d0-2900e4834680-catalog-content\") pod \"redhat-marketplace-zhgpn\" (UID: \"52552d30-68bc-4e2c-82d0-2900e4834680\") " pod="openshift-marketplace/redhat-marketplace-zhgpn"
Feb 17 20:23:31 crc kubenswrapper[4793]: I0217 20:23:31.928967 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52552d30-68bc-4e2c-82d0-2900e4834680-catalog-content\") pod \"redhat-marketplace-zhgpn\" (UID: \"52552d30-68bc-4e2c-82d0-2900e4834680\") " pod="openshift-marketplace/redhat-marketplace-zhgpn"
Feb 17 20:23:31 crc kubenswrapper[4793]: I0217 20:23:31.946675 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9kgj\" (UniqueName: \"kubernetes.io/projected/52552d30-68bc-4e2c-82d0-2900e4834680-kube-api-access-n9kgj\") pod \"redhat-marketplace-zhgpn\" (UID: \"52552d30-68bc-4e2c-82d0-2900e4834680\") " pod="openshift-marketplace/redhat-marketplace-zhgpn"
Feb 17 20:23:32 crc kubenswrapper[4793]: I0217 20:23:32.062619 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zhgpn"
Feb 17 20:23:32 crc kubenswrapper[4793]: E0217 20:23:32.093219 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-wxb6w" podUID="d0cd31cb-0cbc-414c-a20b-f6c38256f347"
Feb 17 20:23:32 crc kubenswrapper[4793]: I0217 20:23:32.740435 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/50be19a7-d8db-4c5c-8966-825c6d3310c1-cert\") pod \"infra-operator-controller-manager-79d975b745-mnpjn\" (UID: \"50be19a7-d8db-4c5c-8966-825c6d3310c1\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-mnpjn"
Feb 17 20:23:32 crc kubenswrapper[4793]: E0217 20:23:32.740618 4793 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 17 20:23:32 crc kubenswrapper[4793]: E0217 20:23:32.740829 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/50be19a7-d8db-4c5c-8966-825c6d3310c1-cert podName:50be19a7-d8db-4c5c-8966-825c6d3310c1 nodeName:}" failed. No retries permitted until 2026-02-17 20:23:48.740812302 +0000 UTC m=+904.032510613 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/50be19a7-d8db-4c5c-8966-825c6d3310c1-cert") pod "infra-operator-controller-manager-79d975b745-mnpjn" (UID: "50be19a7-d8db-4c5c-8966-825c6d3310c1") : secret "infra-operator-webhook-server-cert" not found
Feb 17 20:23:33 crc kubenswrapper[4793]: I0217 20:23:33.044718 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/95cbe9b0-bd61-493b-a8b2-f5d70b515ed7-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cpvd9d\" (UID: \"95cbe9b0-bd61-493b-a8b2-f5d70b515ed7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpvd9d"
Feb 17 20:23:33 crc kubenswrapper[4793]: E0217 20:23:33.044918 4793 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 17 20:23:33 crc kubenswrapper[4793]: E0217 20:23:33.044990 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/95cbe9b0-bd61-493b-a8b2-f5d70b515ed7-cert podName:95cbe9b0-bd61-493b-a8b2-f5d70b515ed7 nodeName:}" failed. No retries permitted until 2026-02-17 20:23:49.04497234 +0000 UTC m=+904.336670661 (durationBeforeRetry 16s).
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/95cbe9b0-bd61-493b-a8b2-f5d70b515ed7-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cpvd9d" (UID: "95cbe9b0-bd61-493b-a8b2-f5d70b515ed7") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 20:23:33 crc kubenswrapper[4793]: I0217 20:23:33.478584 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dsctq"] Feb 17 20:23:33 crc kubenswrapper[4793]: I0217 20:23:33.558628 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/db37e414-596f-4c82-9b3b-f7c08820df82-metrics-certs\") pod \"openstack-operator-controller-manager-ccd97cd7c-w9x9r\" (UID: \"db37e414-596f-4c82-9b3b-f7c08820df82\") " pod="openstack-operators/openstack-operator-controller-manager-ccd97cd7c-w9x9r" Feb 17 20:23:33 crc kubenswrapper[4793]: I0217 20:23:33.558667 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/db37e414-596f-4c82-9b3b-f7c08820df82-webhook-certs\") pod \"openstack-operator-controller-manager-ccd97cd7c-w9x9r\" (UID: \"db37e414-596f-4c82-9b3b-f7c08820df82\") " pod="openstack-operators/openstack-operator-controller-manager-ccd97cd7c-w9x9r" Feb 17 20:23:33 crc kubenswrapper[4793]: E0217 20:23:33.559239 4793 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 20:23:33 crc kubenswrapper[4793]: E0217 20:23:33.559319 4793 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 20:23:33 crc kubenswrapper[4793]: E0217 20:23:33.559328 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db37e414-596f-4c82-9b3b-f7c08820df82-metrics-certs podName:db37e414-596f-4c82-9b3b-f7c08820df82 nodeName:}" 
failed. No retries permitted until 2026-02-17 20:23:49.559309071 +0000 UTC m=+904.851007442 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/db37e414-596f-4c82-9b3b-f7c08820df82-metrics-certs") pod "openstack-operator-controller-manager-ccd97cd7c-w9x9r" (UID: "db37e414-596f-4c82-9b3b-f7c08820df82") : secret "metrics-server-cert" not found Feb 17 20:23:33 crc kubenswrapper[4793]: E0217 20:23:33.559395 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db37e414-596f-4c82-9b3b-f7c08820df82-webhook-certs podName:db37e414-596f-4c82-9b3b-f7c08820df82 nodeName:}" failed. No retries permitted until 2026-02-17 20:23:49.559378953 +0000 UTC m=+904.851077264 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/db37e414-596f-4c82-9b3b-f7c08820df82-webhook-certs") pod "openstack-operator-controller-manager-ccd97cd7c-w9x9r" (UID: "db37e414-596f-4c82-9b3b-f7c08820df82") : secret "webhook-server-cert" not found Feb 17 20:23:33 crc kubenswrapper[4793]: I0217 20:23:33.656760 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zhgpn"] Feb 17 20:23:33 crc kubenswrapper[4793]: W0217 20:23:33.690073 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52552d30_68bc_4e2c_82d0_2900e4834680.slice/crio-e35332e0822b885559c03bad13f59482fee90f1c1a91990cf50af4dde312c6c1 WatchSource:0}: Error finding container e35332e0822b885559c03bad13f59482fee90f1c1a91990cf50af4dde312c6c1: Status 404 returned error can't find the container with id e35332e0822b885559c03bad13f59482fee90f1c1a91990cf50af4dde312c6c1 Feb 17 20:23:34 crc kubenswrapper[4793]: I0217 20:23:34.115613 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-drgj7" 
event={"ID":"419d7c26-e4a2-44ef-8d24-4dc5c6a2db7f","Type":"ContainerStarted","Data":"ae378ad98b891807f201b4d1c94ff4c8bb1afa373c0992e635378097b5aaae00"} Feb 17 20:23:34 crc kubenswrapper[4793]: I0217 20:23:34.115932 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-drgj7" Feb 17 20:23:34 crc kubenswrapper[4793]: I0217 20:23:34.119083 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-h4k6c" event={"ID":"5805376c-5192-4fc2-a3b0-64eae2eaf7a1","Type":"ContainerStarted","Data":"8415a3d4311f3ee0ad1685e16f684e026e28c48e04b30d551a3d431ca4e24a71"} Feb 17 20:23:34 crc kubenswrapper[4793]: I0217 20:23:34.119389 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-h4k6c" Feb 17 20:23:34 crc kubenswrapper[4793]: I0217 20:23:34.134295 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-nvq46" event={"ID":"94f226e8-038f-42f0-8ead-687af3df6d2b","Type":"ContainerStarted","Data":"361ab2691c9c3e8434e456c17c1b5ba3479481fce0af729e8c1d55a977166d58"} Feb 17 20:23:34 crc kubenswrapper[4793]: I0217 20:23:34.134425 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-nvq46" Feb 17 20:23:34 crc kubenswrapper[4793]: I0217 20:23:34.142299 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-stsjq" event={"ID":"c5356f69-6595-4df6-804c-0bdc507e635a","Type":"ContainerStarted","Data":"2b65a3ef31d084c50cb28a90dfcccd0e709faeba0d139e7eb852d56b0992a4b6"} Feb 17 20:23:34 crc kubenswrapper[4793]: I0217 20:23:34.142415 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-stsjq" Feb 17 20:23:34 crc kubenswrapper[4793]: I0217 20:23:34.152900 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-k6mbs" event={"ID":"f641768c-418b-46cd-83be-16acad24aa35","Type":"ContainerStarted","Data":"b5d96a7e38d835f7841e81bc975cd6edeef2e7caeeec6a80679e7ba4af0fadae"} Feb 17 20:23:34 crc kubenswrapper[4793]: I0217 20:23:34.153080 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-k6mbs" Feb 17 20:23:34 crc kubenswrapper[4793]: I0217 20:23:34.154550 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-drgj7" podStartSLOduration=4.4247644 podStartE2EDuration="18.154540022s" podCreationTimestamp="2026-02-17 20:23:16 +0000 UTC" firstStartedPulling="2026-02-17 20:23:17.878895617 +0000 UTC m=+873.170593918" lastFinishedPulling="2026-02-17 20:23:31.608671229 +0000 UTC m=+886.900369540" observedRunningTime="2026-02-17 20:23:34.148307228 +0000 UTC m=+889.440005539" watchObservedRunningTime="2026-02-17 20:23:34.154540022 +0000 UTC m=+889.446238333" Feb 17 20:23:34 crc kubenswrapper[4793]: I0217 20:23:34.162710 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-kb62n" event={"ID":"8eeed758-99d2-46c6-be0f-381d8eea293a","Type":"ContainerStarted","Data":"4781fffec44a29b403d98c44f8691328eed3d6713423abc520f095e6317bd499"} Feb 17 20:23:34 crc kubenswrapper[4793]: I0217 20:23:34.163356 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-kb62n" Feb 17 20:23:34 crc kubenswrapper[4793]: I0217 20:23:34.173413 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-wwhj6" event={"ID":"85645037-549b-483a-a0db-76649c2e9d0f","Type":"ContainerStarted","Data":"ac363bd723ee28531f28826497525cb0e13d6bcc9217b375fc869be34007a0ed"} Feb 17 20:23:34 crc kubenswrapper[4793]: I0217 20:23:34.174427 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-wwhj6" Feb 17 20:23:34 crc kubenswrapper[4793]: I0217 20:23:34.187759 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-stsjq" podStartSLOduration=4.4582551519999996 podStartE2EDuration="18.187733897s" podCreationTimestamp="2026-02-17 20:23:16 +0000 UTC" firstStartedPulling="2026-02-17 20:23:17.879193434 +0000 UTC m=+873.170891745" lastFinishedPulling="2026-02-17 20:23:31.608672179 +0000 UTC m=+886.900370490" observedRunningTime="2026-02-17 20:23:34.170991331 +0000 UTC m=+889.462689642" watchObservedRunningTime="2026-02-17 20:23:34.187733897 +0000 UTC m=+889.479432218" Feb 17 20:23:34 crc kubenswrapper[4793]: I0217 20:23:34.209742 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-9kgb7" event={"ID":"130aaa99-ec87-4c6d-8eaa-e553872e6df8","Type":"ContainerStarted","Data":"ddd9781782737ebaff26b84a8566e1644786fe88588f608660dce6e9b7ccdf57"} Feb 17 20:23:34 crc kubenswrapper[4793]: I0217 20:23:34.210551 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-9kgb7" Feb 17 20:23:34 crc kubenswrapper[4793]: I0217 20:23:34.242570 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-nvq46" podStartSLOduration=4.473976813 podStartE2EDuration="18.242554819s" podCreationTimestamp="2026-02-17 20:23:16 +0000 UTC" 
firstStartedPulling="2026-02-17 20:23:18.552396733 +0000 UTC m=+873.844095044" lastFinishedPulling="2026-02-17 20:23:32.320974739 +0000 UTC m=+887.612673050" observedRunningTime="2026-02-17 20:23:34.218657456 +0000 UTC m=+889.510355767" watchObservedRunningTime="2026-02-17 20:23:34.242554819 +0000 UTC m=+889.534253130" Feb 17 20:23:34 crc kubenswrapper[4793]: I0217 20:23:34.262711 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-lnvx2" event={"ID":"2d46ef99-7efb-4cb1-8970-10cc188a3bb2","Type":"ContainerStarted","Data":"2e7177fab63cbfff6867152e97dd00cd545ff75fa79310126a0fdfcf0767c408"} Feb 17 20:23:34 crc kubenswrapper[4793]: I0217 20:23:34.263411 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-lnvx2" Feb 17 20:23:34 crc kubenswrapper[4793]: I0217 20:23:34.264763 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-h4k6c" podStartSLOduration=4.686934975 podStartE2EDuration="18.264746461s" podCreationTimestamp="2026-02-17 20:23:16 +0000 UTC" firstStartedPulling="2026-02-17 20:23:17.454811419 +0000 UTC m=+872.746509730" lastFinishedPulling="2026-02-17 20:23:31.032622905 +0000 UTC m=+886.324321216" observedRunningTime="2026-02-17 20:23:34.263605453 +0000 UTC m=+889.555303764" watchObservedRunningTime="2026-02-17 20:23:34.264746461 +0000 UTC m=+889.556444772" Feb 17 20:23:34 crc kubenswrapper[4793]: I0217 20:23:34.286096 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-6spqn" event={"ID":"a0c27d33-e835-4fe9-92e4-7846bb169b1a","Type":"ContainerStarted","Data":"35535be86ee6da06661337f375f4f7c25b8c223d5dd135928e0d16e2a6768891"} Feb 17 20:23:34 crc kubenswrapper[4793]: I0217 20:23:34.286721 4793 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-6spqn" Feb 17 20:23:34 crc kubenswrapper[4793]: I0217 20:23:34.310181 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-n5b9b" event={"ID":"6dfe3dba-f71d-4de0-9e38-8d0a38c7f272","Type":"ContainerStarted","Data":"ec4df1a2acb963938d6e0f1ba8e5a0426afeb0a6d01868d5ccc8a50b9fb3ed30"} Feb 17 20:23:34 crc kubenswrapper[4793]: I0217 20:23:34.310810 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-n5b9b" Feb 17 20:23:34 crc kubenswrapper[4793]: I0217 20:23:34.315549 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-kb62n" podStartSLOduration=6.166649505 podStartE2EDuration="18.315532173s" podCreationTimestamp="2026-02-17 20:23:16 +0000 UTC" firstStartedPulling="2026-02-17 20:23:18.883831569 +0000 UTC m=+874.175529880" lastFinishedPulling="2026-02-17 20:23:31.032714237 +0000 UTC m=+886.324412548" observedRunningTime="2026-02-17 20:23:34.293156117 +0000 UTC m=+889.584854438" watchObservedRunningTime="2026-02-17 20:23:34.315532173 +0000 UTC m=+889.607230484" Feb 17 20:23:34 crc kubenswrapper[4793]: I0217 20:23:34.326953 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-n6jbv" event={"ID":"c0c6fc5a-a96e-4165-a9b9-a4ae7b099216","Type":"ContainerStarted","Data":"eaf4fff4d0ae4a834951e49d0a7c898eeb9fccf07e0c3c392a2e5a9fa8f11183"} Feb 17 20:23:34 crc kubenswrapper[4793]: I0217 20:23:34.327548 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-n6jbv" Feb 17 20:23:34 crc kubenswrapper[4793]: I0217 20:23:34.339313 4793 generic.go:334] "Generic (PLEG): container 
finished" podID="65c0df33-e50d-4c5d-ad6e-cb9280c6ae96" containerID="f70c6d052c98896000a760ded5229374ccb0a376850ffeec227f4df4d4d1f6f5" exitCode=0 Feb 17 20:23:34 crc kubenswrapper[4793]: I0217 20:23:34.339370 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dsctq" event={"ID":"65c0df33-e50d-4c5d-ad6e-cb9280c6ae96","Type":"ContainerDied","Data":"f70c6d052c98896000a760ded5229374ccb0a376850ffeec227f4df4d4d1f6f5"} Feb 17 20:23:34 crc kubenswrapper[4793]: I0217 20:23:34.339397 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dsctq" event={"ID":"65c0df33-e50d-4c5d-ad6e-cb9280c6ae96","Type":"ContainerStarted","Data":"ef9c543593bd17874bca8090a3921031eed5f5086f16f5277f54e11e660518c5"} Feb 17 20:23:34 crc kubenswrapper[4793]: I0217 20:23:34.341378 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-k6mbs" podStartSLOduration=5.653520524 podStartE2EDuration="18.341370095s" podCreationTimestamp="2026-02-17 20:23:16 +0000 UTC" firstStartedPulling="2026-02-17 20:23:18.921124996 +0000 UTC m=+874.212823307" lastFinishedPulling="2026-02-17 20:23:31.608974567 +0000 UTC m=+886.900672878" observedRunningTime="2026-02-17 20:23:34.339208491 +0000 UTC m=+889.630906802" watchObservedRunningTime="2026-02-17 20:23:34.341370095 +0000 UTC m=+889.633068406" Feb 17 20:23:34 crc kubenswrapper[4793]: I0217 20:23:34.344674 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-wwhj6" podStartSLOduration=3.851016873 podStartE2EDuration="18.344668027s" podCreationTimestamp="2026-02-17 20:23:16 +0000 UTC" firstStartedPulling="2026-02-17 20:23:18.573822145 +0000 UTC m=+873.865520456" lastFinishedPulling="2026-02-17 20:23:33.067473299 +0000 UTC m=+888.359171610" observedRunningTime="2026-02-17 20:23:34.316955048 +0000 UTC 
m=+889.608653359" watchObservedRunningTime="2026-02-17 20:23:34.344668027 +0000 UTC m=+889.636366338" Feb 17 20:23:34 crc kubenswrapper[4793]: I0217 20:23:34.364511 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7gmkz" event={"ID":"a6f00eb1-a1d7-48a6-ab40-e2dfc33769e0","Type":"ContainerStarted","Data":"ec779e74304b09b6b7b72ed1bd6f6782c692bc3903700b2ca88942b940862c4e"} Feb 17 20:23:34 crc kubenswrapper[4793]: I0217 20:23:34.369354 4793 generic.go:334] "Generic (PLEG): container finished" podID="52552d30-68bc-4e2c-82d0-2900e4834680" containerID="7b0a82c4ea6bf6b4e30293d6a5ec051ea6dcd81d63abb3b22a91ce0c09670c12" exitCode=0 Feb 17 20:23:34 crc kubenswrapper[4793]: I0217 20:23:34.369459 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zhgpn" event={"ID":"52552d30-68bc-4e2c-82d0-2900e4834680","Type":"ContainerDied","Data":"7b0a82c4ea6bf6b4e30293d6a5ec051ea6dcd81d63abb3b22a91ce0c09670c12"} Feb 17 20:23:34 crc kubenswrapper[4793]: I0217 20:23:34.369493 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zhgpn" event={"ID":"52552d30-68bc-4e2c-82d0-2900e4834680","Type":"ContainerStarted","Data":"e35332e0822b885559c03bad13f59482fee90f1c1a91990cf50af4dde312c6c1"} Feb 17 20:23:34 crc kubenswrapper[4793]: I0217 20:23:34.377504 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-95859678f-wk66s" event={"ID":"10ee5096-1369-4d4f-90ed-d4160f7a2aa4","Type":"ContainerStarted","Data":"9f6746a8fbddb7dd2f48fe62cd02434127cb96939037fb84e0e334adf36609d9"} Feb 17 20:23:34 crc kubenswrapper[4793]: I0217 20:23:34.378329 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-95859678f-wk66s" Feb 17 20:23:34 crc kubenswrapper[4793]: I0217 20:23:34.379324 4793 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-9kgb7" podStartSLOduration=3.238365127 podStartE2EDuration="17.379311357s" podCreationTimestamp="2026-02-17 20:23:17 +0000 UTC" firstStartedPulling="2026-02-17 20:23:18.927955805 +0000 UTC m=+874.219654116" lastFinishedPulling="2026-02-17 20:23:33.068901995 +0000 UTC m=+888.360600346" observedRunningTime="2026-02-17 20:23:34.369006691 +0000 UTC m=+889.660705012" watchObservedRunningTime="2026-02-17 20:23:34.379311357 +0000 UTC m=+889.671009668" Feb 17 20:23:34 crc kubenswrapper[4793]: I0217 20:23:34.413109 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-lnvx2" podStartSLOduration=4.134174778 podStartE2EDuration="18.413090406s" podCreationTimestamp="2026-02-17 20:23:16 +0000 UTC" firstStartedPulling="2026-02-17 20:23:18.897893608 +0000 UTC m=+874.189591919" lastFinishedPulling="2026-02-17 20:23:33.176809236 +0000 UTC m=+888.468507547" observedRunningTime="2026-02-17 20:23:34.40196949 +0000 UTC m=+889.693667811" watchObservedRunningTime="2026-02-17 20:23:34.413090406 +0000 UTC m=+889.704788717" Feb 17 20:23:34 crc kubenswrapper[4793]: I0217 20:23:34.428243 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-n5b9b" podStartSLOduration=5.949294622 podStartE2EDuration="18.428228512s" podCreationTimestamp="2026-02-17 20:23:16 +0000 UTC" firstStartedPulling="2026-02-17 20:23:18.553676915 +0000 UTC m=+873.845375226" lastFinishedPulling="2026-02-17 20:23:31.032610775 +0000 UTC m=+886.324309116" observedRunningTime="2026-02-17 20:23:34.427675769 +0000 UTC m=+889.719374080" watchObservedRunningTime="2026-02-17 20:23:34.428228512 +0000 UTC m=+889.719926823" Feb 17 20:23:34 crc kubenswrapper[4793]: I0217 20:23:34.490456 4793 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7gmkz" podStartSLOduration=3.346761 podStartE2EDuration="17.490440478s" podCreationTimestamp="2026-02-17 20:23:17 +0000 UTC" firstStartedPulling="2026-02-17 20:23:18.953832558 +0000 UTC m=+874.245530869" lastFinishedPulling="2026-02-17 20:23:33.097512026 +0000 UTC m=+888.389210347" observedRunningTime="2026-02-17 20:23:34.452215828 +0000 UTC m=+889.743914159" watchObservedRunningTime="2026-02-17 20:23:34.490440478 +0000 UTC m=+889.782138789" Feb 17 20:23:34 crc kubenswrapper[4793]: I0217 20:23:34.522203 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-n6jbv" podStartSLOduration=5.099320991 podStartE2EDuration="18.522186147s" podCreationTimestamp="2026-02-17 20:23:16 +0000 UTC" firstStartedPulling="2026-02-17 20:23:18.898163645 +0000 UTC m=+874.189861956" lastFinishedPulling="2026-02-17 20:23:32.321028791 +0000 UTC m=+887.612727112" observedRunningTime="2026-02-17 20:23:34.491926685 +0000 UTC m=+889.783624996" watchObservedRunningTime="2026-02-17 20:23:34.522186147 +0000 UTC m=+889.813884458" Feb 17 20:23:34 crc kubenswrapper[4793]: I0217 20:23:34.606089 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-6spqn" podStartSLOduration=4.464044105 podStartE2EDuration="18.606075622s" podCreationTimestamp="2026-02-17 20:23:16 +0000 UTC" firstStartedPulling="2026-02-17 20:23:18.925504224 +0000 UTC m=+874.217202535" lastFinishedPulling="2026-02-17 20:23:33.067535741 +0000 UTC m=+888.359234052" observedRunningTime="2026-02-17 20:23:34.564992051 +0000 UTC m=+889.856690352" watchObservedRunningTime="2026-02-17 20:23:34.606075622 +0000 UTC m=+889.897773933" Feb 17 20:23:35 crc kubenswrapper[4793]: I0217 20:23:35.407887 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-zhgpn" event={"ID":"52552d30-68bc-4e2c-82d0-2900e4834680","Type":"ContainerStarted","Data":"fc0800c50ca8794af0f0d8cfce473460a88bee4c5f9e66b0f64775dd31f87ef5"} Feb 17 20:23:35 crc kubenswrapper[4793]: I0217 20:23:35.435182 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-95859678f-wk66s" podStartSLOduration=4.294377428 podStartE2EDuration="18.435164784s" podCreationTimestamp="2026-02-17 20:23:17 +0000 UTC" firstStartedPulling="2026-02-17 20:23:18.925572706 +0000 UTC m=+874.217271017" lastFinishedPulling="2026-02-17 20:23:33.066360062 +0000 UTC m=+888.358058373" observedRunningTime="2026-02-17 20:23:34.626231313 +0000 UTC m=+889.917929624" watchObservedRunningTime="2026-02-17 20:23:35.435164784 +0000 UTC m=+890.726863095" Feb 17 20:23:36 crc kubenswrapper[4793]: I0217 20:23:36.434293 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dsctq" event={"ID":"65c0df33-e50d-4c5d-ad6e-cb9280c6ae96","Type":"ContainerStarted","Data":"bd4f1949008cf2bb5e17c94bb4ccbc94332d915d2f6ed736adf37cb8ac6d81b2"} Feb 17 20:23:36 crc kubenswrapper[4793]: I0217 20:23:36.436289 4793 generic.go:334] "Generic (PLEG): container finished" podID="52552d30-68bc-4e2c-82d0-2900e4834680" containerID="fc0800c50ca8794af0f0d8cfce473460a88bee4c5f9e66b0f64775dd31f87ef5" exitCode=0 Feb 17 20:23:36 crc kubenswrapper[4793]: I0217 20:23:36.437356 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zhgpn" event={"ID":"52552d30-68bc-4e2c-82d0-2900e4834680","Type":"ContainerDied","Data":"fc0800c50ca8794af0f0d8cfce473460a88bee4c5f9e66b0f64775dd31f87ef5"} Feb 17 20:23:37 crc kubenswrapper[4793]: I0217 20:23:37.445904 4793 generic.go:334] "Generic (PLEG): container finished" podID="65c0df33-e50d-4c5d-ad6e-cb9280c6ae96" 
containerID="bd4f1949008cf2bb5e17c94bb4ccbc94332d915d2f6ed736adf37cb8ac6d81b2" exitCode=0 Feb 17 20:23:37 crc kubenswrapper[4793]: I0217 20:23:37.446016 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dsctq" event={"ID":"65c0df33-e50d-4c5d-ad6e-cb9280c6ae96","Type":"ContainerDied","Data":"bd4f1949008cf2bb5e17c94bb4ccbc94332d915d2f6ed736adf37cb8ac6d81b2"} Feb 17 20:23:46 crc kubenswrapper[4793]: I0217 20:23:46.022388 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4f4bj"] Feb 17 20:23:46 crc kubenswrapper[4793]: I0217 20:23:46.025165 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4f4bj" Feb 17 20:23:46 crc kubenswrapper[4793]: I0217 20:23:46.033234 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4f4bj"] Feb 17 20:23:46 crc kubenswrapper[4793]: I0217 20:23:46.159391 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l966n\" (UniqueName: \"kubernetes.io/projected/95440bb7-98fc-47c0-8da1-1363773a2503-kube-api-access-l966n\") pod \"community-operators-4f4bj\" (UID: \"95440bb7-98fc-47c0-8da1-1363773a2503\") " pod="openshift-marketplace/community-operators-4f4bj" Feb 17 20:23:46 crc kubenswrapper[4793]: I0217 20:23:46.159496 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95440bb7-98fc-47c0-8da1-1363773a2503-utilities\") pod \"community-operators-4f4bj\" (UID: \"95440bb7-98fc-47c0-8da1-1363773a2503\") " pod="openshift-marketplace/community-operators-4f4bj" Feb 17 20:23:46 crc kubenswrapper[4793]: I0217 20:23:46.159540 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/95440bb7-98fc-47c0-8da1-1363773a2503-catalog-content\") pod \"community-operators-4f4bj\" (UID: \"95440bb7-98fc-47c0-8da1-1363773a2503\") " pod="openshift-marketplace/community-operators-4f4bj" Feb 17 20:23:46 crc kubenswrapper[4793]: E0217 20:23:46.188544 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34" Feb 17 20:23:46 crc kubenswrapper[4793]: E0217 20:23:46.188734 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fq7d2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-69f8888797-jrjzp_openstack-operators(6ae6b399-39bc-446a-af92-4d4b7fc18361): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 20:23:46 crc kubenswrapper[4793]: E0217 20:23:46.189926 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-jrjzp" podUID="6ae6b399-39bc-446a-af92-4d4b7fc18361" Feb 17 20:23:46 crc kubenswrapper[4793]: I0217 20:23:46.260404 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95440bb7-98fc-47c0-8da1-1363773a2503-utilities\") pod 
\"community-operators-4f4bj\" (UID: \"95440bb7-98fc-47c0-8da1-1363773a2503\") " pod="openshift-marketplace/community-operators-4f4bj" Feb 17 20:23:46 crc kubenswrapper[4793]: I0217 20:23:46.260780 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95440bb7-98fc-47c0-8da1-1363773a2503-catalog-content\") pod \"community-operators-4f4bj\" (UID: \"95440bb7-98fc-47c0-8da1-1363773a2503\") " pod="openshift-marketplace/community-operators-4f4bj" Feb 17 20:23:46 crc kubenswrapper[4793]: I0217 20:23:46.260888 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l966n\" (UniqueName: \"kubernetes.io/projected/95440bb7-98fc-47c0-8da1-1363773a2503-kube-api-access-l966n\") pod \"community-operators-4f4bj\" (UID: \"95440bb7-98fc-47c0-8da1-1363773a2503\") " pod="openshift-marketplace/community-operators-4f4bj" Feb 17 20:23:46 crc kubenswrapper[4793]: I0217 20:23:46.260994 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95440bb7-98fc-47c0-8da1-1363773a2503-utilities\") pod \"community-operators-4f4bj\" (UID: \"95440bb7-98fc-47c0-8da1-1363773a2503\") " pod="openshift-marketplace/community-operators-4f4bj" Feb 17 20:23:46 crc kubenswrapper[4793]: I0217 20:23:46.261242 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95440bb7-98fc-47c0-8da1-1363773a2503-catalog-content\") pod \"community-operators-4f4bj\" (UID: \"95440bb7-98fc-47c0-8da1-1363773a2503\") " pod="openshift-marketplace/community-operators-4f4bj" Feb 17 20:23:46 crc kubenswrapper[4793]: I0217 20:23:46.281844 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l966n\" (UniqueName: \"kubernetes.io/projected/95440bb7-98fc-47c0-8da1-1363773a2503-kube-api-access-l966n\") pod 
\"community-operators-4f4bj\" (UID: \"95440bb7-98fc-47c0-8da1-1363773a2503\") " pod="openshift-marketplace/community-operators-4f4bj" Feb 17 20:23:46 crc kubenswrapper[4793]: I0217 20:23:46.346299 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4f4bj" Feb 17 20:23:47 crc kubenswrapper[4793]: I0217 20:23:47.007519 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-h4k6c" Feb 17 20:23:47 crc kubenswrapper[4793]: I0217 20:23:47.040540 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-stsjq" Feb 17 20:23:47 crc kubenswrapper[4793]: I0217 20:23:47.055123 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-nvq46" Feb 17 20:23:47 crc kubenswrapper[4793]: I0217 20:23:47.077928 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-drgj7" Feb 17 20:23:47 crc kubenswrapper[4793]: I0217 20:23:47.087371 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-n5b9b" Feb 17 20:23:47 crc kubenswrapper[4793]: I0217 20:23:47.111438 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-6spqn" Feb 17 20:23:47 crc kubenswrapper[4793]: I0217 20:23:47.177150 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-wwhj6" Feb 17 20:23:47 crc kubenswrapper[4793]: I0217 20:23:47.435772 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/nova-operator-controller-manager-567668f5cf-lnvx2" Feb 17 20:23:47 crc kubenswrapper[4793]: I0217 20:23:47.436167 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-kb62n" Feb 17 20:23:47 crc kubenswrapper[4793]: I0217 20:23:47.498999 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-k6mbs" Feb 17 20:23:47 crc kubenswrapper[4793]: I0217 20:23:47.547383 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-n6jbv" Feb 17 20:23:47 crc kubenswrapper[4793]: I0217 20:23:47.693027 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-9kgb7" Feb 17 20:23:47 crc kubenswrapper[4793]: I0217 20:23:47.806872 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-95859678f-wk66s" Feb 17 20:23:47 crc kubenswrapper[4793]: E0217 20:23:47.973031 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c" Feb 17 20:23:47 crc kubenswrapper[4793]: E0217 20:23:47.973179 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lr6fb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-54f6768c69-qjv29_openstack-operators(df82a1fe-a387-45c4-bad1-d2fca982baaa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 20:23:47 crc kubenswrapper[4793]: E0217 20:23:47.975456 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-qjv29" podUID="df82a1fe-a387-45c4-bad1-d2fca982baaa" Feb 17 20:23:48 crc kubenswrapper[4793]: E0217 20:23:48.416304 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99" Feb 17 20:23:48 crc kubenswrapper[4793]: E0217 20:23:48.416448 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-crzfd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-7f45b4ff68-4v2ch_openstack-operators(00b83896-fe8e-49bc-b762-6dfb14777fd7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 20:23:48 crc kubenswrapper[4793]: E0217 20:23:48.417653 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-4v2ch" podUID="00b83896-fe8e-49bc-b762-6dfb14777fd7" Feb 17 20:23:48 crc kubenswrapper[4793]: I0217 20:23:48.616987 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4f4bj"] Feb 17 20:23:48 crc kubenswrapper[4793]: W0217 20:23:48.628398 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95440bb7_98fc_47c0_8da1_1363773a2503.slice/crio-8f589067c70e251174fb58f30b6887e05b23af084644b4c922f217f27266a9f6 WatchSource:0}: Error finding container 
8f589067c70e251174fb58f30b6887e05b23af084644b4c922f217f27266a9f6: Status 404 returned error can't find the container with id 8f589067c70e251174fb58f30b6887e05b23af084644b4c922f217f27266a9f6 Feb 17 20:23:48 crc kubenswrapper[4793]: I0217 20:23:48.799449 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/50be19a7-d8db-4c5c-8966-825c6d3310c1-cert\") pod \"infra-operator-controller-manager-79d975b745-mnpjn\" (UID: \"50be19a7-d8db-4c5c-8966-825c6d3310c1\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-mnpjn" Feb 17 20:23:48 crc kubenswrapper[4793]: I0217 20:23:48.811014 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/50be19a7-d8db-4c5c-8966-825c6d3310c1-cert\") pod \"infra-operator-controller-manager-79d975b745-mnpjn\" (UID: \"50be19a7-d8db-4c5c-8966-825c6d3310c1\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-mnpjn" Feb 17 20:23:48 crc kubenswrapper[4793]: I0217 20:23:48.938323 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-lvwj2" Feb 17 20:23:48 crc kubenswrapper[4793]: I0217 20:23:48.956658 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-mnpjn" Feb 17 20:23:49 crc kubenswrapper[4793]: I0217 20:23:49.106697 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/95cbe9b0-bd61-493b-a8b2-f5d70b515ed7-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cpvd9d\" (UID: \"95cbe9b0-bd61-493b-a8b2-f5d70b515ed7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpvd9d" Feb 17 20:23:49 crc kubenswrapper[4793]: I0217 20:23:49.112300 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/95cbe9b0-bd61-493b-a8b2-f5d70b515ed7-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cpvd9d\" (UID: \"95cbe9b0-bd61-493b-a8b2-f5d70b515ed7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpvd9d" Feb 17 20:23:49 crc kubenswrapper[4793]: I0217 20:23:49.184070 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-mnpjn"] Feb 17 20:23:49 crc kubenswrapper[4793]: I0217 20:23:49.311445 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-ncwmg" Feb 17 20:23:49 crc kubenswrapper[4793]: I0217 20:23:49.319575 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpvd9d" Feb 17 20:23:49 crc kubenswrapper[4793]: I0217 20:23:49.551457 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zhgpn" event={"ID":"52552d30-68bc-4e2c-82d0-2900e4834680","Type":"ContainerStarted","Data":"12fb1677e168173631f4cf9d4b1a1c0abbbd620c3150e5cef42b0078aff0c559"} Feb 17 20:23:49 crc kubenswrapper[4793]: I0217 20:23:49.555918 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m46dz" event={"ID":"0b1ec5e9-161a-403b-bedd-b6c5823180db","Type":"ContainerStarted","Data":"f1f177aae18eadbda0c345e605b8aae29dd1e6f1f7604b1103aee01787e1dc30"} Feb 17 20:23:49 crc kubenswrapper[4793]: I0217 20:23:49.556165 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m46dz" Feb 17 20:23:49 crc kubenswrapper[4793]: I0217 20:23:49.557056 4793 generic.go:334] "Generic (PLEG): container finished" podID="95440bb7-98fc-47c0-8da1-1363773a2503" containerID="c252d2a5ac2c1e64ed9fae61bc7acb1903e5d6024c5f05fd00d3afc1dbb88a61" exitCode=0 Feb 17 20:23:49 crc kubenswrapper[4793]: I0217 20:23:49.557105 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4f4bj" event={"ID":"95440bb7-98fc-47c0-8da1-1363773a2503","Type":"ContainerDied","Data":"c252d2a5ac2c1e64ed9fae61bc7acb1903e5d6024c5f05fd00d3afc1dbb88a61"} Feb 17 20:23:49 crc kubenswrapper[4793]: I0217 20:23:49.557124 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4f4bj" event={"ID":"95440bb7-98fc-47c0-8da1-1363773a2503","Type":"ContainerStarted","Data":"8f589067c70e251174fb58f30b6887e05b23af084644b4c922f217f27266a9f6"} Feb 17 20:23:49 crc kubenswrapper[4793]: I0217 20:23:49.558609 4793 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-wxb6w" event={"ID":"d0cd31cb-0cbc-414c-a20b-f6c38256f347","Type":"ContainerStarted","Data":"257ecc577d2395b8dab0751f0edb0ef20e5a85747a1f0d78dbe6444a8bb16a66"} Feb 17 20:23:49 crc kubenswrapper[4793]: I0217 20:23:49.558957 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-wxb6w" Feb 17 20:23:49 crc kubenswrapper[4793]: I0217 20:23:49.559907 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-mnpjn" event={"ID":"50be19a7-d8db-4c5c-8966-825c6d3310c1","Type":"ContainerStarted","Data":"2d7ce6142e7bb6b02caa890f90f4cea42c11b6faf25c79a4ec10734de9cc0645"} Feb 17 20:23:49 crc kubenswrapper[4793]: I0217 20:23:49.561737 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-4qdc2" event={"ID":"53c19b0b-735c-46ca-99f1-4be322dddb16","Type":"ContainerStarted","Data":"e8b5b25eacac1d68be2319f686102b68649f1eb7b75c05168de4b9ad62df466c"} Feb 17 20:23:49 crc kubenswrapper[4793]: I0217 20:23:49.561958 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-4qdc2" Feb 17 20:23:49 crc kubenswrapper[4793]: I0217 20:23:49.563243 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dsctq" event={"ID":"65c0df33-e50d-4c5d-ad6e-cb9280c6ae96","Type":"ContainerStarted","Data":"aa0694b11822d10af447f775c22060edadab23551fcd6e50ab44f4d04004fe7f"} Feb 17 20:23:49 crc kubenswrapper[4793]: I0217 20:23:49.567935 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zhgpn" podStartSLOduration=4.391229181 podStartE2EDuration="18.56791914s" podCreationTimestamp="2026-02-17 20:23:31 +0000 UTC" 
firstStartedPulling="2026-02-17 20:23:34.3718082 +0000 UTC m=+889.663506511" lastFinishedPulling="2026-02-17 20:23:48.548498169 +0000 UTC m=+903.840196470" observedRunningTime="2026-02-17 20:23:49.566555577 +0000 UTC m=+904.858253888" watchObservedRunningTime="2026-02-17 20:23:49.56791914 +0000 UTC m=+904.859617441" Feb 17 20:23:49 crc kubenswrapper[4793]: I0217 20:23:49.604353 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-wxb6w" podStartSLOduration=3.975373742 podStartE2EDuration="33.604329635s" podCreationTimestamp="2026-02-17 20:23:16 +0000 UTC" firstStartedPulling="2026-02-17 20:23:18.925701459 +0000 UTC m=+874.217399770" lastFinishedPulling="2026-02-17 20:23:48.554657352 +0000 UTC m=+903.846355663" observedRunningTime="2026-02-17 20:23:49.586041731 +0000 UTC m=+904.877740052" watchObservedRunningTime="2026-02-17 20:23:49.604329635 +0000 UTC m=+904.896027946" Feb 17 20:23:49 crc kubenswrapper[4793]: I0217 20:23:49.608813 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m46dz" podStartSLOduration=4.139563673 podStartE2EDuration="33.608795276s" podCreationTimestamp="2026-02-17 20:23:16 +0000 UTC" firstStartedPulling="2026-02-17 20:23:18.951584383 +0000 UTC m=+874.243282694" lastFinishedPulling="2026-02-17 20:23:48.420815996 +0000 UTC m=+903.712514297" observedRunningTime="2026-02-17 20:23:49.605890394 +0000 UTC m=+904.897588705" watchObservedRunningTime="2026-02-17 20:23:49.608795276 +0000 UTC m=+904.900493587" Feb 17 20:23:49 crc kubenswrapper[4793]: I0217 20:23:49.614660 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/db37e414-596f-4c82-9b3b-f7c08820df82-metrics-certs\") pod \"openstack-operator-controller-manager-ccd97cd7c-w9x9r\" (UID: \"db37e414-596f-4c82-9b3b-f7c08820df82\") " 
pod="openstack-operators/openstack-operator-controller-manager-ccd97cd7c-w9x9r" Feb 17 20:23:49 crc kubenswrapper[4793]: I0217 20:23:49.614898 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/db37e414-596f-4c82-9b3b-f7c08820df82-webhook-certs\") pod \"openstack-operator-controller-manager-ccd97cd7c-w9x9r\" (UID: \"db37e414-596f-4c82-9b3b-f7c08820df82\") " pod="openstack-operators/openstack-operator-controller-manager-ccd97cd7c-w9x9r" Feb 17 20:23:49 crc kubenswrapper[4793]: I0217 20:23:49.618156 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/db37e414-596f-4c82-9b3b-f7c08820df82-metrics-certs\") pod \"openstack-operator-controller-manager-ccd97cd7c-w9x9r\" (UID: \"db37e414-596f-4c82-9b3b-f7c08820df82\") " pod="openstack-operators/openstack-operator-controller-manager-ccd97cd7c-w9x9r" Feb 17 20:23:49 crc kubenswrapper[4793]: I0217 20:23:49.632139 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/db37e414-596f-4c82-9b3b-f7c08820df82-webhook-certs\") pod \"openstack-operator-controller-manager-ccd97cd7c-w9x9r\" (UID: \"db37e414-596f-4c82-9b3b-f7c08820df82\") " pod="openstack-operators/openstack-operator-controller-manager-ccd97cd7c-w9x9r" Feb 17 20:23:49 crc kubenswrapper[4793]: I0217 20:23:49.656285 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-4qdc2" podStartSLOduration=3.159780375 podStartE2EDuration="32.656268656s" podCreationTimestamp="2026-02-17 20:23:17 +0000 UTC" firstStartedPulling="2026-02-17 20:23:18.934018156 +0000 UTC m=+874.225716467" lastFinishedPulling="2026-02-17 20:23:48.430506437 +0000 UTC m=+903.722204748" observedRunningTime="2026-02-17 20:23:49.648566655 +0000 UTC m=+904.940264966" 
watchObservedRunningTime="2026-02-17 20:23:49.656268656 +0000 UTC m=+904.947966967" Feb 17 20:23:49 crc kubenswrapper[4793]: I0217 20:23:49.685525 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dsctq" podStartSLOduration=8.596125835 podStartE2EDuration="22.685509993s" podCreationTimestamp="2026-02-17 20:23:27 +0000 UTC" firstStartedPulling="2026-02-17 20:23:34.353899426 +0000 UTC m=+889.645597737" lastFinishedPulling="2026-02-17 20:23:48.443283564 +0000 UTC m=+903.734981895" observedRunningTime="2026-02-17 20:23:49.675949715 +0000 UTC m=+904.967648016" watchObservedRunningTime="2026-02-17 20:23:49.685509993 +0000 UTC m=+904.977208294" Feb 17 20:23:49 crc kubenswrapper[4793]: I0217 20:23:49.758996 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpvd9d"] Feb 17 20:23:49 crc kubenswrapper[4793]: I0217 20:23:49.891006 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-pxhjl" Feb 17 20:23:49 crc kubenswrapper[4793]: I0217 20:23:49.898348 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-ccd97cd7c-w9x9r" Feb 17 20:23:50 crc kubenswrapper[4793]: I0217 20:23:50.101394 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 20:23:50 crc kubenswrapper[4793]: I0217 20:23:50.101810 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 20:23:50 crc kubenswrapper[4793]: I0217 20:23:50.426010 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-ccd97cd7c-w9x9r"] Feb 17 20:23:50 crc kubenswrapper[4793]: W0217 20:23:50.435825 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb37e414_596f_4c82_9b3b_f7c08820df82.slice/crio-21f8f5c4744e7c48b8b062f7ee5a46925f84ed349ed3729df1b74e819a7cd838 WatchSource:0}: Error finding container 21f8f5c4744e7c48b8b062f7ee5a46925f84ed349ed3729df1b74e819a7cd838: Status 404 returned error can't find the container with id 21f8f5c4744e7c48b8b062f7ee5a46925f84ed349ed3729df1b74e819a7cd838 Feb 17 20:23:50 crc kubenswrapper[4793]: I0217 20:23:50.574158 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4f4bj" event={"ID":"95440bb7-98fc-47c0-8da1-1363773a2503","Type":"ContainerStarted","Data":"ceb153851ac5e150a8f714f2c880198aef1ed28c8382d6dba61edca6fd3f4764"} Feb 17 20:23:50 crc kubenswrapper[4793]: I0217 20:23:50.575060 4793 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-ccd97cd7c-w9x9r" event={"ID":"db37e414-596f-4c82-9b3b-f7c08820df82","Type":"ContainerStarted","Data":"21f8f5c4744e7c48b8b062f7ee5a46925f84ed349ed3729df1b74e819a7cd838"} Feb 17 20:23:50 crc kubenswrapper[4793]: I0217 20:23:50.576007 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpvd9d" event={"ID":"95cbe9b0-bd61-493b-a8b2-f5d70b515ed7","Type":"ContainerStarted","Data":"89ab551a854a556c0be1fd4d9dcf5bc50f91f1e2be2ef32bd0b13c29213707f2"} Feb 17 20:23:51 crc kubenswrapper[4793]: I0217 20:23:51.587349 4793 generic.go:334] "Generic (PLEG): container finished" podID="95440bb7-98fc-47c0-8da1-1363773a2503" containerID="ceb153851ac5e150a8f714f2c880198aef1ed28c8382d6dba61edca6fd3f4764" exitCode=0 Feb 17 20:23:51 crc kubenswrapper[4793]: I0217 20:23:51.587570 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4f4bj" event={"ID":"95440bb7-98fc-47c0-8da1-1363773a2503","Type":"ContainerDied","Data":"ceb153851ac5e150a8f714f2c880198aef1ed28c8382d6dba61edca6fd3f4764"} Feb 17 20:23:51 crc kubenswrapper[4793]: I0217 20:23:51.598639 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-ccd97cd7c-w9x9r" event={"ID":"db37e414-596f-4c82-9b3b-f7c08820df82","Type":"ContainerStarted","Data":"e1cc220e392ce60dd6656b6a4186b65a85258db6dfe837acca3a7e59eb69b926"} Feb 17 20:23:51 crc kubenswrapper[4793]: I0217 20:23:51.598835 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-ccd97cd7c-w9x9r" Feb 17 20:23:51 crc kubenswrapper[4793]: I0217 20:23:51.636790 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-ccd97cd7c-w9x9r" 
podStartSLOduration=34.6367704 podStartE2EDuration="34.6367704s" podCreationTimestamp="2026-02-17 20:23:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:23:51.632378811 +0000 UTC m=+906.924077122" watchObservedRunningTime="2026-02-17 20:23:51.6367704 +0000 UTC m=+906.928468711"
Feb 17 20:23:52 crc kubenswrapper[4793]: I0217 20:23:52.063265 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zhgpn"
Feb 17 20:23:52 crc kubenswrapper[4793]: I0217 20:23:52.063625 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zhgpn"
Feb 17 20:23:52 crc kubenswrapper[4793]: I0217 20:23:52.116838 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zhgpn"
Feb 17 20:23:52 crc kubenswrapper[4793]: I0217 20:23:52.607219 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4f4bj" event={"ID":"95440bb7-98fc-47c0-8da1-1363773a2503","Type":"ContainerStarted","Data":"22859498c2fd768174e31866a97cd54174f2216dfea004d88d6c13ab6a6ed350"}
Feb 17 20:23:52 crc kubenswrapper[4793]: I0217 20:23:52.608821 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-mnpjn" event={"ID":"50be19a7-d8db-4c5c-8966-825c6d3310c1","Type":"ContainerStarted","Data":"4beafa56ef34112630a0f94f65f849d7bcac9ea212640c93e7d0295898753f79"}
Feb 17 20:23:52 crc kubenswrapper[4793]: I0217 20:23:52.608898 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-mnpjn"
Feb 17 20:23:52 crc kubenswrapper[4793]: I0217 20:23:52.610887 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpvd9d" event={"ID":"95cbe9b0-bd61-493b-a8b2-f5d70b515ed7","Type":"ContainerStarted","Data":"7759b264c8a5a450371bcb8985c1a7e24c9417c20a2f1247f1eb2f0a0d5f81c9"}
Feb 17 20:23:52 crc kubenswrapper[4793]: I0217 20:23:52.611199 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpvd9d"
Feb 17 20:23:52 crc kubenswrapper[4793]: I0217 20:23:52.628084 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4f4bj" podStartSLOduration=4.810287322 podStartE2EDuration="7.628063622s" podCreationTimestamp="2026-02-17 20:23:45 +0000 UTC" firstStartedPulling="2026-02-17 20:23:49.559024269 +0000 UTC m=+904.850722580" lastFinishedPulling="2026-02-17 20:23:52.376800559 +0000 UTC m=+907.668498880" observedRunningTime="2026-02-17 20:23:52.624793221 +0000 UTC m=+907.916491542" watchObservedRunningTime="2026-02-17 20:23:52.628063622 +0000 UTC m=+907.919761933"
Feb 17 20:23:52 crc kubenswrapper[4793]: I0217 20:23:52.641467 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-mnpjn" podStartSLOduration=33.833269163 podStartE2EDuration="36.641449005s" podCreationTimestamp="2026-02-17 20:23:16 +0000 UTC" firstStartedPulling="2026-02-17 20:23:49.185563629 +0000 UTC m=+904.477261940" lastFinishedPulling="2026-02-17 20:23:51.993743471 +0000 UTC m=+907.285441782" observedRunningTime="2026-02-17 20:23:52.636894472 +0000 UTC m=+907.928592793" watchObservedRunningTime="2026-02-17 20:23:52.641449005 +0000 UTC m=+907.933147316"
Feb 17 20:23:52 crc kubenswrapper[4793]: I0217 20:23:52.662176 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpvd9d" podStartSLOduration=34.455331691 podStartE2EDuration="36.662162449s" podCreationTimestamp="2026-02-17 20:23:16 +0000 UTC" firstStartedPulling="2026-02-17 20:23:49.789504927 +0000 UTC m=+905.081203238" lastFinishedPulling="2026-02-17 20:23:51.996335675 +0000 UTC m=+907.288033996" observedRunningTime="2026-02-17 20:23:52.658370995 +0000 UTC m=+907.950069306" watchObservedRunningTime="2026-02-17 20:23:52.662162449 +0000 UTC m=+907.953860760"
Feb 17 20:23:56 crc kubenswrapper[4793]: I0217 20:23:56.347636 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4f4bj"
Feb 17 20:23:56 crc kubenswrapper[4793]: I0217 20:23:56.347963 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4f4bj"
Feb 17 20:23:56 crc kubenswrapper[4793]: I0217 20:23:56.395914 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4f4bj"
Feb 17 20:23:57 crc kubenswrapper[4793]: I0217 20:23:57.381051 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-wxb6w"
Feb 17 20:23:57 crc kubenswrapper[4793]: I0217 20:23:57.440287 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m46dz"
Feb 17 20:23:57 crc kubenswrapper[4793]: I0217 20:23:57.820965 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dsctq"
Feb 17 20:23:57 crc kubenswrapper[4793]: I0217 20:23:57.821203 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dsctq"
Feb 17 20:23:57 crc kubenswrapper[4793]: I0217 20:23:57.854731 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-4qdc2"
Feb 17 20:23:57 crc kubenswrapper[4793]: I0217 20:23:57.883585 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dsctq"
Feb 17 20:23:58 crc kubenswrapper[4793]: I0217 20:23:58.700971 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dsctq"
Feb 17 20:23:58 crc kubenswrapper[4793]: I0217 20:23:58.963729 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-mnpjn"
Feb 17 20:23:59 crc kubenswrapper[4793]: I0217 20:23:59.332430 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpvd9d"
Feb 17 20:23:59 crc kubenswrapper[4793]: I0217 20:23:59.384123 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dsctq"]
Feb 17 20:23:59 crc kubenswrapper[4793]: E0217 20:23:59.542931 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c\\\"\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-qjv29" podUID="df82a1fe-a387-45c4-bad1-d2fca982baaa"
Feb 17 20:23:59 crc kubenswrapper[4793]: I0217 20:23:59.905391 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-ccd97cd7c-w9x9r"
Feb 17 20:24:00 crc kubenswrapper[4793]: E0217 20:24:00.540221 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-jrjzp" podUID="6ae6b399-39bc-446a-af92-4d4b7fc18361"
Feb 17 20:24:00 crc kubenswrapper[4793]: E0217 20:24:00.540440 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-4v2ch" podUID="00b83896-fe8e-49bc-b762-6dfb14777fd7"
Feb 17 20:24:00 crc kubenswrapper[4793]: I0217 20:24:00.671206 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dsctq" podUID="65c0df33-e50d-4c5d-ad6e-cb9280c6ae96" containerName="registry-server" containerID="cri-o://aa0694b11822d10af447f775c22060edadab23551fcd6e50ab44f4d04004fe7f" gracePeriod=2
Feb 17 20:24:01 crc kubenswrapper[4793]: I0217 20:24:01.091711 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dsctq"
Feb 17 20:24:01 crc kubenswrapper[4793]: I0217 20:24:01.198818 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rj6ll\" (UniqueName: \"kubernetes.io/projected/65c0df33-e50d-4c5d-ad6e-cb9280c6ae96-kube-api-access-rj6ll\") pod \"65c0df33-e50d-4c5d-ad6e-cb9280c6ae96\" (UID: \"65c0df33-e50d-4c5d-ad6e-cb9280c6ae96\") "
Feb 17 20:24:01 crc kubenswrapper[4793]: I0217 20:24:01.198967 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65c0df33-e50d-4c5d-ad6e-cb9280c6ae96-utilities\") pod \"65c0df33-e50d-4c5d-ad6e-cb9280c6ae96\" (UID: \"65c0df33-e50d-4c5d-ad6e-cb9280c6ae96\") "
Feb 17 20:24:01 crc kubenswrapper[4793]: I0217 20:24:01.199653 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65c0df33-e50d-4c5d-ad6e-cb9280c6ae96-utilities" (OuterVolumeSpecName: "utilities") pod "65c0df33-e50d-4c5d-ad6e-cb9280c6ae96" (UID: "65c0df33-e50d-4c5d-ad6e-cb9280c6ae96"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 20:24:01 crc kubenswrapper[4793]: I0217 20:24:01.199795 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65c0df33-e50d-4c5d-ad6e-cb9280c6ae96-catalog-content\") pod \"65c0df33-e50d-4c5d-ad6e-cb9280c6ae96\" (UID: \"65c0df33-e50d-4c5d-ad6e-cb9280c6ae96\") "
Feb 17 20:24:01 crc kubenswrapper[4793]: I0217 20:24:01.203606 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65c0df33-e50d-4c5d-ad6e-cb9280c6ae96-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 20:24:01 crc kubenswrapper[4793]: I0217 20:24:01.219838 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65c0df33-e50d-4c5d-ad6e-cb9280c6ae96-kube-api-access-rj6ll" (OuterVolumeSpecName: "kube-api-access-rj6ll") pod "65c0df33-e50d-4c5d-ad6e-cb9280c6ae96" (UID: "65c0df33-e50d-4c5d-ad6e-cb9280c6ae96"). InnerVolumeSpecName "kube-api-access-rj6ll". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 20:24:01 crc kubenswrapper[4793]: I0217 20:24:01.305233 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rj6ll\" (UniqueName: \"kubernetes.io/projected/65c0df33-e50d-4c5d-ad6e-cb9280c6ae96-kube-api-access-rj6ll\") on node \"crc\" DevicePath \"\""
Feb 17 20:24:01 crc kubenswrapper[4793]: I0217 20:24:01.319102 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65c0df33-e50d-4c5d-ad6e-cb9280c6ae96-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "65c0df33-e50d-4c5d-ad6e-cb9280c6ae96" (UID: "65c0df33-e50d-4c5d-ad6e-cb9280c6ae96"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 20:24:01 crc kubenswrapper[4793]: I0217 20:24:01.407020 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65c0df33-e50d-4c5d-ad6e-cb9280c6ae96-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 20:24:01 crc kubenswrapper[4793]: I0217 20:24:01.679932 4793 generic.go:334] "Generic (PLEG): container finished" podID="65c0df33-e50d-4c5d-ad6e-cb9280c6ae96" containerID="aa0694b11822d10af447f775c22060edadab23551fcd6e50ab44f4d04004fe7f" exitCode=0
Feb 17 20:24:01 crc kubenswrapper[4793]: I0217 20:24:01.679997 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dsctq" event={"ID":"65c0df33-e50d-4c5d-ad6e-cb9280c6ae96","Type":"ContainerDied","Data":"aa0694b11822d10af447f775c22060edadab23551fcd6e50ab44f4d04004fe7f"}
Feb 17 20:24:01 crc kubenswrapper[4793]: I0217 20:24:01.680010 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dsctq"
Feb 17 20:24:01 crc kubenswrapper[4793]: I0217 20:24:01.680026 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dsctq" event={"ID":"65c0df33-e50d-4c5d-ad6e-cb9280c6ae96","Type":"ContainerDied","Data":"ef9c543593bd17874bca8090a3921031eed5f5086f16f5277f54e11e660518c5"}
Feb 17 20:24:01 crc kubenswrapper[4793]: I0217 20:24:01.680044 4793 scope.go:117] "RemoveContainer" containerID="aa0694b11822d10af447f775c22060edadab23551fcd6e50ab44f4d04004fe7f"
Feb 17 20:24:01 crc kubenswrapper[4793]: I0217 20:24:01.700551 4793 scope.go:117] "RemoveContainer" containerID="bd4f1949008cf2bb5e17c94bb4ccbc94332d915d2f6ed736adf37cb8ac6d81b2"
Feb 17 20:24:01 crc kubenswrapper[4793]: I0217 20:24:01.701133 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dsctq"]
Feb 17 20:24:01 crc kubenswrapper[4793]: I0217 20:24:01.706819 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dsctq"]
Feb 17 20:24:01 crc kubenswrapper[4793]: I0217 20:24:01.716747 4793 scope.go:117] "RemoveContainer" containerID="f70c6d052c98896000a760ded5229374ccb0a376850ffeec227f4df4d4d1f6f5"
Feb 17 20:24:01 crc kubenswrapper[4793]: I0217 20:24:01.739703 4793 scope.go:117] "RemoveContainer" containerID="aa0694b11822d10af447f775c22060edadab23551fcd6e50ab44f4d04004fe7f"
Feb 17 20:24:01 crc kubenswrapper[4793]: E0217 20:24:01.740189 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa0694b11822d10af447f775c22060edadab23551fcd6e50ab44f4d04004fe7f\": container with ID starting with aa0694b11822d10af447f775c22060edadab23551fcd6e50ab44f4d04004fe7f not found: ID does not exist" containerID="aa0694b11822d10af447f775c22060edadab23551fcd6e50ab44f4d04004fe7f"
Feb 17 20:24:01 crc kubenswrapper[4793]: I0217 20:24:01.740229 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa0694b11822d10af447f775c22060edadab23551fcd6e50ab44f4d04004fe7f"} err="failed to get container status \"aa0694b11822d10af447f775c22060edadab23551fcd6e50ab44f4d04004fe7f\": rpc error: code = NotFound desc = could not find container \"aa0694b11822d10af447f775c22060edadab23551fcd6e50ab44f4d04004fe7f\": container with ID starting with aa0694b11822d10af447f775c22060edadab23551fcd6e50ab44f4d04004fe7f not found: ID does not exist"
Feb 17 20:24:01 crc kubenswrapper[4793]: I0217 20:24:01.740257 4793 scope.go:117] "RemoveContainer" containerID="bd4f1949008cf2bb5e17c94bb4ccbc94332d915d2f6ed736adf37cb8ac6d81b2"
Feb 17 20:24:01 crc kubenswrapper[4793]: E0217 20:24:01.740576 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd4f1949008cf2bb5e17c94bb4ccbc94332d915d2f6ed736adf37cb8ac6d81b2\": container with ID starting with bd4f1949008cf2bb5e17c94bb4ccbc94332d915d2f6ed736adf37cb8ac6d81b2 not found: ID does not exist" containerID="bd4f1949008cf2bb5e17c94bb4ccbc94332d915d2f6ed736adf37cb8ac6d81b2"
Feb 17 20:24:01 crc kubenswrapper[4793]: I0217 20:24:01.740620 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd4f1949008cf2bb5e17c94bb4ccbc94332d915d2f6ed736adf37cb8ac6d81b2"} err="failed to get container status \"bd4f1949008cf2bb5e17c94bb4ccbc94332d915d2f6ed736adf37cb8ac6d81b2\": rpc error: code = NotFound desc = could not find container \"bd4f1949008cf2bb5e17c94bb4ccbc94332d915d2f6ed736adf37cb8ac6d81b2\": container with ID starting with bd4f1949008cf2bb5e17c94bb4ccbc94332d915d2f6ed736adf37cb8ac6d81b2 not found: ID does not exist"
Feb 17 20:24:01 crc kubenswrapper[4793]: I0217 20:24:01.740653 4793 scope.go:117] "RemoveContainer" containerID="f70c6d052c98896000a760ded5229374ccb0a376850ffeec227f4df4d4d1f6f5"
Feb 17 20:24:01 crc kubenswrapper[4793]: E0217 20:24:01.740945 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f70c6d052c98896000a760ded5229374ccb0a376850ffeec227f4df4d4d1f6f5\": container with ID starting with f70c6d052c98896000a760ded5229374ccb0a376850ffeec227f4df4d4d1f6f5 not found: ID does not exist" containerID="f70c6d052c98896000a760ded5229374ccb0a376850ffeec227f4df4d4d1f6f5"
Feb 17 20:24:01 crc kubenswrapper[4793]: I0217 20:24:01.740974 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f70c6d052c98896000a760ded5229374ccb0a376850ffeec227f4df4d4d1f6f5"} err="failed to get container status \"f70c6d052c98896000a760ded5229374ccb0a376850ffeec227f4df4d4d1f6f5\": rpc error: code = NotFound desc = could not find container \"f70c6d052c98896000a760ded5229374ccb0a376850ffeec227f4df4d4d1f6f5\": container with ID starting with f70c6d052c98896000a760ded5229374ccb0a376850ffeec227f4df4d4d1f6f5 not found: ID does not exist"
Feb 17 20:24:02 crc kubenswrapper[4793]: I0217 20:24:02.139458 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zhgpn"
Feb 17 20:24:03 crc kubenswrapper[4793]: I0217 20:24:03.550375 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65c0df33-e50d-4c5d-ad6e-cb9280c6ae96" path="/var/lib/kubelet/pods/65c0df33-e50d-4c5d-ad6e-cb9280c6ae96/volumes"
Feb 17 20:24:04 crc kubenswrapper[4793]: I0217 20:24:04.388043 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zhgpn"]
Feb 17 20:24:04 crc kubenswrapper[4793]: I0217 20:24:04.388290 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zhgpn" podUID="52552d30-68bc-4e2c-82d0-2900e4834680" containerName="registry-server" containerID="cri-o://12fb1677e168173631f4cf9d4b1a1c0abbbd620c3150e5cef42b0078aff0c559" gracePeriod=2
Feb 17 20:24:04 crc kubenswrapper[4793]: I0217 20:24:04.714223 4793 generic.go:334] "Generic (PLEG): container finished" podID="52552d30-68bc-4e2c-82d0-2900e4834680" containerID="12fb1677e168173631f4cf9d4b1a1c0abbbd620c3150e5cef42b0078aff0c559" exitCode=0
Feb 17 20:24:04 crc kubenswrapper[4793]: I0217 20:24:04.714266 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zhgpn" event={"ID":"52552d30-68bc-4e2c-82d0-2900e4834680","Type":"ContainerDied","Data":"12fb1677e168173631f4cf9d4b1a1c0abbbd620c3150e5cef42b0078aff0c559"}
Feb 17 20:24:04 crc kubenswrapper[4793]: I0217 20:24:04.714663 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zhgpn" event={"ID":"52552d30-68bc-4e2c-82d0-2900e4834680","Type":"ContainerDied","Data":"e35332e0822b885559c03bad13f59482fee90f1c1a91990cf50af4dde312c6c1"}
Feb 17 20:24:04 crc kubenswrapper[4793]: I0217 20:24:04.714756 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e35332e0822b885559c03bad13f59482fee90f1c1a91990cf50af4dde312c6c1"
Feb 17 20:24:04 crc kubenswrapper[4793]: I0217 20:24:04.755183 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zhgpn"
Feb 17 20:24:04 crc kubenswrapper[4793]: I0217 20:24:04.856546 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52552d30-68bc-4e2c-82d0-2900e4834680-utilities\") pod \"52552d30-68bc-4e2c-82d0-2900e4834680\" (UID: \"52552d30-68bc-4e2c-82d0-2900e4834680\") "
Feb 17 20:24:04 crc kubenswrapper[4793]: I0217 20:24:04.856616 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9kgj\" (UniqueName: \"kubernetes.io/projected/52552d30-68bc-4e2c-82d0-2900e4834680-kube-api-access-n9kgj\") pod \"52552d30-68bc-4e2c-82d0-2900e4834680\" (UID: \"52552d30-68bc-4e2c-82d0-2900e4834680\") "
Feb 17 20:24:04 crc kubenswrapper[4793]: I0217 20:24:04.857676 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52552d30-68bc-4e2c-82d0-2900e4834680-catalog-content\") pod \"52552d30-68bc-4e2c-82d0-2900e4834680\" (UID: \"52552d30-68bc-4e2c-82d0-2900e4834680\") "
Feb 17 20:24:04 crc kubenswrapper[4793]: I0217 20:24:04.858084 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52552d30-68bc-4e2c-82d0-2900e4834680-utilities" (OuterVolumeSpecName: "utilities") pod "52552d30-68bc-4e2c-82d0-2900e4834680" (UID: "52552d30-68bc-4e2c-82d0-2900e4834680"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 20:24:04 crc kubenswrapper[4793]: I0217 20:24:04.859375 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52552d30-68bc-4e2c-82d0-2900e4834680-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 20:24:04 crc kubenswrapper[4793]: I0217 20:24:04.867021 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52552d30-68bc-4e2c-82d0-2900e4834680-kube-api-access-n9kgj" (OuterVolumeSpecName: "kube-api-access-n9kgj") pod "52552d30-68bc-4e2c-82d0-2900e4834680" (UID: "52552d30-68bc-4e2c-82d0-2900e4834680"). InnerVolumeSpecName "kube-api-access-n9kgj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 20:24:04 crc kubenswrapper[4793]: I0217 20:24:04.886134 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52552d30-68bc-4e2c-82d0-2900e4834680-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "52552d30-68bc-4e2c-82d0-2900e4834680" (UID: "52552d30-68bc-4e2c-82d0-2900e4834680"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 20:24:04 crc kubenswrapper[4793]: I0217 20:24:04.960658 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52552d30-68bc-4e2c-82d0-2900e4834680-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 20:24:04 crc kubenswrapper[4793]: I0217 20:24:04.960744 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9kgj\" (UniqueName: \"kubernetes.io/projected/52552d30-68bc-4e2c-82d0-2900e4834680-kube-api-access-n9kgj\") on node \"crc\" DevicePath \"\""
Feb 17 20:24:05 crc kubenswrapper[4793]: I0217 20:24:05.735668 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zhgpn"
Feb 17 20:24:05 crc kubenswrapper[4793]: I0217 20:24:05.770973 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zhgpn"]
Feb 17 20:24:05 crc kubenswrapper[4793]: I0217 20:24:05.780217 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zhgpn"]
Feb 17 20:24:06 crc kubenswrapper[4793]: I0217 20:24:06.386103 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4f4bj"
Feb 17 20:24:07 crc kubenswrapper[4793]: I0217 20:24:07.547174 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52552d30-68bc-4e2c-82d0-2900e4834680" path="/var/lib/kubelet/pods/52552d30-68bc-4e2c-82d0-2900e4834680/volumes"
Feb 17 20:24:07 crc kubenswrapper[4793]: I0217 20:24:07.979170 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4f4bj"]
Feb 17 20:24:07 crc kubenswrapper[4793]: I0217 20:24:07.979389 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4f4bj" podUID="95440bb7-98fc-47c0-8da1-1363773a2503" containerName="registry-server" containerID="cri-o://22859498c2fd768174e31866a97cd54174f2216dfea004d88d6c13ab6a6ed350" gracePeriod=2
Feb 17 20:24:08 crc kubenswrapper[4793]: I0217 20:24:08.389871 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4f4bj"
Feb 17 20:24:08 crc kubenswrapper[4793]: I0217 20:24:08.507913 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95440bb7-98fc-47c0-8da1-1363773a2503-catalog-content\") pod \"95440bb7-98fc-47c0-8da1-1363773a2503\" (UID: \"95440bb7-98fc-47c0-8da1-1363773a2503\") "
Feb 17 20:24:08 crc kubenswrapper[4793]: I0217 20:24:08.508044 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l966n\" (UniqueName: \"kubernetes.io/projected/95440bb7-98fc-47c0-8da1-1363773a2503-kube-api-access-l966n\") pod \"95440bb7-98fc-47c0-8da1-1363773a2503\" (UID: \"95440bb7-98fc-47c0-8da1-1363773a2503\") "
Feb 17 20:24:08 crc kubenswrapper[4793]: I0217 20:24:08.508091 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95440bb7-98fc-47c0-8da1-1363773a2503-utilities\") pod \"95440bb7-98fc-47c0-8da1-1363773a2503\" (UID: \"95440bb7-98fc-47c0-8da1-1363773a2503\") "
Feb 17 20:24:08 crc kubenswrapper[4793]: I0217 20:24:08.508970 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95440bb7-98fc-47c0-8da1-1363773a2503-utilities" (OuterVolumeSpecName: "utilities") pod "95440bb7-98fc-47c0-8da1-1363773a2503" (UID: "95440bb7-98fc-47c0-8da1-1363773a2503"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 20:24:08 crc kubenswrapper[4793]: I0217 20:24:08.512509 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95440bb7-98fc-47c0-8da1-1363773a2503-kube-api-access-l966n" (OuterVolumeSpecName: "kube-api-access-l966n") pod "95440bb7-98fc-47c0-8da1-1363773a2503" (UID: "95440bb7-98fc-47c0-8da1-1363773a2503"). InnerVolumeSpecName "kube-api-access-l966n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 20:24:08 crc kubenswrapper[4793]: I0217 20:24:08.557312 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95440bb7-98fc-47c0-8da1-1363773a2503-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "95440bb7-98fc-47c0-8da1-1363773a2503" (UID: "95440bb7-98fc-47c0-8da1-1363773a2503"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 20:24:08 crc kubenswrapper[4793]: I0217 20:24:08.609832 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95440bb7-98fc-47c0-8da1-1363773a2503-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 20:24:08 crc kubenswrapper[4793]: I0217 20:24:08.609862 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95440bb7-98fc-47c0-8da1-1363773a2503-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 20:24:08 crc kubenswrapper[4793]: I0217 20:24:08.609873 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l966n\" (UniqueName: \"kubernetes.io/projected/95440bb7-98fc-47c0-8da1-1363773a2503-kube-api-access-l966n\") on node \"crc\" DevicePath \"\""
Feb 17 20:24:08 crc kubenswrapper[4793]: I0217 20:24:08.760252 4793 generic.go:334] "Generic (PLEG): container finished" podID="95440bb7-98fc-47c0-8da1-1363773a2503" containerID="22859498c2fd768174e31866a97cd54174f2216dfea004d88d6c13ab6a6ed350" exitCode=0
Feb 17 20:24:08 crc kubenswrapper[4793]: I0217 20:24:08.760307 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4f4bj" event={"ID":"95440bb7-98fc-47c0-8da1-1363773a2503","Type":"ContainerDied","Data":"22859498c2fd768174e31866a97cd54174f2216dfea004d88d6c13ab6a6ed350"}
Feb 17 20:24:08 crc kubenswrapper[4793]: I0217 20:24:08.760321 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4f4bj"
Feb 17 20:24:08 crc kubenswrapper[4793]: I0217 20:24:08.760345 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4f4bj" event={"ID":"95440bb7-98fc-47c0-8da1-1363773a2503","Type":"ContainerDied","Data":"8f589067c70e251174fb58f30b6887e05b23af084644b4c922f217f27266a9f6"}
Feb 17 20:24:08 crc kubenswrapper[4793]: I0217 20:24:08.760388 4793 scope.go:117] "RemoveContainer" containerID="22859498c2fd768174e31866a97cd54174f2216dfea004d88d6c13ab6a6ed350"
Feb 17 20:24:08 crc kubenswrapper[4793]: I0217 20:24:08.789102 4793 scope.go:117] "RemoveContainer" containerID="ceb153851ac5e150a8f714f2c880198aef1ed28c8382d6dba61edca6fd3f4764"
Feb 17 20:24:08 crc kubenswrapper[4793]: I0217 20:24:08.801865 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4f4bj"]
Feb 17 20:24:08 crc kubenswrapper[4793]: I0217 20:24:08.815514 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4f4bj"]
Feb 17 20:24:08 crc kubenswrapper[4793]: I0217 20:24:08.818978 4793 scope.go:117] "RemoveContainer" containerID="c252d2a5ac2c1e64ed9fae61bc7acb1903e5d6024c5f05fd00d3afc1dbb88a61"
Feb 17 20:24:08 crc kubenswrapper[4793]: I0217 20:24:08.848827 4793 scope.go:117] "RemoveContainer" containerID="22859498c2fd768174e31866a97cd54174f2216dfea004d88d6c13ab6a6ed350"
Feb 17 20:24:08 crc kubenswrapper[4793]: E0217 20:24:08.849307 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22859498c2fd768174e31866a97cd54174f2216dfea004d88d6c13ab6a6ed350\": container with ID starting with 22859498c2fd768174e31866a97cd54174f2216dfea004d88d6c13ab6a6ed350 not found: ID does not exist" containerID="22859498c2fd768174e31866a97cd54174f2216dfea004d88d6c13ab6a6ed350"
Feb 17 20:24:08 crc kubenswrapper[4793]: I0217 20:24:08.849350 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22859498c2fd768174e31866a97cd54174f2216dfea004d88d6c13ab6a6ed350"} err="failed to get container status \"22859498c2fd768174e31866a97cd54174f2216dfea004d88d6c13ab6a6ed350\": rpc error: code = NotFound desc = could not find container \"22859498c2fd768174e31866a97cd54174f2216dfea004d88d6c13ab6a6ed350\": container with ID starting with 22859498c2fd768174e31866a97cd54174f2216dfea004d88d6c13ab6a6ed350 not found: ID does not exist"
Feb 17 20:24:08 crc kubenswrapper[4793]: I0217 20:24:08.849392 4793 scope.go:117] "RemoveContainer" containerID="ceb153851ac5e150a8f714f2c880198aef1ed28c8382d6dba61edca6fd3f4764"
Feb 17 20:24:08 crc kubenswrapper[4793]: E0217 20:24:08.849681 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ceb153851ac5e150a8f714f2c880198aef1ed28c8382d6dba61edca6fd3f4764\": container with ID starting with ceb153851ac5e150a8f714f2c880198aef1ed28c8382d6dba61edca6fd3f4764 not found: ID does not exist" containerID="ceb153851ac5e150a8f714f2c880198aef1ed28c8382d6dba61edca6fd3f4764"
Feb 17 20:24:08 crc kubenswrapper[4793]: I0217 20:24:08.849747 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ceb153851ac5e150a8f714f2c880198aef1ed28c8382d6dba61edca6fd3f4764"} err="failed to get container status \"ceb153851ac5e150a8f714f2c880198aef1ed28c8382d6dba61edca6fd3f4764\": rpc error: code = NotFound desc = could not find container \"ceb153851ac5e150a8f714f2c880198aef1ed28c8382d6dba61edca6fd3f4764\": container with ID starting with ceb153851ac5e150a8f714f2c880198aef1ed28c8382d6dba61edca6fd3f4764 not found: ID does not exist"
Feb 17 20:24:08 crc kubenswrapper[4793]: I0217 20:24:08.849772 4793 scope.go:117] "RemoveContainer" containerID="c252d2a5ac2c1e64ed9fae61bc7acb1903e5d6024c5f05fd00d3afc1dbb88a61"
Feb 17 20:24:08 crc kubenswrapper[4793]: E0217 20:24:08.850015 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c252d2a5ac2c1e64ed9fae61bc7acb1903e5d6024c5f05fd00d3afc1dbb88a61\": container with ID starting with c252d2a5ac2c1e64ed9fae61bc7acb1903e5d6024c5f05fd00d3afc1dbb88a61 not found: ID does not exist" containerID="c252d2a5ac2c1e64ed9fae61bc7acb1903e5d6024c5f05fd00d3afc1dbb88a61"
Feb 17 20:24:08 crc kubenswrapper[4793]: I0217 20:24:08.850054 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c252d2a5ac2c1e64ed9fae61bc7acb1903e5d6024c5f05fd00d3afc1dbb88a61"} err="failed to get container status \"c252d2a5ac2c1e64ed9fae61bc7acb1903e5d6024c5f05fd00d3afc1dbb88a61\": rpc error: code = NotFound desc = could not find container \"c252d2a5ac2c1e64ed9fae61bc7acb1903e5d6024c5f05fd00d3afc1dbb88a61\": container with ID starting with c252d2a5ac2c1e64ed9fae61bc7acb1903e5d6024c5f05fd00d3afc1dbb88a61 not found: ID does not exist"
Feb 17 20:24:09 crc kubenswrapper[4793]: I0217 20:24:09.550966 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95440bb7-98fc-47c0-8da1-1363773a2503" path="/var/lib/kubelet/pods/95440bb7-98fc-47c0-8da1-1363773a2503/volumes"
Feb 17 20:24:12 crc kubenswrapper[4793]: I0217 20:24:12.789315 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-4v2ch" event={"ID":"00b83896-fe8e-49bc-b762-6dfb14777fd7","Type":"ContainerStarted","Data":"5bba7c1e58555a25434f5c59265139d6365e07214efe0096193b15cd9bc2c554"}
Feb 17 20:24:12 crc kubenswrapper[4793]: I0217 20:24:12.790155 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-4v2ch"
Feb 17 20:24:12 crc kubenswrapper[4793]: I0217 20:24:12.811663 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-4v2ch" podStartSLOduration=2.6806677 podStartE2EDuration="55.811646015s" podCreationTimestamp="2026-02-17 20:23:17 +0000 UTC" firstStartedPulling="2026-02-17 20:23:18.954209308 +0000 UTC m=+874.245907619" lastFinishedPulling="2026-02-17 20:24:12.085187633 +0000 UTC m=+927.376885934" observedRunningTime="2026-02-17 20:24:12.806718973 +0000 UTC m=+928.098417284" watchObservedRunningTime="2026-02-17 20:24:12.811646015 +0000 UTC m=+928.103344316"
Feb 17 20:24:13 crc kubenswrapper[4793]: I0217 20:24:13.797861 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-qjv29" event={"ID":"df82a1fe-a387-45c4-bad1-d2fca982baaa","Type":"ContainerStarted","Data":"d92eba36883b801a6279ca581a9d18dfebc3207b8b67871f9f3d51d1d8a02411"}
Feb 17 20:24:13 crc kubenswrapper[4793]: I0217 20:24:13.798162 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-qjv29"
Feb 17 20:24:13 crc kubenswrapper[4793]: I0217 20:24:13.799207 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-jrjzp" event={"ID":"6ae6b399-39bc-446a-af92-4d4b7fc18361","Type":"ContainerStarted","Data":"523aaa82f8e900e1c448213e28908c077b3790fd85a2ec032a1cc72c3dc2e020"}
Feb 17 20:24:13 crc kubenswrapper[4793]: I0217 20:24:13.799425 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-jrjzp"
Feb 17 20:24:13 crc kubenswrapper[4793]: I0217 20:24:13.820435 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-qjv29" podStartSLOduration=3.804855035 podStartE2EDuration="57.820418832s" podCreationTimestamp="2026-02-17 20:23:16 +0000 UTC" firstStartedPulling="2026-02-17 20:23:18.931816671 +0000 UTC m=+874.223514982" lastFinishedPulling="2026-02-17 20:24:12.947380448 +0000 UTC m=+928.239078779" observedRunningTime="2026-02-17 20:24:13.814577756 +0000 UTC m=+929.106276067" watchObservedRunningTime="2026-02-17 20:24:13.820418832 +0000 UTC m=+929.112117143"
Feb 17 20:24:13 crc kubenswrapper[4793]: I0217 20:24:13.832072 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-jrjzp" podStartSLOduration=3.769463886 podStartE2EDuration="57.832053271s" podCreationTimestamp="2026-02-17 20:23:16 +0000 UTC" firstStartedPulling="2026-02-17 20:23:18.954539846 +0000 UTC m=+874.246238157" lastFinishedPulling="2026-02-17 20:24:13.017129231 +0000 UTC m=+928.308827542" observedRunningTime="2026-02-17 20:24:13.827782115 +0000 UTC m=+929.119480456" watchObservedRunningTime="2026-02-17 20:24:13.832053271 +0000 UTC m=+929.123751592"
Feb 17 20:24:17 crc kubenswrapper[4793]: I0217 20:24:17.637542 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-4v2ch"
Feb 17 20:24:20 crc kubenswrapper[4793]: I0217 20:24:20.102302 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 20:24:20 crc kubenswrapper[4793]: I0217 20:24:20.102616 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 20:24:20 crc kubenswrapper[4793]: I0217 20:24:20.102667 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf"
Feb 17 20:24:20 crc kubenswrapper[4793]: I0217 20:24:20.103417 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8233173cc6f0085dcde5889ba083171fcabff8c918ff96d6ffa28816106b888f"} pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 17 20:24:20 crc kubenswrapper[4793]: I0217 20:24:20.103478 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" containerID="cri-o://8233173cc6f0085dcde5889ba083171fcabff8c918ff96d6ffa28816106b888f" gracePeriod=600
Feb 17 20:24:20 crc kubenswrapper[4793]: I0217 20:24:20.858125 4793 generic.go:334] "Generic (PLEG): container finished" podID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerID="8233173cc6f0085dcde5889ba083171fcabff8c918ff96d6ffa28816106b888f" exitCode=0
Feb 17 20:24:20 crc kubenswrapper[4793]: I0217 20:24:20.858208 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerDied","Data":"8233173cc6f0085dcde5889ba083171fcabff8c918ff96d6ffa28816106b888f"}
Feb 17 20:24:20 crc kubenswrapper[4793]: I0217 20:24:20.858547 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerStarted","Data":"cc36425746166cdc0b95e87a928d74d18e09bb391b29761173d2714eb0234de5"}
Feb 17 20:24:20 crc kubenswrapper[4793]: I0217 20:24:20.858586 4793 scope.go:117]
"RemoveContainer" containerID="c6dd2a040783fcbbb25effeb44da963123b5a74b9553a93ac26a3ede471dffb7" Feb 17 20:24:26 crc kubenswrapper[4793]: I0217 20:24:26.805083 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kzswg"] Feb 17 20:24:26 crc kubenswrapper[4793]: E0217 20:24:26.806041 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52552d30-68bc-4e2c-82d0-2900e4834680" containerName="extract-content" Feb 17 20:24:26 crc kubenswrapper[4793]: I0217 20:24:26.806061 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="52552d30-68bc-4e2c-82d0-2900e4834680" containerName="extract-content" Feb 17 20:24:26 crc kubenswrapper[4793]: E0217 20:24:26.806080 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95440bb7-98fc-47c0-8da1-1363773a2503" containerName="registry-server" Feb 17 20:24:26 crc kubenswrapper[4793]: I0217 20:24:26.806087 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="95440bb7-98fc-47c0-8da1-1363773a2503" containerName="registry-server" Feb 17 20:24:26 crc kubenswrapper[4793]: E0217 20:24:26.806097 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65c0df33-e50d-4c5d-ad6e-cb9280c6ae96" containerName="extract-content" Feb 17 20:24:26 crc kubenswrapper[4793]: I0217 20:24:26.806121 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="65c0df33-e50d-4c5d-ad6e-cb9280c6ae96" containerName="extract-content" Feb 17 20:24:26 crc kubenswrapper[4793]: E0217 20:24:26.806135 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65c0df33-e50d-4c5d-ad6e-cb9280c6ae96" containerName="registry-server" Feb 17 20:24:26 crc kubenswrapper[4793]: I0217 20:24:26.806143 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="65c0df33-e50d-4c5d-ad6e-cb9280c6ae96" containerName="registry-server" Feb 17 20:24:26 crc kubenswrapper[4793]: E0217 20:24:26.806154 4793 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="52552d30-68bc-4e2c-82d0-2900e4834680" containerName="registry-server" Feb 17 20:24:26 crc kubenswrapper[4793]: I0217 20:24:26.806162 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="52552d30-68bc-4e2c-82d0-2900e4834680" containerName="registry-server" Feb 17 20:24:26 crc kubenswrapper[4793]: E0217 20:24:26.806174 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95440bb7-98fc-47c0-8da1-1363773a2503" containerName="extract-content" Feb 17 20:24:26 crc kubenswrapper[4793]: I0217 20:24:26.806180 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="95440bb7-98fc-47c0-8da1-1363773a2503" containerName="extract-content" Feb 17 20:24:26 crc kubenswrapper[4793]: E0217 20:24:26.806191 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95440bb7-98fc-47c0-8da1-1363773a2503" containerName="extract-utilities" Feb 17 20:24:26 crc kubenswrapper[4793]: I0217 20:24:26.806197 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="95440bb7-98fc-47c0-8da1-1363773a2503" containerName="extract-utilities" Feb 17 20:24:26 crc kubenswrapper[4793]: E0217 20:24:26.806212 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65c0df33-e50d-4c5d-ad6e-cb9280c6ae96" containerName="extract-utilities" Feb 17 20:24:26 crc kubenswrapper[4793]: I0217 20:24:26.806219 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="65c0df33-e50d-4c5d-ad6e-cb9280c6ae96" containerName="extract-utilities" Feb 17 20:24:26 crc kubenswrapper[4793]: E0217 20:24:26.806232 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52552d30-68bc-4e2c-82d0-2900e4834680" containerName="extract-utilities" Feb 17 20:24:26 crc kubenswrapper[4793]: I0217 20:24:26.806239 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="52552d30-68bc-4e2c-82d0-2900e4834680" containerName="extract-utilities" Feb 17 20:24:26 crc kubenswrapper[4793]: I0217 20:24:26.806409 4793 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="95440bb7-98fc-47c0-8da1-1363773a2503" containerName="registry-server" Feb 17 20:24:26 crc kubenswrapper[4793]: I0217 20:24:26.806427 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="52552d30-68bc-4e2c-82d0-2900e4834680" containerName="registry-server" Feb 17 20:24:26 crc kubenswrapper[4793]: I0217 20:24:26.806445 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="65c0df33-e50d-4c5d-ad6e-cb9280c6ae96" containerName="registry-server" Feb 17 20:24:26 crc kubenswrapper[4793]: I0217 20:24:26.807735 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kzswg" Feb 17 20:24:26 crc kubenswrapper[4793]: I0217 20:24:26.828260 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kzswg"] Feb 17 20:24:26 crc kubenswrapper[4793]: I0217 20:24:26.873650 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtxtn\" (UniqueName: \"kubernetes.io/projected/ac716e14-5443-4fc6-8886-65b96643a10f-kube-api-access-vtxtn\") pod \"certified-operators-kzswg\" (UID: \"ac716e14-5443-4fc6-8886-65b96643a10f\") " pod="openshift-marketplace/certified-operators-kzswg" Feb 17 20:24:26 crc kubenswrapper[4793]: I0217 20:24:26.873705 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac716e14-5443-4fc6-8886-65b96643a10f-utilities\") pod \"certified-operators-kzswg\" (UID: \"ac716e14-5443-4fc6-8886-65b96643a10f\") " pod="openshift-marketplace/certified-operators-kzswg" Feb 17 20:24:26 crc kubenswrapper[4793]: I0217 20:24:26.873741 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac716e14-5443-4fc6-8886-65b96643a10f-catalog-content\") pod \"certified-operators-kzswg\" (UID: 
\"ac716e14-5443-4fc6-8886-65b96643a10f\") " pod="openshift-marketplace/certified-operators-kzswg" Feb 17 20:24:26 crc kubenswrapper[4793]: I0217 20:24:26.974498 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtxtn\" (UniqueName: \"kubernetes.io/projected/ac716e14-5443-4fc6-8886-65b96643a10f-kube-api-access-vtxtn\") pod \"certified-operators-kzswg\" (UID: \"ac716e14-5443-4fc6-8886-65b96643a10f\") " pod="openshift-marketplace/certified-operators-kzswg" Feb 17 20:24:26 crc kubenswrapper[4793]: I0217 20:24:26.974545 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac716e14-5443-4fc6-8886-65b96643a10f-utilities\") pod \"certified-operators-kzswg\" (UID: \"ac716e14-5443-4fc6-8886-65b96643a10f\") " pod="openshift-marketplace/certified-operators-kzswg" Feb 17 20:24:26 crc kubenswrapper[4793]: I0217 20:24:26.974592 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac716e14-5443-4fc6-8886-65b96643a10f-catalog-content\") pod \"certified-operators-kzswg\" (UID: \"ac716e14-5443-4fc6-8886-65b96643a10f\") " pod="openshift-marketplace/certified-operators-kzswg" Feb 17 20:24:26 crc kubenswrapper[4793]: I0217 20:24:26.975099 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac716e14-5443-4fc6-8886-65b96643a10f-catalog-content\") pod \"certified-operators-kzswg\" (UID: \"ac716e14-5443-4fc6-8886-65b96643a10f\") " pod="openshift-marketplace/certified-operators-kzswg" Feb 17 20:24:26 crc kubenswrapper[4793]: I0217 20:24:26.975219 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac716e14-5443-4fc6-8886-65b96643a10f-utilities\") pod \"certified-operators-kzswg\" (UID: \"ac716e14-5443-4fc6-8886-65b96643a10f\") 
" pod="openshift-marketplace/certified-operators-kzswg" Feb 17 20:24:27 crc kubenswrapper[4793]: I0217 20:24:27.003133 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtxtn\" (UniqueName: \"kubernetes.io/projected/ac716e14-5443-4fc6-8886-65b96643a10f-kube-api-access-vtxtn\") pod \"certified-operators-kzswg\" (UID: \"ac716e14-5443-4fc6-8886-65b96643a10f\") " pod="openshift-marketplace/certified-operators-kzswg" Feb 17 20:24:27 crc kubenswrapper[4793]: I0217 20:24:27.125602 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kzswg" Feb 17 20:24:27 crc kubenswrapper[4793]: I0217 20:24:27.373556 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-qjv29" Feb 17 20:24:27 crc kubenswrapper[4793]: I0217 20:24:27.464613 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-jrjzp" Feb 17 20:24:27 crc kubenswrapper[4793]: I0217 20:24:27.716425 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kzswg"] Feb 17 20:24:27 crc kubenswrapper[4793]: I0217 20:24:27.922483 4793 generic.go:334] "Generic (PLEG): container finished" podID="ac716e14-5443-4fc6-8886-65b96643a10f" containerID="6003dd6908a6378140f2e95c8bcdca0788a8ea2bbb38a6943f8ebfa3d04f7303" exitCode=0 Feb 17 20:24:27 crc kubenswrapper[4793]: I0217 20:24:27.922533 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kzswg" event={"ID":"ac716e14-5443-4fc6-8886-65b96643a10f","Type":"ContainerDied","Data":"6003dd6908a6378140f2e95c8bcdca0788a8ea2bbb38a6943f8ebfa3d04f7303"} Feb 17 20:24:27 crc kubenswrapper[4793]: I0217 20:24:27.922761 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-kzswg" event={"ID":"ac716e14-5443-4fc6-8886-65b96643a10f","Type":"ContainerStarted","Data":"1d36e54c088c0381c313ad6f3241ea44715935e02fd33a1bf84999c0b5503ffb"} Feb 17 20:24:28 crc kubenswrapper[4793]: I0217 20:24:28.932504 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kzswg" event={"ID":"ac716e14-5443-4fc6-8886-65b96643a10f","Type":"ContainerStarted","Data":"e19afe92426459351d234862f85d6e4961a426794c2e63f33443083227607795"} Feb 17 20:24:29 crc kubenswrapper[4793]: I0217 20:24:29.943758 4793 generic.go:334] "Generic (PLEG): container finished" podID="ac716e14-5443-4fc6-8886-65b96643a10f" containerID="e19afe92426459351d234862f85d6e4961a426794c2e63f33443083227607795" exitCode=0 Feb 17 20:24:29 crc kubenswrapper[4793]: I0217 20:24:29.943980 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kzswg" event={"ID":"ac716e14-5443-4fc6-8886-65b96643a10f","Type":"ContainerDied","Data":"e19afe92426459351d234862f85d6e4961a426794c2e63f33443083227607795"} Feb 17 20:24:30 crc kubenswrapper[4793]: I0217 20:24:30.954015 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kzswg" event={"ID":"ac716e14-5443-4fc6-8886-65b96643a10f","Type":"ContainerStarted","Data":"0a2e3e5dfcf2e006cec339235ad716ed22b61bdfc35df887ecf8bffc7be942cb"} Feb 17 20:24:30 crc kubenswrapper[4793]: I0217 20:24:30.980240 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kzswg" podStartSLOduration=2.585136392 podStartE2EDuration="4.980223878s" podCreationTimestamp="2026-02-17 20:24:26 +0000 UTC" firstStartedPulling="2026-02-17 20:24:27.923653925 +0000 UTC m=+943.215352236" lastFinishedPulling="2026-02-17 20:24:30.318741391 +0000 UTC m=+945.610439722" observedRunningTime="2026-02-17 20:24:30.971536502 +0000 UTC m=+946.263234833" 
watchObservedRunningTime="2026-02-17 20:24:30.980223878 +0000 UTC m=+946.271922209" Feb 17 20:24:37 crc kubenswrapper[4793]: I0217 20:24:37.125902 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kzswg" Feb 17 20:24:37 crc kubenswrapper[4793]: I0217 20:24:37.126174 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kzswg" Feb 17 20:24:37 crc kubenswrapper[4793]: I0217 20:24:37.163428 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kzswg" Feb 17 20:24:38 crc kubenswrapper[4793]: I0217 20:24:38.092660 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kzswg" Feb 17 20:24:38 crc kubenswrapper[4793]: I0217 20:24:38.138375 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kzswg"] Feb 17 20:24:40 crc kubenswrapper[4793]: I0217 20:24:40.052038 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kzswg" podUID="ac716e14-5443-4fc6-8886-65b96643a10f" containerName="registry-server" containerID="cri-o://0a2e3e5dfcf2e006cec339235ad716ed22b61bdfc35df887ecf8bffc7be942cb" gracePeriod=2 Feb 17 20:24:40 crc kubenswrapper[4793]: I0217 20:24:40.966573 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kzswg" Feb 17 20:24:41 crc kubenswrapper[4793]: I0217 20:24:41.061137 4793 generic.go:334] "Generic (PLEG): container finished" podID="ac716e14-5443-4fc6-8886-65b96643a10f" containerID="0a2e3e5dfcf2e006cec339235ad716ed22b61bdfc35df887ecf8bffc7be942cb" exitCode=0 Feb 17 20:24:41 crc kubenswrapper[4793]: I0217 20:24:41.061186 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kzswg" event={"ID":"ac716e14-5443-4fc6-8886-65b96643a10f","Type":"ContainerDied","Data":"0a2e3e5dfcf2e006cec339235ad716ed22b61bdfc35df887ecf8bffc7be942cb"} Feb 17 20:24:41 crc kubenswrapper[4793]: I0217 20:24:41.061217 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kzswg" event={"ID":"ac716e14-5443-4fc6-8886-65b96643a10f","Type":"ContainerDied","Data":"1d36e54c088c0381c313ad6f3241ea44715935e02fd33a1bf84999c0b5503ffb"} Feb 17 20:24:41 crc kubenswrapper[4793]: I0217 20:24:41.061213 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kzswg" Feb 17 20:24:41 crc kubenswrapper[4793]: I0217 20:24:41.061234 4793 scope.go:117] "RemoveContainer" containerID="0a2e3e5dfcf2e006cec339235ad716ed22b61bdfc35df887ecf8bffc7be942cb" Feb 17 20:24:41 crc kubenswrapper[4793]: I0217 20:24:41.071280 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac716e14-5443-4fc6-8886-65b96643a10f-utilities\") pod \"ac716e14-5443-4fc6-8886-65b96643a10f\" (UID: \"ac716e14-5443-4fc6-8886-65b96643a10f\") " Feb 17 20:24:41 crc kubenswrapper[4793]: I0217 20:24:41.071328 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac716e14-5443-4fc6-8886-65b96643a10f-catalog-content\") pod \"ac716e14-5443-4fc6-8886-65b96643a10f\" (UID: \"ac716e14-5443-4fc6-8886-65b96643a10f\") " Feb 17 20:24:41 crc kubenswrapper[4793]: I0217 20:24:41.071370 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtxtn\" (UniqueName: \"kubernetes.io/projected/ac716e14-5443-4fc6-8886-65b96643a10f-kube-api-access-vtxtn\") pod \"ac716e14-5443-4fc6-8886-65b96643a10f\" (UID: \"ac716e14-5443-4fc6-8886-65b96643a10f\") " Feb 17 20:24:41 crc kubenswrapper[4793]: I0217 20:24:41.072348 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac716e14-5443-4fc6-8886-65b96643a10f-utilities" (OuterVolumeSpecName: "utilities") pod "ac716e14-5443-4fc6-8886-65b96643a10f" (UID: "ac716e14-5443-4fc6-8886-65b96643a10f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:24:41 crc kubenswrapper[4793]: I0217 20:24:41.076483 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac716e14-5443-4fc6-8886-65b96643a10f-kube-api-access-vtxtn" (OuterVolumeSpecName: "kube-api-access-vtxtn") pod "ac716e14-5443-4fc6-8886-65b96643a10f" (UID: "ac716e14-5443-4fc6-8886-65b96643a10f"). InnerVolumeSpecName "kube-api-access-vtxtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:24:41 crc kubenswrapper[4793]: I0217 20:24:41.104224 4793 scope.go:117] "RemoveContainer" containerID="e19afe92426459351d234862f85d6e4961a426794c2e63f33443083227607795" Feb 17 20:24:41 crc kubenswrapper[4793]: I0217 20:24:41.120091 4793 scope.go:117] "RemoveContainer" containerID="6003dd6908a6378140f2e95c8bcdca0788a8ea2bbb38a6943f8ebfa3d04f7303" Feb 17 20:24:41 crc kubenswrapper[4793]: I0217 20:24:41.143022 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac716e14-5443-4fc6-8886-65b96643a10f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ac716e14-5443-4fc6-8886-65b96643a10f" (UID: "ac716e14-5443-4fc6-8886-65b96643a10f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:24:41 crc kubenswrapper[4793]: I0217 20:24:41.149437 4793 scope.go:117] "RemoveContainer" containerID="0a2e3e5dfcf2e006cec339235ad716ed22b61bdfc35df887ecf8bffc7be942cb" Feb 17 20:24:41 crc kubenswrapper[4793]: E0217 20:24:41.149903 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a2e3e5dfcf2e006cec339235ad716ed22b61bdfc35df887ecf8bffc7be942cb\": container with ID starting with 0a2e3e5dfcf2e006cec339235ad716ed22b61bdfc35df887ecf8bffc7be942cb not found: ID does not exist" containerID="0a2e3e5dfcf2e006cec339235ad716ed22b61bdfc35df887ecf8bffc7be942cb" Feb 17 20:24:41 crc kubenswrapper[4793]: I0217 20:24:41.149948 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a2e3e5dfcf2e006cec339235ad716ed22b61bdfc35df887ecf8bffc7be942cb"} err="failed to get container status \"0a2e3e5dfcf2e006cec339235ad716ed22b61bdfc35df887ecf8bffc7be942cb\": rpc error: code = NotFound desc = could not find container \"0a2e3e5dfcf2e006cec339235ad716ed22b61bdfc35df887ecf8bffc7be942cb\": container with ID starting with 0a2e3e5dfcf2e006cec339235ad716ed22b61bdfc35df887ecf8bffc7be942cb not found: ID does not exist" Feb 17 20:24:41 crc kubenswrapper[4793]: I0217 20:24:41.149970 4793 scope.go:117] "RemoveContainer" containerID="e19afe92426459351d234862f85d6e4961a426794c2e63f33443083227607795" Feb 17 20:24:41 crc kubenswrapper[4793]: E0217 20:24:41.150446 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e19afe92426459351d234862f85d6e4961a426794c2e63f33443083227607795\": container with ID starting with e19afe92426459351d234862f85d6e4961a426794c2e63f33443083227607795 not found: ID does not exist" containerID="e19afe92426459351d234862f85d6e4961a426794c2e63f33443083227607795" Feb 17 20:24:41 crc kubenswrapper[4793]: I0217 20:24:41.150524 
4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e19afe92426459351d234862f85d6e4961a426794c2e63f33443083227607795"} err="failed to get container status \"e19afe92426459351d234862f85d6e4961a426794c2e63f33443083227607795\": rpc error: code = NotFound desc = could not find container \"e19afe92426459351d234862f85d6e4961a426794c2e63f33443083227607795\": container with ID starting with e19afe92426459351d234862f85d6e4961a426794c2e63f33443083227607795 not found: ID does not exist" Feb 17 20:24:41 crc kubenswrapper[4793]: I0217 20:24:41.150570 4793 scope.go:117] "RemoveContainer" containerID="6003dd6908a6378140f2e95c8bcdca0788a8ea2bbb38a6943f8ebfa3d04f7303" Feb 17 20:24:41 crc kubenswrapper[4793]: E0217 20:24:41.151157 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6003dd6908a6378140f2e95c8bcdca0788a8ea2bbb38a6943f8ebfa3d04f7303\": container with ID starting with 6003dd6908a6378140f2e95c8bcdca0788a8ea2bbb38a6943f8ebfa3d04f7303 not found: ID does not exist" containerID="6003dd6908a6378140f2e95c8bcdca0788a8ea2bbb38a6943f8ebfa3d04f7303" Feb 17 20:24:41 crc kubenswrapper[4793]: I0217 20:24:41.151216 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6003dd6908a6378140f2e95c8bcdca0788a8ea2bbb38a6943f8ebfa3d04f7303"} err="failed to get container status \"6003dd6908a6378140f2e95c8bcdca0788a8ea2bbb38a6943f8ebfa3d04f7303\": rpc error: code = NotFound desc = could not find container \"6003dd6908a6378140f2e95c8bcdca0788a8ea2bbb38a6943f8ebfa3d04f7303\": container with ID starting with 6003dd6908a6378140f2e95c8bcdca0788a8ea2bbb38a6943f8ebfa3d04f7303 not found: ID does not exist" Feb 17 20:24:41 crc kubenswrapper[4793]: I0217 20:24:41.173355 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac716e14-5443-4fc6-8886-65b96643a10f-utilities\") on node 
\"crc\" DevicePath \"\"" Feb 17 20:24:41 crc kubenswrapper[4793]: I0217 20:24:41.173384 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac716e14-5443-4fc6-8886-65b96643a10f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 20:24:41 crc kubenswrapper[4793]: I0217 20:24:41.173394 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtxtn\" (UniqueName: \"kubernetes.io/projected/ac716e14-5443-4fc6-8886-65b96643a10f-kube-api-access-vtxtn\") on node \"crc\" DevicePath \"\"" Feb 17 20:24:41 crc kubenswrapper[4793]: I0217 20:24:41.402667 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kzswg"] Feb 17 20:24:41 crc kubenswrapper[4793]: I0217 20:24:41.406447 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kzswg"] Feb 17 20:24:41 crc kubenswrapper[4793]: I0217 20:24:41.551577 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac716e14-5443-4fc6-8886-65b96643a10f" path="/var/lib/kubelet/pods/ac716e14-5443-4fc6-8886-65b96643a10f/volumes" Feb 17 20:24:45 crc kubenswrapper[4793]: I0217 20:24:45.550542 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78ff9dfd65-kmwt8"] Feb 17 20:24:45 crc kubenswrapper[4793]: E0217 20:24:45.551785 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac716e14-5443-4fc6-8886-65b96643a10f" containerName="extract-utilities" Feb 17 20:24:45 crc kubenswrapper[4793]: I0217 20:24:45.551802 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac716e14-5443-4fc6-8886-65b96643a10f" containerName="extract-utilities" Feb 17 20:24:45 crc kubenswrapper[4793]: E0217 20:24:45.551838 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac716e14-5443-4fc6-8886-65b96643a10f" containerName="registry-server" Feb 17 20:24:45 crc kubenswrapper[4793]: I0217 
20:24:45.551844 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac716e14-5443-4fc6-8886-65b96643a10f" containerName="registry-server" Feb 17 20:24:45 crc kubenswrapper[4793]: E0217 20:24:45.551859 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac716e14-5443-4fc6-8886-65b96643a10f" containerName="extract-content" Feb 17 20:24:45 crc kubenswrapper[4793]: I0217 20:24:45.551868 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac716e14-5443-4fc6-8886-65b96643a10f" containerName="extract-content" Feb 17 20:24:45 crc kubenswrapper[4793]: I0217 20:24:45.552045 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac716e14-5443-4fc6-8886-65b96643a10f" containerName="registry-server" Feb 17 20:24:45 crc kubenswrapper[4793]: I0217 20:24:45.554430 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78ff9dfd65-kmwt8" Feb 17 20:24:45 crc kubenswrapper[4793]: I0217 20:24:45.563162 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 17 20:24:45 crc kubenswrapper[4793]: I0217 20:24:45.563202 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-2vjxc" Feb 17 20:24:45 crc kubenswrapper[4793]: I0217 20:24:45.563425 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 17 20:24:45 crc kubenswrapper[4793]: I0217 20:24:45.563504 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 17 20:24:45 crc kubenswrapper[4793]: I0217 20:24:45.564009 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78ff9dfd65-kmwt8"] Feb 17 20:24:45 crc kubenswrapper[4793]: I0217 20:24:45.626368 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-574fdb7f99-44t8n"] Feb 17 20:24:45 crc kubenswrapper[4793]: I0217 20:24:45.630205 4793 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-574fdb7f99-44t8n" Feb 17 20:24:45 crc kubenswrapper[4793]: I0217 20:24:45.632836 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42837205-82b2-4e22-91a1-4ccf8b4d6126-config\") pod \"dnsmasq-dns-78ff9dfd65-kmwt8\" (UID: \"42837205-82b2-4e22-91a1-4ccf8b4d6126\") " pod="openstack/dnsmasq-dns-78ff9dfd65-kmwt8" Feb 17 20:24:45 crc kubenswrapper[4793]: I0217 20:24:45.632882 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr8md\" (UniqueName: \"kubernetes.io/projected/42837205-82b2-4e22-91a1-4ccf8b4d6126-kube-api-access-vr8md\") pod \"dnsmasq-dns-78ff9dfd65-kmwt8\" (UID: \"42837205-82b2-4e22-91a1-4ccf8b4d6126\") " pod="openstack/dnsmasq-dns-78ff9dfd65-kmwt8" Feb 17 20:24:45 crc kubenswrapper[4793]: I0217 20:24:45.633309 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 17 20:24:45 crc kubenswrapper[4793]: I0217 20:24:45.638403 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-574fdb7f99-44t8n"] Feb 17 20:24:45 crc kubenswrapper[4793]: I0217 20:24:45.735158 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfcz4\" (UniqueName: \"kubernetes.io/projected/0032fa4d-287b-4471-917d-a032208b99e5-kube-api-access-lfcz4\") pod \"dnsmasq-dns-574fdb7f99-44t8n\" (UID: \"0032fa4d-287b-4471-917d-a032208b99e5\") " pod="openstack/dnsmasq-dns-574fdb7f99-44t8n" Feb 17 20:24:45 crc kubenswrapper[4793]: I0217 20:24:45.735233 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0032fa4d-287b-4471-917d-a032208b99e5-config\") pod \"dnsmasq-dns-574fdb7f99-44t8n\" (UID: \"0032fa4d-287b-4471-917d-a032208b99e5\") 
" pod="openstack/dnsmasq-dns-574fdb7f99-44t8n" Feb 17 20:24:45 crc kubenswrapper[4793]: I0217 20:24:45.735365 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42837205-82b2-4e22-91a1-4ccf8b4d6126-config\") pod \"dnsmasq-dns-78ff9dfd65-kmwt8\" (UID: \"42837205-82b2-4e22-91a1-4ccf8b4d6126\") " pod="openstack/dnsmasq-dns-78ff9dfd65-kmwt8" Feb 17 20:24:45 crc kubenswrapper[4793]: I0217 20:24:45.735429 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr8md\" (UniqueName: \"kubernetes.io/projected/42837205-82b2-4e22-91a1-4ccf8b4d6126-kube-api-access-vr8md\") pod \"dnsmasq-dns-78ff9dfd65-kmwt8\" (UID: \"42837205-82b2-4e22-91a1-4ccf8b4d6126\") " pod="openstack/dnsmasq-dns-78ff9dfd65-kmwt8" Feb 17 20:24:45 crc kubenswrapper[4793]: I0217 20:24:45.735489 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0032fa4d-287b-4471-917d-a032208b99e5-dns-svc\") pod \"dnsmasq-dns-574fdb7f99-44t8n\" (UID: \"0032fa4d-287b-4471-917d-a032208b99e5\") " pod="openstack/dnsmasq-dns-574fdb7f99-44t8n" Feb 17 20:24:45 crc kubenswrapper[4793]: I0217 20:24:45.738988 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42837205-82b2-4e22-91a1-4ccf8b4d6126-config\") pod \"dnsmasq-dns-78ff9dfd65-kmwt8\" (UID: \"42837205-82b2-4e22-91a1-4ccf8b4d6126\") " pod="openstack/dnsmasq-dns-78ff9dfd65-kmwt8" Feb 17 20:24:45 crc kubenswrapper[4793]: I0217 20:24:45.758747 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr8md\" (UniqueName: \"kubernetes.io/projected/42837205-82b2-4e22-91a1-4ccf8b4d6126-kube-api-access-vr8md\") pod \"dnsmasq-dns-78ff9dfd65-kmwt8\" (UID: \"42837205-82b2-4e22-91a1-4ccf8b4d6126\") " pod="openstack/dnsmasq-dns-78ff9dfd65-kmwt8" Feb 17 20:24:45 crc 
kubenswrapper[4793]: I0217 20:24:45.837971 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfcz4\" (UniqueName: \"kubernetes.io/projected/0032fa4d-287b-4471-917d-a032208b99e5-kube-api-access-lfcz4\") pod \"dnsmasq-dns-574fdb7f99-44t8n\" (UID: \"0032fa4d-287b-4471-917d-a032208b99e5\") " pod="openstack/dnsmasq-dns-574fdb7f99-44t8n" Feb 17 20:24:45 crc kubenswrapper[4793]: I0217 20:24:45.838133 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0032fa4d-287b-4471-917d-a032208b99e5-config\") pod \"dnsmasq-dns-574fdb7f99-44t8n\" (UID: \"0032fa4d-287b-4471-917d-a032208b99e5\") " pod="openstack/dnsmasq-dns-574fdb7f99-44t8n" Feb 17 20:24:45 crc kubenswrapper[4793]: I0217 20:24:45.838279 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0032fa4d-287b-4471-917d-a032208b99e5-dns-svc\") pod \"dnsmasq-dns-574fdb7f99-44t8n\" (UID: \"0032fa4d-287b-4471-917d-a032208b99e5\") " pod="openstack/dnsmasq-dns-574fdb7f99-44t8n" Feb 17 20:24:45 crc kubenswrapper[4793]: I0217 20:24:45.839078 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0032fa4d-287b-4471-917d-a032208b99e5-dns-svc\") pod \"dnsmasq-dns-574fdb7f99-44t8n\" (UID: \"0032fa4d-287b-4471-917d-a032208b99e5\") " pod="openstack/dnsmasq-dns-574fdb7f99-44t8n" Feb 17 20:24:45 crc kubenswrapper[4793]: I0217 20:24:45.839666 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0032fa4d-287b-4471-917d-a032208b99e5-config\") pod \"dnsmasq-dns-574fdb7f99-44t8n\" (UID: \"0032fa4d-287b-4471-917d-a032208b99e5\") " pod="openstack/dnsmasq-dns-574fdb7f99-44t8n" Feb 17 20:24:45 crc kubenswrapper[4793]: I0217 20:24:45.855346 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-lfcz4\" (UniqueName: \"kubernetes.io/projected/0032fa4d-287b-4471-917d-a032208b99e5-kube-api-access-lfcz4\") pod \"dnsmasq-dns-574fdb7f99-44t8n\" (UID: \"0032fa4d-287b-4471-917d-a032208b99e5\") " pod="openstack/dnsmasq-dns-574fdb7f99-44t8n" Feb 17 20:24:45 crc kubenswrapper[4793]: I0217 20:24:45.889551 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78ff9dfd65-kmwt8" Feb 17 20:24:45 crc kubenswrapper[4793]: I0217 20:24:45.987993 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-574fdb7f99-44t8n" Feb 17 20:24:46 crc kubenswrapper[4793]: I0217 20:24:46.326767 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78ff9dfd65-kmwt8"] Feb 17 20:24:46 crc kubenswrapper[4793]: W0217 20:24:46.334521 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42837205_82b2_4e22_91a1_4ccf8b4d6126.slice/crio-b59f1f3a62d035dfe8491a7a2d4d0c946ef8cad63305dcbcef4487df22e773b9 WatchSource:0}: Error finding container b59f1f3a62d035dfe8491a7a2d4d0c946ef8cad63305dcbcef4487df22e773b9: Status 404 returned error can't find the container with id b59f1f3a62d035dfe8491a7a2d4d0c946ef8cad63305dcbcef4487df22e773b9 Feb 17 20:24:46 crc kubenswrapper[4793]: I0217 20:24:46.451280 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-574fdb7f99-44t8n"] Feb 17 20:24:46 crc kubenswrapper[4793]: W0217 20:24:46.463168 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0032fa4d_287b_4471_917d_a032208b99e5.slice/crio-04c3e5e7bb2f7127707e8c22be7384dda3ea59c92c2588c8b42c68b25ec1fd42 WatchSource:0}: Error finding container 04c3e5e7bb2f7127707e8c22be7384dda3ea59c92c2588c8b42c68b25ec1fd42: Status 404 returned error can't find the container with id 
04c3e5e7bb2f7127707e8c22be7384dda3ea59c92c2588c8b42c68b25ec1fd42 Feb 17 20:24:47 crc kubenswrapper[4793]: I0217 20:24:47.110178 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-574fdb7f99-44t8n" event={"ID":"0032fa4d-287b-4471-917d-a032208b99e5","Type":"ContainerStarted","Data":"04c3e5e7bb2f7127707e8c22be7384dda3ea59c92c2588c8b42c68b25ec1fd42"} Feb 17 20:24:47 crc kubenswrapper[4793]: I0217 20:24:47.111890 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78ff9dfd65-kmwt8" event={"ID":"42837205-82b2-4e22-91a1-4ccf8b4d6126","Type":"ContainerStarted","Data":"b59f1f3a62d035dfe8491a7a2d4d0c946ef8cad63305dcbcef4487df22e773b9"} Feb 17 20:24:49 crc kubenswrapper[4793]: I0217 20:24:49.111074 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-574fdb7f99-44t8n"] Feb 17 20:24:49 crc kubenswrapper[4793]: I0217 20:24:49.137916 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f49bcf4c9-kc9bm"] Feb 17 20:24:49 crc kubenswrapper[4793]: I0217 20:24:49.140061 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f49bcf4c9-kc9bm" Feb 17 20:24:49 crc kubenswrapper[4793]: I0217 20:24:49.154555 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f49bcf4c9-kc9bm"] Feb 17 20:24:49 crc kubenswrapper[4793]: I0217 20:24:49.203470 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9754f47-8a2f-4878-a69e-051c0a2ec74e-config\") pod \"dnsmasq-dns-6f49bcf4c9-kc9bm\" (UID: \"c9754f47-8a2f-4878-a69e-051c0a2ec74e\") " pod="openstack/dnsmasq-dns-6f49bcf4c9-kc9bm" Feb 17 20:24:49 crc kubenswrapper[4793]: I0217 20:24:49.203585 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9754f47-8a2f-4878-a69e-051c0a2ec74e-dns-svc\") pod \"dnsmasq-dns-6f49bcf4c9-kc9bm\" (UID: \"c9754f47-8a2f-4878-a69e-051c0a2ec74e\") " pod="openstack/dnsmasq-dns-6f49bcf4c9-kc9bm" Feb 17 20:24:49 crc kubenswrapper[4793]: I0217 20:24:49.203627 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7xzs\" (UniqueName: \"kubernetes.io/projected/c9754f47-8a2f-4878-a69e-051c0a2ec74e-kube-api-access-g7xzs\") pod \"dnsmasq-dns-6f49bcf4c9-kc9bm\" (UID: \"c9754f47-8a2f-4878-a69e-051c0a2ec74e\") " pod="openstack/dnsmasq-dns-6f49bcf4c9-kc9bm" Feb 17 20:24:49 crc kubenswrapper[4793]: I0217 20:24:49.305172 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9754f47-8a2f-4878-a69e-051c0a2ec74e-dns-svc\") pod \"dnsmasq-dns-6f49bcf4c9-kc9bm\" (UID: \"c9754f47-8a2f-4878-a69e-051c0a2ec74e\") " pod="openstack/dnsmasq-dns-6f49bcf4c9-kc9bm" Feb 17 20:24:49 crc kubenswrapper[4793]: I0217 20:24:49.305484 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7xzs\" (UniqueName: 
\"kubernetes.io/projected/c9754f47-8a2f-4878-a69e-051c0a2ec74e-kube-api-access-g7xzs\") pod \"dnsmasq-dns-6f49bcf4c9-kc9bm\" (UID: \"c9754f47-8a2f-4878-a69e-051c0a2ec74e\") " pod="openstack/dnsmasq-dns-6f49bcf4c9-kc9bm" Feb 17 20:24:49 crc kubenswrapper[4793]: I0217 20:24:49.305542 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9754f47-8a2f-4878-a69e-051c0a2ec74e-config\") pod \"dnsmasq-dns-6f49bcf4c9-kc9bm\" (UID: \"c9754f47-8a2f-4878-a69e-051c0a2ec74e\") " pod="openstack/dnsmasq-dns-6f49bcf4c9-kc9bm" Feb 17 20:24:49 crc kubenswrapper[4793]: I0217 20:24:49.306234 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9754f47-8a2f-4878-a69e-051c0a2ec74e-dns-svc\") pod \"dnsmasq-dns-6f49bcf4c9-kc9bm\" (UID: \"c9754f47-8a2f-4878-a69e-051c0a2ec74e\") " pod="openstack/dnsmasq-dns-6f49bcf4c9-kc9bm" Feb 17 20:24:49 crc kubenswrapper[4793]: I0217 20:24:49.310081 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9754f47-8a2f-4878-a69e-051c0a2ec74e-config\") pod \"dnsmasq-dns-6f49bcf4c9-kc9bm\" (UID: \"c9754f47-8a2f-4878-a69e-051c0a2ec74e\") " pod="openstack/dnsmasq-dns-6f49bcf4c9-kc9bm" Feb 17 20:24:49 crc kubenswrapper[4793]: I0217 20:24:49.353004 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7xzs\" (UniqueName: \"kubernetes.io/projected/c9754f47-8a2f-4878-a69e-051c0a2ec74e-kube-api-access-g7xzs\") pod \"dnsmasq-dns-6f49bcf4c9-kc9bm\" (UID: \"c9754f47-8a2f-4878-a69e-051c0a2ec74e\") " pod="openstack/dnsmasq-dns-6f49bcf4c9-kc9bm" Feb 17 20:24:49 crc kubenswrapper[4793]: I0217 20:24:49.435821 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78ff9dfd65-kmwt8"] Feb 17 20:24:49 crc kubenswrapper[4793]: I0217 20:24:49.446253 4793 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-5877d9b675-q4pn2"] Feb 17 20:24:49 crc kubenswrapper[4793]: I0217 20:24:49.447640 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5877d9b675-q4pn2" Feb 17 20:24:49 crc kubenswrapper[4793]: I0217 20:24:49.461157 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f49bcf4c9-kc9bm" Feb 17 20:24:49 crc kubenswrapper[4793]: I0217 20:24:49.467041 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5877d9b675-q4pn2"] Feb 17 20:24:49 crc kubenswrapper[4793]: I0217 20:24:49.507210 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prk5m\" (UniqueName: \"kubernetes.io/projected/db7eeec4-9d65-4323-a812-9719a53c5d9e-kube-api-access-prk5m\") pod \"dnsmasq-dns-5877d9b675-q4pn2\" (UID: \"db7eeec4-9d65-4323-a812-9719a53c5d9e\") " pod="openstack/dnsmasq-dns-5877d9b675-q4pn2" Feb 17 20:24:49 crc kubenswrapper[4793]: I0217 20:24:49.507293 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db7eeec4-9d65-4323-a812-9719a53c5d9e-config\") pod \"dnsmasq-dns-5877d9b675-q4pn2\" (UID: \"db7eeec4-9d65-4323-a812-9719a53c5d9e\") " pod="openstack/dnsmasq-dns-5877d9b675-q4pn2" Feb 17 20:24:49 crc kubenswrapper[4793]: I0217 20:24:49.507313 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db7eeec4-9d65-4323-a812-9719a53c5d9e-dns-svc\") pod \"dnsmasq-dns-5877d9b675-q4pn2\" (UID: \"db7eeec4-9d65-4323-a812-9719a53c5d9e\") " pod="openstack/dnsmasq-dns-5877d9b675-q4pn2" Feb 17 20:24:49 crc kubenswrapper[4793]: I0217 20:24:49.610238 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prk5m\" (UniqueName: 
\"kubernetes.io/projected/db7eeec4-9d65-4323-a812-9719a53c5d9e-kube-api-access-prk5m\") pod \"dnsmasq-dns-5877d9b675-q4pn2\" (UID: \"db7eeec4-9d65-4323-a812-9719a53c5d9e\") " pod="openstack/dnsmasq-dns-5877d9b675-q4pn2" Feb 17 20:24:49 crc kubenswrapper[4793]: I0217 20:24:49.610629 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db7eeec4-9d65-4323-a812-9719a53c5d9e-config\") pod \"dnsmasq-dns-5877d9b675-q4pn2\" (UID: \"db7eeec4-9d65-4323-a812-9719a53c5d9e\") " pod="openstack/dnsmasq-dns-5877d9b675-q4pn2" Feb 17 20:24:49 crc kubenswrapper[4793]: I0217 20:24:49.610654 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db7eeec4-9d65-4323-a812-9719a53c5d9e-dns-svc\") pod \"dnsmasq-dns-5877d9b675-q4pn2\" (UID: \"db7eeec4-9d65-4323-a812-9719a53c5d9e\") " pod="openstack/dnsmasq-dns-5877d9b675-q4pn2" Feb 17 20:24:49 crc kubenswrapper[4793]: I0217 20:24:49.611492 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db7eeec4-9d65-4323-a812-9719a53c5d9e-dns-svc\") pod \"dnsmasq-dns-5877d9b675-q4pn2\" (UID: \"db7eeec4-9d65-4323-a812-9719a53c5d9e\") " pod="openstack/dnsmasq-dns-5877d9b675-q4pn2" Feb 17 20:24:49 crc kubenswrapper[4793]: I0217 20:24:49.611623 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db7eeec4-9d65-4323-a812-9719a53c5d9e-config\") pod \"dnsmasq-dns-5877d9b675-q4pn2\" (UID: \"db7eeec4-9d65-4323-a812-9719a53c5d9e\") " pod="openstack/dnsmasq-dns-5877d9b675-q4pn2" Feb 17 20:24:49 crc kubenswrapper[4793]: I0217 20:24:49.638120 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prk5m\" (UniqueName: \"kubernetes.io/projected/db7eeec4-9d65-4323-a812-9719a53c5d9e-kube-api-access-prk5m\") pod \"dnsmasq-dns-5877d9b675-q4pn2\" 
(UID: \"db7eeec4-9d65-4323-a812-9719a53c5d9e\") " pod="openstack/dnsmasq-dns-5877d9b675-q4pn2" Feb 17 20:24:49 crc kubenswrapper[4793]: I0217 20:24:49.774221 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5877d9b675-q4pn2" Feb 17 20:24:49 crc kubenswrapper[4793]: I0217 20:24:49.825836 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5877d9b675-q4pn2"] Feb 17 20:24:49 crc kubenswrapper[4793]: I0217 20:24:49.876730 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-66f76cf86f-jkskm"] Feb 17 20:24:49 crc kubenswrapper[4793]: I0217 20:24:49.877835 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66f76cf86f-jkskm" Feb 17 20:24:49 crc kubenswrapper[4793]: I0217 20:24:49.893832 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66f76cf86f-jkskm"] Feb 17 20:24:49 crc kubenswrapper[4793]: I0217 20:24:49.915711 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4crp\" (UniqueName: \"kubernetes.io/projected/ed1e6971-9bbf-42c0-8a55-7d508936e963-kube-api-access-x4crp\") pod \"dnsmasq-dns-66f76cf86f-jkskm\" (UID: \"ed1e6971-9bbf-42c0-8a55-7d508936e963\") " pod="openstack/dnsmasq-dns-66f76cf86f-jkskm" Feb 17 20:24:49 crc kubenswrapper[4793]: I0217 20:24:49.915765 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed1e6971-9bbf-42c0-8a55-7d508936e963-dns-svc\") pod \"dnsmasq-dns-66f76cf86f-jkskm\" (UID: \"ed1e6971-9bbf-42c0-8a55-7d508936e963\") " pod="openstack/dnsmasq-dns-66f76cf86f-jkskm" Feb 17 20:24:49 crc kubenswrapper[4793]: I0217 20:24:49.915789 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ed1e6971-9bbf-42c0-8a55-7d508936e963-config\") pod \"dnsmasq-dns-66f76cf86f-jkskm\" (UID: \"ed1e6971-9bbf-42c0-8a55-7d508936e963\") " pod="openstack/dnsmasq-dns-66f76cf86f-jkskm" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.019100 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4crp\" (UniqueName: \"kubernetes.io/projected/ed1e6971-9bbf-42c0-8a55-7d508936e963-kube-api-access-x4crp\") pod \"dnsmasq-dns-66f76cf86f-jkskm\" (UID: \"ed1e6971-9bbf-42c0-8a55-7d508936e963\") " pod="openstack/dnsmasq-dns-66f76cf86f-jkskm" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.019375 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed1e6971-9bbf-42c0-8a55-7d508936e963-dns-svc\") pod \"dnsmasq-dns-66f76cf86f-jkskm\" (UID: \"ed1e6971-9bbf-42c0-8a55-7d508936e963\") " pod="openstack/dnsmasq-dns-66f76cf86f-jkskm" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.019397 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed1e6971-9bbf-42c0-8a55-7d508936e963-config\") pod \"dnsmasq-dns-66f76cf86f-jkskm\" (UID: \"ed1e6971-9bbf-42c0-8a55-7d508936e963\") " pod="openstack/dnsmasq-dns-66f76cf86f-jkskm" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.020231 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed1e6971-9bbf-42c0-8a55-7d508936e963-config\") pod \"dnsmasq-dns-66f76cf86f-jkskm\" (UID: \"ed1e6971-9bbf-42c0-8a55-7d508936e963\") " pod="openstack/dnsmasq-dns-66f76cf86f-jkskm" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.020907 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed1e6971-9bbf-42c0-8a55-7d508936e963-dns-svc\") pod \"dnsmasq-dns-66f76cf86f-jkskm\" (UID: 
\"ed1e6971-9bbf-42c0-8a55-7d508936e963\") " pod="openstack/dnsmasq-dns-66f76cf86f-jkskm" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.040951 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4crp\" (UniqueName: \"kubernetes.io/projected/ed1e6971-9bbf-42c0-8a55-7d508936e963-kube-api-access-x4crp\") pod \"dnsmasq-dns-66f76cf86f-jkskm\" (UID: \"ed1e6971-9bbf-42c0-8a55-7d508936e963\") " pod="openstack/dnsmasq-dns-66f76cf86f-jkskm" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.078571 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f49bcf4c9-kc9bm"] Feb 17 20:24:50 crc kubenswrapper[4793]: W0217 20:24:50.095784 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9754f47_8a2f_4878_a69e_051c0a2ec74e.slice/crio-9ac74fd7483a695fb616c4e4c9e0e3b7be945826456bb1c1d5331bf3b7b15488 WatchSource:0}: Error finding container 9ac74fd7483a695fb616c4e4c9e0e3b7be945826456bb1c1d5331bf3b7b15488: Status 404 returned error can't find the container with id 9ac74fd7483a695fb616c4e4c9e0e3b7be945826456bb1c1d5331bf3b7b15488 Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.140605 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f49bcf4c9-kc9bm" event={"ID":"c9754f47-8a2f-4878-a69e-051c0a2ec74e","Type":"ContainerStarted","Data":"9ac74fd7483a695fb616c4e4c9e0e3b7be945826456bb1c1d5331bf3b7b15488"} Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.227749 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66f76cf86f-jkskm" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.300002 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.301585 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.303745 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.303791 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-jx5bq" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.303937 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.304037 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.304211 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.304220 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.305884 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.309791 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.371152 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5877d9b675-q4pn2"] Feb 17 20:24:50 crc kubenswrapper[4793]: W0217 20:24:50.380280 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb7eeec4_9d65_4323_a812_9719a53c5d9e.slice/crio-2433e758fbb1e22b9ba341c7e90a408c9c9bace17053e07057a71b351934445a WatchSource:0}: Error finding container 2433e758fbb1e22b9ba341c7e90a408c9c9bace17053e07057a71b351934445a: Status 404 returned error 
can't find the container with id 2433e758fbb1e22b9ba341c7e90a408c9c9bace17053e07057a71b351934445a Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.424744 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/74e2a040-552e-4736-986f-2abac7315e6a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"74e2a040-552e-4736-986f-2abac7315e6a\") " pod="openstack/rabbitmq-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.424805 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/74e2a040-552e-4736-986f-2abac7315e6a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"74e2a040-552e-4736-986f-2abac7315e6a\") " pod="openstack/rabbitmq-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.424828 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74e2a040-552e-4736-986f-2abac7315e6a-config-data\") pod \"rabbitmq-server-0\" (UID: \"74e2a040-552e-4736-986f-2abac7315e6a\") " pod="openstack/rabbitmq-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.424862 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/74e2a040-552e-4736-986f-2abac7315e6a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"74e2a040-552e-4736-986f-2abac7315e6a\") " pod="openstack/rabbitmq-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.424910 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/74e2a040-552e-4736-986f-2abac7315e6a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"74e2a040-552e-4736-986f-2abac7315e6a\") " 
pod="openstack/rabbitmq-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.424924 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/74e2a040-552e-4736-986f-2abac7315e6a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"74e2a040-552e-4736-986f-2abac7315e6a\") " pod="openstack/rabbitmq-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.424945 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/74e2a040-552e-4736-986f-2abac7315e6a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"74e2a040-552e-4736-986f-2abac7315e6a\") " pod="openstack/rabbitmq-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.424970 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjknx\" (UniqueName: \"kubernetes.io/projected/74e2a040-552e-4736-986f-2abac7315e6a-kube-api-access-rjknx\") pod \"rabbitmq-server-0\" (UID: \"74e2a040-552e-4736-986f-2abac7315e6a\") " pod="openstack/rabbitmq-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.425011 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"74e2a040-552e-4736-986f-2abac7315e6a\") " pod="openstack/rabbitmq-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.425029 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/74e2a040-552e-4736-986f-2abac7315e6a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"74e2a040-552e-4736-986f-2abac7315e6a\") " pod="openstack/rabbitmq-server-0" Feb 17 20:24:50 crc 
kubenswrapper[4793]: I0217 20:24:50.425063 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/74e2a040-552e-4736-986f-2abac7315e6a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"74e2a040-552e-4736-986f-2abac7315e6a\") " pod="openstack/rabbitmq-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.526648 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/74e2a040-552e-4736-986f-2abac7315e6a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"74e2a040-552e-4736-986f-2abac7315e6a\") " pod="openstack/rabbitmq-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.527019 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/74e2a040-552e-4736-986f-2abac7315e6a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"74e2a040-552e-4736-986f-2abac7315e6a\") " pod="openstack/rabbitmq-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.527041 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/74e2a040-552e-4736-986f-2abac7315e6a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"74e2a040-552e-4736-986f-2abac7315e6a\") " pod="openstack/rabbitmq-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.527061 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74e2a040-552e-4736-986f-2abac7315e6a-config-data\") pod \"rabbitmq-server-0\" (UID: \"74e2a040-552e-4736-986f-2abac7315e6a\") " pod="openstack/rabbitmq-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.527091 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/74e2a040-552e-4736-986f-2abac7315e6a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"74e2a040-552e-4736-986f-2abac7315e6a\") " pod="openstack/rabbitmq-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.527111 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/74e2a040-552e-4736-986f-2abac7315e6a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"74e2a040-552e-4736-986f-2abac7315e6a\") " pod="openstack/rabbitmq-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.527128 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/74e2a040-552e-4736-986f-2abac7315e6a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"74e2a040-552e-4736-986f-2abac7315e6a\") " pod="openstack/rabbitmq-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.527150 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/74e2a040-552e-4736-986f-2abac7315e6a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"74e2a040-552e-4736-986f-2abac7315e6a\") " pod="openstack/rabbitmq-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.527174 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjknx\" (UniqueName: \"kubernetes.io/projected/74e2a040-552e-4736-986f-2abac7315e6a-kube-api-access-rjknx\") pod \"rabbitmq-server-0\" (UID: \"74e2a040-552e-4736-986f-2abac7315e6a\") " pod="openstack/rabbitmq-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.527215 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"74e2a040-552e-4736-986f-2abac7315e6a\") " 
pod="openstack/rabbitmq-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.527236 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/74e2a040-552e-4736-986f-2abac7315e6a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"74e2a040-552e-4736-986f-2abac7315e6a\") " pod="openstack/rabbitmq-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.528004 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/74e2a040-552e-4736-986f-2abac7315e6a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"74e2a040-552e-4736-986f-2abac7315e6a\") " pod="openstack/rabbitmq-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.531496 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/74e2a040-552e-4736-986f-2abac7315e6a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"74e2a040-552e-4736-986f-2abac7315e6a\") " pod="openstack/rabbitmq-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.531836 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/74e2a040-552e-4736-986f-2abac7315e6a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"74e2a040-552e-4736-986f-2abac7315e6a\") " pod="openstack/rabbitmq-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.532114 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"74e2a040-552e-4736-986f-2abac7315e6a\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.532233 4793 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/74e2a040-552e-4736-986f-2abac7315e6a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"74e2a040-552e-4736-986f-2abac7315e6a\") " pod="openstack/rabbitmq-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.532334 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74e2a040-552e-4736-986f-2abac7315e6a-config-data\") pod \"rabbitmq-server-0\" (UID: \"74e2a040-552e-4736-986f-2abac7315e6a\") " pod="openstack/rabbitmq-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.532856 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/74e2a040-552e-4736-986f-2abac7315e6a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"74e2a040-552e-4736-986f-2abac7315e6a\") " pod="openstack/rabbitmq-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.535530 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/74e2a040-552e-4736-986f-2abac7315e6a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"74e2a040-552e-4736-986f-2abac7315e6a\") " pod="openstack/rabbitmq-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.539084 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/74e2a040-552e-4736-986f-2abac7315e6a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"74e2a040-552e-4736-986f-2abac7315e6a\") " pod="openstack/rabbitmq-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.552145 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/74e2a040-552e-4736-986f-2abac7315e6a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"74e2a040-552e-4736-986f-2abac7315e6a\") " 
pod="openstack/rabbitmq-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.553561 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"74e2a040-552e-4736-986f-2abac7315e6a\") " pod="openstack/rabbitmq-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.573303 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjknx\" (UniqueName: \"kubernetes.io/projected/74e2a040-552e-4736-986f-2abac7315e6a-kube-api-access-rjknx\") pod \"rabbitmq-server-0\" (UID: \"74e2a040-552e-4736-986f-2abac7315e6a\") " pod="openstack/rabbitmq-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.595480 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.596548 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.600461 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.601090 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.601294 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-44658" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.601420 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.601715 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.601895 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.602081 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.609766 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.629043 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.729523 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9eaaf278-e1ca-4fbe-ab46-478d8846293d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eaaf278-e1ca-4fbe-ab46-478d8846293d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.729602 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9eaaf278-e1ca-4fbe-ab46-478d8846293d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eaaf278-e1ca-4fbe-ab46-478d8846293d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.730335 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9eaaf278-e1ca-4fbe-ab46-478d8846293d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eaaf278-e1ca-4fbe-ab46-478d8846293d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.730408 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9eaaf278-e1ca-4fbe-ab46-478d8846293d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eaaf278-e1ca-4fbe-ab46-478d8846293d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.730899 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eaaf278-e1ca-4fbe-ab46-478d8846293d\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.731132 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9eaaf278-e1ca-4fbe-ab46-478d8846293d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eaaf278-e1ca-4fbe-ab46-478d8846293d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.731217 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54tp2\" (UniqueName: \"kubernetes.io/projected/9eaaf278-e1ca-4fbe-ab46-478d8846293d-kube-api-access-54tp2\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eaaf278-e1ca-4fbe-ab46-478d8846293d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.731608 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9eaaf278-e1ca-4fbe-ab46-478d8846293d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eaaf278-e1ca-4fbe-ab46-478d8846293d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.731694 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9eaaf278-e1ca-4fbe-ab46-478d8846293d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eaaf278-e1ca-4fbe-ab46-478d8846293d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.731741 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9eaaf278-e1ca-4fbe-ab46-478d8846293d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"9eaaf278-e1ca-4fbe-ab46-478d8846293d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.731818 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9eaaf278-e1ca-4fbe-ab46-478d8846293d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eaaf278-e1ca-4fbe-ab46-478d8846293d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.820738 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66f76cf86f-jkskm"] Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.833785 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9eaaf278-e1ca-4fbe-ab46-478d8846293d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eaaf278-e1ca-4fbe-ab46-478d8846293d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.835491 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9eaaf278-e1ca-4fbe-ab46-478d8846293d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eaaf278-e1ca-4fbe-ab46-478d8846293d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.835660 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9eaaf278-e1ca-4fbe-ab46-478d8846293d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eaaf278-e1ca-4fbe-ab46-478d8846293d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.835916 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eaaf278-e1ca-4fbe-ab46-478d8846293d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.836044 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9eaaf278-e1ca-4fbe-ab46-478d8846293d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eaaf278-e1ca-4fbe-ab46-478d8846293d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.836170 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54tp2\" (UniqueName: \"kubernetes.io/projected/9eaaf278-e1ca-4fbe-ab46-478d8846293d-kube-api-access-54tp2\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eaaf278-e1ca-4fbe-ab46-478d8846293d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.836320 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9eaaf278-e1ca-4fbe-ab46-478d8846293d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eaaf278-e1ca-4fbe-ab46-478d8846293d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.836403 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eaaf278-e1ca-4fbe-ab46-478d8846293d\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.836624 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9eaaf278-e1ca-4fbe-ab46-478d8846293d-rabbitmq-erlang-cookie\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"9eaaf278-e1ca-4fbe-ab46-478d8846293d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.836831 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9eaaf278-e1ca-4fbe-ab46-478d8846293d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eaaf278-e1ca-4fbe-ab46-478d8846293d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.836974 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9eaaf278-e1ca-4fbe-ab46-478d8846293d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eaaf278-e1ca-4fbe-ab46-478d8846293d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.837079 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9eaaf278-e1ca-4fbe-ab46-478d8846293d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eaaf278-e1ca-4fbe-ab46-478d8846293d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.837834 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9eaaf278-e1ca-4fbe-ab46-478d8846293d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eaaf278-e1ca-4fbe-ab46-478d8846293d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.837912 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9eaaf278-e1ca-4fbe-ab46-478d8846293d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eaaf278-e1ca-4fbe-ab46-478d8846293d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 
17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.840499 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9eaaf278-e1ca-4fbe-ab46-478d8846293d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eaaf278-e1ca-4fbe-ab46-478d8846293d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.840769 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9eaaf278-e1ca-4fbe-ab46-478d8846293d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eaaf278-e1ca-4fbe-ab46-478d8846293d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.842737 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9eaaf278-e1ca-4fbe-ab46-478d8846293d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eaaf278-e1ca-4fbe-ab46-478d8846293d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.842845 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9eaaf278-e1ca-4fbe-ab46-478d8846293d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eaaf278-e1ca-4fbe-ab46-478d8846293d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.845099 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9eaaf278-e1ca-4fbe-ab46-478d8846293d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eaaf278-e1ca-4fbe-ab46-478d8846293d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.845313 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9eaaf278-e1ca-4fbe-ab46-478d8846293d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eaaf278-e1ca-4fbe-ab46-478d8846293d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.848708 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9eaaf278-e1ca-4fbe-ab46-478d8846293d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eaaf278-e1ca-4fbe-ab46-478d8846293d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.854654 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54tp2\" (UniqueName: \"kubernetes.io/projected/9eaaf278-e1ca-4fbe-ab46-478d8846293d-kube-api-access-54tp2\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eaaf278-e1ca-4fbe-ab46-478d8846293d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.869096 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eaaf278-e1ca-4fbe-ab46-478d8846293d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 20:24:50 crc kubenswrapper[4793]: I0217 20:24:50.914590 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 17 20:24:51 crc kubenswrapper[4793]: I0217 20:24:51.075950 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-notifications-server-0"] Feb 17 20:24:51 crc kubenswrapper[4793]: I0217 20:24:51.078059 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-notifications-server-0" Feb 17 20:24:51 crc kubenswrapper[4793]: I0217 20:24:51.090544 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-erlang-cookie" Feb 17 20:24:51 crc kubenswrapper[4793]: I0217 20:24:51.090972 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-server-conf" Feb 17 20:24:51 crc kubenswrapper[4793]: I0217 20:24:51.091119 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-plugins-conf" Feb 17 20:24:51 crc kubenswrapper[4793]: I0217 20:24:51.091307 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-notifications-svc" Feb 17 20:24:51 crc kubenswrapper[4793]: I0217 20:24:51.095985 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-server-dockercfg-cnf8w" Feb 17 20:24:51 crc kubenswrapper[4793]: I0217 20:24:51.096285 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-default-user" Feb 17 20:24:51 crc kubenswrapper[4793]: I0217 20:24:51.096469 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-config-data" Feb 17 20:24:51 crc kubenswrapper[4793]: I0217 20:24:51.112463 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-notifications-server-0"] Feb 17 20:24:51 crc kubenswrapper[4793]: I0217 20:24:51.162135 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 20:24:51 crc kubenswrapper[4793]: I0217 20:24:51.176181 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66f76cf86f-jkskm" event={"ID":"ed1e6971-9bbf-42c0-8a55-7d508936e963","Type":"ContainerStarted","Data":"aa8f664346b728d4bd30f968d08e1f6b5d9f2b1b9397f36f707acd16b3ea70ec"} Feb 17 20:24:51 
crc kubenswrapper[4793]: I0217 20:24:51.181067 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5877d9b675-q4pn2" event={"ID":"db7eeec4-9d65-4323-a812-9719a53c5d9e","Type":"ContainerStarted","Data":"2433e758fbb1e22b9ba341c7e90a408c9c9bace17053e07057a71b351934445a"} Feb 17 20:24:51 crc kubenswrapper[4793]: I0217 20:24:51.246019 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7d868632-904a-4ba2-8d3a-4e3d0d8de4b0-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"7d868632-904a-4ba2-8d3a-4e3d0d8de4b0\") " pod="openstack/rabbitmq-notifications-server-0" Feb 17 20:24:51 crc kubenswrapper[4793]: I0217 20:24:51.246065 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnmmm\" (UniqueName: \"kubernetes.io/projected/7d868632-904a-4ba2-8d3a-4e3d0d8de4b0-kube-api-access-pnmmm\") pod \"rabbitmq-notifications-server-0\" (UID: \"7d868632-904a-4ba2-8d3a-4e3d0d8de4b0\") " pod="openstack/rabbitmq-notifications-server-0" Feb 17 20:24:51 crc kubenswrapper[4793]: I0217 20:24:51.246094 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7d868632-904a-4ba2-8d3a-4e3d0d8de4b0-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"7d868632-904a-4ba2-8d3a-4e3d0d8de4b0\") " pod="openstack/rabbitmq-notifications-server-0" Feb 17 20:24:51 crc kubenswrapper[4793]: I0217 20:24:51.246113 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7d868632-904a-4ba2-8d3a-4e3d0d8de4b0-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"7d868632-904a-4ba2-8d3a-4e3d0d8de4b0\") " pod="openstack/rabbitmq-notifications-server-0" Feb 17 20:24:51 crc 
kubenswrapper[4793]: I0217 20:24:51.246132 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7d868632-904a-4ba2-8d3a-4e3d0d8de4b0-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"7d868632-904a-4ba2-8d3a-4e3d0d8de4b0\") " pod="openstack/rabbitmq-notifications-server-0" Feb 17 20:24:51 crc kubenswrapper[4793]: I0217 20:24:51.246156 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7d868632-904a-4ba2-8d3a-4e3d0d8de4b0-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"7d868632-904a-4ba2-8d3a-4e3d0d8de4b0\") " pod="openstack/rabbitmq-notifications-server-0" Feb 17 20:24:51 crc kubenswrapper[4793]: I0217 20:24:51.246176 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7d868632-904a-4ba2-8d3a-4e3d0d8de4b0-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"7d868632-904a-4ba2-8d3a-4e3d0d8de4b0\") " pod="openstack/rabbitmq-notifications-server-0" Feb 17 20:24:51 crc kubenswrapper[4793]: I0217 20:24:51.246190 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7d868632-904a-4ba2-8d3a-4e3d0d8de4b0-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"7d868632-904a-4ba2-8d3a-4e3d0d8de4b0\") " pod="openstack/rabbitmq-notifications-server-0" Feb 17 20:24:51 crc kubenswrapper[4793]: I0217 20:24:51.246243 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7d868632-904a-4ba2-8d3a-4e3d0d8de4b0-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"7d868632-904a-4ba2-8d3a-4e3d0d8de4b0\") " 
pod="openstack/rabbitmq-notifications-server-0" Feb 17 20:24:51 crc kubenswrapper[4793]: I0217 20:24:51.246267 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"7d868632-904a-4ba2-8d3a-4e3d0d8de4b0\") " pod="openstack/rabbitmq-notifications-server-0" Feb 17 20:24:51 crc kubenswrapper[4793]: I0217 20:24:51.246283 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7d868632-904a-4ba2-8d3a-4e3d0d8de4b0-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"7d868632-904a-4ba2-8d3a-4e3d0d8de4b0\") " pod="openstack/rabbitmq-notifications-server-0" Feb 17 20:24:51 crc kubenswrapper[4793]: I0217 20:24:51.347821 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7d868632-904a-4ba2-8d3a-4e3d0d8de4b0-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"7d868632-904a-4ba2-8d3a-4e3d0d8de4b0\") " pod="openstack/rabbitmq-notifications-server-0" Feb 17 20:24:51 crc kubenswrapper[4793]: I0217 20:24:51.347872 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"7d868632-904a-4ba2-8d3a-4e3d0d8de4b0\") " pod="openstack/rabbitmq-notifications-server-0" Feb 17 20:24:51 crc kubenswrapper[4793]: I0217 20:24:51.347891 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7d868632-904a-4ba2-8d3a-4e3d0d8de4b0-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"7d868632-904a-4ba2-8d3a-4e3d0d8de4b0\") " 
pod="openstack/rabbitmq-notifications-server-0" Feb 17 20:24:51 crc kubenswrapper[4793]: I0217 20:24:51.347938 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7d868632-904a-4ba2-8d3a-4e3d0d8de4b0-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"7d868632-904a-4ba2-8d3a-4e3d0d8de4b0\") " pod="openstack/rabbitmq-notifications-server-0" Feb 17 20:24:51 crc kubenswrapper[4793]: I0217 20:24:51.347958 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnmmm\" (UniqueName: \"kubernetes.io/projected/7d868632-904a-4ba2-8d3a-4e3d0d8de4b0-kube-api-access-pnmmm\") pod \"rabbitmq-notifications-server-0\" (UID: \"7d868632-904a-4ba2-8d3a-4e3d0d8de4b0\") " pod="openstack/rabbitmq-notifications-server-0" Feb 17 20:24:51 crc kubenswrapper[4793]: I0217 20:24:51.347983 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7d868632-904a-4ba2-8d3a-4e3d0d8de4b0-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"7d868632-904a-4ba2-8d3a-4e3d0d8de4b0\") " pod="openstack/rabbitmq-notifications-server-0" Feb 17 20:24:51 crc kubenswrapper[4793]: I0217 20:24:51.347999 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7d868632-904a-4ba2-8d3a-4e3d0d8de4b0-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"7d868632-904a-4ba2-8d3a-4e3d0d8de4b0\") " pod="openstack/rabbitmq-notifications-server-0" Feb 17 20:24:51 crc kubenswrapper[4793]: I0217 20:24:51.348018 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7d868632-904a-4ba2-8d3a-4e3d0d8de4b0-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"7d868632-904a-4ba2-8d3a-4e3d0d8de4b0\") " 
pod="openstack/rabbitmq-notifications-server-0" Feb 17 20:24:51 crc kubenswrapper[4793]: I0217 20:24:51.348041 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7d868632-904a-4ba2-8d3a-4e3d0d8de4b0-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"7d868632-904a-4ba2-8d3a-4e3d0d8de4b0\") " pod="openstack/rabbitmq-notifications-server-0" Feb 17 20:24:51 crc kubenswrapper[4793]: I0217 20:24:51.348060 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7d868632-904a-4ba2-8d3a-4e3d0d8de4b0-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"7d868632-904a-4ba2-8d3a-4e3d0d8de4b0\") " pod="openstack/rabbitmq-notifications-server-0" Feb 17 20:24:51 crc kubenswrapper[4793]: I0217 20:24:51.348078 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7d868632-904a-4ba2-8d3a-4e3d0d8de4b0-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"7d868632-904a-4ba2-8d3a-4e3d0d8de4b0\") " pod="openstack/rabbitmq-notifications-server-0" Feb 17 20:24:51 crc kubenswrapper[4793]: I0217 20:24:51.348948 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7d868632-904a-4ba2-8d3a-4e3d0d8de4b0-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"7d868632-904a-4ba2-8d3a-4e3d0d8de4b0\") " pod="openstack/rabbitmq-notifications-server-0" Feb 17 20:24:51 crc kubenswrapper[4793]: I0217 20:24:51.349386 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7d868632-904a-4ba2-8d3a-4e3d0d8de4b0-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"7d868632-904a-4ba2-8d3a-4e3d0d8de4b0\") " pod="openstack/rabbitmq-notifications-server-0" Feb 17 
20:24:51 crc kubenswrapper[4793]: I0217 20:24:51.349610 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"7d868632-904a-4ba2-8d3a-4e3d0d8de4b0\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-notifications-server-0"
Feb 17 20:24:51 crc kubenswrapper[4793]: I0217 20:24:51.351000 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7d868632-904a-4ba2-8d3a-4e3d0d8de4b0-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"7d868632-904a-4ba2-8d3a-4e3d0d8de4b0\") " pod="openstack/rabbitmq-notifications-server-0"
Feb 17 20:24:51 crc kubenswrapper[4793]: I0217 20:24:51.351555 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7d868632-904a-4ba2-8d3a-4e3d0d8de4b0-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"7d868632-904a-4ba2-8d3a-4e3d0d8de4b0\") " pod="openstack/rabbitmq-notifications-server-0"
Feb 17 20:24:51 crc kubenswrapper[4793]: I0217 20:24:51.353002 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7d868632-904a-4ba2-8d3a-4e3d0d8de4b0-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"7d868632-904a-4ba2-8d3a-4e3d0d8de4b0\") " pod="openstack/rabbitmq-notifications-server-0"
Feb 17 20:24:51 crc kubenswrapper[4793]: I0217 20:24:51.353245 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7d868632-904a-4ba2-8d3a-4e3d0d8de4b0-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"7d868632-904a-4ba2-8d3a-4e3d0d8de4b0\") " pod="openstack/rabbitmq-notifications-server-0"
Feb 17 20:24:51 crc kubenswrapper[4793]: I0217 20:24:51.359573 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7d868632-904a-4ba2-8d3a-4e3d0d8de4b0-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"7d868632-904a-4ba2-8d3a-4e3d0d8de4b0\") " pod="openstack/rabbitmq-notifications-server-0"
Feb 17 20:24:51 crc kubenswrapper[4793]: I0217 20:24:51.360256 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7d868632-904a-4ba2-8d3a-4e3d0d8de4b0-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"7d868632-904a-4ba2-8d3a-4e3d0d8de4b0\") " pod="openstack/rabbitmq-notifications-server-0"
Feb 17 20:24:51 crc kubenswrapper[4793]: I0217 20:24:51.371401 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnmmm\" (UniqueName: \"kubernetes.io/projected/7d868632-904a-4ba2-8d3a-4e3d0d8de4b0-kube-api-access-pnmmm\") pod \"rabbitmq-notifications-server-0\" (UID: \"7d868632-904a-4ba2-8d3a-4e3d0d8de4b0\") " pod="openstack/rabbitmq-notifications-server-0"
Feb 17 20:24:51 crc kubenswrapper[4793]: I0217 20:24:51.373588 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"7d868632-904a-4ba2-8d3a-4e3d0d8de4b0\") " pod="openstack/rabbitmq-notifications-server-0"
Feb 17 20:24:51 crc kubenswrapper[4793]: I0217 20:24:51.375885 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7d868632-904a-4ba2-8d3a-4e3d0d8de4b0-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"7d868632-904a-4ba2-8d3a-4e3d0d8de4b0\") " pod="openstack/rabbitmq-notifications-server-0"
Feb 17 20:24:51 crc kubenswrapper[4793]: I0217 20:24:51.435658 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-notifications-server-0"
Feb 17 20:24:51 crc kubenswrapper[4793]: I0217 20:24:51.471887 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 17 20:24:52 crc kubenswrapper[4793]: I0217 20:24:52.192343 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"74e2a040-552e-4736-986f-2abac7315e6a","Type":"ContainerStarted","Data":"f74f2a617ffe0673ac814a61998a9c8fd217a231866f10b2e80a00d1c20c87c5"}
Feb 17 20:24:52 crc kubenswrapper[4793]: I0217 20:24:52.400252 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Feb 17 20:24:52 crc kubenswrapper[4793]: I0217 20:24:52.401839 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 17 20:24:52 crc kubenswrapper[4793]: I0217 20:24:52.404066 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Feb 17 20:24:52 crc kubenswrapper[4793]: I0217 20:24:52.404185 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-klvwz"
Feb 17 20:24:52 crc kubenswrapper[4793]: I0217 20:24:52.407849 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Feb 17 20:24:52 crc kubenswrapper[4793]: I0217 20:24:52.408248 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Feb 17 20:24:52 crc kubenswrapper[4793]: I0217 20:24:52.413401 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Feb 17 20:24:52 crc kubenswrapper[4793]: I0217 20:24:52.446074 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Feb 17 20:24:52 crc kubenswrapper[4793]: I0217 20:24:52.564380 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3e7eee19-fd63-4ae8-96d9-9fcd17718b6f-config-data-default\") pod \"openstack-galera-0\" (UID: \"3e7eee19-fd63-4ae8-96d9-9fcd17718b6f\") " pod="openstack/openstack-galera-0"
Feb 17 20:24:52 crc kubenswrapper[4793]: I0217 20:24:52.564420 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3e7eee19-fd63-4ae8-96d9-9fcd17718b6f-config-data-generated\") pod \"openstack-galera-0\" (UID: \"3e7eee19-fd63-4ae8-96d9-9fcd17718b6f\") " pod="openstack/openstack-galera-0"
Feb 17 20:24:52 crc kubenswrapper[4793]: I0217 20:24:52.564463 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3e7eee19-fd63-4ae8-96d9-9fcd17718b6f-kolla-config\") pod \"openstack-galera-0\" (UID: \"3e7eee19-fd63-4ae8-96d9-9fcd17718b6f\") " pod="openstack/openstack-galera-0"
Feb 17 20:24:52 crc kubenswrapper[4793]: I0217 20:24:52.564486 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e7eee19-fd63-4ae8-96d9-9fcd17718b6f-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"3e7eee19-fd63-4ae8-96d9-9fcd17718b6f\") " pod="openstack/openstack-galera-0"
Feb 17 20:24:52 crc kubenswrapper[4793]: I0217 20:24:52.564523 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e7eee19-fd63-4ae8-96d9-9fcd17718b6f-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"3e7eee19-fd63-4ae8-96d9-9fcd17718b6f\") " pod="openstack/openstack-galera-0"
Feb 17 20:24:52 crc kubenswrapper[4793]: I0217 20:24:52.564541 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"3e7eee19-fd63-4ae8-96d9-9fcd17718b6f\") " pod="openstack/openstack-galera-0"
Feb 17 20:24:52 crc kubenswrapper[4793]: I0217 20:24:52.564586 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e7eee19-fd63-4ae8-96d9-9fcd17718b6f-operator-scripts\") pod \"openstack-galera-0\" (UID: \"3e7eee19-fd63-4ae8-96d9-9fcd17718b6f\") " pod="openstack/openstack-galera-0"
Feb 17 20:24:52 crc kubenswrapper[4793]: I0217 20:24:52.564608 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvmft\" (UniqueName: \"kubernetes.io/projected/3e7eee19-fd63-4ae8-96d9-9fcd17718b6f-kube-api-access-fvmft\") pod \"openstack-galera-0\" (UID: \"3e7eee19-fd63-4ae8-96d9-9fcd17718b6f\") " pod="openstack/openstack-galera-0"
Feb 17 20:24:52 crc kubenswrapper[4793]: I0217 20:24:52.665850 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e7eee19-fd63-4ae8-96d9-9fcd17718b6f-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"3e7eee19-fd63-4ae8-96d9-9fcd17718b6f\") " pod="openstack/openstack-galera-0"
Feb 17 20:24:52 crc kubenswrapper[4793]: I0217 20:24:52.665890 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"3e7eee19-fd63-4ae8-96d9-9fcd17718b6f\") " pod="openstack/openstack-galera-0"
Feb 17 20:24:52 crc kubenswrapper[4793]: I0217 20:24:52.665948 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e7eee19-fd63-4ae8-96d9-9fcd17718b6f-operator-scripts\") pod \"openstack-galera-0\" (UID: \"3e7eee19-fd63-4ae8-96d9-9fcd17718b6f\") " pod="openstack/openstack-galera-0"
Feb 17 20:24:52 crc kubenswrapper[4793]: I0217 20:24:52.665971 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvmft\" (UniqueName: \"kubernetes.io/projected/3e7eee19-fd63-4ae8-96d9-9fcd17718b6f-kube-api-access-fvmft\") pod \"openstack-galera-0\" (UID: \"3e7eee19-fd63-4ae8-96d9-9fcd17718b6f\") " pod="openstack/openstack-galera-0"
Feb 17 20:24:52 crc kubenswrapper[4793]: I0217 20:24:52.665995 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3e7eee19-fd63-4ae8-96d9-9fcd17718b6f-config-data-default\") pod \"openstack-galera-0\" (UID: \"3e7eee19-fd63-4ae8-96d9-9fcd17718b6f\") " pod="openstack/openstack-galera-0"
Feb 17 20:24:52 crc kubenswrapper[4793]: I0217 20:24:52.666010 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3e7eee19-fd63-4ae8-96d9-9fcd17718b6f-config-data-generated\") pod \"openstack-galera-0\" (UID: \"3e7eee19-fd63-4ae8-96d9-9fcd17718b6f\") " pod="openstack/openstack-galera-0"
Feb 17 20:24:52 crc kubenswrapper[4793]: I0217 20:24:52.666059 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3e7eee19-fd63-4ae8-96d9-9fcd17718b6f-kolla-config\") pod \"openstack-galera-0\" (UID: \"3e7eee19-fd63-4ae8-96d9-9fcd17718b6f\") " pod="openstack/openstack-galera-0"
Feb 17 20:24:52 crc kubenswrapper[4793]: I0217 20:24:52.666296 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"3e7eee19-fd63-4ae8-96d9-9fcd17718b6f\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/openstack-galera-0"
Feb 17 20:24:52 crc kubenswrapper[4793]: I0217 20:24:52.666077 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e7eee19-fd63-4ae8-96d9-9fcd17718b6f-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"3e7eee19-fd63-4ae8-96d9-9fcd17718b6f\") " pod="openstack/openstack-galera-0"
Feb 17 20:24:52 crc kubenswrapper[4793]: I0217 20:24:52.668081 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3e7eee19-fd63-4ae8-96d9-9fcd17718b6f-config-data-default\") pod \"openstack-galera-0\" (UID: \"3e7eee19-fd63-4ae8-96d9-9fcd17718b6f\") " pod="openstack/openstack-galera-0"
Feb 17 20:24:52 crc kubenswrapper[4793]: I0217 20:24:52.668464 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3e7eee19-fd63-4ae8-96d9-9fcd17718b6f-config-data-generated\") pod \"openstack-galera-0\" (UID: \"3e7eee19-fd63-4ae8-96d9-9fcd17718b6f\") " pod="openstack/openstack-galera-0"
Feb 17 20:24:52 crc kubenswrapper[4793]: I0217 20:24:52.668584 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e7eee19-fd63-4ae8-96d9-9fcd17718b6f-operator-scripts\") pod \"openstack-galera-0\" (UID: \"3e7eee19-fd63-4ae8-96d9-9fcd17718b6f\") " pod="openstack/openstack-galera-0"
Feb 17 20:24:52 crc kubenswrapper[4793]: I0217 20:24:52.670607 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3e7eee19-fd63-4ae8-96d9-9fcd17718b6f-kolla-config\") pod \"openstack-galera-0\" (UID: \"3e7eee19-fd63-4ae8-96d9-9fcd17718b6f\") " pod="openstack/openstack-galera-0"
Feb 17 20:24:52 crc kubenswrapper[4793]: I0217 20:24:52.678384 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e7eee19-fd63-4ae8-96d9-9fcd17718b6f-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"3e7eee19-fd63-4ae8-96d9-9fcd17718b6f\") " pod="openstack/openstack-galera-0"
Feb 17 20:24:52 crc kubenswrapper[4793]: I0217 20:24:52.678706 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e7eee19-fd63-4ae8-96d9-9fcd17718b6f-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"3e7eee19-fd63-4ae8-96d9-9fcd17718b6f\") " pod="openstack/openstack-galera-0"
Feb 17 20:24:52 crc kubenswrapper[4793]: I0217 20:24:52.699519 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"3e7eee19-fd63-4ae8-96d9-9fcd17718b6f\") " pod="openstack/openstack-galera-0"
Feb 17 20:24:52 crc kubenswrapper[4793]: I0217 20:24:52.700182 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvmft\" (UniqueName: \"kubernetes.io/projected/3e7eee19-fd63-4ae8-96d9-9fcd17718b6f-kube-api-access-fvmft\") pod \"openstack-galera-0\" (UID: \"3e7eee19-fd63-4ae8-96d9-9fcd17718b6f\") " pod="openstack/openstack-galera-0"
Feb 17 20:24:52 crc kubenswrapper[4793]: I0217 20:24:52.732109 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 17 20:24:53 crc kubenswrapper[4793]: I0217 20:24:53.731412 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 17 20:24:53 crc kubenswrapper[4793]: I0217 20:24:53.733008 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Feb 17 20:24:53 crc kubenswrapper[4793]: I0217 20:24:53.734633 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-zzt5n"
Feb 17 20:24:53 crc kubenswrapper[4793]: I0217 20:24:53.736037 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Feb 17 20:24:53 crc kubenswrapper[4793]: I0217 20:24:53.736048 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Feb 17 20:24:53 crc kubenswrapper[4793]: I0217 20:24:53.736197 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Feb 17 20:24:53 crc kubenswrapper[4793]: I0217 20:24:53.745239 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 17 20:24:53 crc kubenswrapper[4793]: I0217 20:24:53.885187 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3d42eaf0-230c-4a64-9c52-5c4ea3ea81dd-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"3d42eaf0-230c-4a64-9c52-5c4ea3ea81dd\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 20:24:53 crc kubenswrapper[4793]: I0217 20:24:53.885232 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrznd\" (UniqueName: \"kubernetes.io/projected/3d42eaf0-230c-4a64-9c52-5c4ea3ea81dd-kube-api-access-nrznd\") pod \"openstack-cell1-galera-0\" (UID: \"3d42eaf0-230c-4a64-9c52-5c4ea3ea81dd\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 20:24:53 crc kubenswrapper[4793]: I0217 20:24:53.885259 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d42eaf0-230c-4a64-9c52-5c4ea3ea81dd-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"3d42eaf0-230c-4a64-9c52-5c4ea3ea81dd\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 20:24:53 crc kubenswrapper[4793]: I0217 20:24:53.885277 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d42eaf0-230c-4a64-9c52-5c4ea3ea81dd-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"3d42eaf0-230c-4a64-9c52-5c4ea3ea81dd\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 20:24:53 crc kubenswrapper[4793]: I0217 20:24:53.885472 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d42eaf0-230c-4a64-9c52-5c4ea3ea81dd-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"3d42eaf0-230c-4a64-9c52-5c4ea3ea81dd\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 20:24:53 crc kubenswrapper[4793]: I0217 20:24:53.885584 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3d42eaf0-230c-4a64-9c52-5c4ea3ea81dd-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"3d42eaf0-230c-4a64-9c52-5c4ea3ea81dd\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 20:24:53 crc kubenswrapper[4793]: I0217 20:24:53.885616 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3d42eaf0-230c-4a64-9c52-5c4ea3ea81dd-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"3d42eaf0-230c-4a64-9c52-5c4ea3ea81dd\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 20:24:53 crc kubenswrapper[4793]: I0217 20:24:53.885659 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"3d42eaf0-230c-4a64-9c52-5c4ea3ea81dd\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 20:24:53 crc kubenswrapper[4793]: I0217 20:24:53.895126 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Feb 17 20:24:53 crc kubenswrapper[4793]: I0217 20:24:53.896013 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Feb 17 20:24:53 crc kubenswrapper[4793]: I0217 20:24:53.900362 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Feb 17 20:24:53 crc kubenswrapper[4793]: I0217 20:24:53.900629 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-qh27z"
Feb 17 20:24:53 crc kubenswrapper[4793]: I0217 20:24:53.900771 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Feb 17 20:24:53 crc kubenswrapper[4793]: I0217 20:24:53.905788 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Feb 17 20:24:53 crc kubenswrapper[4793]: I0217 20:24:53.987496 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3d42eaf0-230c-4a64-9c52-5c4ea3ea81dd-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"3d42eaf0-230c-4a64-9c52-5c4ea3ea81dd\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 20:24:53 crc kubenswrapper[4793]: I0217 20:24:53.987541 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/80c301ad-155e-4276-83c4-9f17f530d792-config-data\") pod \"memcached-0\" (UID: \"80c301ad-155e-4276-83c4-9f17f530d792\") " pod="openstack/memcached-0"
Feb 17 20:24:53 crc kubenswrapper[4793]: I0217 20:24:53.987575 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrznd\" (UniqueName: \"kubernetes.io/projected/3d42eaf0-230c-4a64-9c52-5c4ea3ea81dd-kube-api-access-nrznd\") pod \"openstack-cell1-galera-0\" (UID: \"3d42eaf0-230c-4a64-9c52-5c4ea3ea81dd\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 20:24:53 crc kubenswrapper[4793]: I0217 20:24:53.987600 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d42eaf0-230c-4a64-9c52-5c4ea3ea81dd-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"3d42eaf0-230c-4a64-9c52-5c4ea3ea81dd\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 20:24:53 crc kubenswrapper[4793]: I0217 20:24:53.987617 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d42eaf0-230c-4a64-9c52-5c4ea3ea81dd-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"3d42eaf0-230c-4a64-9c52-5c4ea3ea81dd\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 20:24:53 crc kubenswrapper[4793]: I0217 20:24:53.987636 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf666\" (UniqueName: \"kubernetes.io/projected/80c301ad-155e-4276-83c4-9f17f530d792-kube-api-access-vf666\") pod \"memcached-0\" (UID: \"80c301ad-155e-4276-83c4-9f17f530d792\") " pod="openstack/memcached-0"
Feb 17 20:24:53 crc kubenswrapper[4793]: I0217 20:24:53.987667 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d42eaf0-230c-4a64-9c52-5c4ea3ea81dd-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"3d42eaf0-230c-4a64-9c52-5c4ea3ea81dd\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 20:24:53 crc kubenswrapper[4793]: I0217 20:24:53.987706 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/80c301ad-155e-4276-83c4-9f17f530d792-kolla-config\") pod \"memcached-0\" (UID: \"80c301ad-155e-4276-83c4-9f17f530d792\") " pod="openstack/memcached-0"
Feb 17 20:24:53 crc kubenswrapper[4793]: I0217 20:24:53.987732 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3d42eaf0-230c-4a64-9c52-5c4ea3ea81dd-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"3d42eaf0-230c-4a64-9c52-5c4ea3ea81dd\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 20:24:53 crc kubenswrapper[4793]: I0217 20:24:53.987749 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/80c301ad-155e-4276-83c4-9f17f530d792-memcached-tls-certs\") pod \"memcached-0\" (UID: \"80c301ad-155e-4276-83c4-9f17f530d792\") " pod="openstack/memcached-0"
Feb 17 20:24:53 crc kubenswrapper[4793]: I0217 20:24:53.987765 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3d42eaf0-230c-4a64-9c52-5c4ea3ea81dd-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"3d42eaf0-230c-4a64-9c52-5c4ea3ea81dd\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 20:24:53 crc kubenswrapper[4793]: I0217 20:24:53.987795 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"3d42eaf0-230c-4a64-9c52-5c4ea3ea81dd\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 20:24:53 crc kubenswrapper[4793]: I0217 20:24:53.987812 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80c301ad-155e-4276-83c4-9f17f530d792-combined-ca-bundle\") pod \"memcached-0\" (UID: \"80c301ad-155e-4276-83c4-9f17f530d792\") " pod="openstack/memcached-0"
Feb 17 20:24:53 crc kubenswrapper[4793]: I0217 20:24:53.988178 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3d42eaf0-230c-4a64-9c52-5c4ea3ea81dd-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"3d42eaf0-230c-4a64-9c52-5c4ea3ea81dd\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 20:24:53 crc kubenswrapper[4793]: I0217 20:24:53.988409 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"3d42eaf0-230c-4a64-9c52-5c4ea3ea81dd\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/openstack-cell1-galera-0"
Feb 17 20:24:53 crc kubenswrapper[4793]: I0217 20:24:53.988781 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3d42eaf0-230c-4a64-9c52-5c4ea3ea81dd-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"3d42eaf0-230c-4a64-9c52-5c4ea3ea81dd\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 20:24:53 crc kubenswrapper[4793]: I0217 20:24:53.989038 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3d42eaf0-230c-4a64-9c52-5c4ea3ea81dd-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"3d42eaf0-230c-4a64-9c52-5c4ea3ea81dd\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 20:24:53 crc kubenswrapper[4793]: I0217 20:24:53.989658 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d42eaf0-230c-4a64-9c52-5c4ea3ea81dd-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"3d42eaf0-230c-4a64-9c52-5c4ea3ea81dd\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 20:24:54 crc kubenswrapper[4793]: I0217 20:24:53.993039 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d42eaf0-230c-4a64-9c52-5c4ea3ea81dd-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"3d42eaf0-230c-4a64-9c52-5c4ea3ea81dd\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 20:24:54 crc kubenswrapper[4793]: I0217 20:24:54.010142 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d42eaf0-230c-4a64-9c52-5c4ea3ea81dd-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"3d42eaf0-230c-4a64-9c52-5c4ea3ea81dd\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 20:24:54 crc kubenswrapper[4793]: I0217 20:24:54.013788 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"3d42eaf0-230c-4a64-9c52-5c4ea3ea81dd\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 20:24:54 crc kubenswrapper[4793]: I0217 20:24:54.014316 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrznd\" (UniqueName: \"kubernetes.io/projected/3d42eaf0-230c-4a64-9c52-5c4ea3ea81dd-kube-api-access-nrznd\") pod \"openstack-cell1-galera-0\" (UID: \"3d42eaf0-230c-4a64-9c52-5c4ea3ea81dd\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 20:24:54 crc kubenswrapper[4793]: I0217 20:24:54.058625 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Feb 17 20:24:54 crc kubenswrapper[4793]: I0217 20:24:54.089071 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/80c301ad-155e-4276-83c4-9f17f530d792-kolla-config\") pod \"memcached-0\" (UID: \"80c301ad-155e-4276-83c4-9f17f530d792\") " pod="openstack/memcached-0"
Feb 17 20:24:54 crc kubenswrapper[4793]: I0217 20:24:54.089124 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/80c301ad-155e-4276-83c4-9f17f530d792-memcached-tls-certs\") pod \"memcached-0\" (UID: \"80c301ad-155e-4276-83c4-9f17f530d792\") " pod="openstack/memcached-0"
Feb 17 20:24:54 crc kubenswrapper[4793]: I0217 20:24:54.089157 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80c301ad-155e-4276-83c4-9f17f530d792-combined-ca-bundle\") pod \"memcached-0\" (UID: \"80c301ad-155e-4276-83c4-9f17f530d792\") " pod="openstack/memcached-0"
Feb 17 20:24:54 crc kubenswrapper[4793]: I0217 20:24:54.089209 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/80c301ad-155e-4276-83c4-9f17f530d792-config-data\") pod \"memcached-0\" (UID: \"80c301ad-155e-4276-83c4-9f17f530d792\") " pod="openstack/memcached-0"
Feb 17 20:24:54 crc kubenswrapper[4793]: I0217 20:24:54.089247 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf666\" (UniqueName: \"kubernetes.io/projected/80c301ad-155e-4276-83c4-9f17f530d792-kube-api-access-vf666\") pod \"memcached-0\" (UID: \"80c301ad-155e-4276-83c4-9f17f530d792\") " pod="openstack/memcached-0"
Feb 17 20:24:54 crc kubenswrapper[4793]: I0217 20:24:54.093797 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/80c301ad-155e-4276-83c4-9f17f530d792-kolla-config\") pod \"memcached-0\" (UID: \"80c301ad-155e-4276-83c4-9f17f530d792\") " pod="openstack/memcached-0"
Feb 17 20:24:54 crc kubenswrapper[4793]: I0217 20:24:54.095288 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/80c301ad-155e-4276-83c4-9f17f530d792-config-data\") pod \"memcached-0\" (UID: \"80c301ad-155e-4276-83c4-9f17f530d792\") " pod="openstack/memcached-0"
Feb 17 20:24:54 crc kubenswrapper[4793]: I0217 20:24:54.098464 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80c301ad-155e-4276-83c4-9f17f530d792-combined-ca-bundle\") pod \"memcached-0\" (UID: \"80c301ad-155e-4276-83c4-9f17f530d792\") " pod="openstack/memcached-0"
Feb 17 20:24:54 crc kubenswrapper[4793]: I0217 20:24:54.104977 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf666\" (UniqueName: \"kubernetes.io/projected/80c301ad-155e-4276-83c4-9f17f530d792-kube-api-access-vf666\") pod \"memcached-0\" (UID: \"80c301ad-155e-4276-83c4-9f17f530d792\") " pod="openstack/memcached-0"
Feb 17 20:24:54 crc kubenswrapper[4793]: I0217 20:24:54.118507 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/80c301ad-155e-4276-83c4-9f17f530d792-memcached-tls-certs\") pod \"memcached-0\" (UID: \"80c301ad-155e-4276-83c4-9f17f530d792\") " pod="openstack/memcached-0"
Feb 17 20:24:54 crc kubenswrapper[4793]: I0217 20:24:54.218863 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Feb 17 20:24:56 crc kubenswrapper[4793]: I0217 20:24:56.251052 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 17 20:24:56 crc kubenswrapper[4793]: I0217 20:24:56.252741 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 17 20:24:56 crc kubenswrapper[4793]: I0217 20:24:56.258751 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-mc8z8"
Feb 17 20:24:56 crc kubenswrapper[4793]: I0217 20:24:56.259007 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 17 20:24:56 crc kubenswrapper[4793]: I0217 20:24:56.335980 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49ksq\" (UniqueName: \"kubernetes.io/projected/abe64ae2-15f9-402d-984d-ea8f94bd480f-kube-api-access-49ksq\") pod \"kube-state-metrics-0\" (UID: \"abe64ae2-15f9-402d-984d-ea8f94bd480f\") " pod="openstack/kube-state-metrics-0"
Feb 17 20:24:56 crc kubenswrapper[4793]: I0217 20:24:56.437483 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49ksq\" (UniqueName: \"kubernetes.io/projected/abe64ae2-15f9-402d-984d-ea8f94bd480f-kube-api-access-49ksq\") pod \"kube-state-metrics-0\" (UID: \"abe64ae2-15f9-402d-984d-ea8f94bd480f\") " pod="openstack/kube-state-metrics-0"
Feb 17 20:24:56 crc kubenswrapper[4793]: I0217 20:24:56.461892 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49ksq\" (UniqueName: \"kubernetes.io/projected/abe64ae2-15f9-402d-984d-ea8f94bd480f-kube-api-access-49ksq\") pod \"kube-state-metrics-0\" (UID: \"abe64ae2-15f9-402d-984d-ea8f94bd480f\") " pod="openstack/kube-state-metrics-0"
Feb 17 20:24:56 crc kubenswrapper[4793]: I0217 20:24:56.594161 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 17 20:24:57 crc kubenswrapper[4793]: I0217 20:24:57.573219 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 17 20:24:57 crc kubenswrapper[4793]: I0217 20:24:57.580459 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 17 20:24:57 crc kubenswrapper[4793]: I0217 20:24:57.586130 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Feb 17 20:24:57 crc kubenswrapper[4793]: I0217 20:24:57.586537 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2"
Feb 17 20:24:57 crc kubenswrapper[4793]: I0217 20:24:57.586704 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-8pbn7"
Feb 17 20:24:57 crc kubenswrapper[4793]: I0217 20:24:57.586941 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1"
Feb 17 20:24:57 crc kubenswrapper[4793]: I0217 20:24:57.587763 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Feb 17 20:24:57 crc kubenswrapper[4793]: I0217 20:24:57.587780 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Feb 17 20:24:57 crc kubenswrapper[4793]: I0217 20:24:57.588061 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Feb 17 20:24:57 crc kubenswrapper[4793]: I0217 20:24:57.589941 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 17 20:24:57 crc kubenswrapper[4793]: I0217 20:24:57.596185 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Feb 17 20:24:57 crc kubenswrapper[4793]: I0217 20:24:57.658697 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/489cc350-87d7-42d2-ba31-b4bbc29b5b80-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"489cc350-87d7-42d2-ba31-b4bbc29b5b80\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 20:24:57 crc kubenswrapper[4793]: I0217 20:24:57.658767 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gclb\" (UniqueName: \"kubernetes.io/projected/489cc350-87d7-42d2-ba31-b4bbc29b5b80-kube-api-access-4gclb\") pod \"prometheus-metric-storage-0\" (UID: \"489cc350-87d7-42d2-ba31-b4bbc29b5b80\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 20:24:57 crc kubenswrapper[4793]: I0217 20:24:57.658808 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/489cc350-87d7-42d2-ba31-b4bbc29b5b80-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"489cc350-87d7-42d2-ba31-b4bbc29b5b80\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 20:24:57 crc kubenswrapper[4793]: I0217 20:24:57.658898 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/489cc350-87d7-42d2-ba31-b4bbc29b5b80-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"489cc350-87d7-42d2-ba31-b4bbc29b5b80\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 20:24:57 crc kubenswrapper[4793]: I0217 20:24:57.658944 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/489cc350-87d7-42d2-ba31-b4bbc29b5b80-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"489cc350-87d7-42d2-ba31-b4bbc29b5b80\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 20:24:57 crc kubenswrapper[4793]: I0217 20:24:57.659174 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/489cc350-87d7-42d2-ba31-b4bbc29b5b80-config\") pod \"prometheus-metric-storage-0\" (UID: \"489cc350-87d7-42d2-ba31-b4bbc29b5b80\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 20:24:57 crc kubenswrapper[4793]: I0217 20:24:57.659231 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/489cc350-87d7-42d2-ba31-b4bbc29b5b80-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"489cc350-87d7-42d2-ba31-b4bbc29b5b80\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 20:24:57 crc kubenswrapper[4793]: I0217 20:24:57.659315 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/489cc350-87d7-42d2-ba31-b4bbc29b5b80-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"489cc350-87d7-42d2-ba31-b4bbc29b5b80\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 20:24:57 crc kubenswrapper[4793]: I0217 20:24:57.659380 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/489cc350-87d7-42d2-ba31-b4bbc29b5b80-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"489cc350-87d7-42d2-ba31-b4bbc29b5b80\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 20:24:57 crc kubenswrapper[4793]: I0217 20:24:57.659419 4793 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e06ea14b-3004-4867-b11f-b167457cc525\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e06ea14b-3004-4867-b11f-b167457cc525\") pod \"prometheus-metric-storage-0\" (UID: \"489cc350-87d7-42d2-ba31-b4bbc29b5b80\") " pod="openstack/prometheus-metric-storage-0" Feb 17 20:24:57 crc kubenswrapper[4793]: I0217 20:24:57.760659 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/489cc350-87d7-42d2-ba31-b4bbc29b5b80-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"489cc350-87d7-42d2-ba31-b4bbc29b5b80\") " pod="openstack/prometheus-metric-storage-0" Feb 17 20:24:57 crc kubenswrapper[4793]: I0217 20:24:57.760753 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gclb\" (UniqueName: \"kubernetes.io/projected/489cc350-87d7-42d2-ba31-b4bbc29b5b80-kube-api-access-4gclb\") pod \"prometheus-metric-storage-0\" (UID: \"489cc350-87d7-42d2-ba31-b4bbc29b5b80\") " pod="openstack/prometheus-metric-storage-0" Feb 17 20:24:57 crc kubenswrapper[4793]: I0217 20:24:57.760783 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/489cc350-87d7-42d2-ba31-b4bbc29b5b80-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"489cc350-87d7-42d2-ba31-b4bbc29b5b80\") " pod="openstack/prometheus-metric-storage-0" Feb 17 20:24:57 crc kubenswrapper[4793]: I0217 20:24:57.760801 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/489cc350-87d7-42d2-ba31-b4bbc29b5b80-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"489cc350-87d7-42d2-ba31-b4bbc29b5b80\") 
" pod="openstack/prometheus-metric-storage-0" Feb 17 20:24:57 crc kubenswrapper[4793]: I0217 20:24:57.760825 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/489cc350-87d7-42d2-ba31-b4bbc29b5b80-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"489cc350-87d7-42d2-ba31-b4bbc29b5b80\") " pod="openstack/prometheus-metric-storage-0" Feb 17 20:24:57 crc kubenswrapper[4793]: I0217 20:24:57.760899 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/489cc350-87d7-42d2-ba31-b4bbc29b5b80-config\") pod \"prometheus-metric-storage-0\" (UID: \"489cc350-87d7-42d2-ba31-b4bbc29b5b80\") " pod="openstack/prometheus-metric-storage-0" Feb 17 20:24:57 crc kubenswrapper[4793]: I0217 20:24:57.760923 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/489cc350-87d7-42d2-ba31-b4bbc29b5b80-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"489cc350-87d7-42d2-ba31-b4bbc29b5b80\") " pod="openstack/prometheus-metric-storage-0" Feb 17 20:24:57 crc kubenswrapper[4793]: I0217 20:24:57.760946 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/489cc350-87d7-42d2-ba31-b4bbc29b5b80-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"489cc350-87d7-42d2-ba31-b4bbc29b5b80\") " pod="openstack/prometheus-metric-storage-0" Feb 17 20:24:57 crc kubenswrapper[4793]: I0217 20:24:57.760972 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/489cc350-87d7-42d2-ba31-b4bbc29b5b80-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"489cc350-87d7-42d2-ba31-b4bbc29b5b80\") " pod="openstack/prometheus-metric-storage-0" Feb 17 20:24:57 
crc kubenswrapper[4793]: I0217 20:24:57.760994 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e06ea14b-3004-4867-b11f-b167457cc525\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e06ea14b-3004-4867-b11f-b167457cc525\") pod \"prometheus-metric-storage-0\" (UID: \"489cc350-87d7-42d2-ba31-b4bbc29b5b80\") " pod="openstack/prometheus-metric-storage-0" Feb 17 20:24:57 crc kubenswrapper[4793]: I0217 20:24:57.762060 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/489cc350-87d7-42d2-ba31-b4bbc29b5b80-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"489cc350-87d7-42d2-ba31-b4bbc29b5b80\") " pod="openstack/prometheus-metric-storage-0" Feb 17 20:24:57 crc kubenswrapper[4793]: I0217 20:24:57.763713 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/489cc350-87d7-42d2-ba31-b4bbc29b5b80-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"489cc350-87d7-42d2-ba31-b4bbc29b5b80\") " pod="openstack/prometheus-metric-storage-0" Feb 17 20:24:57 crc kubenswrapper[4793]: I0217 20:24:57.764783 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/489cc350-87d7-42d2-ba31-b4bbc29b5b80-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"489cc350-87d7-42d2-ba31-b4bbc29b5b80\") " pod="openstack/prometheus-metric-storage-0" Feb 17 20:24:57 crc kubenswrapper[4793]: I0217 20:24:57.785537 4793 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 20:24:57 crc kubenswrapper[4793]: I0217 20:24:57.785607 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e06ea14b-3004-4867-b11f-b167457cc525\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e06ea14b-3004-4867-b11f-b167457cc525\") pod \"prometheus-metric-storage-0\" (UID: \"489cc350-87d7-42d2-ba31-b4bbc29b5b80\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3174b7bcfd494c95de787fa7079d37ded4941cf895f579caff106d0384cba7de/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 17 20:24:57 crc kubenswrapper[4793]: I0217 20:24:57.789126 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/489cc350-87d7-42d2-ba31-b4bbc29b5b80-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"489cc350-87d7-42d2-ba31-b4bbc29b5b80\") " pod="openstack/prometheus-metric-storage-0" Feb 17 20:24:57 crc kubenswrapper[4793]: I0217 20:24:57.789379 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/489cc350-87d7-42d2-ba31-b4bbc29b5b80-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"489cc350-87d7-42d2-ba31-b4bbc29b5b80\") " pod="openstack/prometheus-metric-storage-0" Feb 17 20:24:57 crc kubenswrapper[4793]: I0217 20:24:57.789950 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/489cc350-87d7-42d2-ba31-b4bbc29b5b80-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"489cc350-87d7-42d2-ba31-b4bbc29b5b80\") " pod="openstack/prometheus-metric-storage-0" Feb 17 20:24:57 crc kubenswrapper[4793]: I0217 20:24:57.790105 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/489cc350-87d7-42d2-ba31-b4bbc29b5b80-config\") pod 
\"prometheus-metric-storage-0\" (UID: \"489cc350-87d7-42d2-ba31-b4bbc29b5b80\") " pod="openstack/prometheus-metric-storage-0" Feb 17 20:24:57 crc kubenswrapper[4793]: I0217 20:24:57.791275 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/489cc350-87d7-42d2-ba31-b4bbc29b5b80-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"489cc350-87d7-42d2-ba31-b4bbc29b5b80\") " pod="openstack/prometheus-metric-storage-0" Feb 17 20:24:57 crc kubenswrapper[4793]: I0217 20:24:57.800491 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gclb\" (UniqueName: \"kubernetes.io/projected/489cc350-87d7-42d2-ba31-b4bbc29b5b80-kube-api-access-4gclb\") pod \"prometheus-metric-storage-0\" (UID: \"489cc350-87d7-42d2-ba31-b4bbc29b5b80\") " pod="openstack/prometheus-metric-storage-0" Feb 17 20:24:57 crc kubenswrapper[4793]: I0217 20:24:57.826499 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e06ea14b-3004-4867-b11f-b167457cc525\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e06ea14b-3004-4867-b11f-b167457cc525\") pod \"prometheus-metric-storage-0\" (UID: \"489cc350-87d7-42d2-ba31-b4bbc29b5b80\") " pod="openstack/prometheus-metric-storage-0" Feb 17 20:24:57 crc kubenswrapper[4793]: I0217 20:24:57.913153 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 17 20:24:59 crc kubenswrapper[4793]: I0217 20:24:59.093174 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-9pnx5"] Feb 17 20:24:59 crc kubenswrapper[4793]: I0217 20:24:59.094669 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-9pnx5" Feb 17 20:24:59 crc kubenswrapper[4793]: I0217 20:24:59.099532 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 17 20:24:59 crc kubenswrapper[4793]: I0217 20:24:59.099976 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-9x55c" Feb 17 20:24:59 crc kubenswrapper[4793]: I0217 20:24:59.100226 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 17 20:24:59 crc kubenswrapper[4793]: I0217 20:24:59.112219 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-hz6qr"] Feb 17 20:24:59 crc kubenswrapper[4793]: I0217 20:24:59.114006 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-hz6qr" Feb 17 20:24:59 crc kubenswrapper[4793]: I0217 20:24:59.130867 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9pnx5"] Feb 17 20:24:59 crc kubenswrapper[4793]: I0217 20:24:59.154801 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-hz6qr"] Feb 17 20:24:59 crc kubenswrapper[4793]: I0217 20:24:59.186098 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2gv6\" (UniqueName: \"kubernetes.io/projected/61bbd519-85d9-4466-abe0-e4c6664072c5-kube-api-access-r2gv6\") pod \"ovn-controller-ovs-hz6qr\" (UID: \"61bbd519-85d9-4466-abe0-e4c6664072c5\") " pod="openstack/ovn-controller-ovs-hz6qr" Feb 17 20:24:59 crc kubenswrapper[4793]: I0217 20:24:59.186174 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/61bbd519-85d9-4466-abe0-e4c6664072c5-scripts\") pod \"ovn-controller-ovs-hz6qr\" (UID: \"61bbd519-85d9-4466-abe0-e4c6664072c5\") 
" pod="openstack/ovn-controller-ovs-hz6qr" Feb 17 20:24:59 crc kubenswrapper[4793]: I0217 20:24:59.186241 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/330b8b28-b736-42a9-a430-40d75f6ec12d-ovn-controller-tls-certs\") pod \"ovn-controller-9pnx5\" (UID: \"330b8b28-b736-42a9-a430-40d75f6ec12d\") " pod="openstack/ovn-controller-9pnx5" Feb 17 20:24:59 crc kubenswrapper[4793]: I0217 20:24:59.186385 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/61bbd519-85d9-4466-abe0-e4c6664072c5-var-run\") pod \"ovn-controller-ovs-hz6qr\" (UID: \"61bbd519-85d9-4466-abe0-e4c6664072c5\") " pod="openstack/ovn-controller-ovs-hz6qr" Feb 17 20:24:59 crc kubenswrapper[4793]: I0217 20:24:59.186448 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74qr5\" (UniqueName: \"kubernetes.io/projected/330b8b28-b736-42a9-a430-40d75f6ec12d-kube-api-access-74qr5\") pod \"ovn-controller-9pnx5\" (UID: \"330b8b28-b736-42a9-a430-40d75f6ec12d\") " pod="openstack/ovn-controller-9pnx5" Feb 17 20:24:59 crc kubenswrapper[4793]: I0217 20:24:59.186493 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/330b8b28-b736-42a9-a430-40d75f6ec12d-var-run-ovn\") pod \"ovn-controller-9pnx5\" (UID: \"330b8b28-b736-42a9-a430-40d75f6ec12d\") " pod="openstack/ovn-controller-9pnx5" Feb 17 20:24:59 crc kubenswrapper[4793]: I0217 20:24:59.186516 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/61bbd519-85d9-4466-abe0-e4c6664072c5-etc-ovs\") pod \"ovn-controller-ovs-hz6qr\" (UID: \"61bbd519-85d9-4466-abe0-e4c6664072c5\") " 
pod="openstack/ovn-controller-ovs-hz6qr" Feb 17 20:24:59 crc kubenswrapper[4793]: I0217 20:24:59.186537 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/330b8b28-b736-42a9-a430-40d75f6ec12d-var-log-ovn\") pod \"ovn-controller-9pnx5\" (UID: \"330b8b28-b736-42a9-a430-40d75f6ec12d\") " pod="openstack/ovn-controller-9pnx5" Feb 17 20:24:59 crc kubenswrapper[4793]: I0217 20:24:59.186564 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/330b8b28-b736-42a9-a430-40d75f6ec12d-var-run\") pod \"ovn-controller-9pnx5\" (UID: \"330b8b28-b736-42a9-a430-40d75f6ec12d\") " pod="openstack/ovn-controller-9pnx5" Feb 17 20:24:59 crc kubenswrapper[4793]: I0217 20:24:59.186611 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/61bbd519-85d9-4466-abe0-e4c6664072c5-var-log\") pod \"ovn-controller-ovs-hz6qr\" (UID: \"61bbd519-85d9-4466-abe0-e4c6664072c5\") " pod="openstack/ovn-controller-ovs-hz6qr" Feb 17 20:24:59 crc kubenswrapper[4793]: I0217 20:24:59.186648 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/330b8b28-b736-42a9-a430-40d75f6ec12d-scripts\") pod \"ovn-controller-9pnx5\" (UID: \"330b8b28-b736-42a9-a430-40d75f6ec12d\") " pod="openstack/ovn-controller-9pnx5" Feb 17 20:24:59 crc kubenswrapper[4793]: I0217 20:24:59.186731 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/330b8b28-b736-42a9-a430-40d75f6ec12d-combined-ca-bundle\") pod \"ovn-controller-9pnx5\" (UID: \"330b8b28-b736-42a9-a430-40d75f6ec12d\") " pod="openstack/ovn-controller-9pnx5" Feb 17 20:24:59 crc 
kubenswrapper[4793]: I0217 20:24:59.186759 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/61bbd519-85d9-4466-abe0-e4c6664072c5-var-lib\") pod \"ovn-controller-ovs-hz6qr\" (UID: \"61bbd519-85d9-4466-abe0-e4c6664072c5\") " pod="openstack/ovn-controller-ovs-hz6qr" Feb 17 20:24:59 crc kubenswrapper[4793]: I0217 20:24:59.288364 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/330b8b28-b736-42a9-a430-40d75f6ec12d-combined-ca-bundle\") pod \"ovn-controller-9pnx5\" (UID: \"330b8b28-b736-42a9-a430-40d75f6ec12d\") " pod="openstack/ovn-controller-9pnx5" Feb 17 20:24:59 crc kubenswrapper[4793]: I0217 20:24:59.288400 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/61bbd519-85d9-4466-abe0-e4c6664072c5-var-lib\") pod \"ovn-controller-ovs-hz6qr\" (UID: \"61bbd519-85d9-4466-abe0-e4c6664072c5\") " pod="openstack/ovn-controller-ovs-hz6qr" Feb 17 20:24:59 crc kubenswrapper[4793]: I0217 20:24:59.288431 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2gv6\" (UniqueName: \"kubernetes.io/projected/61bbd519-85d9-4466-abe0-e4c6664072c5-kube-api-access-r2gv6\") pod \"ovn-controller-ovs-hz6qr\" (UID: \"61bbd519-85d9-4466-abe0-e4c6664072c5\") " pod="openstack/ovn-controller-ovs-hz6qr" Feb 17 20:24:59 crc kubenswrapper[4793]: I0217 20:24:59.288860 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/61bbd519-85d9-4466-abe0-e4c6664072c5-var-lib\") pod \"ovn-controller-ovs-hz6qr\" (UID: \"61bbd519-85d9-4466-abe0-e4c6664072c5\") " pod="openstack/ovn-controller-ovs-hz6qr" Feb 17 20:24:59 crc kubenswrapper[4793]: I0217 20:24:59.288973 4793 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/61bbd519-85d9-4466-abe0-e4c6664072c5-scripts\") pod \"ovn-controller-ovs-hz6qr\" (UID: \"61bbd519-85d9-4466-abe0-e4c6664072c5\") " pod="openstack/ovn-controller-ovs-hz6qr" Feb 17 20:24:59 crc kubenswrapper[4793]: I0217 20:24:59.289008 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/330b8b28-b736-42a9-a430-40d75f6ec12d-ovn-controller-tls-certs\") pod \"ovn-controller-9pnx5\" (UID: \"330b8b28-b736-42a9-a430-40d75f6ec12d\") " pod="openstack/ovn-controller-9pnx5" Feb 17 20:24:59 crc kubenswrapper[4793]: I0217 20:24:59.290768 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/61bbd519-85d9-4466-abe0-e4c6664072c5-var-run\") pod \"ovn-controller-ovs-hz6qr\" (UID: \"61bbd519-85d9-4466-abe0-e4c6664072c5\") " pod="openstack/ovn-controller-ovs-hz6qr" Feb 17 20:24:59 crc kubenswrapper[4793]: I0217 20:24:59.290992 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74qr5\" (UniqueName: \"kubernetes.io/projected/330b8b28-b736-42a9-a430-40d75f6ec12d-kube-api-access-74qr5\") pod \"ovn-controller-9pnx5\" (UID: \"330b8b28-b736-42a9-a430-40d75f6ec12d\") " pod="openstack/ovn-controller-9pnx5" Feb 17 20:24:59 crc kubenswrapper[4793]: I0217 20:24:59.290701 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/61bbd519-85d9-4466-abe0-e4c6664072c5-scripts\") pod \"ovn-controller-ovs-hz6qr\" (UID: \"61bbd519-85d9-4466-abe0-e4c6664072c5\") " pod="openstack/ovn-controller-ovs-hz6qr" Feb 17 20:24:59 crc kubenswrapper[4793]: I0217 20:24:59.291071 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/330b8b28-b736-42a9-a430-40d75f6ec12d-var-run-ovn\") pod 
\"ovn-controller-9pnx5\" (UID: \"330b8b28-b736-42a9-a430-40d75f6ec12d\") " pod="openstack/ovn-controller-9pnx5" Feb 17 20:24:59 crc kubenswrapper[4793]: I0217 20:24:59.290940 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/61bbd519-85d9-4466-abe0-e4c6664072c5-var-run\") pod \"ovn-controller-ovs-hz6qr\" (UID: \"61bbd519-85d9-4466-abe0-e4c6664072c5\") " pod="openstack/ovn-controller-ovs-hz6qr" Feb 17 20:24:59 crc kubenswrapper[4793]: I0217 20:24:59.291250 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/61bbd519-85d9-4466-abe0-e4c6664072c5-etc-ovs\") pod \"ovn-controller-ovs-hz6qr\" (UID: \"61bbd519-85d9-4466-abe0-e4c6664072c5\") " pod="openstack/ovn-controller-ovs-hz6qr" Feb 17 20:24:59 crc kubenswrapper[4793]: I0217 20:24:59.291269 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/330b8b28-b736-42a9-a430-40d75f6ec12d-var-log-ovn\") pod \"ovn-controller-9pnx5\" (UID: \"330b8b28-b736-42a9-a430-40d75f6ec12d\") " pod="openstack/ovn-controller-9pnx5" Feb 17 20:24:59 crc kubenswrapper[4793]: I0217 20:24:59.291361 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/330b8b28-b736-42a9-a430-40d75f6ec12d-var-run-ovn\") pod \"ovn-controller-9pnx5\" (UID: \"330b8b28-b736-42a9-a430-40d75f6ec12d\") " pod="openstack/ovn-controller-9pnx5" Feb 17 20:24:59 crc kubenswrapper[4793]: I0217 20:24:59.291404 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/330b8b28-b736-42a9-a430-40d75f6ec12d-var-run\") pod \"ovn-controller-9pnx5\" (UID: \"330b8b28-b736-42a9-a430-40d75f6ec12d\") " pod="openstack/ovn-controller-9pnx5" Feb 17 20:24:59 crc kubenswrapper[4793]: I0217 20:24:59.291525 4793 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/330b8b28-b736-42a9-a430-40d75f6ec12d-var-log-ovn\") pod \"ovn-controller-9pnx5\" (UID: \"330b8b28-b736-42a9-a430-40d75f6ec12d\") " pod="openstack/ovn-controller-9pnx5" Feb 17 20:24:59 crc kubenswrapper[4793]: I0217 20:24:59.291603 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/61bbd519-85d9-4466-abe0-e4c6664072c5-etc-ovs\") pod \"ovn-controller-ovs-hz6qr\" (UID: \"61bbd519-85d9-4466-abe0-e4c6664072c5\") " pod="openstack/ovn-controller-ovs-hz6qr" Feb 17 20:24:59 crc kubenswrapper[4793]: I0217 20:24:59.291671 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/330b8b28-b736-42a9-a430-40d75f6ec12d-var-run\") pod \"ovn-controller-9pnx5\" (UID: \"330b8b28-b736-42a9-a430-40d75f6ec12d\") " pod="openstack/ovn-controller-9pnx5" Feb 17 20:24:59 crc kubenswrapper[4793]: I0217 20:24:59.291727 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/61bbd519-85d9-4466-abe0-e4c6664072c5-var-log\") pod \"ovn-controller-ovs-hz6qr\" (UID: \"61bbd519-85d9-4466-abe0-e4c6664072c5\") " pod="openstack/ovn-controller-ovs-hz6qr" Feb 17 20:24:59 crc kubenswrapper[4793]: I0217 20:24:59.291774 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/330b8b28-b736-42a9-a430-40d75f6ec12d-scripts\") pod \"ovn-controller-9pnx5\" (UID: \"330b8b28-b736-42a9-a430-40d75f6ec12d\") " pod="openstack/ovn-controller-9pnx5" Feb 17 20:24:59 crc kubenswrapper[4793]: I0217 20:24:59.291877 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/61bbd519-85d9-4466-abe0-e4c6664072c5-var-log\") pod \"ovn-controller-ovs-hz6qr\" 
(UID: \"61bbd519-85d9-4466-abe0-e4c6664072c5\") " pod="openstack/ovn-controller-ovs-hz6qr" Feb 17 20:24:59 crc kubenswrapper[4793]: I0217 20:24:59.294365 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/330b8b28-b736-42a9-a430-40d75f6ec12d-combined-ca-bundle\") pod \"ovn-controller-9pnx5\" (UID: \"330b8b28-b736-42a9-a430-40d75f6ec12d\") " pod="openstack/ovn-controller-9pnx5" Feb 17 20:24:59 crc kubenswrapper[4793]: I0217 20:24:59.296094 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/330b8b28-b736-42a9-a430-40d75f6ec12d-scripts\") pod \"ovn-controller-9pnx5\" (UID: \"330b8b28-b736-42a9-a430-40d75f6ec12d\") " pod="openstack/ovn-controller-9pnx5" Feb 17 20:24:59 crc kubenswrapper[4793]: I0217 20:24:59.299099 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/330b8b28-b736-42a9-a430-40d75f6ec12d-ovn-controller-tls-certs\") pod \"ovn-controller-9pnx5\" (UID: \"330b8b28-b736-42a9-a430-40d75f6ec12d\") " pod="openstack/ovn-controller-9pnx5" Feb 17 20:24:59 crc kubenswrapper[4793]: I0217 20:24:59.311134 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2gv6\" (UniqueName: \"kubernetes.io/projected/61bbd519-85d9-4466-abe0-e4c6664072c5-kube-api-access-r2gv6\") pod \"ovn-controller-ovs-hz6qr\" (UID: \"61bbd519-85d9-4466-abe0-e4c6664072c5\") " pod="openstack/ovn-controller-ovs-hz6qr" Feb 17 20:24:59 crc kubenswrapper[4793]: I0217 20:24:59.311191 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74qr5\" (UniqueName: \"kubernetes.io/projected/330b8b28-b736-42a9-a430-40d75f6ec12d-kube-api-access-74qr5\") pod \"ovn-controller-9pnx5\" (UID: \"330b8b28-b736-42a9-a430-40d75f6ec12d\") " pod="openstack/ovn-controller-9pnx5" Feb 17 20:24:59 crc 
kubenswrapper[4793]: I0217 20:24:59.420588 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9pnx5"
Feb 17 20:24:59 crc kubenswrapper[4793]: I0217 20:24:59.439100 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-hz6qr"
Feb 17 20:25:00 crc kubenswrapper[4793]: I0217 20:25:00.385963 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 17 20:25:00 crc kubenswrapper[4793]: I0217 20:25:00.388445 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Feb 17 20:25:00 crc kubenswrapper[4793]: I0217 20:25:00.390859 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Feb 17 20:25:00 crc kubenswrapper[4793]: I0217 20:25:00.391823 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-8dxhb"
Feb 17 20:25:00 crc kubenswrapper[4793]: I0217 20:25:00.392088 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs"
Feb 17 20:25:00 crc kubenswrapper[4793]: I0217 20:25:00.392127 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Feb 17 20:25:00 crc kubenswrapper[4793]: I0217 20:25:00.392178 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics"
Feb 17 20:25:00 crc kubenswrapper[4793]: I0217 20:25:00.404264 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 17 20:25:00 crc kubenswrapper[4793]: I0217 20:25:00.510543 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn4fw\" (UniqueName: \"kubernetes.io/projected/a0c6a46f-8fbc-46f8-9eed-e31ea220c8dd-kube-api-access-zn4fw\") pod \"ovsdbserver-nb-0\" (UID: \"a0c6a46f-8fbc-46f8-9eed-e31ea220c8dd\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 20:25:00 crc kubenswrapper[4793]: I0217 20:25:00.510710 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a0c6a46f-8fbc-46f8-9eed-e31ea220c8dd-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"a0c6a46f-8fbc-46f8-9eed-e31ea220c8dd\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 20:25:00 crc kubenswrapper[4793]: I0217 20:25:00.510784 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0c6a46f-8fbc-46f8-9eed-e31ea220c8dd-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"a0c6a46f-8fbc-46f8-9eed-e31ea220c8dd\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 20:25:00 crc kubenswrapper[4793]: I0217 20:25:00.510836 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"a0c6a46f-8fbc-46f8-9eed-e31ea220c8dd\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 20:25:00 crc kubenswrapper[4793]: I0217 20:25:00.510870 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0c6a46f-8fbc-46f8-9eed-e31ea220c8dd-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a0c6a46f-8fbc-46f8-9eed-e31ea220c8dd\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 20:25:00 crc kubenswrapper[4793]: I0217 20:25:00.511128 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a0c6a46f-8fbc-46f8-9eed-e31ea220c8dd-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"a0c6a46f-8fbc-46f8-9eed-e31ea220c8dd\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 20:25:00 crc kubenswrapper[4793]: I0217 20:25:00.511212 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0c6a46f-8fbc-46f8-9eed-e31ea220c8dd-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a0c6a46f-8fbc-46f8-9eed-e31ea220c8dd\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 20:25:00 crc kubenswrapper[4793]: I0217 20:25:00.511284 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0c6a46f-8fbc-46f8-9eed-e31ea220c8dd-config\") pod \"ovsdbserver-nb-0\" (UID: \"a0c6a46f-8fbc-46f8-9eed-e31ea220c8dd\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 20:25:00 crc kubenswrapper[4793]: W0217 20:25:00.531000 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9eaaf278_e1ca_4fbe_ab46_478d8846293d.slice/crio-d365add9301fa453f7f5054af2aa5a761315ea75fff22a5f4ac44b6c18a259d4 WatchSource:0}: Error finding container d365add9301fa453f7f5054af2aa5a761315ea75fff22a5f4ac44b6c18a259d4: Status 404 returned error can't find the container with id d365add9301fa453f7f5054af2aa5a761315ea75fff22a5f4ac44b6c18a259d4
Feb 17 20:25:00 crc kubenswrapper[4793]: I0217 20:25:00.613390 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a0c6a46f-8fbc-46f8-9eed-e31ea220c8dd-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"a0c6a46f-8fbc-46f8-9eed-e31ea220c8dd\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 20:25:00 crc kubenswrapper[4793]: I0217 20:25:00.613452 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0c6a46f-8fbc-46f8-9eed-e31ea220c8dd-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a0c6a46f-8fbc-46f8-9eed-e31ea220c8dd\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 20:25:00 crc kubenswrapper[4793]: I0217 20:25:00.613488 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0c6a46f-8fbc-46f8-9eed-e31ea220c8dd-config\") pod \"ovsdbserver-nb-0\" (UID: \"a0c6a46f-8fbc-46f8-9eed-e31ea220c8dd\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 20:25:00 crc kubenswrapper[4793]: I0217 20:25:00.613556 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zn4fw\" (UniqueName: \"kubernetes.io/projected/a0c6a46f-8fbc-46f8-9eed-e31ea220c8dd-kube-api-access-zn4fw\") pod \"ovsdbserver-nb-0\" (UID: \"a0c6a46f-8fbc-46f8-9eed-e31ea220c8dd\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 20:25:00 crc kubenswrapper[4793]: I0217 20:25:00.613645 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a0c6a46f-8fbc-46f8-9eed-e31ea220c8dd-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"a0c6a46f-8fbc-46f8-9eed-e31ea220c8dd\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 20:25:00 crc kubenswrapper[4793]: I0217 20:25:00.613719 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0c6a46f-8fbc-46f8-9eed-e31ea220c8dd-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"a0c6a46f-8fbc-46f8-9eed-e31ea220c8dd\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 20:25:00 crc kubenswrapper[4793]: I0217 20:25:00.613775 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"a0c6a46f-8fbc-46f8-9eed-e31ea220c8dd\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 20:25:00 crc kubenswrapper[4793]: I0217 20:25:00.613809 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0c6a46f-8fbc-46f8-9eed-e31ea220c8dd-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a0c6a46f-8fbc-46f8-9eed-e31ea220c8dd\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 20:25:00 crc kubenswrapper[4793]: I0217 20:25:00.613944 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a0c6a46f-8fbc-46f8-9eed-e31ea220c8dd-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"a0c6a46f-8fbc-46f8-9eed-e31ea220c8dd\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 20:25:00 crc kubenswrapper[4793]: I0217 20:25:00.614199 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"a0c6a46f-8fbc-46f8-9eed-e31ea220c8dd\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/ovsdbserver-nb-0"
Feb 17 20:25:00 crc kubenswrapper[4793]: I0217 20:25:00.614961 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0c6a46f-8fbc-46f8-9eed-e31ea220c8dd-config\") pod \"ovsdbserver-nb-0\" (UID: \"a0c6a46f-8fbc-46f8-9eed-e31ea220c8dd\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 20:25:00 crc kubenswrapper[4793]: I0217 20:25:00.615240 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a0c6a46f-8fbc-46f8-9eed-e31ea220c8dd-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"a0c6a46f-8fbc-46f8-9eed-e31ea220c8dd\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 20:25:00 crc kubenswrapper[4793]: I0217 20:25:00.617825 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0c6a46f-8fbc-46f8-9eed-e31ea220c8dd-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a0c6a46f-8fbc-46f8-9eed-e31ea220c8dd\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 20:25:00 crc kubenswrapper[4793]: I0217 20:25:00.631421 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0c6a46f-8fbc-46f8-9eed-e31ea220c8dd-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a0c6a46f-8fbc-46f8-9eed-e31ea220c8dd\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 20:25:00 crc kubenswrapper[4793]: I0217 20:25:00.632143 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0c6a46f-8fbc-46f8-9eed-e31ea220c8dd-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"a0c6a46f-8fbc-46f8-9eed-e31ea220c8dd\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 20:25:00 crc kubenswrapper[4793]: I0217 20:25:00.643469 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn4fw\" (UniqueName: \"kubernetes.io/projected/a0c6a46f-8fbc-46f8-9eed-e31ea220c8dd-kube-api-access-zn4fw\") pod \"ovsdbserver-nb-0\" (UID: \"a0c6a46f-8fbc-46f8-9eed-e31ea220c8dd\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 20:25:00 crc kubenswrapper[4793]: I0217 20:25:00.675277 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"a0c6a46f-8fbc-46f8-9eed-e31ea220c8dd\") " pod="openstack/ovsdbserver-nb-0"
Feb 17 20:25:00 crc kubenswrapper[4793]: I0217 20:25:00.708211 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Feb 17 20:25:01 crc kubenswrapper[4793]: I0217 20:25:01.303268 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9eaaf278-e1ca-4fbe-ab46-478d8846293d","Type":"ContainerStarted","Data":"d365add9301fa453f7f5054af2aa5a761315ea75fff22a5f4ac44b6c18a259d4"}
Feb 17 20:25:03 crc kubenswrapper[4793]: I0217 20:25:03.362402 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 17 20:25:03 crc kubenswrapper[4793]: I0217 20:25:03.364141 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Feb 17 20:25:03 crc kubenswrapper[4793]: I0217 20:25:03.374812 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Feb 17 20:25:03 crc kubenswrapper[4793]: I0217 20:25:03.375345 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Feb 17 20:25:03 crc kubenswrapper[4793]: I0217 20:25:03.375406 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-b6dhd"
Feb 17 20:25:03 crc kubenswrapper[4793]: I0217 20:25:03.375429 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Feb 17 20:25:03 crc kubenswrapper[4793]: I0217 20:25:03.415515 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 17 20:25:03 crc kubenswrapper[4793]: I0217 20:25:03.466513 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3260c410-ab5c-441b-a84d-7b4480d96a17-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"3260c410-ab5c-441b-a84d-7b4480d96a17\") " pod="openstack/ovsdbserver-sb-0"
Feb 17 20:25:03 crc kubenswrapper[4793]: I0217 20:25:03.466593 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3260c410-ab5c-441b-a84d-7b4480d96a17-config\") pod \"ovsdbserver-sb-0\" (UID: \"3260c410-ab5c-441b-a84d-7b4480d96a17\") " pod="openstack/ovsdbserver-sb-0"
Feb 17 20:25:03 crc kubenswrapper[4793]: I0217 20:25:03.466666 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3260c410-ab5c-441b-a84d-7b4480d96a17-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"3260c410-ab5c-441b-a84d-7b4480d96a17\") " pod="openstack/ovsdbserver-sb-0"
Feb 17 20:25:03 crc kubenswrapper[4793]: I0217 20:25:03.466709 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3260c410-ab5c-441b-a84d-7b4480d96a17-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"3260c410-ab5c-441b-a84d-7b4480d96a17\") " pod="openstack/ovsdbserver-sb-0"
Feb 17 20:25:03 crc kubenswrapper[4793]: I0217 20:25:03.466735 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"3260c410-ab5c-441b-a84d-7b4480d96a17\") " pod="openstack/ovsdbserver-sb-0"
Feb 17 20:25:03 crc kubenswrapper[4793]: I0217 20:25:03.466769 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3260c410-ab5c-441b-a84d-7b4480d96a17-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3260c410-ab5c-441b-a84d-7b4480d96a17\") " pod="openstack/ovsdbserver-sb-0"
Feb 17 20:25:03 crc kubenswrapper[4793]: I0217 20:25:03.466805 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3260c410-ab5c-441b-a84d-7b4480d96a17-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3260c410-ab5c-441b-a84d-7b4480d96a17\") " pod="openstack/ovsdbserver-sb-0"
Feb 17 20:25:03 crc kubenswrapper[4793]: I0217 20:25:03.466849 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmfwl\" (UniqueName: \"kubernetes.io/projected/3260c410-ab5c-441b-a84d-7b4480d96a17-kube-api-access-dmfwl\") pod \"ovsdbserver-sb-0\" (UID: \"3260c410-ab5c-441b-a84d-7b4480d96a17\") " pod="openstack/ovsdbserver-sb-0"
Feb 17 20:25:03 crc kubenswrapper[4793]: I0217 20:25:03.568373 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3260c410-ab5c-441b-a84d-7b4480d96a17-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"3260c410-ab5c-441b-a84d-7b4480d96a17\") " pod="openstack/ovsdbserver-sb-0"
Feb 17 20:25:03 crc kubenswrapper[4793]: I0217 20:25:03.568430 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3260c410-ab5c-441b-a84d-7b4480d96a17-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"3260c410-ab5c-441b-a84d-7b4480d96a17\") " pod="openstack/ovsdbserver-sb-0"
Feb 17 20:25:03 crc kubenswrapper[4793]: I0217 20:25:03.568459 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"3260c410-ab5c-441b-a84d-7b4480d96a17\") " pod="openstack/ovsdbserver-sb-0"
Feb 17 20:25:03 crc kubenswrapper[4793]: I0217 20:25:03.568492 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3260c410-ab5c-441b-a84d-7b4480d96a17-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3260c410-ab5c-441b-a84d-7b4480d96a17\") " pod="openstack/ovsdbserver-sb-0"
Feb 17 20:25:03 crc kubenswrapper[4793]: I0217 20:25:03.568526 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3260c410-ab5c-441b-a84d-7b4480d96a17-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3260c410-ab5c-441b-a84d-7b4480d96a17\") " pod="openstack/ovsdbserver-sb-0"
Feb 17 20:25:03 crc kubenswrapper[4793]: I0217 20:25:03.568569 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmfwl\" (UniqueName: \"kubernetes.io/projected/3260c410-ab5c-441b-a84d-7b4480d96a17-kube-api-access-dmfwl\") pod \"ovsdbserver-sb-0\" (UID: \"3260c410-ab5c-441b-a84d-7b4480d96a17\") " pod="openstack/ovsdbserver-sb-0"
Feb 17 20:25:03 crc kubenswrapper[4793]: I0217 20:25:03.568630 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3260c410-ab5c-441b-a84d-7b4480d96a17-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"3260c410-ab5c-441b-a84d-7b4480d96a17\") " pod="openstack/ovsdbserver-sb-0"
Feb 17 20:25:03 crc kubenswrapper[4793]: I0217 20:25:03.568670 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3260c410-ab5c-441b-a84d-7b4480d96a17-config\") pod \"ovsdbserver-sb-0\" (UID: \"3260c410-ab5c-441b-a84d-7b4480d96a17\") " pod="openstack/ovsdbserver-sb-0"
Feb 17 20:25:03 crc kubenswrapper[4793]: I0217 20:25:03.568816 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"3260c410-ab5c-441b-a84d-7b4480d96a17\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/ovsdbserver-sb-0"
Feb 17 20:25:03 crc kubenswrapper[4793]: I0217 20:25:03.569783 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3260c410-ab5c-441b-a84d-7b4480d96a17-config\") pod \"ovsdbserver-sb-0\" (UID: \"3260c410-ab5c-441b-a84d-7b4480d96a17\") " pod="openstack/ovsdbserver-sb-0"
Feb 17 20:25:03 crc kubenswrapper[4793]: I0217 20:25:03.571621 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3260c410-ab5c-441b-a84d-7b4480d96a17-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"3260c410-ab5c-441b-a84d-7b4480d96a17\") " pod="openstack/ovsdbserver-sb-0"
Feb 17 20:25:03 crc kubenswrapper[4793]: I0217 20:25:03.573628 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3260c410-ab5c-441b-a84d-7b4480d96a17-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"3260c410-ab5c-441b-a84d-7b4480d96a17\") " pod="openstack/ovsdbserver-sb-0"
Feb 17 20:25:03 crc kubenswrapper[4793]: I0217 20:25:03.575636 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3260c410-ab5c-441b-a84d-7b4480d96a17-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"3260c410-ab5c-441b-a84d-7b4480d96a17\") " pod="openstack/ovsdbserver-sb-0"
Feb 17 20:25:03 crc kubenswrapper[4793]: I0217 20:25:03.575720 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3260c410-ab5c-441b-a84d-7b4480d96a17-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3260c410-ab5c-441b-a84d-7b4480d96a17\") " pod="openstack/ovsdbserver-sb-0"
Feb 17 20:25:03 crc kubenswrapper[4793]: I0217 20:25:03.583579 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3260c410-ab5c-441b-a84d-7b4480d96a17-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3260c410-ab5c-441b-a84d-7b4480d96a17\") " pod="openstack/ovsdbserver-sb-0"
Feb 17 20:25:03 crc kubenswrapper[4793]: I0217 20:25:03.586975 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmfwl\" (UniqueName: \"kubernetes.io/projected/3260c410-ab5c-441b-a84d-7b4480d96a17-kube-api-access-dmfwl\") pod \"ovsdbserver-sb-0\" (UID: \"3260c410-ab5c-441b-a84d-7b4480d96a17\") " pod="openstack/ovsdbserver-sb-0"
Feb 17 20:25:03 crc kubenswrapper[4793]: I0217 20:25:03.590104 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"3260c410-ab5c-441b-a84d-7b4480d96a17\") " pod="openstack/ovsdbserver-sb-0"
Feb 17 20:25:03 crc kubenswrapper[4793]: I0217 20:25:03.685283 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Feb 17 20:25:09 crc kubenswrapper[4793]: I0217 20:25:09.525595 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-notifications-server-0"]
Feb 17 20:25:14 crc kubenswrapper[4793]: E0217 20:25:14.876175 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest"
Feb 17 20:25:14 crc kubenswrapper[4793]: E0217 20:25:14.876498 4793 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest"
Feb 17 20:25:14 crc kubenswrapper[4793]: E0217 20:25:14.876843 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:38.102.83.80:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rjknx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(74e2a040-552e-4736-986f-2abac7315e6a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 17 20:25:14 crc kubenswrapper[4793]: E0217 20:25:14.878607 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="74e2a040-552e-4736-986f-2abac7315e6a"
Feb 17 20:25:15 crc kubenswrapper[4793]: E0217 20:25:15.421332 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.80:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest\\\"\"" pod="openstack/rabbitmq-server-0" podUID="74e2a040-552e-4736-986f-2abac7315e6a"
Feb 17 20:25:15 crc kubenswrapper[4793]: W0217 20:25:15.541905 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d868632_904a_4ba2_8d3a_4e3d0d8de4b0.slice/crio-65556bde00dd5fd58dd20b03275c7a54dd634b2fe4f6e3e6e2d0b86ff0f374ed WatchSource:0}: Error finding container 65556bde00dd5fd58dd20b03275c7a54dd634b2fe4f6e3e6e2d0b86ff0f374ed: Status 404 returned error can't find the container with id 65556bde00dd5fd58dd20b03275c7a54dd634b2fe4f6e3e6e2d0b86ff0f374ed
Feb 17 20:25:15 crc kubenswrapper[4793]: E0217 20:25:15.551811 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/podified-master-centos10/openstack-neutron-server:watcher_latest"
Feb 17 20:25:15 crc kubenswrapper[4793]: E0217 20:25:15.551827 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/podified-master-centos10/openstack-neutron-server:watcher_latest"
Feb 17 20:25:15 crc kubenswrapper[4793]: E0217 20:25:15.551867 4793 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/podified-master-centos10/openstack-neutron-server:watcher_latest"
Feb 17 20:25:15 crc kubenswrapper[4793]: E0217 20:25:15.551893 4793 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/podified-master-centos10/openstack-neutron-server:watcher_latest"
Feb 17 20:25:15 crc kubenswrapper[4793]: E0217 20:25:15.552024 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.80:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vr8md,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78ff9dfd65-kmwt8_openstack(42837205-82b2-4e22-91a1-4ccf8b4d6126): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 17 20:25:15 crc kubenswrapper[4793]: E0217 20:25:15.551993 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.80:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-prk5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5877d9b675-q4pn2_openstack(db7eeec4-9d65-4323-a812-9719a53c5d9e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 17 20:25:15 crc kubenswrapper[4793]: E0217 20:25:15.553214 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78ff9dfd65-kmwt8" podUID="42837205-82b2-4e22-91a1-4ccf8b4d6126"
Feb 17 20:25:15 crc kubenswrapper[4793]: E0217 20:25:15.553290 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5877d9b675-q4pn2" podUID="db7eeec4-9d65-4323-a812-9719a53c5d9e"
Feb 17 20:25:15 crc kubenswrapper[4793]: E0217 20:25:15.562110 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/podified-master-centos10/openstack-neutron-server:watcher_latest"
Feb 17 20:25:15 crc kubenswrapper[4793]: E0217 20:25:15.562151 4793 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/podified-master-centos10/openstack-neutron-server:watcher_latest"
Feb 17 20:25:15 crc kubenswrapper[4793]: E0217 20:25:15.562256 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.80:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g7xzs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6f49bcf4c9-kc9bm_openstack(c9754f47-8a2f-4878-a69e-051c0a2ec74e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 17 20:25:15 crc kubenswrapper[4793]: E0217 20:25:15.563416 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-6f49bcf4c9-kc9bm" podUID="c9754f47-8a2f-4878-a69e-051c0a2ec74e"
Feb 17 20:25:15 crc kubenswrapper[4793]: E0217 20:25:15.592362 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/podified-master-centos10/openstack-neutron-server:watcher_latest"
Feb 17 20:25:15 crc kubenswrapper[4793]: E0217 20:25:15.592418 4793 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/podified-master-centos10/openstack-neutron-server:watcher_latest"
Feb 17 20:25:15 crc kubenswrapper[4793]: E0217 20:25:15.592572 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.80:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lfcz4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-574fdb7f99-44t8n_openstack(0032fa4d-287b-4471-917d-a032208b99e5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 17 20:25:15 crc kubenswrapper[4793]: E0217 20:25:15.593828 4793 pod_workers.go:1301] "Error
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-574fdb7f99-44t8n" podUID="0032fa4d-287b-4471-917d-a032208b99e5" Feb 17 20:25:15 crc kubenswrapper[4793]: E0217 20:25:15.603069 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Feb 17 20:25:15 crc kubenswrapper[4793]: E0217 20:25:15.603121 4793 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Feb 17 20:25:15 crc kubenswrapper[4793]: E0217 20:25:15.603238 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.80:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5c7h56dh5cfh8bh54fhbbhf4h5b9hdch67fhd7h55fh55fh6ch9h548h54ch665h647h6h8fhd6h5dfh5cdh58bh577h66fh695h5fbh55h77h5fcq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x4crp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-66f76cf86f-jkskm_openstack(ed1e6971-9bbf-42c0-8a55-7d508936e963): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 20:25:15 crc kubenswrapper[4793]: E0217 20:25:15.604462 4793 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-66f76cf86f-jkskm" podUID="ed1e6971-9bbf-42c0-8a55-7d508936e963" Feb 17 20:25:16 crc kubenswrapper[4793]: I0217 20:25:16.088027 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 17 20:25:16 crc kubenswrapper[4793]: I0217 20:25:16.223027 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9pnx5"] Feb 17 20:25:16 crc kubenswrapper[4793]: I0217 20:25:16.236836 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 17 20:25:16 crc kubenswrapper[4793]: I0217 20:25:16.254496 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 20:25:16 crc kubenswrapper[4793]: I0217 20:25:16.358874 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 17 20:25:16 crc kubenswrapper[4793]: I0217 20:25:16.426531 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"489cc350-87d7-42d2-ba31-b4bbc29b5b80","Type":"ContainerStarted","Data":"5f07d0bf7a2d4cf925f9cf9c5e606370eb7af70c8665fb6f82fa6a1b70382248"} Feb 17 20:25:16 crc kubenswrapper[4793]: I0217 20:25:16.432254 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9pnx5" event={"ID":"330b8b28-b736-42a9-a430-40d75f6ec12d","Type":"ContainerStarted","Data":"30ecd366d837e54af80569c3dde89725d59181b158ec012c2c4012ec00d76a9b"} Feb 17 20:25:16 crc kubenswrapper[4793]: I0217 20:25:16.435401 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"3e7eee19-fd63-4ae8-96d9-9fcd17718b6f","Type":"ContainerStarted","Data":"538af457e6ec109007be0ceca33de3254d1f5fd7c9694e1e64170540cc8ec1f5"} Feb 17 20:25:16 crc kubenswrapper[4793]: I0217 
20:25:16.436805 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"7d868632-904a-4ba2-8d3a-4e3d0d8de4b0","Type":"ContainerStarted","Data":"65556bde00dd5fd58dd20b03275c7a54dd634b2fe4f6e3e6e2d0b86ff0f374ed"} Feb 17 20:25:16 crc kubenswrapper[4793]: I0217 20:25:16.437826 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"80c301ad-155e-4276-83c4-9f17f530d792","Type":"ContainerStarted","Data":"64af26f3034ff7a7b65de4b2dffbc1fbc93ef8e3eceafe339bc238ddc06b306c"} Feb 17 20:25:16 crc kubenswrapper[4793]: E0217 20:25:16.439560 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.80:5001/podified-master-centos10/openstack-neutron-server:watcher_latest\\\"\"" pod="openstack/dnsmasq-dns-6f49bcf4c9-kc9bm" podUID="c9754f47-8a2f-4878-a69e-051c0a2ec74e" Feb 17 20:25:16 crc kubenswrapper[4793]: E0217 20:25:16.439626 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.80:5001/podified-master-centos10/openstack-neutron-server:watcher_latest\\\"\"" pod="openstack/dnsmasq-dns-66f76cf86f-jkskm" podUID="ed1e6971-9bbf-42c0-8a55-7d508936e963" Feb 17 20:25:16 crc kubenswrapper[4793]: I0217 20:25:16.441880 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 17 20:25:16 crc kubenswrapper[4793]: W0217 20:25:16.476759 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d42eaf0_230c_4a64_9c52_5c4ea3ea81dd.slice/crio-f7938a871ecd42aac0af9f5918331f56de616023302970d3ffc02f80bc7c94b6 WatchSource:0}: Error finding container f7938a871ecd42aac0af9f5918331f56de616023302970d3ffc02f80bc7c94b6: Status 404 returned error can't find the container with id 
f7938a871ecd42aac0af9f5918331f56de616023302970d3ffc02f80bc7c94b6 Feb 17 20:25:16 crc kubenswrapper[4793]: W0217 20:25:16.477356 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0c6a46f_8fbc_46f8_9eed_e31ea220c8dd.slice/crio-117dd28b1ae30b1beba70c7df9a53444500e96360d10e4eaea06adcb74b44b0b WatchSource:0}: Error finding container 117dd28b1ae30b1beba70c7df9a53444500e96360d10e4eaea06adcb74b44b0b: Status 404 returned error can't find the container with id 117dd28b1ae30b1beba70c7df9a53444500e96360d10e4eaea06adcb74b44b0b Feb 17 20:25:16 crc kubenswrapper[4793]: I0217 20:25:16.569891 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 20:25:16 crc kubenswrapper[4793]: I0217 20:25:16.613613 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 17 20:25:16 crc kubenswrapper[4793]: I0217 20:25:16.712790 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-hz6qr"] Feb 17 20:25:16 crc kubenswrapper[4793]: I0217 20:25:16.935722 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-574fdb7f99-44t8n" Feb 17 20:25:17 crc kubenswrapper[4793]: I0217 20:25:17.019145 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5877d9b675-q4pn2" Feb 17 20:25:17 crc kubenswrapper[4793]: I0217 20:25:17.041343 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prk5m\" (UniqueName: \"kubernetes.io/projected/db7eeec4-9d65-4323-a812-9719a53c5d9e-kube-api-access-prk5m\") pod \"db7eeec4-9d65-4323-a812-9719a53c5d9e\" (UID: \"db7eeec4-9d65-4323-a812-9719a53c5d9e\") " Feb 17 20:25:17 crc kubenswrapper[4793]: I0217 20:25:17.041399 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db7eeec4-9d65-4323-a812-9719a53c5d9e-dns-svc\") pod \"db7eeec4-9d65-4323-a812-9719a53c5d9e\" (UID: \"db7eeec4-9d65-4323-a812-9719a53c5d9e\") " Feb 17 20:25:17 crc kubenswrapper[4793]: I0217 20:25:17.041459 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfcz4\" (UniqueName: \"kubernetes.io/projected/0032fa4d-287b-4471-917d-a032208b99e5-kube-api-access-lfcz4\") pod \"0032fa4d-287b-4471-917d-a032208b99e5\" (UID: \"0032fa4d-287b-4471-917d-a032208b99e5\") " Feb 17 20:25:17 crc kubenswrapper[4793]: I0217 20:25:17.041551 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0032fa4d-287b-4471-917d-a032208b99e5-config\") pod \"0032fa4d-287b-4471-917d-a032208b99e5\" (UID: \"0032fa4d-287b-4471-917d-a032208b99e5\") " Feb 17 20:25:17 crc kubenswrapper[4793]: I0217 20:25:17.041588 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0032fa4d-287b-4471-917d-a032208b99e5-dns-svc\") pod \"0032fa4d-287b-4471-917d-a032208b99e5\" (UID: \"0032fa4d-287b-4471-917d-a032208b99e5\") " Feb 17 20:25:17 crc kubenswrapper[4793]: I0217 20:25:17.041637 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/db7eeec4-9d65-4323-a812-9719a53c5d9e-config\") pod \"db7eeec4-9d65-4323-a812-9719a53c5d9e\" (UID: \"db7eeec4-9d65-4323-a812-9719a53c5d9e\") " Feb 17 20:25:17 crc kubenswrapper[4793]: I0217 20:25:17.042112 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0032fa4d-287b-4471-917d-a032208b99e5-config" (OuterVolumeSpecName: "config") pod "0032fa4d-287b-4471-917d-a032208b99e5" (UID: "0032fa4d-287b-4471-917d-a032208b99e5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:25:17 crc kubenswrapper[4793]: I0217 20:25:17.042173 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0032fa4d-287b-4471-917d-a032208b99e5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0032fa4d-287b-4471-917d-a032208b99e5" (UID: "0032fa4d-287b-4471-917d-a032208b99e5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:25:17 crc kubenswrapper[4793]: I0217 20:25:17.042188 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db7eeec4-9d65-4323-a812-9719a53c5d9e-config" (OuterVolumeSpecName: "config") pod "db7eeec4-9d65-4323-a812-9719a53c5d9e" (UID: "db7eeec4-9d65-4323-a812-9719a53c5d9e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:25:17 crc kubenswrapper[4793]: I0217 20:25:17.042520 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db7eeec4-9d65-4323-a812-9719a53c5d9e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "db7eeec4-9d65-4323-a812-9719a53c5d9e" (UID: "db7eeec4-9d65-4323-a812-9719a53c5d9e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:25:17 crc kubenswrapper[4793]: I0217 20:25:17.046698 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db7eeec4-9d65-4323-a812-9719a53c5d9e-kube-api-access-prk5m" (OuterVolumeSpecName: "kube-api-access-prk5m") pod "db7eeec4-9d65-4323-a812-9719a53c5d9e" (UID: "db7eeec4-9d65-4323-a812-9719a53c5d9e"). InnerVolumeSpecName "kube-api-access-prk5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:25:17 crc kubenswrapper[4793]: I0217 20:25:17.048210 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0032fa4d-287b-4471-917d-a032208b99e5-kube-api-access-lfcz4" (OuterVolumeSpecName: "kube-api-access-lfcz4") pod "0032fa4d-287b-4471-917d-a032208b99e5" (UID: "0032fa4d-287b-4471-917d-a032208b99e5"). InnerVolumeSpecName "kube-api-access-lfcz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:25:17 crc kubenswrapper[4793]: I0217 20:25:17.143366 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db7eeec4-9d65-4323-a812-9719a53c5d9e-config\") on node \"crc\" DevicePath \"\"" Feb 17 20:25:17 crc kubenswrapper[4793]: I0217 20:25:17.143396 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prk5m\" (UniqueName: \"kubernetes.io/projected/db7eeec4-9d65-4323-a812-9719a53c5d9e-kube-api-access-prk5m\") on node \"crc\" DevicePath \"\"" Feb 17 20:25:17 crc kubenswrapper[4793]: I0217 20:25:17.143407 4793 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db7eeec4-9d65-4323-a812-9719a53c5d9e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 20:25:17 crc kubenswrapper[4793]: I0217 20:25:17.143416 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfcz4\" (UniqueName: 
\"kubernetes.io/projected/0032fa4d-287b-4471-917d-a032208b99e5-kube-api-access-lfcz4\") on node \"crc\" DevicePath \"\"" Feb 17 20:25:17 crc kubenswrapper[4793]: I0217 20:25:17.143428 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0032fa4d-287b-4471-917d-a032208b99e5-config\") on node \"crc\" DevicePath \"\"" Feb 17 20:25:17 crc kubenswrapper[4793]: I0217 20:25:17.143437 4793 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0032fa4d-287b-4471-917d-a032208b99e5-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 20:25:17 crc kubenswrapper[4793]: I0217 20:25:17.461629 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-hz6qr" event={"ID":"61bbd519-85d9-4466-abe0-e4c6664072c5","Type":"ContainerStarted","Data":"e93d3c13635881fbedf308b5769e939bde4cc6c2ddee4dce190cfb51d21d05e2"} Feb 17 20:25:17 crc kubenswrapper[4793]: I0217 20:25:17.463428 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"3260c410-ab5c-441b-a84d-7b4480d96a17","Type":"ContainerStarted","Data":"262f8089931751c17521fa6246707ff2727e70c75e8740e4739b0824cf57ff8b"} Feb 17 20:25:17 crc kubenswrapper[4793]: I0217 20:25:17.464756 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"abe64ae2-15f9-402d-984d-ea8f94bd480f","Type":"ContainerStarted","Data":"a28219df7c591e38f659e59cfdee6650cc33499d4442e0e3f8e9521dfa408e76"} Feb 17 20:25:17 crc kubenswrapper[4793]: I0217 20:25:17.467086 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3d42eaf0-230c-4a64-9c52-5c4ea3ea81dd","Type":"ContainerStarted","Data":"f7938a871ecd42aac0af9f5918331f56de616023302970d3ffc02f80bc7c94b6"} Feb 17 20:25:17 crc kubenswrapper[4793]: I0217 20:25:17.470229 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovsdbserver-nb-0" event={"ID":"a0c6a46f-8fbc-46f8-9eed-e31ea220c8dd","Type":"ContainerStarted","Data":"117dd28b1ae30b1beba70c7df9a53444500e96360d10e4eaea06adcb74b44b0b"} Feb 17 20:25:17 crc kubenswrapper[4793]: I0217 20:25:17.472529 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-574fdb7f99-44t8n" event={"ID":"0032fa4d-287b-4471-917d-a032208b99e5","Type":"ContainerDied","Data":"04c3e5e7bb2f7127707e8c22be7384dda3ea59c92c2588c8b42c68b25ec1fd42"} Feb 17 20:25:17 crc kubenswrapper[4793]: I0217 20:25:17.472602 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-574fdb7f99-44t8n" Feb 17 20:25:17 crc kubenswrapper[4793]: I0217 20:25:17.483511 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"7d868632-904a-4ba2-8d3a-4e3d0d8de4b0","Type":"ContainerStarted","Data":"9b4628138b328b45c35a6bf8ded74846dae7395f2d9f207ec21a7442dd0fefe9"} Feb 17 20:25:17 crc kubenswrapper[4793]: I0217 20:25:17.485546 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5877d9b675-q4pn2" Feb 17 20:25:17 crc kubenswrapper[4793]: I0217 20:25:17.485561 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5877d9b675-q4pn2" event={"ID":"db7eeec4-9d65-4323-a812-9719a53c5d9e","Type":"ContainerDied","Data":"2433e758fbb1e22b9ba341c7e90a408c9c9bace17053e07057a71b351934445a"} Feb 17 20:25:17 crc kubenswrapper[4793]: I0217 20:25:17.489310 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9eaaf278-e1ca-4fbe-ab46-478d8846293d","Type":"ContainerStarted","Data":"2e3789ad83d9b9654744085f451c48d2166e66a20fc309b47ee01c24138e1c41"} Feb 17 20:25:17 crc kubenswrapper[4793]: I0217 20:25:17.582533 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-574fdb7f99-44t8n"] Feb 17 20:25:17 crc kubenswrapper[4793]: I0217 20:25:17.592753 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-574fdb7f99-44t8n"] Feb 17 20:25:17 crc kubenswrapper[4793]: I0217 20:25:17.631659 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5877d9b675-q4pn2"] Feb 17 20:25:17 crc kubenswrapper[4793]: I0217 20:25:17.639113 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5877d9b675-q4pn2"] Feb 17 20:25:18 crc kubenswrapper[4793]: I0217 20:25:18.448207 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78ff9dfd65-kmwt8" Feb 17 20:25:18 crc kubenswrapper[4793]: I0217 20:25:18.496251 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78ff9dfd65-kmwt8" Feb 17 20:25:18 crc kubenswrapper[4793]: I0217 20:25:18.496248 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78ff9dfd65-kmwt8" event={"ID":"42837205-82b2-4e22-91a1-4ccf8b4d6126","Type":"ContainerDied","Data":"b59f1f3a62d035dfe8491a7a2d4d0c946ef8cad63305dcbcef4487df22e773b9"} Feb 17 20:25:18 crc kubenswrapper[4793]: I0217 20:25:18.566584 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vr8md\" (UniqueName: \"kubernetes.io/projected/42837205-82b2-4e22-91a1-4ccf8b4d6126-kube-api-access-vr8md\") pod \"42837205-82b2-4e22-91a1-4ccf8b4d6126\" (UID: \"42837205-82b2-4e22-91a1-4ccf8b4d6126\") " Feb 17 20:25:18 crc kubenswrapper[4793]: I0217 20:25:18.566716 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42837205-82b2-4e22-91a1-4ccf8b4d6126-config\") pod \"42837205-82b2-4e22-91a1-4ccf8b4d6126\" (UID: \"42837205-82b2-4e22-91a1-4ccf8b4d6126\") " Feb 17 20:25:18 crc kubenswrapper[4793]: I0217 20:25:18.567277 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42837205-82b2-4e22-91a1-4ccf8b4d6126-config" (OuterVolumeSpecName: "config") pod "42837205-82b2-4e22-91a1-4ccf8b4d6126" (UID: "42837205-82b2-4e22-91a1-4ccf8b4d6126"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:25:18 crc kubenswrapper[4793]: I0217 20:25:18.567728 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42837205-82b2-4e22-91a1-4ccf8b4d6126-config\") on node \"crc\" DevicePath \"\"" Feb 17 20:25:18 crc kubenswrapper[4793]: I0217 20:25:18.576948 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42837205-82b2-4e22-91a1-4ccf8b4d6126-kube-api-access-vr8md" (OuterVolumeSpecName: "kube-api-access-vr8md") pod "42837205-82b2-4e22-91a1-4ccf8b4d6126" (UID: "42837205-82b2-4e22-91a1-4ccf8b4d6126"). InnerVolumeSpecName "kube-api-access-vr8md". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:25:18 crc kubenswrapper[4793]: I0217 20:25:18.671072 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vr8md\" (UniqueName: \"kubernetes.io/projected/42837205-82b2-4e22-91a1-4ccf8b4d6126-kube-api-access-vr8md\") on node \"crc\" DevicePath \"\"" Feb 17 20:25:18 crc kubenswrapper[4793]: I0217 20:25:18.854422 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78ff9dfd65-kmwt8"] Feb 17 20:25:18 crc kubenswrapper[4793]: I0217 20:25:18.860261 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78ff9dfd65-kmwt8"] Feb 17 20:25:19 crc kubenswrapper[4793]: I0217 20:25:19.550185 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0032fa4d-287b-4471-917d-a032208b99e5" path="/var/lib/kubelet/pods/0032fa4d-287b-4471-917d-a032208b99e5/volumes" Feb 17 20:25:19 crc kubenswrapper[4793]: I0217 20:25:19.551411 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42837205-82b2-4e22-91a1-4ccf8b4d6126" path="/var/lib/kubelet/pods/42837205-82b2-4e22-91a1-4ccf8b4d6126/volumes" Feb 17 20:25:19 crc kubenswrapper[4793]: I0217 20:25:19.552134 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="db7eeec4-9d65-4323-a812-9719a53c5d9e" path="/var/lib/kubelet/pods/db7eeec4-9d65-4323-a812-9719a53c5d9e/volumes" Feb 17 20:25:22 crc kubenswrapper[4793]: I0217 20:25:22.530080 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"80c301ad-155e-4276-83c4-9f17f530d792","Type":"ContainerStarted","Data":"c8819c44e09cd9d0bb6f419260f6fa4d49b9abc843d8c50ecd765107488e6374"} Feb 17 20:25:22 crc kubenswrapper[4793]: I0217 20:25:22.531011 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 17 20:25:22 crc kubenswrapper[4793]: I0217 20:25:22.579307 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=24.280534528 podStartE2EDuration="29.579274998s" podCreationTimestamp="2026-02-17 20:24:53 +0000 UTC" firstStartedPulling="2026-02-17 20:25:16.110792282 +0000 UTC m=+991.402490593" lastFinishedPulling="2026-02-17 20:25:21.409532752 +0000 UTC m=+996.701231063" observedRunningTime="2026-02-17 20:25:22.55639338 +0000 UTC m=+997.848091771" watchObservedRunningTime="2026-02-17 20:25:22.579274998 +0000 UTC m=+997.870973349" Feb 17 20:25:23 crc kubenswrapper[4793]: I0217 20:25:23.565102 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9pnx5" event={"ID":"330b8b28-b736-42a9-a430-40d75f6ec12d","Type":"ContainerStarted","Data":"4b12817aeccc3fbd566bab36c7df7b1b56d81b97f792fe362cf96eb19c5b81e4"} Feb 17 20:25:23 crc kubenswrapper[4793]: I0217 20:25:23.565489 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-9pnx5" Feb 17 20:25:23 crc kubenswrapper[4793]: I0217 20:25:23.565506 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"3260c410-ab5c-441b-a84d-7b4480d96a17","Type":"ContainerStarted","Data":"c5aa74e141cd431993667b83b19293566518563af7f146b7740e721251048fbb"} Feb 17 20:25:23 crc 
kubenswrapper[4793]: I0217 20:25:23.565521 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"3e7eee19-fd63-4ae8-96d9-9fcd17718b6f","Type":"ContainerStarted","Data":"20fdbc866cea6a2a6de36a7251590e33b138e8d211dc0fde20fcb9d171e3a5bf"} Feb 17 20:25:23 crc kubenswrapper[4793]: I0217 20:25:23.565549 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 17 20:25:23 crc kubenswrapper[4793]: I0217 20:25:23.565565 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"abe64ae2-15f9-402d-984d-ea8f94bd480f","Type":"ContainerStarted","Data":"b234972c617411246dff345746d75169b3198d916b5bc4dc4d4ea17efa86c4e5"} Feb 17 20:25:23 crc kubenswrapper[4793]: I0217 20:25:23.565577 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3d42eaf0-230c-4a64-9c52-5c4ea3ea81dd","Type":"ContainerStarted","Data":"bdddf4f042d7c0b03df06117887bac94381f77c74d7b85742a695cce0331e787"} Feb 17 20:25:23 crc kubenswrapper[4793]: I0217 20:25:23.565590 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"a0c6a46f-8fbc-46f8-9eed-e31ea220c8dd","Type":"ContainerStarted","Data":"4238baed5d5f66fdab0ede9b963960f1008ce5954f95e2e0d792ffb55736e199"} Feb 17 20:25:23 crc kubenswrapper[4793]: I0217 20:25:23.569396 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-hz6qr" event={"ID":"61bbd519-85d9-4466-abe0-e4c6664072c5","Type":"ContainerStarted","Data":"8c22d7e929c7b819478067b0281b1c39864ecfa05a91b242a56c201e899ad56a"} Feb 17 20:25:23 crc kubenswrapper[4793]: I0217 20:25:23.591317 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=21.135750293 podStartE2EDuration="27.591294157s" podCreationTimestamp="2026-02-17 20:24:56 +0000 UTC" 
firstStartedPulling="2026-02-17 20:25:16.627353189 +0000 UTC m=+991.919051500" lastFinishedPulling="2026-02-17 20:25:23.082897013 +0000 UTC m=+998.374595364" observedRunningTime="2026-02-17 20:25:23.585970544 +0000 UTC m=+998.877668865" watchObservedRunningTime="2026-02-17 20:25:23.591294157 +0000 UTC m=+998.882992488" Feb 17 20:25:23 crc kubenswrapper[4793]: I0217 20:25:23.596188 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-9pnx5" podStartSLOduration=19.279880962 podStartE2EDuration="24.596167538s" podCreationTimestamp="2026-02-17 20:24:59 +0000 UTC" firstStartedPulling="2026-02-17 20:25:16.236858715 +0000 UTC m=+991.528557026" lastFinishedPulling="2026-02-17 20:25:21.553145291 +0000 UTC m=+996.844843602" observedRunningTime="2026-02-17 20:25:23.566159172 +0000 UTC m=+998.857857503" watchObservedRunningTime="2026-02-17 20:25:23.596167538 +0000 UTC m=+998.887865849" Feb 17 20:25:24 crc kubenswrapper[4793]: I0217 20:25:24.581288 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"489cc350-87d7-42d2-ba31-b4bbc29b5b80","Type":"ContainerStarted","Data":"dbb094fbfdb518973e6d7d1f439a56874ad55a7c23ef160ff15219bf0d004333"} Feb 17 20:25:24 crc kubenswrapper[4793]: I0217 20:25:24.584359 4793 generic.go:334] "Generic (PLEG): container finished" podID="61bbd519-85d9-4466-abe0-e4c6664072c5" containerID="8c22d7e929c7b819478067b0281b1c39864ecfa05a91b242a56c201e899ad56a" exitCode=0 Feb 17 20:25:24 crc kubenswrapper[4793]: I0217 20:25:24.584478 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-hz6qr" event={"ID":"61bbd519-85d9-4466-abe0-e4c6664072c5","Type":"ContainerDied","Data":"8c22d7e929c7b819478067b0281b1c39864ecfa05a91b242a56c201e899ad56a"} Feb 17 20:25:25 crc kubenswrapper[4793]: I0217 20:25:25.595949 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"a0c6a46f-8fbc-46f8-9eed-e31ea220c8dd","Type":"ContainerStarted","Data":"61605471997b70ff0a7bc79ea4cd5a613aef33ce41d672b2419b3fdd2b1f1450"} Feb 17 20:25:25 crc kubenswrapper[4793]: I0217 20:25:25.599326 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-hz6qr" event={"ID":"61bbd519-85d9-4466-abe0-e4c6664072c5","Type":"ContainerStarted","Data":"f6213ec5a5f257e5b16371b18a47de8fba665775cf987fe4701ed9703f84aaca"} Feb 17 20:25:25 crc kubenswrapper[4793]: I0217 20:25:25.599360 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-hz6qr" event={"ID":"61bbd519-85d9-4466-abe0-e4c6664072c5","Type":"ContainerStarted","Data":"6ae709859b7a37288b1b32b0594df18197810eb7c58f51acefc8c42f20ab0e83"} Feb 17 20:25:25 crc kubenswrapper[4793]: I0217 20:25:25.599513 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-hz6qr" Feb 17 20:25:25 crc kubenswrapper[4793]: I0217 20:25:25.599558 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-hz6qr" Feb 17 20:25:25 crc kubenswrapper[4793]: I0217 20:25:25.602239 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"3260c410-ab5c-441b-a84d-7b4480d96a17","Type":"ContainerStarted","Data":"81fe3cfd867fcfb9cb67bdf81bf184d3b2eb2c1a4701490532521cd11d5e693f"} Feb 17 20:25:25 crc kubenswrapper[4793]: I0217 20:25:25.632887 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=18.449444476 podStartE2EDuration="26.632863297s" podCreationTimestamp="2026-02-17 20:24:59 +0000 UTC" firstStartedPulling="2026-02-17 20:25:16.490325804 +0000 UTC m=+991.782024125" lastFinishedPulling="2026-02-17 20:25:24.673744635 +0000 UTC m=+999.965442946" observedRunningTime="2026-02-17 20:25:25.622979462 +0000 UTC m=+1000.914677783" watchObservedRunningTime="2026-02-17 
20:25:25.632863297 +0000 UTC m=+1000.924561618" Feb 17 20:25:25 crc kubenswrapper[4793]: I0217 20:25:25.647958 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-hz6qr" podStartSLOduration=21.831049426 podStartE2EDuration="26.647939092s" podCreationTimestamp="2026-02-17 20:24:59 +0000 UTC" firstStartedPulling="2026-02-17 20:25:16.734807329 +0000 UTC m=+992.026505640" lastFinishedPulling="2026-02-17 20:25:21.551696995 +0000 UTC m=+996.843395306" observedRunningTime="2026-02-17 20:25:25.644328192 +0000 UTC m=+1000.936026513" watchObservedRunningTime="2026-02-17 20:25:25.647939092 +0000 UTC m=+1000.939637413" Feb 17 20:25:25 crc kubenswrapper[4793]: I0217 20:25:25.668140 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=15.691175053 podStartE2EDuration="23.668123814s" podCreationTimestamp="2026-02-17 20:25:02 +0000 UTC" firstStartedPulling="2026-02-17 20:25:16.696023245 +0000 UTC m=+991.987721556" lastFinishedPulling="2026-02-17 20:25:24.672972006 +0000 UTC m=+999.964670317" observedRunningTime="2026-02-17 20:25:25.661628542 +0000 UTC m=+1000.953326883" watchObservedRunningTime="2026-02-17 20:25:25.668123814 +0000 UTC m=+1000.959822145" Feb 17 20:25:25 crc kubenswrapper[4793]: I0217 20:25:25.709155 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 17 20:25:27 crc kubenswrapper[4793]: I0217 20:25:27.690230 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 17 20:25:27 crc kubenswrapper[4793]: I0217 20:25:27.708488 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 17 20:25:27 crc kubenswrapper[4793]: I0217 20:25:27.743803 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 17 20:25:27 crc 
kubenswrapper[4793]: I0217 20:25:27.767743 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 17 20:25:28 crc kubenswrapper[4793]: I0217 20:25:28.631449 4793 generic.go:334] "Generic (PLEG): container finished" podID="ed1e6971-9bbf-42c0-8a55-7d508936e963" containerID="1dabfa5589a149b94e36a79b52f10550218ddb7d267f4f94f458a828a7ac0cfe" exitCode=0 Feb 17 20:25:28 crc kubenswrapper[4793]: I0217 20:25:28.631533 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66f76cf86f-jkskm" event={"ID":"ed1e6971-9bbf-42c0-8a55-7d508936e963","Type":"ContainerDied","Data":"1dabfa5589a149b94e36a79b52f10550218ddb7d267f4f94f458a828a7ac0cfe"} Feb 17 20:25:28 crc kubenswrapper[4793]: I0217 20:25:28.633978 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"74e2a040-552e-4736-986f-2abac7315e6a","Type":"ContainerStarted","Data":"730d2c79651f19c8eaeeb42546c4e843d9972999df7621c641d7093cb41bc2f8"} Feb 17 20:25:28 crc kubenswrapper[4793]: I0217 20:25:28.634251 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 17 20:25:28 crc kubenswrapper[4793]: I0217 20:25:28.709370 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 17 20:25:28 crc kubenswrapper[4793]: I0217 20:25:28.722225 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 17 20:25:28 crc kubenswrapper[4793]: I0217 20:25:28.973974 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66f76cf86f-jkskm"] Feb 17 20:25:28 crc kubenswrapper[4793]: I0217 20:25:28.991236 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56758db5-9hprz"] Feb 17 20:25:28 crc kubenswrapper[4793]: I0217 20:25:28.992503 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56758db5-9hprz" Feb 17 20:25:28 crc kubenswrapper[4793]: I0217 20:25:28.997820 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.005260 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56758db5-9hprz"] Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.164153 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n7rs\" (UniqueName: \"kubernetes.io/projected/d0e86eee-e216-4934-86a4-0f87c29ad3e6-kube-api-access-9n7rs\") pod \"dnsmasq-dns-56758db5-9hprz\" (UID: \"d0e86eee-e216-4934-86a4-0f87c29ad3e6\") " pod="openstack/dnsmasq-dns-56758db5-9hprz" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.164304 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0e86eee-e216-4934-86a4-0f87c29ad3e6-config\") pod \"dnsmasq-dns-56758db5-9hprz\" (UID: \"d0e86eee-e216-4934-86a4-0f87c29ad3e6\") " pod="openstack/dnsmasq-dns-56758db5-9hprz" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.164454 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0e86eee-e216-4934-86a4-0f87c29ad3e6-dns-svc\") pod \"dnsmasq-dns-56758db5-9hprz\" (UID: \"d0e86eee-e216-4934-86a4-0f87c29ad3e6\") " pod="openstack/dnsmasq-dns-56758db5-9hprz" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.164602 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d0e86eee-e216-4934-86a4-0f87c29ad3e6-ovsdbserver-nb\") pod \"dnsmasq-dns-56758db5-9hprz\" (UID: \"d0e86eee-e216-4934-86a4-0f87c29ad3e6\") " pod="openstack/dnsmasq-dns-56758db5-9hprz" Feb 17 
20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.185251 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-7lrlb"] Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.186499 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-7lrlb" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.203010 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.221410 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-7lrlb"] Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.222823 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.265914 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n7rs\" (UniqueName: \"kubernetes.io/projected/d0e86eee-e216-4934-86a4-0f87c29ad3e6-kube-api-access-9n7rs\") pod \"dnsmasq-dns-56758db5-9hprz\" (UID: \"d0e86eee-e216-4934-86a4-0f87c29ad3e6\") " pod="openstack/dnsmasq-dns-56758db5-9hprz" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.265979 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0e86eee-e216-4934-86a4-0f87c29ad3e6-config\") pod \"dnsmasq-dns-56758db5-9hprz\" (UID: \"d0e86eee-e216-4934-86a4-0f87c29ad3e6\") " pod="openstack/dnsmasq-dns-56758db5-9hprz" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.266018 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0e86eee-e216-4934-86a4-0f87c29ad3e6-dns-svc\") pod \"dnsmasq-dns-56758db5-9hprz\" (UID: \"d0e86eee-e216-4934-86a4-0f87c29ad3e6\") " pod="openstack/dnsmasq-dns-56758db5-9hprz" Feb 17 20:25:29 
crc kubenswrapper[4793]: I0217 20:25:29.266069 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d0e86eee-e216-4934-86a4-0f87c29ad3e6-ovsdbserver-nb\") pod \"dnsmasq-dns-56758db5-9hprz\" (UID: \"d0e86eee-e216-4934-86a4-0f87c29ad3e6\") " pod="openstack/dnsmasq-dns-56758db5-9hprz" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.267031 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d0e86eee-e216-4934-86a4-0f87c29ad3e6-ovsdbserver-nb\") pod \"dnsmasq-dns-56758db5-9hprz\" (UID: \"d0e86eee-e216-4934-86a4-0f87c29ad3e6\") " pod="openstack/dnsmasq-dns-56758db5-9hprz" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.267285 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0e86eee-e216-4934-86a4-0f87c29ad3e6-config\") pod \"dnsmasq-dns-56758db5-9hprz\" (UID: \"d0e86eee-e216-4934-86a4-0f87c29ad3e6\") " pod="openstack/dnsmasq-dns-56758db5-9hprz" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.267578 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0e86eee-e216-4934-86a4-0f87c29ad3e6-dns-svc\") pod \"dnsmasq-dns-56758db5-9hprz\" (UID: \"d0e86eee-e216-4934-86a4-0f87c29ad3e6\") " pod="openstack/dnsmasq-dns-56758db5-9hprz" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.288703 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.290352 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.294038 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n7rs\" (UniqueName: \"kubernetes.io/projected/d0e86eee-e216-4934-86a4-0f87c29ad3e6-kube-api-access-9n7rs\") pod \"dnsmasq-dns-56758db5-9hprz\" (UID: \"d0e86eee-e216-4934-86a4-0f87c29ad3e6\") " pod="openstack/dnsmasq-dns-56758db5-9hprz" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.297202 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.297462 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.297607 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.297797 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-qk6rc" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.301238 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f49bcf4c9-kc9bm"] Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.314603 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.351571 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f5477558c-czdv8"] Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.352871 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f5477558c-czdv8" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.356321 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.367488 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/befba025-5e63-4459-9554-215ad72c467a-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-7lrlb\" (UID: \"befba025-5e63-4459-9554-215ad72c467a\") " pod="openstack/ovn-controller-metrics-7lrlb" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.367542 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/befba025-5e63-4459-9554-215ad72c467a-config\") pod \"ovn-controller-metrics-7lrlb\" (UID: \"befba025-5e63-4459-9554-215ad72c467a\") " pod="openstack/ovn-controller-metrics-7lrlb" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.367574 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/befba025-5e63-4459-9554-215ad72c467a-ovs-rundir\") pod \"ovn-controller-metrics-7lrlb\" (UID: \"befba025-5e63-4459-9554-215ad72c467a\") " pod="openstack/ovn-controller-metrics-7lrlb" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.367597 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krwll\" (UniqueName: \"kubernetes.io/projected/befba025-5e63-4459-9554-215ad72c467a-kube-api-access-krwll\") pod \"ovn-controller-metrics-7lrlb\" (UID: \"befba025-5e63-4459-9554-215ad72c467a\") " pod="openstack/ovn-controller-metrics-7lrlb" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.367621 4793 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/befba025-5e63-4459-9554-215ad72c467a-combined-ca-bundle\") pod \"ovn-controller-metrics-7lrlb\" (UID: \"befba025-5e63-4459-9554-215ad72c467a\") " pod="openstack/ovn-controller-metrics-7lrlb" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.367717 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/befba025-5e63-4459-9554-215ad72c467a-ovn-rundir\") pod \"ovn-controller-metrics-7lrlb\" (UID: \"befba025-5e63-4459-9554-215ad72c467a\") " pod="openstack/ovn-controller-metrics-7lrlb" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.367789 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56758db5-9hprz" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.392284 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f5477558c-czdv8"] Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.469357 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxfsq\" (UniqueName: \"kubernetes.io/projected/225fdbba-d13f-4102-9134-b3f6fef0a08f-kube-api-access-lxfsq\") pod \"ovn-northd-0\" (UID: \"225fdbba-d13f-4102-9134-b3f6fef0a08f\") " pod="openstack/ovn-northd-0" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.469628 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/475be634-cf46-4d25-922c-0908c735af65-dns-svc\") pod \"dnsmasq-dns-5f5477558c-czdv8\" (UID: \"475be634-cf46-4d25-922c-0908c735af65\") " pod="openstack/dnsmasq-dns-5f5477558c-czdv8" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.469652 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/225fdbba-d13f-4102-9134-b3f6fef0a08f-config\") pod \"ovn-northd-0\" (UID: \"225fdbba-d13f-4102-9134-b3f6fef0a08f\") " pod="openstack/ovn-northd-0" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.469674 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/475be634-cf46-4d25-922c-0908c735af65-ovsdbserver-nb\") pod \"dnsmasq-dns-5f5477558c-czdv8\" (UID: \"475be634-cf46-4d25-922c-0908c735af65\") " pod="openstack/dnsmasq-dns-5f5477558c-czdv8" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.469710 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/225fdbba-d13f-4102-9134-b3f6fef0a08f-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"225fdbba-d13f-4102-9134-b3f6fef0a08f\") " pod="openstack/ovn-northd-0" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.469740 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/475be634-cf46-4d25-922c-0908c735af65-ovsdbserver-sb\") pod \"dnsmasq-dns-5f5477558c-czdv8\" (UID: \"475be634-cf46-4d25-922c-0908c735af65\") " pod="openstack/dnsmasq-dns-5f5477558c-czdv8" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.469756 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/225fdbba-d13f-4102-9134-b3f6fef0a08f-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"225fdbba-d13f-4102-9134-b3f6fef0a08f\") " pod="openstack/ovn-northd-0" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.469782 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/befba025-5e63-4459-9554-215ad72c467a-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-7lrlb\" (UID: \"befba025-5e63-4459-9554-215ad72c467a\") " pod="openstack/ovn-controller-metrics-7lrlb" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.469870 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/475be634-cf46-4d25-922c-0908c735af65-config\") pod \"dnsmasq-dns-5f5477558c-czdv8\" (UID: \"475be634-cf46-4d25-922c-0908c735af65\") " pod="openstack/dnsmasq-dns-5f5477558c-czdv8" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.469944 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/befba025-5e63-4459-9554-215ad72c467a-config\") pod \"ovn-controller-metrics-7lrlb\" (UID: \"befba025-5e63-4459-9554-215ad72c467a\") " pod="openstack/ovn-controller-metrics-7lrlb" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.470026 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/befba025-5e63-4459-9554-215ad72c467a-ovs-rundir\") pod \"ovn-controller-metrics-7lrlb\" (UID: \"befba025-5e63-4459-9554-215ad72c467a\") " pod="openstack/ovn-controller-metrics-7lrlb" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.470067 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krwll\" (UniqueName: \"kubernetes.io/projected/befba025-5e63-4459-9554-215ad72c467a-kube-api-access-krwll\") pod \"ovn-controller-metrics-7lrlb\" (UID: \"befba025-5e63-4459-9554-215ad72c467a\") " pod="openstack/ovn-controller-metrics-7lrlb" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.470120 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/befba025-5e63-4459-9554-215ad72c467a-combined-ca-bundle\") pod \"ovn-controller-metrics-7lrlb\" (UID: \"befba025-5e63-4459-9554-215ad72c467a\") " pod="openstack/ovn-controller-metrics-7lrlb" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.470152 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/225fdbba-d13f-4102-9134-b3f6fef0a08f-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"225fdbba-d13f-4102-9134-b3f6fef0a08f\") " pod="openstack/ovn-northd-0" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.470202 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/225fdbba-d13f-4102-9134-b3f6fef0a08f-scripts\") pod \"ovn-northd-0\" (UID: \"225fdbba-d13f-4102-9134-b3f6fef0a08f\") " pod="openstack/ovn-northd-0" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.470250 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq5b5\" (UniqueName: \"kubernetes.io/projected/475be634-cf46-4d25-922c-0908c735af65-kube-api-access-cq5b5\") pod \"dnsmasq-dns-5f5477558c-czdv8\" (UID: \"475be634-cf46-4d25-922c-0908c735af65\") " pod="openstack/dnsmasq-dns-5f5477558c-czdv8" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.470308 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/225fdbba-d13f-4102-9134-b3f6fef0a08f-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"225fdbba-d13f-4102-9134-b3f6fef0a08f\") " pod="openstack/ovn-northd-0" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.470352 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/befba025-5e63-4459-9554-215ad72c467a-ovn-rundir\") pod 
\"ovn-controller-metrics-7lrlb\" (UID: \"befba025-5e63-4459-9554-215ad72c467a\") " pod="openstack/ovn-controller-metrics-7lrlb" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.470439 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/befba025-5e63-4459-9554-215ad72c467a-ovs-rundir\") pod \"ovn-controller-metrics-7lrlb\" (UID: \"befba025-5e63-4459-9554-215ad72c467a\") " pod="openstack/ovn-controller-metrics-7lrlb" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.470482 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/befba025-5e63-4459-9554-215ad72c467a-ovn-rundir\") pod \"ovn-controller-metrics-7lrlb\" (UID: \"befba025-5e63-4459-9554-215ad72c467a\") " pod="openstack/ovn-controller-metrics-7lrlb" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.471134 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/befba025-5e63-4459-9554-215ad72c467a-config\") pod \"ovn-controller-metrics-7lrlb\" (UID: \"befba025-5e63-4459-9554-215ad72c467a\") " pod="openstack/ovn-controller-metrics-7lrlb" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.473282 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/befba025-5e63-4459-9554-215ad72c467a-combined-ca-bundle\") pod \"ovn-controller-metrics-7lrlb\" (UID: \"befba025-5e63-4459-9554-215ad72c467a\") " pod="openstack/ovn-controller-metrics-7lrlb" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.474334 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/befba025-5e63-4459-9554-215ad72c467a-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-7lrlb\" (UID: \"befba025-5e63-4459-9554-215ad72c467a\") " 
pod="openstack/ovn-controller-metrics-7lrlb" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.488196 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krwll\" (UniqueName: \"kubernetes.io/projected/befba025-5e63-4459-9554-215ad72c467a-kube-api-access-krwll\") pod \"ovn-controller-metrics-7lrlb\" (UID: \"befba025-5e63-4459-9554-215ad72c467a\") " pod="openstack/ovn-controller-metrics-7lrlb" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.514067 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-7lrlb" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.571998 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq5b5\" (UniqueName: \"kubernetes.io/projected/475be634-cf46-4d25-922c-0908c735af65-kube-api-access-cq5b5\") pod \"dnsmasq-dns-5f5477558c-czdv8\" (UID: \"475be634-cf46-4d25-922c-0908c735af65\") " pod="openstack/dnsmasq-dns-5f5477558c-czdv8" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.572104 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/225fdbba-d13f-4102-9134-b3f6fef0a08f-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"225fdbba-d13f-4102-9134-b3f6fef0a08f\") " pod="openstack/ovn-northd-0" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.572158 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxfsq\" (UniqueName: \"kubernetes.io/projected/225fdbba-d13f-4102-9134-b3f6fef0a08f-kube-api-access-lxfsq\") pod \"ovn-northd-0\" (UID: \"225fdbba-d13f-4102-9134-b3f6fef0a08f\") " pod="openstack/ovn-northd-0" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.572214 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/475be634-cf46-4d25-922c-0908c735af65-dns-svc\") pod 
\"dnsmasq-dns-5f5477558c-czdv8\" (UID: \"475be634-cf46-4d25-922c-0908c735af65\") " pod="openstack/dnsmasq-dns-5f5477558c-czdv8" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.572239 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/225fdbba-d13f-4102-9134-b3f6fef0a08f-config\") pod \"ovn-northd-0\" (UID: \"225fdbba-d13f-4102-9134-b3f6fef0a08f\") " pod="openstack/ovn-northd-0" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.572286 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/475be634-cf46-4d25-922c-0908c735af65-ovsdbserver-nb\") pod \"dnsmasq-dns-5f5477558c-czdv8\" (UID: \"475be634-cf46-4d25-922c-0908c735af65\") " pod="openstack/dnsmasq-dns-5f5477558c-czdv8" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.572311 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/225fdbba-d13f-4102-9134-b3f6fef0a08f-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"225fdbba-d13f-4102-9134-b3f6fef0a08f\") " pod="openstack/ovn-northd-0" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.572351 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/225fdbba-d13f-4102-9134-b3f6fef0a08f-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"225fdbba-d13f-4102-9134-b3f6fef0a08f\") " pod="openstack/ovn-northd-0" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.572373 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/475be634-cf46-4d25-922c-0908c735af65-ovsdbserver-sb\") pod \"dnsmasq-dns-5f5477558c-czdv8\" (UID: \"475be634-cf46-4d25-922c-0908c735af65\") " pod="openstack/dnsmasq-dns-5f5477558c-czdv8" Feb 17 
20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.572425 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/475be634-cf46-4d25-922c-0908c735af65-config\") pod \"dnsmasq-dns-5f5477558c-czdv8\" (UID: \"475be634-cf46-4d25-922c-0908c735af65\") " pod="openstack/dnsmasq-dns-5f5477558c-czdv8" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.572592 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/225fdbba-d13f-4102-9134-b3f6fef0a08f-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"225fdbba-d13f-4102-9134-b3f6fef0a08f\") " pod="openstack/ovn-northd-0" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.572634 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/225fdbba-d13f-4102-9134-b3f6fef0a08f-scripts\") pod \"ovn-northd-0\" (UID: \"225fdbba-d13f-4102-9134-b3f6fef0a08f\") " pod="openstack/ovn-northd-0" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.573069 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/475be634-cf46-4d25-922c-0908c735af65-dns-svc\") pod \"dnsmasq-dns-5f5477558c-czdv8\" (UID: \"475be634-cf46-4d25-922c-0908c735af65\") " pod="openstack/dnsmasq-dns-5f5477558c-czdv8" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.573541 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/225fdbba-d13f-4102-9134-b3f6fef0a08f-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"225fdbba-d13f-4102-9134-b3f6fef0a08f\") " pod="openstack/ovn-northd-0" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.574082 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/225fdbba-d13f-4102-9134-b3f6fef0a08f-config\") pod \"ovn-northd-0\" (UID: \"225fdbba-d13f-4102-9134-b3f6fef0a08f\") " pod="openstack/ovn-northd-0" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.574299 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/475be634-cf46-4d25-922c-0908c735af65-ovsdbserver-sb\") pod \"dnsmasq-dns-5f5477558c-czdv8\" (UID: \"475be634-cf46-4d25-922c-0908c735af65\") " pod="openstack/dnsmasq-dns-5f5477558c-czdv8" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.575229 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/225fdbba-d13f-4102-9134-b3f6fef0a08f-scripts\") pod \"ovn-northd-0\" (UID: \"225fdbba-d13f-4102-9134-b3f6fef0a08f\") " pod="openstack/ovn-northd-0" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.575479 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/475be634-cf46-4d25-922c-0908c735af65-ovsdbserver-nb\") pod \"dnsmasq-dns-5f5477558c-czdv8\" (UID: \"475be634-cf46-4d25-922c-0908c735af65\") " pod="openstack/dnsmasq-dns-5f5477558c-czdv8" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.577743 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/225fdbba-d13f-4102-9134-b3f6fef0a08f-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"225fdbba-d13f-4102-9134-b3f6fef0a08f\") " pod="openstack/ovn-northd-0" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.577896 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/475be634-cf46-4d25-922c-0908c735af65-config\") pod \"dnsmasq-dns-5f5477558c-czdv8\" (UID: \"475be634-cf46-4d25-922c-0908c735af65\") " pod="openstack/dnsmasq-dns-5f5477558c-czdv8" Feb 17 
20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.588403 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/225fdbba-d13f-4102-9134-b3f6fef0a08f-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"225fdbba-d13f-4102-9134-b3f6fef0a08f\") " pod="openstack/ovn-northd-0" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.588908 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxfsq\" (UniqueName: \"kubernetes.io/projected/225fdbba-d13f-4102-9134-b3f6fef0a08f-kube-api-access-lxfsq\") pod \"ovn-northd-0\" (UID: \"225fdbba-d13f-4102-9134-b3f6fef0a08f\") " pod="openstack/ovn-northd-0" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.589084 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/225fdbba-d13f-4102-9134-b3f6fef0a08f-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"225fdbba-d13f-4102-9134-b3f6fef0a08f\") " pod="openstack/ovn-northd-0" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.589342 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq5b5\" (UniqueName: \"kubernetes.io/projected/475be634-cf46-4d25-922c-0908c735af65-kube-api-access-cq5b5\") pod \"dnsmasq-dns-5f5477558c-czdv8\" (UID: \"475be634-cf46-4d25-922c-0908c735af65\") " pod="openstack/dnsmasq-dns-5f5477558c-czdv8" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.634418 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.643131 4793 generic.go:334] "Generic (PLEG): container finished" podID="c9754f47-8a2f-4878-a69e-051c0a2ec74e" containerID="0a612b820c2f6fe3f67b037303cfd5737332d498bcc8104ef3f9ac38ee1226ee" exitCode=0 Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.643200 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f49bcf4c9-kc9bm" event={"ID":"c9754f47-8a2f-4878-a69e-051c0a2ec74e","Type":"ContainerDied","Data":"0a612b820c2f6fe3f67b037303cfd5737332d498bcc8104ef3f9ac38ee1226ee"} Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.658060 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66f76cf86f-jkskm" event={"ID":"ed1e6971-9bbf-42c0-8a55-7d508936e963","Type":"ContainerStarted","Data":"029d5f41344d4e1f65dd8f4dd2a57c382a45211a3dc341b51b5f2fd8da530e25"} Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.658582 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-66f76cf86f-jkskm" podUID="ed1e6971-9bbf-42c0-8a55-7d508936e963" containerName="dnsmasq-dns" containerID="cri-o://029d5f41344d4e1f65dd8f4dd2a57c382a45211a3dc341b51b5f2fd8da530e25" gracePeriod=10 Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.687949 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f5477558c-czdv8" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.711072 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-66f76cf86f-jkskm" podStartSLOduration=3.854000392 podStartE2EDuration="40.711052887s" podCreationTimestamp="2026-02-17 20:24:49 +0000 UTC" firstStartedPulling="2026-02-17 20:24:50.840930709 +0000 UTC m=+966.132629020" lastFinishedPulling="2026-02-17 20:25:27.697983194 +0000 UTC m=+1002.989681515" observedRunningTime="2026-02-17 20:25:29.70513088 +0000 UTC m=+1004.996829191" watchObservedRunningTime="2026-02-17 20:25:29.711052887 +0000 UTC m=+1005.002751198" Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.845959 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56758db5-9hprz"] Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.979789 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-7lrlb"] Feb 17 20:25:29 crc kubenswrapper[4793]: I0217 20:25:29.982340 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f49bcf4c9-kc9bm" Feb 17 20:25:30 crc kubenswrapper[4793]: W0217 20:25:30.003561 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbefba025_5e63_4459_9554_215ad72c467a.slice/crio-2783651d99711ec20241c435e65c48020b5da097721ed342116f3f191ec54e8d WatchSource:0}: Error finding container 2783651d99711ec20241c435e65c48020b5da097721ed342116f3f191ec54e8d: Status 404 returned error can't find the container with id 2783651d99711ec20241c435e65c48020b5da097721ed342116f3f191ec54e8d Feb 17 20:25:30 crc kubenswrapper[4793]: I0217 20:25:30.081245 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9754f47-8a2f-4878-a69e-051c0a2ec74e-dns-svc\") pod \"c9754f47-8a2f-4878-a69e-051c0a2ec74e\" (UID: \"c9754f47-8a2f-4878-a69e-051c0a2ec74e\") " Feb 17 20:25:30 crc kubenswrapper[4793]: I0217 20:25:30.081327 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9754f47-8a2f-4878-a69e-051c0a2ec74e-config\") pod \"c9754f47-8a2f-4878-a69e-051c0a2ec74e\" (UID: \"c9754f47-8a2f-4878-a69e-051c0a2ec74e\") " Feb 17 20:25:30 crc kubenswrapper[4793]: I0217 20:25:30.081402 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7xzs\" (UniqueName: \"kubernetes.io/projected/c9754f47-8a2f-4878-a69e-051c0a2ec74e-kube-api-access-g7xzs\") pod \"c9754f47-8a2f-4878-a69e-051c0a2ec74e\" (UID: \"c9754f47-8a2f-4878-a69e-051c0a2ec74e\") " Feb 17 20:25:30 crc kubenswrapper[4793]: I0217 20:25:30.085651 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9754f47-8a2f-4878-a69e-051c0a2ec74e-kube-api-access-g7xzs" (OuterVolumeSpecName: "kube-api-access-g7xzs") pod "c9754f47-8a2f-4878-a69e-051c0a2ec74e" (UID: 
"c9754f47-8a2f-4878-a69e-051c0a2ec74e"). InnerVolumeSpecName "kube-api-access-g7xzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:25:30 crc kubenswrapper[4793]: I0217 20:25:30.151752 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9754f47-8a2f-4878-a69e-051c0a2ec74e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c9754f47-8a2f-4878-a69e-051c0a2ec74e" (UID: "c9754f47-8a2f-4878-a69e-051c0a2ec74e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:25:30 crc kubenswrapper[4793]: I0217 20:25:30.152068 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9754f47-8a2f-4878-a69e-051c0a2ec74e-config" (OuterVolumeSpecName: "config") pod "c9754f47-8a2f-4878-a69e-051c0a2ec74e" (UID: "c9754f47-8a2f-4878-a69e-051c0a2ec74e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:25:30 crc kubenswrapper[4793]: I0217 20:25:30.187360 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7xzs\" (UniqueName: \"kubernetes.io/projected/c9754f47-8a2f-4878-a69e-051c0a2ec74e-kube-api-access-g7xzs\") on node \"crc\" DevicePath \"\"" Feb 17 20:25:30 crc kubenswrapper[4793]: I0217 20:25:30.187390 4793 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9754f47-8a2f-4878-a69e-051c0a2ec74e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 20:25:30 crc kubenswrapper[4793]: I0217 20:25:30.187399 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9754f47-8a2f-4878-a69e-051c0a2ec74e-config\") on node \"crc\" DevicePath \"\"" Feb 17 20:25:30 crc kubenswrapper[4793]: I0217 20:25:30.209706 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f5477558c-czdv8"] Feb 17 20:25:30 crc kubenswrapper[4793]: W0217 20:25:30.211288 
4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod475be634_cf46_4d25_922c_0908c735af65.slice/crio-1a2c0b78bd716143a8b038e96b123b57fc8672d62ce615ddef7f08ce229f6bdf WatchSource:0}: Error finding container 1a2c0b78bd716143a8b038e96b123b57fc8672d62ce615ddef7f08ce229f6bdf: Status 404 returned error can't find the container with id 1a2c0b78bd716143a8b038e96b123b57fc8672d62ce615ddef7f08ce229f6bdf Feb 17 20:25:30 crc kubenswrapper[4793]: I0217 20:25:30.228457 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-66f76cf86f-jkskm" Feb 17 20:25:30 crc kubenswrapper[4793]: I0217 20:25:30.238261 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66f76cf86f-jkskm" Feb 17 20:25:30 crc kubenswrapper[4793]: I0217 20:25:30.288145 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4crp\" (UniqueName: \"kubernetes.io/projected/ed1e6971-9bbf-42c0-8a55-7d508936e963-kube-api-access-x4crp\") pod \"ed1e6971-9bbf-42c0-8a55-7d508936e963\" (UID: \"ed1e6971-9bbf-42c0-8a55-7d508936e963\") " Feb 17 20:25:30 crc kubenswrapper[4793]: I0217 20:25:30.288342 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed1e6971-9bbf-42c0-8a55-7d508936e963-dns-svc\") pod \"ed1e6971-9bbf-42c0-8a55-7d508936e963\" (UID: \"ed1e6971-9bbf-42c0-8a55-7d508936e963\") " Feb 17 20:25:30 crc kubenswrapper[4793]: I0217 20:25:30.288381 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed1e6971-9bbf-42c0-8a55-7d508936e963-config\") pod \"ed1e6971-9bbf-42c0-8a55-7d508936e963\" (UID: \"ed1e6971-9bbf-42c0-8a55-7d508936e963\") " Feb 17 20:25:30 crc kubenswrapper[4793]: I0217 20:25:30.294179 4793 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed1e6971-9bbf-42c0-8a55-7d508936e963-kube-api-access-x4crp" (OuterVolumeSpecName: "kube-api-access-x4crp") pod "ed1e6971-9bbf-42c0-8a55-7d508936e963" (UID: "ed1e6971-9bbf-42c0-8a55-7d508936e963"). InnerVolumeSpecName "kube-api-access-x4crp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:25:30 crc kubenswrapper[4793]: W0217 20:25:30.317237 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod225fdbba_d13f_4102_9134_b3f6fef0a08f.slice/crio-49b1973623f377ee53146340ce47c94305b00160cf89478a5a89d8c55aae064b WatchSource:0}: Error finding container 49b1973623f377ee53146340ce47c94305b00160cf89478a5a89d8c55aae064b: Status 404 returned error can't find the container with id 49b1973623f377ee53146340ce47c94305b00160cf89478a5a89d8c55aae064b Feb 17 20:25:30 crc kubenswrapper[4793]: I0217 20:25:30.321681 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 17 20:25:30 crc kubenswrapper[4793]: I0217 20:25:30.346339 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed1e6971-9bbf-42c0-8a55-7d508936e963-config" (OuterVolumeSpecName: "config") pod "ed1e6971-9bbf-42c0-8a55-7d508936e963" (UID: "ed1e6971-9bbf-42c0-8a55-7d508936e963"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:25:30 crc kubenswrapper[4793]: I0217 20:25:30.349290 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed1e6971-9bbf-42c0-8a55-7d508936e963-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ed1e6971-9bbf-42c0-8a55-7d508936e963" (UID: "ed1e6971-9bbf-42c0-8a55-7d508936e963"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:25:30 crc kubenswrapper[4793]: I0217 20:25:30.390716 4793 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed1e6971-9bbf-42c0-8a55-7d508936e963-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 20:25:30 crc kubenswrapper[4793]: I0217 20:25:30.390751 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed1e6971-9bbf-42c0-8a55-7d508936e963-config\") on node \"crc\" DevicePath \"\"" Feb 17 20:25:30 crc kubenswrapper[4793]: I0217 20:25:30.390763 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4crp\" (UniqueName: \"kubernetes.io/projected/ed1e6971-9bbf-42c0-8a55-7d508936e963-kube-api-access-x4crp\") on node \"crc\" DevicePath \"\"" Feb 17 20:25:30 crc kubenswrapper[4793]: I0217 20:25:30.685673 4793 generic.go:334] "Generic (PLEG): container finished" podID="d0e86eee-e216-4934-86a4-0f87c29ad3e6" containerID="7fd1c2644bdce3983e88ab25c33341b36a2610cc27b214989653350f8d938c4a" exitCode=0 Feb 17 20:25:30 crc kubenswrapper[4793]: I0217 20:25:30.685730 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56758db5-9hprz" event={"ID":"d0e86eee-e216-4934-86a4-0f87c29ad3e6","Type":"ContainerDied","Data":"7fd1c2644bdce3983e88ab25c33341b36a2610cc27b214989653350f8d938c4a"} Feb 17 20:25:30 crc kubenswrapper[4793]: I0217 20:25:30.685765 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56758db5-9hprz" event={"ID":"d0e86eee-e216-4934-86a4-0f87c29ad3e6","Type":"ContainerStarted","Data":"edf94af910a00bb95539c66ab77c240d11c43b1ef49fe9d2e359fe199963ccf0"} Feb 17 20:25:30 crc kubenswrapper[4793]: I0217 20:25:30.694315 4793 generic.go:334] "Generic (PLEG): container finished" podID="475be634-cf46-4d25-922c-0908c735af65" containerID="7e986052492cff3af73cc5f27e567634f0ff0dc0bc9e57ff757728ca4335588b" exitCode=0 Feb 17 20:25:30 crc 
kubenswrapper[4793]: I0217 20:25:30.694660 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f5477558c-czdv8" event={"ID":"475be634-cf46-4d25-922c-0908c735af65","Type":"ContainerDied","Data":"7e986052492cff3af73cc5f27e567634f0ff0dc0bc9e57ff757728ca4335588b"} Feb 17 20:25:30 crc kubenswrapper[4793]: I0217 20:25:30.694769 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f5477558c-czdv8" event={"ID":"475be634-cf46-4d25-922c-0908c735af65","Type":"ContainerStarted","Data":"1a2c0b78bd716143a8b038e96b123b57fc8672d62ce615ddef7f08ce229f6bdf"} Feb 17 20:25:30 crc kubenswrapper[4793]: I0217 20:25:30.720255 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"225fdbba-d13f-4102-9134-b3f6fef0a08f","Type":"ContainerStarted","Data":"49b1973623f377ee53146340ce47c94305b00160cf89478a5a89d8c55aae064b"} Feb 17 20:25:30 crc kubenswrapper[4793]: I0217 20:25:30.744059 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f49bcf4c9-kc9bm" event={"ID":"c9754f47-8a2f-4878-a69e-051c0a2ec74e","Type":"ContainerDied","Data":"9ac74fd7483a695fb616c4e4c9e0e3b7be945826456bb1c1d5331bf3b7b15488"} Feb 17 20:25:30 crc kubenswrapper[4793]: I0217 20:25:30.744144 4793 scope.go:117] "RemoveContainer" containerID="0a612b820c2f6fe3f67b037303cfd5737332d498bcc8104ef3f9ac38ee1226ee" Feb 17 20:25:30 crc kubenswrapper[4793]: I0217 20:25:30.744290 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f49bcf4c9-kc9bm" Feb 17 20:25:30 crc kubenswrapper[4793]: I0217 20:25:30.767660 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-7lrlb" event={"ID":"befba025-5e63-4459-9554-215ad72c467a","Type":"ContainerStarted","Data":"accd52e1bd8146b8202552b1b515267fb254d9b25088b416ef7aaa0bb1ac6454"} Feb 17 20:25:30 crc kubenswrapper[4793]: I0217 20:25:30.767746 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-7lrlb" event={"ID":"befba025-5e63-4459-9554-215ad72c467a","Type":"ContainerStarted","Data":"2783651d99711ec20241c435e65c48020b5da097721ed342116f3f191ec54e8d"} Feb 17 20:25:30 crc kubenswrapper[4793]: I0217 20:25:30.800880 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-7lrlb" podStartSLOduration=1.800858388 podStartE2EDuration="1.800858388s" podCreationTimestamp="2026-02-17 20:25:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:25:30.795038833 +0000 UTC m=+1006.086737144" watchObservedRunningTime="2026-02-17 20:25:30.800858388 +0000 UTC m=+1006.092556699" Feb 17 20:25:30 crc kubenswrapper[4793]: I0217 20:25:30.834896 4793 generic.go:334] "Generic (PLEG): container finished" podID="ed1e6971-9bbf-42c0-8a55-7d508936e963" containerID="029d5f41344d4e1f65dd8f4dd2a57c382a45211a3dc341b51b5f2fd8da530e25" exitCode=0 Feb 17 20:25:30 crc kubenswrapper[4793]: I0217 20:25:30.835861 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66f76cf86f-jkskm" Feb 17 20:25:30 crc kubenswrapper[4793]: I0217 20:25:30.845398 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66f76cf86f-jkskm" event={"ID":"ed1e6971-9bbf-42c0-8a55-7d508936e963","Type":"ContainerDied","Data":"029d5f41344d4e1f65dd8f4dd2a57c382a45211a3dc341b51b5f2fd8da530e25"} Feb 17 20:25:30 crc kubenswrapper[4793]: I0217 20:25:30.845443 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66f76cf86f-jkskm" event={"ID":"ed1e6971-9bbf-42c0-8a55-7d508936e963","Type":"ContainerDied","Data":"aa8f664346b728d4bd30f968d08e1f6b5d9f2b1b9397f36f707acd16b3ea70ec"} Feb 17 20:25:30 crc kubenswrapper[4793]: I0217 20:25:30.877953 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f49bcf4c9-kc9bm"] Feb 17 20:25:30 crc kubenswrapper[4793]: I0217 20:25:30.892514 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f49bcf4c9-kc9bm"] Feb 17 20:25:30 crc kubenswrapper[4793]: I0217 20:25:30.900342 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66f76cf86f-jkskm"] Feb 17 20:25:30 crc kubenswrapper[4793]: I0217 20:25:30.911932 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-66f76cf86f-jkskm"] Feb 17 20:25:30 crc kubenswrapper[4793]: I0217 20:25:30.954645 4793 scope.go:117] "RemoveContainer" containerID="029d5f41344d4e1f65dd8f4dd2a57c382a45211a3dc341b51b5f2fd8da530e25" Feb 17 20:25:31 crc kubenswrapper[4793]: I0217 20:25:31.086647 4793 scope.go:117] "RemoveContainer" containerID="1dabfa5589a149b94e36a79b52f10550218ddb7d267f4f94f458a828a7ac0cfe" Feb 17 20:25:31 crc kubenswrapper[4793]: I0217 20:25:31.107649 4793 scope.go:117] "RemoveContainer" containerID="029d5f41344d4e1f65dd8f4dd2a57c382a45211a3dc341b51b5f2fd8da530e25" Feb 17 20:25:31 crc kubenswrapper[4793]: E0217 20:25:31.108049 4793 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"029d5f41344d4e1f65dd8f4dd2a57c382a45211a3dc341b51b5f2fd8da530e25\": container with ID starting with 029d5f41344d4e1f65dd8f4dd2a57c382a45211a3dc341b51b5f2fd8da530e25 not found: ID does not exist" containerID="029d5f41344d4e1f65dd8f4dd2a57c382a45211a3dc341b51b5f2fd8da530e25" Feb 17 20:25:31 crc kubenswrapper[4793]: I0217 20:25:31.108093 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"029d5f41344d4e1f65dd8f4dd2a57c382a45211a3dc341b51b5f2fd8da530e25"} err="failed to get container status \"029d5f41344d4e1f65dd8f4dd2a57c382a45211a3dc341b51b5f2fd8da530e25\": rpc error: code = NotFound desc = could not find container \"029d5f41344d4e1f65dd8f4dd2a57c382a45211a3dc341b51b5f2fd8da530e25\": container with ID starting with 029d5f41344d4e1f65dd8f4dd2a57c382a45211a3dc341b51b5f2fd8da530e25 not found: ID does not exist" Feb 17 20:25:31 crc kubenswrapper[4793]: I0217 20:25:31.108121 4793 scope.go:117] "RemoveContainer" containerID="1dabfa5589a149b94e36a79b52f10550218ddb7d267f4f94f458a828a7ac0cfe" Feb 17 20:25:31 crc kubenswrapper[4793]: E0217 20:25:31.108997 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1dabfa5589a149b94e36a79b52f10550218ddb7d267f4f94f458a828a7ac0cfe\": container with ID starting with 1dabfa5589a149b94e36a79b52f10550218ddb7d267f4f94f458a828a7ac0cfe not found: ID does not exist" containerID="1dabfa5589a149b94e36a79b52f10550218ddb7d267f4f94f458a828a7ac0cfe" Feb 17 20:25:31 crc kubenswrapper[4793]: I0217 20:25:31.109049 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1dabfa5589a149b94e36a79b52f10550218ddb7d267f4f94f458a828a7ac0cfe"} err="failed to get container status \"1dabfa5589a149b94e36a79b52f10550218ddb7d267f4f94f458a828a7ac0cfe\": rpc error: code = NotFound desc = could not find container 
\"1dabfa5589a149b94e36a79b52f10550218ddb7d267f4f94f458a828a7ac0cfe\": container with ID starting with 1dabfa5589a149b94e36a79b52f10550218ddb7d267f4f94f458a828a7ac0cfe not found: ID does not exist" Feb 17 20:25:31 crc kubenswrapper[4793]: I0217 20:25:31.556281 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9754f47-8a2f-4878-a69e-051c0a2ec74e" path="/var/lib/kubelet/pods/c9754f47-8a2f-4878-a69e-051c0a2ec74e/volumes" Feb 17 20:25:31 crc kubenswrapper[4793]: I0217 20:25:31.558056 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed1e6971-9bbf-42c0-8a55-7d508936e963" path="/var/lib/kubelet/pods/ed1e6971-9bbf-42c0-8a55-7d508936e963/volumes" Feb 17 20:25:31 crc kubenswrapper[4793]: I0217 20:25:31.850502 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56758db5-9hprz" event={"ID":"d0e86eee-e216-4934-86a4-0f87c29ad3e6","Type":"ContainerStarted","Data":"9158ff5582ff2b46b0058366dfd277e31c221d70ca84c46b11c71a800bf2bfe7"} Feb 17 20:25:31 crc kubenswrapper[4793]: I0217 20:25:31.850659 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56758db5-9hprz" Feb 17 20:25:31 crc kubenswrapper[4793]: I0217 20:25:31.856234 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f5477558c-czdv8" event={"ID":"475be634-cf46-4d25-922c-0908c735af65","Type":"ContainerStarted","Data":"4ac9c298bfd406abb79d2b8c469d1e8333b40cb0382d611cceed86f5c9d87e0d"} Feb 17 20:25:31 crc kubenswrapper[4793]: I0217 20:25:31.856417 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f5477558c-czdv8" Feb 17 20:25:31 crc kubenswrapper[4793]: I0217 20:25:31.867416 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"225fdbba-d13f-4102-9134-b3f6fef0a08f","Type":"ContainerStarted","Data":"0a7df7ddf0ed8265970a0f21056a97975f1fc07c2d44fdd240e5ad3b730bdf66"} Feb 17 20:25:31 crc 
kubenswrapper[4793]: I0217 20:25:31.867472 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"225fdbba-d13f-4102-9134-b3f6fef0a08f","Type":"ContainerStarted","Data":"5b5ade573e528fe8b23976b5574c61f5bd721fab95b318ceb968b1320d3b9bd3"} Feb 17 20:25:31 crc kubenswrapper[4793]: I0217 20:25:31.868713 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 17 20:25:31 crc kubenswrapper[4793]: I0217 20:25:31.895916 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56758db5-9hprz" podStartSLOduration=3.8958832389999998 podStartE2EDuration="3.895883239s" podCreationTimestamp="2026-02-17 20:25:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:25:31.885209813 +0000 UTC m=+1007.176908134" watchObservedRunningTime="2026-02-17 20:25:31.895883239 +0000 UTC m=+1007.187581600" Feb 17 20:25:31 crc kubenswrapper[4793]: I0217 20:25:31.932100 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f5477558c-czdv8" podStartSLOduration=2.932080398 podStartE2EDuration="2.932080398s" podCreationTimestamp="2026-02-17 20:25:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:25:31.922649704 +0000 UTC m=+1007.214348085" watchObservedRunningTime="2026-02-17 20:25:31.932080398 +0000 UTC m=+1007.223778719" Feb 17 20:25:31 crc kubenswrapper[4793]: I0217 20:25:31.944387 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.264731246 podStartE2EDuration="2.944368304s" podCreationTimestamp="2026-02-17 20:25:29 +0000 UTC" firstStartedPulling="2026-02-17 20:25:30.321037785 +0000 UTC m=+1005.612736096" lastFinishedPulling="2026-02-17 
20:25:31.000674833 +0000 UTC m=+1006.292373154" observedRunningTime="2026-02-17 20:25:31.942970649 +0000 UTC m=+1007.234668980" watchObservedRunningTime="2026-02-17 20:25:31.944368304 +0000 UTC m=+1007.236066625" Feb 17 20:25:32 crc kubenswrapper[4793]: I0217 20:25:32.889748 4793 generic.go:334] "Generic (PLEG): container finished" podID="3d42eaf0-230c-4a64-9c52-5c4ea3ea81dd" containerID="bdddf4f042d7c0b03df06117887bac94381f77c74d7b85742a695cce0331e787" exitCode=0 Feb 17 20:25:32 crc kubenswrapper[4793]: I0217 20:25:32.889847 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3d42eaf0-230c-4a64-9c52-5c4ea3ea81dd","Type":"ContainerDied","Data":"bdddf4f042d7c0b03df06117887bac94381f77c74d7b85742a695cce0331e787"} Feb 17 20:25:32 crc kubenswrapper[4793]: I0217 20:25:32.891855 4793 generic.go:334] "Generic (PLEG): container finished" podID="489cc350-87d7-42d2-ba31-b4bbc29b5b80" containerID="dbb094fbfdb518973e6d7d1f439a56874ad55a7c23ef160ff15219bf0d004333" exitCode=0 Feb 17 20:25:32 crc kubenswrapper[4793]: I0217 20:25:32.891929 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"489cc350-87d7-42d2-ba31-b4bbc29b5b80","Type":"ContainerDied","Data":"dbb094fbfdb518973e6d7d1f439a56874ad55a7c23ef160ff15219bf0d004333"} Feb 17 20:25:32 crc kubenswrapper[4793]: I0217 20:25:32.893645 4793 generic.go:334] "Generic (PLEG): container finished" podID="3e7eee19-fd63-4ae8-96d9-9fcd17718b6f" containerID="20fdbc866cea6a2a6de36a7251590e33b138e8d211dc0fde20fcb9d171e3a5bf" exitCode=0 Feb 17 20:25:32 crc kubenswrapper[4793]: I0217 20:25:32.893846 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"3e7eee19-fd63-4ae8-96d9-9fcd17718b6f","Type":"ContainerDied","Data":"20fdbc866cea6a2a6de36a7251590e33b138e8d211dc0fde20fcb9d171e3a5bf"} Feb 17 20:25:33 crc kubenswrapper[4793]: I0217 20:25:33.905816 4793 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"3e7eee19-fd63-4ae8-96d9-9fcd17718b6f","Type":"ContainerStarted","Data":"2bca36f9a1dd56c9b79f83e12f3ded5585d07b8e0bb537734664d18b262cb1c2"} Feb 17 20:25:33 crc kubenswrapper[4793]: I0217 20:25:33.909981 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3d42eaf0-230c-4a64-9c52-5c4ea3ea81dd","Type":"ContainerStarted","Data":"2383c37dedd81f231aed5cbb790f062384aeb180ea0329674ce10733d807c798"} Feb 17 20:25:33 crc kubenswrapper[4793]: I0217 20:25:33.938569 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=37.605462424 podStartE2EDuration="42.938541497s" podCreationTimestamp="2026-02-17 20:24:51 +0000 UTC" firstStartedPulling="2026-02-17 20:25:16.219095444 +0000 UTC m=+991.510793755" lastFinishedPulling="2026-02-17 20:25:21.552174517 +0000 UTC m=+996.843872828" observedRunningTime="2026-02-17 20:25:33.936450425 +0000 UTC m=+1009.228148756" watchObservedRunningTime="2026-02-17 20:25:33.938541497 +0000 UTC m=+1009.230239838" Feb 17 20:25:33 crc kubenswrapper[4793]: I0217 20:25:33.970083 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=36.900748132 podStartE2EDuration="41.97006365s" podCreationTimestamp="2026-02-17 20:24:52 +0000 UTC" firstStartedPulling="2026-02-17 20:25:16.483612887 +0000 UTC m=+991.775311198" lastFinishedPulling="2026-02-17 20:25:21.552928405 +0000 UTC m=+996.844626716" observedRunningTime="2026-02-17 20:25:33.962980904 +0000 UTC m=+1009.254679225" watchObservedRunningTime="2026-02-17 20:25:33.97006365 +0000 UTC m=+1009.261761971" Feb 17 20:25:34 crc kubenswrapper[4793]: I0217 20:25:34.061113 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 17 20:25:34 crc kubenswrapper[4793]: I0217 20:25:34.061161 
4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 17 20:25:35 crc kubenswrapper[4793]: E0217 20:25:35.350071 4793 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.155:55634->38.102.83.155:44575: write tcp 38.102.83.155:55634->38.102.83.155:44575: write: connection reset by peer Feb 17 20:25:36 crc kubenswrapper[4793]: I0217 20:25:36.601294 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 17 20:25:36 crc kubenswrapper[4793]: I0217 20:25:36.622187 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56758db5-9hprz"] Feb 17 20:25:36 crc kubenswrapper[4793]: I0217 20:25:36.622464 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56758db5-9hprz" podUID="d0e86eee-e216-4934-86a4-0f87c29ad3e6" containerName="dnsmasq-dns" containerID="cri-o://9158ff5582ff2b46b0058366dfd277e31c221d70ca84c46b11c71a800bf2bfe7" gracePeriod=10 Feb 17 20:25:36 crc kubenswrapper[4793]: I0217 20:25:36.626837 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56758db5-9hprz" Feb 17 20:25:36 crc kubenswrapper[4793]: I0217 20:25:36.652702 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-549c9b7879-jznsj"] Feb 17 20:25:36 crc kubenswrapper[4793]: E0217 20:25:36.653247 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed1e6971-9bbf-42c0-8a55-7d508936e963" containerName="dnsmasq-dns" Feb 17 20:25:36 crc kubenswrapper[4793]: I0217 20:25:36.653347 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed1e6971-9bbf-42c0-8a55-7d508936e963" containerName="dnsmasq-dns" Feb 17 20:25:36 crc kubenswrapper[4793]: E0217 20:25:36.653464 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9754f47-8a2f-4878-a69e-051c0a2ec74e" containerName="init" 
Feb 17 20:25:36 crc kubenswrapper[4793]: I0217 20:25:36.653539 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9754f47-8a2f-4878-a69e-051c0a2ec74e" containerName="init"
Feb 17 20:25:36 crc kubenswrapper[4793]: E0217 20:25:36.653623 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed1e6971-9bbf-42c0-8a55-7d508936e963" containerName="init"
Feb 17 20:25:36 crc kubenswrapper[4793]: I0217 20:25:36.653726 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed1e6971-9bbf-42c0-8a55-7d508936e963" containerName="init"
Feb 17 20:25:36 crc kubenswrapper[4793]: I0217 20:25:36.653982 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9754f47-8a2f-4878-a69e-051c0a2ec74e" containerName="init"
Feb 17 20:25:36 crc kubenswrapper[4793]: I0217 20:25:36.654080 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed1e6971-9bbf-42c0-8a55-7d508936e963" containerName="dnsmasq-dns"
Feb 17 20:25:36 crc kubenswrapper[4793]: I0217 20:25:36.660088 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-549c9b7879-jznsj"
Feb 17 20:25:36 crc kubenswrapper[4793]: I0217 20:25:36.686947 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-549c9b7879-jznsj"]
Feb 17 20:25:36 crc kubenswrapper[4793]: I0217 20:25:36.826555 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc67b702-9dc7-4333-94f4-df82b696021d-config\") pod \"dnsmasq-dns-549c9b7879-jznsj\" (UID: \"fc67b702-9dc7-4333-94f4-df82b696021d\") " pod="openstack/dnsmasq-dns-549c9b7879-jznsj"
Feb 17 20:25:36 crc kubenswrapper[4793]: I0217 20:25:36.826613 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc67b702-9dc7-4333-94f4-df82b696021d-dns-svc\") pod \"dnsmasq-dns-549c9b7879-jznsj\" (UID: \"fc67b702-9dc7-4333-94f4-df82b696021d\") " pod="openstack/dnsmasq-dns-549c9b7879-jznsj"
Feb 17 20:25:36 crc kubenswrapper[4793]: I0217 20:25:36.826716 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fc67b702-9dc7-4333-94f4-df82b696021d-ovsdbserver-nb\") pod \"dnsmasq-dns-549c9b7879-jznsj\" (UID: \"fc67b702-9dc7-4333-94f4-df82b696021d\") " pod="openstack/dnsmasq-dns-549c9b7879-jznsj"
Feb 17 20:25:36 crc kubenswrapper[4793]: I0217 20:25:36.826780 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vth64\" (UniqueName: \"kubernetes.io/projected/fc67b702-9dc7-4333-94f4-df82b696021d-kube-api-access-vth64\") pod \"dnsmasq-dns-549c9b7879-jznsj\" (UID: \"fc67b702-9dc7-4333-94f4-df82b696021d\") " pod="openstack/dnsmasq-dns-549c9b7879-jznsj"
Feb 17 20:25:36 crc kubenswrapper[4793]: I0217 20:25:36.826827 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fc67b702-9dc7-4333-94f4-df82b696021d-ovsdbserver-sb\") pod \"dnsmasq-dns-549c9b7879-jznsj\" (UID: \"fc67b702-9dc7-4333-94f4-df82b696021d\") " pod="openstack/dnsmasq-dns-549c9b7879-jznsj"
Feb 17 20:25:36 crc kubenswrapper[4793]: I0217 20:25:36.927927 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vth64\" (UniqueName: \"kubernetes.io/projected/fc67b702-9dc7-4333-94f4-df82b696021d-kube-api-access-vth64\") pod \"dnsmasq-dns-549c9b7879-jznsj\" (UID: \"fc67b702-9dc7-4333-94f4-df82b696021d\") " pod="openstack/dnsmasq-dns-549c9b7879-jznsj"
Feb 17 20:25:36 crc kubenswrapper[4793]: I0217 20:25:36.927989 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fc67b702-9dc7-4333-94f4-df82b696021d-ovsdbserver-sb\") pod \"dnsmasq-dns-549c9b7879-jznsj\" (UID: \"fc67b702-9dc7-4333-94f4-df82b696021d\") " pod="openstack/dnsmasq-dns-549c9b7879-jznsj"
Feb 17 20:25:36 crc kubenswrapper[4793]: I0217 20:25:36.928024 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc67b702-9dc7-4333-94f4-df82b696021d-config\") pod \"dnsmasq-dns-549c9b7879-jznsj\" (UID: \"fc67b702-9dc7-4333-94f4-df82b696021d\") " pod="openstack/dnsmasq-dns-549c9b7879-jznsj"
Feb 17 20:25:36 crc kubenswrapper[4793]: I0217 20:25:36.928051 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc67b702-9dc7-4333-94f4-df82b696021d-dns-svc\") pod \"dnsmasq-dns-549c9b7879-jznsj\" (UID: \"fc67b702-9dc7-4333-94f4-df82b696021d\") " pod="openstack/dnsmasq-dns-549c9b7879-jznsj"
Feb 17 20:25:36 crc kubenswrapper[4793]: I0217 20:25:36.928103 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fc67b702-9dc7-4333-94f4-df82b696021d-ovsdbserver-nb\") pod \"dnsmasq-dns-549c9b7879-jznsj\" (UID: \"fc67b702-9dc7-4333-94f4-df82b696021d\") " pod="openstack/dnsmasq-dns-549c9b7879-jznsj"
Feb 17 20:25:36 crc kubenswrapper[4793]: I0217 20:25:36.928897 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fc67b702-9dc7-4333-94f4-df82b696021d-ovsdbserver-nb\") pod \"dnsmasq-dns-549c9b7879-jznsj\" (UID: \"fc67b702-9dc7-4333-94f4-df82b696021d\") " pod="openstack/dnsmasq-dns-549c9b7879-jznsj"
Feb 17 20:25:36 crc kubenswrapper[4793]: I0217 20:25:36.929837 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fc67b702-9dc7-4333-94f4-df82b696021d-ovsdbserver-sb\") pod \"dnsmasq-dns-549c9b7879-jznsj\" (UID: \"fc67b702-9dc7-4333-94f4-df82b696021d\") " pod="openstack/dnsmasq-dns-549c9b7879-jznsj"
Feb 17 20:25:36 crc kubenswrapper[4793]: I0217 20:25:36.930601 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc67b702-9dc7-4333-94f4-df82b696021d-config\") pod \"dnsmasq-dns-549c9b7879-jznsj\" (UID: \"fc67b702-9dc7-4333-94f4-df82b696021d\") " pod="openstack/dnsmasq-dns-549c9b7879-jznsj"
Feb 17 20:25:36 crc kubenswrapper[4793]: I0217 20:25:36.931109 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc67b702-9dc7-4333-94f4-df82b696021d-dns-svc\") pod \"dnsmasq-dns-549c9b7879-jznsj\" (UID: \"fc67b702-9dc7-4333-94f4-df82b696021d\") " pod="openstack/dnsmasq-dns-549c9b7879-jznsj"
Feb 17 20:25:36 crc kubenswrapper[4793]: I0217 20:25:36.939822 4793 generic.go:334] "Generic (PLEG): container finished" podID="d0e86eee-e216-4934-86a4-0f87c29ad3e6" containerID="9158ff5582ff2b46b0058366dfd277e31c221d70ca84c46b11c71a800bf2bfe7" exitCode=0
Feb 17 20:25:36 crc kubenswrapper[4793]: I0217 20:25:36.939860 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56758db5-9hprz" event={"ID":"d0e86eee-e216-4934-86a4-0f87c29ad3e6","Type":"ContainerDied","Data":"9158ff5582ff2b46b0058366dfd277e31c221d70ca84c46b11c71a800bf2bfe7"}
Feb 17 20:25:36 crc kubenswrapper[4793]: I0217 20:25:36.957966 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vth64\" (UniqueName: \"kubernetes.io/projected/fc67b702-9dc7-4333-94f4-df82b696021d-kube-api-access-vth64\") pod \"dnsmasq-dns-549c9b7879-jznsj\" (UID: \"fc67b702-9dc7-4333-94f4-df82b696021d\") " pod="openstack/dnsmasq-dns-549c9b7879-jznsj"
Feb 17 20:25:37 crc kubenswrapper[4793]: I0217 20:25:37.028022 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-549c9b7879-jznsj"
Feb 17 20:25:37 crc kubenswrapper[4793]: I0217 20:25:37.756906 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"]
Feb 17 20:25:37 crc kubenswrapper[4793]: I0217 20:25:37.761545 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Feb 17 20:25:37 crc kubenswrapper[4793]: I0217 20:25:37.763563 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files"
Feb 17 20:25:37 crc kubenswrapper[4793]: I0217 20:25:37.763953 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data"
Feb 17 20:25:37 crc kubenswrapper[4793]: I0217 20:25:37.764005 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Feb 17 20:25:37 crc kubenswrapper[4793]: I0217 20:25:37.764072 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-b8b7q"
Feb 17 20:25:37 crc kubenswrapper[4793]: I0217 20:25:37.799515 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Feb 17 20:25:37 crc kubenswrapper[4793]: I0217 20:25:37.849408 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b1695ca3-290a-44c5-8771-146029a6054a-etc-swift\") pod \"swift-storage-0\" (UID: \"b1695ca3-290a-44c5-8771-146029a6054a\") " pod="openstack/swift-storage-0"
Feb 17 20:25:37 crc kubenswrapper[4793]: I0217 20:25:37.849497 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1695ca3-290a-44c5-8771-146029a6054a-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"b1695ca3-290a-44c5-8771-146029a6054a\") " pod="openstack/swift-storage-0"
Feb 17 20:25:37 crc kubenswrapper[4793]: I0217 20:25:37.849533 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b1695ca3-290a-44c5-8771-146029a6054a-cache\") pod \"swift-storage-0\" (UID: \"b1695ca3-290a-44c5-8771-146029a6054a\") " pod="openstack/swift-storage-0"
Feb 17 20:25:37 crc kubenswrapper[4793]: I0217 20:25:37.849581 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"b1695ca3-290a-44c5-8771-146029a6054a\") " pod="openstack/swift-storage-0"
Feb 17 20:25:37 crc kubenswrapper[4793]: I0217 20:25:37.849599 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b1695ca3-290a-44c5-8771-146029a6054a-lock\") pod \"swift-storage-0\" (UID: \"b1695ca3-290a-44c5-8771-146029a6054a\") " pod="openstack/swift-storage-0"
Feb 17 20:25:37 crc kubenswrapper[4793]: I0217 20:25:37.849657 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgf4q\" (UniqueName: \"kubernetes.io/projected/b1695ca3-290a-44c5-8771-146029a6054a-kube-api-access-jgf4q\") pod \"swift-storage-0\" (UID: \"b1695ca3-290a-44c5-8771-146029a6054a\") " pod="openstack/swift-storage-0"
Feb 17 20:25:37 crc kubenswrapper[4793]: I0217 20:25:37.951253 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b1695ca3-290a-44c5-8771-146029a6054a-cache\") pod \"swift-storage-0\" (UID: \"b1695ca3-290a-44c5-8771-146029a6054a\") " pod="openstack/swift-storage-0"
Feb 17 20:25:37 crc kubenswrapper[4793]: I0217 20:25:37.951598 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"b1695ca3-290a-44c5-8771-146029a6054a\") " pod="openstack/swift-storage-0"
Feb 17 20:25:37 crc kubenswrapper[4793]: I0217 20:25:37.951617 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b1695ca3-290a-44c5-8771-146029a6054a-lock\") pod \"swift-storage-0\" (UID: \"b1695ca3-290a-44c5-8771-146029a6054a\") " pod="openstack/swift-storage-0"
Feb 17 20:25:37 crc kubenswrapper[4793]: I0217 20:25:37.951864 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"b1695ca3-290a-44c5-8771-146029a6054a\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/swift-storage-0"
Feb 17 20:25:37 crc kubenswrapper[4793]: I0217 20:25:37.951986 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgf4q\" (UniqueName: \"kubernetes.io/projected/b1695ca3-290a-44c5-8771-146029a6054a-kube-api-access-jgf4q\") pod \"swift-storage-0\" (UID: \"b1695ca3-290a-44c5-8771-146029a6054a\") " pod="openstack/swift-storage-0"
Feb 17 20:25:37 crc kubenswrapper[4793]: I0217 20:25:37.952036 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b1695ca3-290a-44c5-8771-146029a6054a-etc-swift\") pod \"swift-storage-0\" (UID: \"b1695ca3-290a-44c5-8771-146029a6054a\") " pod="openstack/swift-storage-0"
Feb 17 20:25:37 crc kubenswrapper[4793]: I0217 20:25:37.952090 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1695ca3-290a-44c5-8771-146029a6054a-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"b1695ca3-290a-44c5-8771-146029a6054a\") " pod="openstack/swift-storage-0"
Feb 17 20:25:37 crc kubenswrapper[4793]: I0217 20:25:37.952188 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b1695ca3-290a-44c5-8771-146029a6054a-lock\") pod \"swift-storage-0\" (UID: \"b1695ca3-290a-44c5-8771-146029a6054a\") " pod="openstack/swift-storage-0"
Feb 17 20:25:37 crc kubenswrapper[4793]: I0217 20:25:37.952242 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b1695ca3-290a-44c5-8771-146029a6054a-cache\") pod \"swift-storage-0\" (UID: \"b1695ca3-290a-44c5-8771-146029a6054a\") " pod="openstack/swift-storage-0"
Feb 17 20:25:37 crc kubenswrapper[4793]: E0217 20:25:37.952298 4793 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 17 20:25:37 crc kubenswrapper[4793]: E0217 20:25:37.952332 4793 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 17 20:25:37 crc kubenswrapper[4793]: E0217 20:25:37.952458 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b1695ca3-290a-44c5-8771-146029a6054a-etc-swift podName:b1695ca3-290a-44c5-8771-146029a6054a nodeName:}" failed. No retries permitted until 2026-02-17 20:25:38.452410108 +0000 UTC m=+1013.744108409 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b1695ca3-290a-44c5-8771-146029a6054a-etc-swift") pod "swift-storage-0" (UID: "b1695ca3-290a-44c5-8771-146029a6054a") : configmap "swift-ring-files" not found
Feb 17 20:25:37 crc kubenswrapper[4793]: I0217 20:25:37.957344 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1695ca3-290a-44c5-8771-146029a6054a-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"b1695ca3-290a-44c5-8771-146029a6054a\") " pod="openstack/swift-storage-0"
Feb 17 20:25:37 crc kubenswrapper[4793]: I0217 20:25:37.971211 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgf4q\" (UniqueName: \"kubernetes.io/projected/b1695ca3-290a-44c5-8771-146029a6054a-kube-api-access-jgf4q\") pod \"swift-storage-0\" (UID: \"b1695ca3-290a-44c5-8771-146029a6054a\") " pod="openstack/swift-storage-0"
Feb 17 20:25:37 crc kubenswrapper[4793]: I0217 20:25:37.977623 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"b1695ca3-290a-44c5-8771-146029a6054a\") " pod="openstack/swift-storage-0"
Feb 17 20:25:38 crc kubenswrapper[4793]: I0217 20:25:38.188931 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-tk4bk"]
Feb 17 20:25:38 crc kubenswrapper[4793]: I0217 20:25:38.190335 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-tk4bk"
Feb 17 20:25:38 crc kubenswrapper[4793]: I0217 20:25:38.192095 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Feb 17 20:25:38 crc kubenswrapper[4793]: I0217 20:25:38.192729 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Feb 17 20:25:38 crc kubenswrapper[4793]: I0217 20:25:38.195672 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Feb 17 20:25:38 crc kubenswrapper[4793]: I0217 20:25:38.201488 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-tk4bk"]
Feb 17 20:25:38 crc kubenswrapper[4793]: I0217 20:25:38.240420 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-bvw9p"]
Feb 17 20:25:38 crc kubenswrapper[4793]: I0217 20:25:38.243167 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-bvw9p"
Feb 17 20:25:38 crc kubenswrapper[4793]: I0217 20:25:38.267777 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Feb 17 20:25:38 crc kubenswrapper[4793]: I0217 20:25:38.286200 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-tk4bk"]
Feb 17 20:25:38 crc kubenswrapper[4793]: E0217 20:25:38.286973 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-6ft8j ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-6ft8j ring-data-devices scripts swiftconf]: context canceled" pod="openstack/swift-ring-rebalance-tk4bk" podUID="9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6"
Feb 17 20:25:38 crc kubenswrapper[4793]: I0217 20:25:38.297910 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-bvw9p"]
Feb 17 20:25:38 crc kubenswrapper[4793]: I0217 20:25:38.357122 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3d5841af-e328-4ea6-a184-546676cce0a7-etc-swift\") pod \"swift-ring-rebalance-bvw9p\" (UID: \"3d5841af-e328-4ea6-a184-546676cce0a7\") " pod="openstack/swift-ring-rebalance-bvw9p"
Feb 17 20:25:38 crc kubenswrapper[4793]: I0217 20:25:38.357171 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3d5841af-e328-4ea6-a184-546676cce0a7-scripts\") pod \"swift-ring-rebalance-bvw9p\" (UID: \"3d5841af-e328-4ea6-a184-546676cce0a7\") " pod="openstack/swift-ring-rebalance-bvw9p"
Feb 17 20:25:38 crc kubenswrapper[4793]: I0217 20:25:38.357233 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3d5841af-e328-4ea6-a184-546676cce0a7-dispersionconf\") pod \"swift-ring-rebalance-bvw9p\" (UID: \"3d5841af-e328-4ea6-a184-546676cce0a7\") " pod="openstack/swift-ring-rebalance-bvw9p"
Feb 17 20:25:38 crc kubenswrapper[4793]: I0217 20:25:38.357380 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6-etc-swift\") pod \"swift-ring-rebalance-tk4bk\" (UID: \"9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6\") " pod="openstack/swift-ring-rebalance-tk4bk"
Feb 17 20:25:38 crc kubenswrapper[4793]: I0217 20:25:38.357501 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6-combined-ca-bundle\") pod \"swift-ring-rebalance-tk4bk\" (UID: \"9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6\") " pod="openstack/swift-ring-rebalance-tk4bk"
Feb 17 20:25:38 crc kubenswrapper[4793]: I0217 20:25:38.357528 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6-dispersionconf\") pod \"swift-ring-rebalance-tk4bk\" (UID: \"9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6\") " pod="openstack/swift-ring-rebalance-tk4bk"
Feb 17 20:25:38 crc kubenswrapper[4793]: I0217 20:25:38.357588 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbjdt\" (UniqueName: \"kubernetes.io/projected/3d5841af-e328-4ea6-a184-546676cce0a7-kube-api-access-jbjdt\") pod \"swift-ring-rebalance-bvw9p\" (UID: \"3d5841af-e328-4ea6-a184-546676cce0a7\") " pod="openstack/swift-ring-rebalance-bvw9p"
Feb 17 20:25:38 crc kubenswrapper[4793]: I0217 20:25:38.357647 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d5841af-e328-4ea6-a184-546676cce0a7-combined-ca-bundle\") pod \"swift-ring-rebalance-bvw9p\" (UID: \"3d5841af-e328-4ea6-a184-546676cce0a7\") " pod="openstack/swift-ring-rebalance-bvw9p"
Feb 17 20:25:38 crc kubenswrapper[4793]: I0217 20:25:38.357925 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3d5841af-e328-4ea6-a184-546676cce0a7-swiftconf\") pod \"swift-ring-rebalance-bvw9p\" (UID: \"3d5841af-e328-4ea6-a184-546676cce0a7\") " pod="openstack/swift-ring-rebalance-bvw9p"
Feb 17 20:25:38 crc kubenswrapper[4793]: I0217 20:25:38.358070 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3d5841af-e328-4ea6-a184-546676cce0a7-ring-data-devices\") pod \"swift-ring-rebalance-bvw9p\" (UID: \"3d5841af-e328-4ea6-a184-546676cce0a7\") " pod="openstack/swift-ring-rebalance-bvw9p"
Feb 17 20:25:38 crc kubenswrapper[4793]: I0217 20:25:38.358137 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6-ring-data-devices\") pod \"swift-ring-rebalance-tk4bk\" (UID: \"9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6\") " pod="openstack/swift-ring-rebalance-tk4bk"
Feb 17 20:25:38 crc kubenswrapper[4793]: I0217 20:25:38.358167 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ft8j\" (UniqueName: \"kubernetes.io/projected/9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6-kube-api-access-6ft8j\") pod \"swift-ring-rebalance-tk4bk\" (UID: \"9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6\") " pod="openstack/swift-ring-rebalance-tk4bk"
Feb 17 20:25:38 crc kubenswrapper[4793]: I0217 20:25:38.358231 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6-swiftconf\") pod \"swift-ring-rebalance-tk4bk\" (UID: \"9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6\") " pod="openstack/swift-ring-rebalance-tk4bk"
Feb 17 20:25:38 crc kubenswrapper[4793]: I0217 20:25:38.358307 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6-scripts\") pod \"swift-ring-rebalance-tk4bk\" (UID: \"9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6\") " pod="openstack/swift-ring-rebalance-tk4bk"
Feb 17 20:25:38 crc kubenswrapper[4793]: I0217 20:25:38.404591 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Feb 17 20:25:38 crc kubenswrapper[4793]: I0217 20:25:38.460668 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6-scripts\") pod \"swift-ring-rebalance-tk4bk\" (UID: \"9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6\") " pod="openstack/swift-ring-rebalance-tk4bk"
Feb 17 20:25:38 crc kubenswrapper[4793]: I0217 20:25:38.460767 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3d5841af-e328-4ea6-a184-546676cce0a7-etc-swift\") pod \"swift-ring-rebalance-bvw9p\" (UID: \"3d5841af-e328-4ea6-a184-546676cce0a7\") " pod="openstack/swift-ring-rebalance-bvw9p"
Feb 17 20:25:38 crc kubenswrapper[4793]: I0217 20:25:38.460797 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3d5841af-e328-4ea6-a184-546676cce0a7-scripts\") pod \"swift-ring-rebalance-bvw9p\" (UID: \"3d5841af-e328-4ea6-a184-546676cce0a7\") " pod="openstack/swift-ring-rebalance-bvw9p"
Feb 17 20:25:38 crc kubenswrapper[4793]: I0217 20:25:38.460835 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3d5841af-e328-4ea6-a184-546676cce0a7-dispersionconf\") pod \"swift-ring-rebalance-bvw9p\" (UID: \"3d5841af-e328-4ea6-a184-546676cce0a7\") " pod="openstack/swift-ring-rebalance-bvw9p"
Feb 17 20:25:38 crc kubenswrapper[4793]: I0217 20:25:38.460870 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6-etc-swift\") pod \"swift-ring-rebalance-tk4bk\" (UID: \"9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6\") " pod="openstack/swift-ring-rebalance-tk4bk"
Feb 17 20:25:38 crc kubenswrapper[4793]: I0217 20:25:38.460901 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6-combined-ca-bundle\") pod \"swift-ring-rebalance-tk4bk\" (UID: \"9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6\") " pod="openstack/swift-ring-rebalance-tk4bk"
Feb 17 20:25:38 crc kubenswrapper[4793]: I0217 20:25:38.460926 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6-dispersionconf\") pod \"swift-ring-rebalance-tk4bk\" (UID: \"9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6\") " pod="openstack/swift-ring-rebalance-tk4bk"
Feb 17 20:25:38 crc kubenswrapper[4793]: I0217 20:25:38.460951 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbjdt\" (UniqueName: \"kubernetes.io/projected/3d5841af-e328-4ea6-a184-546676cce0a7-kube-api-access-jbjdt\") pod \"swift-ring-rebalance-bvw9p\" (UID: \"3d5841af-e328-4ea6-a184-546676cce0a7\") " pod="openstack/swift-ring-rebalance-bvw9p"
Feb 17 20:25:38 crc kubenswrapper[4793]: I0217 20:25:38.460977 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d5841af-e328-4ea6-a184-546676cce0a7-combined-ca-bundle\") pod \"swift-ring-rebalance-bvw9p\" (UID: \"3d5841af-e328-4ea6-a184-546676cce0a7\") " pod="openstack/swift-ring-rebalance-bvw9p"
Feb 17 20:25:38 crc kubenswrapper[4793]: I0217 20:25:38.461020 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b1695ca3-290a-44c5-8771-146029a6054a-etc-swift\") pod \"swift-storage-0\" (UID: \"b1695ca3-290a-44c5-8771-146029a6054a\") " pod="openstack/swift-storage-0"
Feb 17 20:25:38 crc kubenswrapper[4793]: I0217 20:25:38.461040 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3d5841af-e328-4ea6-a184-546676cce0a7-swiftconf\") pod \"swift-ring-rebalance-bvw9p\" (UID: \"3d5841af-e328-4ea6-a184-546676cce0a7\") " pod="openstack/swift-ring-rebalance-bvw9p"
Feb 17 20:25:38 crc kubenswrapper[4793]: I0217 20:25:38.461089 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3d5841af-e328-4ea6-a184-546676cce0a7-ring-data-devices\") pod \"swift-ring-rebalance-bvw9p\" (UID: \"3d5841af-e328-4ea6-a184-546676cce0a7\") " pod="openstack/swift-ring-rebalance-bvw9p"
Feb 17 20:25:38 crc kubenswrapper[4793]: I0217 20:25:38.461116 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6-ring-data-devices\") pod \"swift-ring-rebalance-tk4bk\" (UID: \"9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6\") " pod="openstack/swift-ring-rebalance-tk4bk"
Feb 17 20:25:38 crc kubenswrapper[4793]: I0217 20:25:38.461163 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ft8j\" (UniqueName: \"kubernetes.io/projected/9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6-kube-api-access-6ft8j\") pod \"swift-ring-rebalance-tk4bk\" (UID: \"9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6\") " pod="openstack/swift-ring-rebalance-tk4bk"
Feb 17 20:25:38 crc kubenswrapper[4793]: I0217 20:25:38.461201 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6-swiftconf\") pod \"swift-ring-rebalance-tk4bk\" (UID: \"9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6\") " pod="openstack/swift-ring-rebalance-tk4bk"
Feb 17 20:25:38 crc kubenswrapper[4793]: I0217 20:25:38.461272 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3d5841af-e328-4ea6-a184-546676cce0a7-etc-swift\") pod \"swift-ring-rebalance-bvw9p\" (UID: \"3d5841af-e328-4ea6-a184-546676cce0a7\") " pod="openstack/swift-ring-rebalance-bvw9p"
Feb 17 20:25:38 crc kubenswrapper[4793]: I0217 20:25:38.461753 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3d5841af-e328-4ea6-a184-546676cce0a7-scripts\") pod \"swift-ring-rebalance-bvw9p\" (UID: \"3d5841af-e328-4ea6-a184-546676cce0a7\") " pod="openstack/swift-ring-rebalance-bvw9p"
Feb 17 20:25:38 crc kubenswrapper[4793]: E0217 20:25:38.461880 4793 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 17 20:25:38 crc kubenswrapper[4793]: E0217 20:25:38.461902 4793 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 17 20:25:38 crc kubenswrapper[4793]: E0217 20:25:38.461957 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b1695ca3-290a-44c5-8771-146029a6054a-etc-swift podName:b1695ca3-290a-44c5-8771-146029a6054a nodeName:}" failed. No retries permitted until 2026-02-17 20:25:39.461937889 +0000 UTC m=+1014.753636310 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b1695ca3-290a-44c5-8771-146029a6054a-etc-swift") pod "swift-storage-0" (UID: "b1695ca3-290a-44c5-8771-146029a6054a") : configmap "swift-ring-files" not found
Feb 17 20:25:38 crc kubenswrapper[4793]: I0217 20:25:38.462170 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3d5841af-e328-4ea6-a184-546676cce0a7-ring-data-devices\") pod \"swift-ring-rebalance-bvw9p\" (UID: \"3d5841af-e328-4ea6-a184-546676cce0a7\") " pod="openstack/swift-ring-rebalance-bvw9p"
Feb 17 20:25:38 crc kubenswrapper[4793]: I0217 20:25:38.463048 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6-ring-data-devices\") pod \"swift-ring-rebalance-tk4bk\" (UID: \"9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6\") " pod="openstack/swift-ring-rebalance-tk4bk"
Feb 17 20:25:38 crc kubenswrapper[4793]: I0217 20:25:38.464105 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6-etc-swift\") pod \"swift-ring-rebalance-tk4bk\" (UID: \"9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6\") " pod="openstack/swift-ring-rebalance-tk4bk"
Feb 17 20:25:38 crc kubenswrapper[4793]: I0217 20:25:38.464317 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6-scripts\") pod \"swift-ring-rebalance-tk4bk\" (UID: \"9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6\") " pod="openstack/swift-ring-rebalance-tk4bk"
Feb 17 20:25:38 crc kubenswrapper[4793]: I0217 20:25:38.470716 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d5841af-e328-4ea6-a184-546676cce0a7-combined-ca-bundle\") pod \"swift-ring-rebalance-bvw9p\" (UID: \"3d5841af-e328-4ea6-a184-546676cce0a7\") " pod="openstack/swift-ring-rebalance-bvw9p"
Feb 17 20:25:38 crc kubenswrapper[4793]: I0217 20:25:38.475443 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3d5841af-e328-4ea6-a184-546676cce0a7-dispersionconf\") pod \"swift-ring-rebalance-bvw9p\" (UID: \"3d5841af-e328-4ea6-a184-546676cce0a7\") " pod="openstack/swift-ring-rebalance-bvw9p"
Feb 17 20:25:38 crc kubenswrapper[4793]: I0217 20:25:38.478059 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6-dispersionconf\") pod \"swift-ring-rebalance-tk4bk\" (UID: \"9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6\") " pod="openstack/swift-ring-rebalance-tk4bk"
Feb 17 20:25:38 crc kubenswrapper[4793]: I0217 20:25:38.479301 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3d5841af-e328-4ea6-a184-546676cce0a7-swiftconf\") pod \"swift-ring-rebalance-bvw9p\" (UID: \"3d5841af-e328-4ea6-a184-546676cce0a7\") " pod="openstack/swift-ring-rebalance-bvw9p"
Feb 17 20:25:38 crc kubenswrapper[4793]: I0217 20:25:38.493288 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6-swiftconf\") pod \"swift-ring-rebalance-tk4bk\" (UID: \"9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6\") " pod="openstack/swift-ring-rebalance-tk4bk"
Feb 17 20:25:38 crc kubenswrapper[4793]: I0217 20:25:38.497548 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6-combined-ca-bundle\") pod \"swift-ring-rebalance-tk4bk\" (UID: \"9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6\") " pod="openstack/swift-ring-rebalance-tk4bk"
Feb 17 20:25:38 crc kubenswrapper[4793]: I0217 20:25:38.497757 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ft8j\" (UniqueName: \"kubernetes.io/projected/9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6-kube-api-access-6ft8j\") pod \"swift-ring-rebalance-tk4bk\" (UID: \"9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6\") " pod="openstack/swift-ring-rebalance-tk4bk"
Feb 17 20:25:38 crc kubenswrapper[4793]: I0217 20:25:38.530408 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbjdt\" (UniqueName: \"kubernetes.io/projected/3d5841af-e328-4ea6-a184-546676cce0a7-kube-api-access-jbjdt\") pod \"swift-ring-rebalance-bvw9p\" (UID: \"3d5841af-e328-4ea6-a184-546676cce0a7\") " pod="openstack/swift-ring-rebalance-bvw9p"
Feb 17 20:25:38 crc kubenswrapper[4793]: I0217 20:25:38.568085 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-bvw9p"
Feb 17 20:25:38 crc kubenswrapper[4793]: I0217 20:25:38.957910 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-tk4bk"
Feb 17 20:25:39 crc kubenswrapper[4793]: I0217 20:25:39.039355 4793 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/swift-ring-rebalance-tk4bk" Feb 17 20:25:39 crc kubenswrapper[4793]: I0217 20:25:39.173302 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6-dispersionconf\") pod \"9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6\" (UID: \"9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6\") " Feb 17 20:25:39 crc kubenswrapper[4793]: I0217 20:25:39.173373 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6-combined-ca-bundle\") pod \"9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6\" (UID: \"9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6\") " Feb 17 20:25:39 crc kubenswrapper[4793]: I0217 20:25:39.173417 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6-etc-swift\") pod \"9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6\" (UID: \"9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6\") " Feb 17 20:25:39 crc kubenswrapper[4793]: I0217 20:25:39.173475 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6-ring-data-devices\") pod \"9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6\" (UID: \"9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6\") " Feb 17 20:25:39 crc kubenswrapper[4793]: I0217 20:25:39.173510 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ft8j\" (UniqueName: \"kubernetes.io/projected/9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6-kube-api-access-6ft8j\") pod \"9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6\" (UID: \"9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6\") " Feb 17 20:25:39 crc kubenswrapper[4793]: I0217 20:25:39.173561 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"swiftconf\" (UniqueName: \"kubernetes.io/secret/9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6-swiftconf\") pod \"9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6\" (UID: \"9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6\") " Feb 17 20:25:39 crc kubenswrapper[4793]: I0217 20:25:39.173597 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6-scripts\") pod \"9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6\" (UID: \"9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6\") " Feb 17 20:25:39 crc kubenswrapper[4793]: I0217 20:25:39.174279 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6" (UID: "9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:25:39 crc kubenswrapper[4793]: I0217 20:25:39.174935 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6-scripts" (OuterVolumeSpecName: "scripts") pod "9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6" (UID: "9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:25:39 crc kubenswrapper[4793]: I0217 20:25:39.175400 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6" (UID: "9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:25:39 crc kubenswrapper[4793]: I0217 20:25:39.179017 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6-kube-api-access-6ft8j" (OuterVolumeSpecName: "kube-api-access-6ft8j") pod "9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6" (UID: "9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6"). InnerVolumeSpecName "kube-api-access-6ft8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:25:39 crc kubenswrapper[4793]: I0217 20:25:39.179423 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6" (UID: "9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:25:39 crc kubenswrapper[4793]: I0217 20:25:39.182786 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6" (UID: "9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:25:39 crc kubenswrapper[4793]: I0217 20:25:39.185220 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6" (UID: "9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:25:39 crc kubenswrapper[4793]: I0217 20:25:39.255213 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56758db5-9hprz" Feb 17 20:25:39 crc kubenswrapper[4793]: I0217 20:25:39.275254 4793 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 17 20:25:39 crc kubenswrapper[4793]: I0217 20:25:39.275293 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 20:25:39 crc kubenswrapper[4793]: I0217 20:25:39.275308 4793 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 17 20:25:39 crc kubenswrapper[4793]: I0217 20:25:39.275321 4793 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 17 20:25:39 crc kubenswrapper[4793]: I0217 20:25:39.275331 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ft8j\" (UniqueName: \"kubernetes.io/projected/9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6-kube-api-access-6ft8j\") on node \"crc\" DevicePath \"\"" Feb 17 20:25:39 crc kubenswrapper[4793]: I0217 20:25:39.275340 4793 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 17 20:25:39 crc kubenswrapper[4793]: I0217 20:25:39.275348 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 20:25:39 crc kubenswrapper[4793]: I0217 20:25:39.276335 4793 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-549c9b7879-jznsj"] Feb 17 20:25:39 crc kubenswrapper[4793]: W0217 20:25:39.278260 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc67b702_9dc7_4333_94f4_df82b696021d.slice/crio-fee2df938a1bf4d5e8c084bdd81308358f475326afc6e1c781157f8c50bf0db0 WatchSource:0}: Error finding container fee2df938a1bf4d5e8c084bdd81308358f475326afc6e1c781157f8c50bf0db0: Status 404 returned error can't find the container with id fee2df938a1bf4d5e8c084bdd81308358f475326afc6e1c781157f8c50bf0db0 Feb 17 20:25:39 crc kubenswrapper[4793]: W0217 20:25:39.280145 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d5841af_e328_4ea6_a184_546676cce0a7.slice/crio-b6749344053c36734687feb2613a91e178e9284b060d3301b2968900cf2473c8 WatchSource:0}: Error finding container b6749344053c36734687feb2613a91e178e9284b060d3301b2968900cf2473c8: Status 404 returned error can't find the container with id b6749344053c36734687feb2613a91e178e9284b060d3301b2968900cf2473c8 Feb 17 20:25:39 crc kubenswrapper[4793]: I0217 20:25:39.295877 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-bvw9p"] Feb 17 20:25:39 crc kubenswrapper[4793]: I0217 20:25:39.376165 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0e86eee-e216-4934-86a4-0f87c29ad3e6-config\") pod \"d0e86eee-e216-4934-86a4-0f87c29ad3e6\" (UID: \"d0e86eee-e216-4934-86a4-0f87c29ad3e6\") " Feb 17 20:25:39 crc kubenswrapper[4793]: I0217 20:25:39.376720 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d0e86eee-e216-4934-86a4-0f87c29ad3e6-ovsdbserver-nb\") pod \"d0e86eee-e216-4934-86a4-0f87c29ad3e6\" (UID: 
\"d0e86eee-e216-4934-86a4-0f87c29ad3e6\") " Feb 17 20:25:39 crc kubenswrapper[4793]: I0217 20:25:39.376791 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9n7rs\" (UniqueName: \"kubernetes.io/projected/d0e86eee-e216-4934-86a4-0f87c29ad3e6-kube-api-access-9n7rs\") pod \"d0e86eee-e216-4934-86a4-0f87c29ad3e6\" (UID: \"d0e86eee-e216-4934-86a4-0f87c29ad3e6\") " Feb 17 20:25:39 crc kubenswrapper[4793]: I0217 20:25:39.376854 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0e86eee-e216-4934-86a4-0f87c29ad3e6-dns-svc\") pod \"d0e86eee-e216-4934-86a4-0f87c29ad3e6\" (UID: \"d0e86eee-e216-4934-86a4-0f87c29ad3e6\") " Feb 17 20:25:39 crc kubenswrapper[4793]: I0217 20:25:39.380506 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0e86eee-e216-4934-86a4-0f87c29ad3e6-kube-api-access-9n7rs" (OuterVolumeSpecName: "kube-api-access-9n7rs") pod "d0e86eee-e216-4934-86a4-0f87c29ad3e6" (UID: "d0e86eee-e216-4934-86a4-0f87c29ad3e6"). InnerVolumeSpecName "kube-api-access-9n7rs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:25:39 crc kubenswrapper[4793]: I0217 20:25:39.456608 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0e86eee-e216-4934-86a4-0f87c29ad3e6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d0e86eee-e216-4934-86a4-0f87c29ad3e6" (UID: "d0e86eee-e216-4934-86a4-0f87c29ad3e6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:25:39 crc kubenswrapper[4793]: I0217 20:25:39.460214 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0e86eee-e216-4934-86a4-0f87c29ad3e6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d0e86eee-e216-4934-86a4-0f87c29ad3e6" (UID: "d0e86eee-e216-4934-86a4-0f87c29ad3e6"). 
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:25:39 crc kubenswrapper[4793]: I0217 20:25:39.463540 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0e86eee-e216-4934-86a4-0f87c29ad3e6-config" (OuterVolumeSpecName: "config") pod "d0e86eee-e216-4934-86a4-0f87c29ad3e6" (UID: "d0e86eee-e216-4934-86a4-0f87c29ad3e6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:25:39 crc kubenswrapper[4793]: I0217 20:25:39.478669 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b1695ca3-290a-44c5-8771-146029a6054a-etc-swift\") pod \"swift-storage-0\" (UID: \"b1695ca3-290a-44c5-8771-146029a6054a\") " pod="openstack/swift-storage-0" Feb 17 20:25:39 crc kubenswrapper[4793]: E0217 20:25:39.478822 4793 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 17 20:25:39 crc kubenswrapper[4793]: E0217 20:25:39.479168 4793 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 17 20:25:39 crc kubenswrapper[4793]: I0217 20:25:39.479223 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9n7rs\" (UniqueName: \"kubernetes.io/projected/d0e86eee-e216-4934-86a4-0f87c29ad3e6-kube-api-access-9n7rs\") on node \"crc\" DevicePath \"\"" Feb 17 20:25:39 crc kubenswrapper[4793]: E0217 20:25:39.479251 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b1695ca3-290a-44c5-8771-146029a6054a-etc-swift podName:b1695ca3-290a-44c5-8771-146029a6054a nodeName:}" failed. No retries permitted until 2026-02-17 20:25:41.479224538 +0000 UTC m=+1016.770922839 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b1695ca3-290a-44c5-8771-146029a6054a-etc-swift") pod "swift-storage-0" (UID: "b1695ca3-290a-44c5-8771-146029a6054a") : configmap "swift-ring-files" not found Feb 17 20:25:39 crc kubenswrapper[4793]: I0217 20:25:39.479285 4793 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0e86eee-e216-4934-86a4-0f87c29ad3e6-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 20:25:39 crc kubenswrapper[4793]: I0217 20:25:39.479305 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0e86eee-e216-4934-86a4-0f87c29ad3e6-config\") on node \"crc\" DevicePath \"\"" Feb 17 20:25:39 crc kubenswrapper[4793]: I0217 20:25:39.479317 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d0e86eee-e216-4934-86a4-0f87c29ad3e6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 20:25:39 crc kubenswrapper[4793]: E0217 20:25:39.577043 4793 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc67b702_9dc7_4333_94f4_df82b696021d.slice/crio-conmon-8a5d4be5dbed970881e3100d84aae684886c4f8adcb6fc3dd563152bf6737c83.scope\": RecentStats: unable to find data in memory cache]" Feb 17 20:25:39 crc kubenswrapper[4793]: I0217 20:25:39.690514 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5f5477558c-czdv8" Feb 17 20:25:39 crc kubenswrapper[4793]: I0217 20:25:39.990530 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"489cc350-87d7-42d2-ba31-b4bbc29b5b80","Type":"ContainerStarted","Data":"08ff0128703ca9b6446666f3b9419eda34b25209e7662a1d489bf3053f971e40"} Feb 17 20:25:39 crc kubenswrapper[4793]: I0217 
20:25:39.992612 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-bvw9p" event={"ID":"3d5841af-e328-4ea6-a184-546676cce0a7","Type":"ContainerStarted","Data":"b6749344053c36734687feb2613a91e178e9284b060d3301b2968900cf2473c8"} Feb 17 20:25:39 crc kubenswrapper[4793]: I0217 20:25:39.994500 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56758db5-9hprz" event={"ID":"d0e86eee-e216-4934-86a4-0f87c29ad3e6","Type":"ContainerDied","Data":"edf94af910a00bb95539c66ab77c240d11c43b1ef49fe9d2e359fe199963ccf0"} Feb 17 20:25:39 crc kubenswrapper[4793]: I0217 20:25:39.994558 4793 scope.go:117] "RemoveContainer" containerID="9158ff5582ff2b46b0058366dfd277e31c221d70ca84c46b11c71a800bf2bfe7" Feb 17 20:25:39 crc kubenswrapper[4793]: I0217 20:25:39.994740 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56758db5-9hprz" Feb 17 20:25:39 crc kubenswrapper[4793]: I0217 20:25:39.999661 4793 generic.go:334] "Generic (PLEG): container finished" podID="fc67b702-9dc7-4333-94f4-df82b696021d" containerID="8a5d4be5dbed970881e3100d84aae684886c4f8adcb6fc3dd563152bf6737c83" exitCode=0 Feb 17 20:25:39 crc kubenswrapper[4793]: I0217 20:25:39.999764 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-tk4bk" Feb 17 20:25:40 crc kubenswrapper[4793]: I0217 20:25:39.999931 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-549c9b7879-jznsj" event={"ID":"fc67b702-9dc7-4333-94f4-df82b696021d","Type":"ContainerDied","Data":"8a5d4be5dbed970881e3100d84aae684886c4f8adcb6fc3dd563152bf6737c83"} Feb 17 20:25:40 crc kubenswrapper[4793]: I0217 20:25:39.999975 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-549c9b7879-jznsj" event={"ID":"fc67b702-9dc7-4333-94f4-df82b696021d","Type":"ContainerStarted","Data":"fee2df938a1bf4d5e8c084bdd81308358f475326afc6e1c781157f8c50bf0db0"} Feb 17 20:25:40 crc kubenswrapper[4793]: I0217 20:25:40.076390 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-tk4bk"] Feb 17 20:25:40 crc kubenswrapper[4793]: I0217 20:25:40.086266 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-tk4bk"] Feb 17 20:25:40 crc kubenswrapper[4793]: I0217 20:25:40.093168 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56758db5-9hprz"] Feb 17 20:25:40 crc kubenswrapper[4793]: I0217 20:25:40.100822 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56758db5-9hprz"] Feb 17 20:25:41 crc kubenswrapper[4793]: I0217 20:25:41.517843 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b1695ca3-290a-44c5-8771-146029a6054a-etc-swift\") pod \"swift-storage-0\" (UID: \"b1695ca3-290a-44c5-8771-146029a6054a\") " pod="openstack/swift-storage-0" Feb 17 20:25:41 crc kubenswrapper[4793]: E0217 20:25:41.518031 4793 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 17 20:25:41 crc kubenswrapper[4793]: E0217 20:25:41.518298 4793 projected.go:194] Error preparing data for projected volume 
etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 17 20:25:41 crc kubenswrapper[4793]: E0217 20:25:41.518361 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b1695ca3-290a-44c5-8771-146029a6054a-etc-swift podName:b1695ca3-290a-44c5-8771-146029a6054a nodeName:}" failed. No retries permitted until 2026-02-17 20:25:45.518341108 +0000 UTC m=+1020.810039429 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b1695ca3-290a-44c5-8771-146029a6054a-etc-swift") pod "swift-storage-0" (UID: "b1695ca3-290a-44c5-8771-146029a6054a") : configmap "swift-ring-files" not found Feb 17 20:25:41 crc kubenswrapper[4793]: I0217 20:25:41.550729 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6" path="/var/lib/kubelet/pods/9d3e7bd7-38fd-4874-b3e6-99c6f15a55e6/volumes" Feb 17 20:25:41 crc kubenswrapper[4793]: I0217 20:25:41.551366 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0e86eee-e216-4934-86a4-0f87c29ad3e6" path="/var/lib/kubelet/pods/d0e86eee-e216-4934-86a4-0f87c29ad3e6/volumes" Feb 17 20:25:42 crc kubenswrapper[4793]: I0217 20:25:42.733802 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 17 20:25:42 crc kubenswrapper[4793]: I0217 20:25:42.734173 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 17 20:25:42 crc kubenswrapper[4793]: I0217 20:25:42.793832 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-dl2rg"] Feb 17 20:25:42 crc kubenswrapper[4793]: E0217 20:25:42.794156 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0e86eee-e216-4934-86a4-0f87c29ad3e6" containerName="dnsmasq-dns" Feb 17 20:25:42 crc kubenswrapper[4793]: I0217 20:25:42.794172 4793 
state_mem.go:107] "Deleted CPUSet assignment" podUID="d0e86eee-e216-4934-86a4-0f87c29ad3e6" containerName="dnsmasq-dns" Feb 17 20:25:42 crc kubenswrapper[4793]: E0217 20:25:42.794195 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0e86eee-e216-4934-86a4-0f87c29ad3e6" containerName="init" Feb 17 20:25:42 crc kubenswrapper[4793]: I0217 20:25:42.794203 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0e86eee-e216-4934-86a4-0f87c29ad3e6" containerName="init" Feb 17 20:25:42 crc kubenswrapper[4793]: I0217 20:25:42.794375 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0e86eee-e216-4934-86a4-0f87c29ad3e6" containerName="dnsmasq-dns" Feb 17 20:25:42 crc kubenswrapper[4793]: I0217 20:25:42.794923 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-dl2rg" Feb 17 20:25:42 crc kubenswrapper[4793]: I0217 20:25:42.798568 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 17 20:25:42 crc kubenswrapper[4793]: I0217 20:25:42.805497 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-dl2rg"] Feb 17 20:25:42 crc kubenswrapper[4793]: I0217 20:25:42.868262 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 17 20:25:42 crc kubenswrapper[4793]: I0217 20:25:42.944060 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlqs9\" (UniqueName: \"kubernetes.io/projected/9d5e1df8-743b-47af-aba6-59f7ec4d3c42-kube-api-access-wlqs9\") pod \"root-account-create-update-dl2rg\" (UID: \"9d5e1df8-743b-47af-aba6-59f7ec4d3c42\") " pod="openstack/root-account-create-update-dl2rg" Feb 17 20:25:42 crc kubenswrapper[4793]: I0217 20:25:42.944195 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d5e1df8-743b-47af-aba6-59f7ec4d3c42-operator-scripts\") pod \"root-account-create-update-dl2rg\" (UID: \"9d5e1df8-743b-47af-aba6-59f7ec4d3c42\") " pod="openstack/root-account-create-update-dl2rg" Feb 17 20:25:43 crc kubenswrapper[4793]: I0217 20:25:43.045831 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlqs9\" (UniqueName: \"kubernetes.io/projected/9d5e1df8-743b-47af-aba6-59f7ec4d3c42-kube-api-access-wlqs9\") pod \"root-account-create-update-dl2rg\" (UID: \"9d5e1df8-743b-47af-aba6-59f7ec4d3c42\") " pod="openstack/root-account-create-update-dl2rg" Feb 17 20:25:43 crc kubenswrapper[4793]: I0217 20:25:43.045997 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d5e1df8-743b-47af-aba6-59f7ec4d3c42-operator-scripts\") pod \"root-account-create-update-dl2rg\" (UID: \"9d5e1df8-743b-47af-aba6-59f7ec4d3c42\") " pod="openstack/root-account-create-update-dl2rg" Feb 17 20:25:43 crc kubenswrapper[4793]: I0217 20:25:43.046778 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d5e1df8-743b-47af-aba6-59f7ec4d3c42-operator-scripts\") pod \"root-account-create-update-dl2rg\" (UID: \"9d5e1df8-743b-47af-aba6-59f7ec4d3c42\") " pod="openstack/root-account-create-update-dl2rg" Feb 17 20:25:43 crc kubenswrapper[4793]: I0217 20:25:43.064195 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlqs9\" (UniqueName: \"kubernetes.io/projected/9d5e1df8-743b-47af-aba6-59f7ec4d3c42-kube-api-access-wlqs9\") pod \"root-account-create-update-dl2rg\" (UID: \"9d5e1df8-743b-47af-aba6-59f7ec4d3c42\") " pod="openstack/root-account-create-update-dl2rg" Feb 17 20:25:43 crc kubenswrapper[4793]: I0217 20:25:43.119721 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-dl2rg"
Feb 17 20:25:43 crc kubenswrapper[4793]: I0217 20:25:43.179860 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Feb 17 20:25:45 crc kubenswrapper[4793]: I0217 20:25:45.106158 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-jkj8g"]
Feb 17 20:25:45 crc kubenswrapper[4793]: I0217 20:25:45.109025 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-jkj8g"
Feb 17 20:25:45 crc kubenswrapper[4793]: I0217 20:25:45.120977 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-jkj8g"]
Feb 17 20:25:45 crc kubenswrapper[4793]: I0217 20:25:45.202057 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9f641fd-f904-45da-9da3-6e0d1545fc8c-operator-scripts\") pod \"keystone-db-create-jkj8g\" (UID: \"a9f641fd-f904-45da-9da3-6e0d1545fc8c\") " pod="openstack/keystone-db-create-jkj8g"
Feb 17 20:25:45 crc kubenswrapper[4793]: I0217 20:25:45.202378 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjzgp\" (UniqueName: \"kubernetes.io/projected/a9f641fd-f904-45da-9da3-6e0d1545fc8c-kube-api-access-gjzgp\") pod \"keystone-db-create-jkj8g\" (UID: \"a9f641fd-f904-45da-9da3-6e0d1545fc8c\") " pod="openstack/keystone-db-create-jkj8g"
Feb 17 20:25:45 crc kubenswrapper[4793]: I0217 20:25:45.206453 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-0ff3-account-create-update-bnxqh"]
Feb 17 20:25:45 crc kubenswrapper[4793]: I0217 20:25:45.207813 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0ff3-account-create-update-bnxqh"
Feb 17 20:25:45 crc kubenswrapper[4793]: I0217 20:25:45.212743 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Feb 17 20:25:45 crc kubenswrapper[4793]: I0217 20:25:45.219799 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-0ff3-account-create-update-bnxqh"]
Feb 17 20:25:45 crc kubenswrapper[4793]: I0217 20:25:45.303605 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1332e2d-624b-4d26-a53e-6b62e5bb3a86-operator-scripts\") pod \"keystone-0ff3-account-create-update-bnxqh\" (UID: \"b1332e2d-624b-4d26-a53e-6b62e5bb3a86\") " pod="openstack/keystone-0ff3-account-create-update-bnxqh"
Feb 17 20:25:45 crc kubenswrapper[4793]: I0217 20:25:45.303708 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cf55\" (UniqueName: \"kubernetes.io/projected/b1332e2d-624b-4d26-a53e-6b62e5bb3a86-kube-api-access-6cf55\") pod \"keystone-0ff3-account-create-update-bnxqh\" (UID: \"b1332e2d-624b-4d26-a53e-6b62e5bb3a86\") " pod="openstack/keystone-0ff3-account-create-update-bnxqh"
Feb 17 20:25:45 crc kubenswrapper[4793]: I0217 20:25:45.303787 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9f641fd-f904-45da-9da3-6e0d1545fc8c-operator-scripts\") pod \"keystone-db-create-jkj8g\" (UID: \"a9f641fd-f904-45da-9da3-6e0d1545fc8c\") " pod="openstack/keystone-db-create-jkj8g"
Feb 17 20:25:45 crc kubenswrapper[4793]: I0217 20:25:45.303841 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjzgp\" (UniqueName: \"kubernetes.io/projected/a9f641fd-f904-45da-9da3-6e0d1545fc8c-kube-api-access-gjzgp\") pod \"keystone-db-create-jkj8g\" (UID: \"a9f641fd-f904-45da-9da3-6e0d1545fc8c\") " pod="openstack/keystone-db-create-jkj8g"
Feb 17 20:25:45 crc kubenswrapper[4793]: I0217 20:25:45.304864 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9f641fd-f904-45da-9da3-6e0d1545fc8c-operator-scripts\") pod \"keystone-db-create-jkj8g\" (UID: \"a9f641fd-f904-45da-9da3-6e0d1545fc8c\") " pod="openstack/keystone-db-create-jkj8g"
Feb 17 20:25:45 crc kubenswrapper[4793]: I0217 20:25:45.325327 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjzgp\" (UniqueName: \"kubernetes.io/projected/a9f641fd-f904-45da-9da3-6e0d1545fc8c-kube-api-access-gjzgp\") pod \"keystone-db-create-jkj8g\" (UID: \"a9f641fd-f904-45da-9da3-6e0d1545fc8c\") " pod="openstack/keystone-db-create-jkj8g"
Feb 17 20:25:45 crc kubenswrapper[4793]: I0217 20:25:45.400474 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-pc6kf"]
Feb 17 20:25:45 crc kubenswrapper[4793]: I0217 20:25:45.402166 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-pc6kf"
Feb 17 20:25:45 crc kubenswrapper[4793]: I0217 20:25:45.405737 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1332e2d-624b-4d26-a53e-6b62e5bb3a86-operator-scripts\") pod \"keystone-0ff3-account-create-update-bnxqh\" (UID: \"b1332e2d-624b-4d26-a53e-6b62e5bb3a86\") " pod="openstack/keystone-0ff3-account-create-update-bnxqh"
Feb 17 20:25:45 crc kubenswrapper[4793]: I0217 20:25:45.405792 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cf55\" (UniqueName: \"kubernetes.io/projected/b1332e2d-624b-4d26-a53e-6b62e5bb3a86-kube-api-access-6cf55\") pod \"keystone-0ff3-account-create-update-bnxqh\" (UID: \"b1332e2d-624b-4d26-a53e-6b62e5bb3a86\") " pod="openstack/keystone-0ff3-account-create-update-bnxqh"
Feb 17 20:25:45 crc kubenswrapper[4793]: I0217 20:25:45.406813 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1332e2d-624b-4d26-a53e-6b62e5bb3a86-operator-scripts\") pod \"keystone-0ff3-account-create-update-bnxqh\" (UID: \"b1332e2d-624b-4d26-a53e-6b62e5bb3a86\") " pod="openstack/keystone-0ff3-account-create-update-bnxqh"
Feb 17 20:25:45 crc kubenswrapper[4793]: I0217 20:25:45.409050 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-pc6kf"]
Feb 17 20:25:45 crc kubenswrapper[4793]: I0217 20:25:45.430933 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cf55\" (UniqueName: \"kubernetes.io/projected/b1332e2d-624b-4d26-a53e-6b62e5bb3a86-kube-api-access-6cf55\") pod \"keystone-0ff3-account-create-update-bnxqh\" (UID: \"b1332e2d-624b-4d26-a53e-6b62e5bb3a86\") " pod="openstack/keystone-0ff3-account-create-update-bnxqh"
Feb 17 20:25:45 crc kubenswrapper[4793]: I0217 20:25:45.437265 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-jkj8g"
Feb 17 20:25:45 crc kubenswrapper[4793]: I0217 20:25:45.507517 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w47nn\" (UniqueName: \"kubernetes.io/projected/4adfee96-44d3-49a6-a577-222950b89117-kube-api-access-w47nn\") pod \"placement-db-create-pc6kf\" (UID: \"4adfee96-44d3-49a6-a577-222950b89117\") " pod="openstack/placement-db-create-pc6kf"
Feb 17 20:25:45 crc kubenswrapper[4793]: I0217 20:25:45.507611 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4adfee96-44d3-49a6-a577-222950b89117-operator-scripts\") pod \"placement-db-create-pc6kf\" (UID: \"4adfee96-44d3-49a6-a577-222950b89117\") " pod="openstack/placement-db-create-pc6kf"
Feb 17 20:25:45 crc kubenswrapper[4793]: I0217 20:25:45.523397 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0ff3-account-create-update-bnxqh"
Feb 17 20:25:45 crc kubenswrapper[4793]: I0217 20:25:45.524312 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-eac5-account-create-update-gmxcc"]
Feb 17 20:25:45 crc kubenswrapper[4793]: I0217 20:25:45.525327 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-eac5-account-create-update-gmxcc"
Feb 17 20:25:45 crc kubenswrapper[4793]: I0217 20:25:45.530819 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Feb 17 20:25:45 crc kubenswrapper[4793]: I0217 20:25:45.536309 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-eac5-account-create-update-gmxcc"]
Feb 17 20:25:45 crc kubenswrapper[4793]: I0217 20:25:45.609205 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c5c72e3-f686-46fa-ac5d-684106630bd6-operator-scripts\") pod \"placement-eac5-account-create-update-gmxcc\" (UID: \"4c5c72e3-f686-46fa-ac5d-684106630bd6\") " pod="openstack/placement-eac5-account-create-update-gmxcc"
Feb 17 20:25:45 crc kubenswrapper[4793]: I0217 20:25:45.609313 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv9sn\" (UniqueName: \"kubernetes.io/projected/4c5c72e3-f686-46fa-ac5d-684106630bd6-kube-api-access-qv9sn\") pod \"placement-eac5-account-create-update-gmxcc\" (UID: \"4c5c72e3-f686-46fa-ac5d-684106630bd6\") " pod="openstack/placement-eac5-account-create-update-gmxcc"
Feb 17 20:25:45 crc kubenswrapper[4793]: I0217 20:25:45.609356 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b1695ca3-290a-44c5-8771-146029a6054a-etc-swift\") pod \"swift-storage-0\" (UID: \"b1695ca3-290a-44c5-8771-146029a6054a\") " pod="openstack/swift-storage-0"
Feb 17 20:25:45 crc kubenswrapper[4793]: I0217 20:25:45.609388 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w47nn\" (UniqueName: \"kubernetes.io/projected/4adfee96-44d3-49a6-a577-222950b89117-kube-api-access-w47nn\") pod \"placement-db-create-pc6kf\" (UID: \"4adfee96-44d3-49a6-a577-222950b89117\") " pod="openstack/placement-db-create-pc6kf"
Feb 17 20:25:45 crc kubenswrapper[4793]: E0217 20:25:45.609805 4793 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 17 20:25:45 crc kubenswrapper[4793]: E0217 20:25:45.609833 4793 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 17 20:25:45 crc kubenswrapper[4793]: E0217 20:25:45.609886 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b1695ca3-290a-44c5-8771-146029a6054a-etc-swift podName:b1695ca3-290a-44c5-8771-146029a6054a nodeName:}" failed. No retries permitted until 2026-02-17 20:25:53.609866209 +0000 UTC m=+1028.901564580 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b1695ca3-290a-44c5-8771-146029a6054a-etc-swift") pod "swift-storage-0" (UID: "b1695ca3-290a-44c5-8771-146029a6054a") : configmap "swift-ring-files" not found
Feb 17 20:25:45 crc kubenswrapper[4793]: I0217 20:25:45.610424 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4adfee96-44d3-49a6-a577-222950b89117-operator-scripts\") pod \"placement-db-create-pc6kf\" (UID: \"4adfee96-44d3-49a6-a577-222950b89117\") " pod="openstack/placement-db-create-pc6kf"
Feb 17 20:25:45 crc kubenswrapper[4793]: I0217 20:25:45.611727 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4adfee96-44d3-49a6-a577-222950b89117-operator-scripts\") pod \"placement-db-create-pc6kf\" (UID: \"4adfee96-44d3-49a6-a577-222950b89117\") " pod="openstack/placement-db-create-pc6kf"
Feb 17 20:25:45 crc kubenswrapper[4793]: I0217 20:25:45.631026 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w47nn\" (UniqueName: \"kubernetes.io/projected/4adfee96-44d3-49a6-a577-222950b89117-kube-api-access-w47nn\") pod \"placement-db-create-pc6kf\" (UID: \"4adfee96-44d3-49a6-a577-222950b89117\") " pod="openstack/placement-db-create-pc6kf"
Feb 17 20:25:45 crc kubenswrapper[4793]: I0217 20:25:45.714241 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv9sn\" (UniqueName: \"kubernetes.io/projected/4c5c72e3-f686-46fa-ac5d-684106630bd6-kube-api-access-qv9sn\") pod \"placement-eac5-account-create-update-gmxcc\" (UID: \"4c5c72e3-f686-46fa-ac5d-684106630bd6\") " pod="openstack/placement-eac5-account-create-update-gmxcc"
Feb 17 20:25:45 crc kubenswrapper[4793]: I0217 20:25:45.714436 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c5c72e3-f686-46fa-ac5d-684106630bd6-operator-scripts\") pod \"placement-eac5-account-create-update-gmxcc\" (UID: \"4c5c72e3-f686-46fa-ac5d-684106630bd6\") " pod="openstack/placement-eac5-account-create-update-gmxcc"
Feb 17 20:25:45 crc kubenswrapper[4793]: I0217 20:25:45.715233 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c5c72e3-f686-46fa-ac5d-684106630bd6-operator-scripts\") pod \"placement-eac5-account-create-update-gmxcc\" (UID: \"4c5c72e3-f686-46fa-ac5d-684106630bd6\") " pod="openstack/placement-eac5-account-create-update-gmxcc"
Feb 17 20:25:45 crc kubenswrapper[4793]: I0217 20:25:45.722170 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-pc6kf"
Feb 17 20:25:45 crc kubenswrapper[4793]: I0217 20:25:45.733379 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv9sn\" (UniqueName: \"kubernetes.io/projected/4c5c72e3-f686-46fa-ac5d-684106630bd6-kube-api-access-qv9sn\") pod \"placement-eac5-account-create-update-gmxcc\" (UID: \"4c5c72e3-f686-46fa-ac5d-684106630bd6\") " pod="openstack/placement-eac5-account-create-update-gmxcc"
Feb 17 20:25:45 crc kubenswrapper[4793]: I0217 20:25:45.908159 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-eac5-account-create-update-gmxcc"
Feb 17 20:25:46 crc kubenswrapper[4793]: I0217 20:25:46.559926 4793 scope.go:117] "RemoveContainer" containerID="7fd1c2644bdce3983e88ab25c33341b36a2610cc27b214989653350f8d938c4a"
Feb 17 20:25:46 crc kubenswrapper[4793]: I0217 20:25:46.680646 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-create-msx9t"]
Feb 17 20:25:46 crc kubenswrapper[4793]: I0217 20:25:46.681816 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-msx9t"
Feb 17 20:25:46 crc kubenswrapper[4793]: I0217 20:25:46.688320 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-msx9t"]
Feb 17 20:25:46 crc kubenswrapper[4793]: I0217 20:25:46.787282 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-a574-account-create-update-vdv59"]
Feb 17 20:25:46 crc kubenswrapper[4793]: I0217 20:25:46.788388 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-a574-account-create-update-vdv59"
Feb 17 20:25:46 crc kubenswrapper[4793]: I0217 20:25:46.795938 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-db-secret"
Feb 17 20:25:46 crc kubenswrapper[4793]: I0217 20:25:46.804805 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-a574-account-create-update-vdv59"]
Feb 17 20:25:46 crc kubenswrapper[4793]: I0217 20:25:46.835030 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td7jt\" (UniqueName: \"kubernetes.io/projected/47ca7f4a-8678-4fb0-ac5b-726a4cf6a674-kube-api-access-td7jt\") pod \"watcher-db-create-msx9t\" (UID: \"47ca7f4a-8678-4fb0-ac5b-726a4cf6a674\") " pod="openstack/watcher-db-create-msx9t"
Feb 17 20:25:46 crc kubenswrapper[4793]: I0217 20:25:46.835276 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47ca7f4a-8678-4fb0-ac5b-726a4cf6a674-operator-scripts\") pod \"watcher-db-create-msx9t\" (UID: \"47ca7f4a-8678-4fb0-ac5b-726a4cf6a674\") " pod="openstack/watcher-db-create-msx9t"
Feb 17 20:25:46 crc kubenswrapper[4793]: I0217 20:25:46.938480 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47ca7f4a-8678-4fb0-ac5b-726a4cf6a674-operator-scripts\") pod \"watcher-db-create-msx9t\" (UID: \"47ca7f4a-8678-4fb0-ac5b-726a4cf6a674\") " pod="openstack/watcher-db-create-msx9t"
Feb 17 20:25:46 crc kubenswrapper[4793]: I0217 20:25:46.938829 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21f7d935-51e8-4a7d-90fe-1248b48d8361-operator-scripts\") pod \"watcher-a574-account-create-update-vdv59\" (UID: \"21f7d935-51e8-4a7d-90fe-1248b48d8361\") " pod="openstack/watcher-a574-account-create-update-vdv59"
Feb 17 20:25:46 crc kubenswrapper[4793]: I0217 20:25:46.938909 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88nwr\" (UniqueName: \"kubernetes.io/projected/21f7d935-51e8-4a7d-90fe-1248b48d8361-kube-api-access-88nwr\") pod \"watcher-a574-account-create-update-vdv59\" (UID: \"21f7d935-51e8-4a7d-90fe-1248b48d8361\") " pod="openstack/watcher-a574-account-create-update-vdv59"
Feb 17 20:25:46 crc kubenswrapper[4793]: I0217 20:25:46.939004 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-td7jt\" (UniqueName: \"kubernetes.io/projected/47ca7f4a-8678-4fb0-ac5b-726a4cf6a674-kube-api-access-td7jt\") pod \"watcher-db-create-msx9t\" (UID: \"47ca7f4a-8678-4fb0-ac5b-726a4cf6a674\") " pod="openstack/watcher-db-create-msx9t"
Feb 17 20:25:46 crc kubenswrapper[4793]: I0217 20:25:46.939488 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47ca7f4a-8678-4fb0-ac5b-726a4cf6a674-operator-scripts\") pod \"watcher-db-create-msx9t\" (UID: \"47ca7f4a-8678-4fb0-ac5b-726a4cf6a674\") " pod="openstack/watcher-db-create-msx9t"
Feb 17 20:25:46 crc kubenswrapper[4793]: I0217 20:25:46.964258 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-td7jt\" (UniqueName: \"kubernetes.io/projected/47ca7f4a-8678-4fb0-ac5b-726a4cf6a674-kube-api-access-td7jt\") pod \"watcher-db-create-msx9t\" (UID: \"47ca7f4a-8678-4fb0-ac5b-726a4cf6a674\") " pod="openstack/watcher-db-create-msx9t"
Feb 17 20:25:47 crc kubenswrapper[4793]: I0217 20:25:47.040248 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21f7d935-51e8-4a7d-90fe-1248b48d8361-operator-scripts\") pod \"watcher-a574-account-create-update-vdv59\" (UID: \"21f7d935-51e8-4a7d-90fe-1248b48d8361\") " pod="openstack/watcher-a574-account-create-update-vdv59"
Feb 17 20:25:47 crc kubenswrapper[4793]: I0217 20:25:47.040321 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88nwr\" (UniqueName: \"kubernetes.io/projected/21f7d935-51e8-4a7d-90fe-1248b48d8361-kube-api-access-88nwr\") pod \"watcher-a574-account-create-update-vdv59\" (UID: \"21f7d935-51e8-4a7d-90fe-1248b48d8361\") " pod="openstack/watcher-a574-account-create-update-vdv59"
Feb 17 20:25:47 crc kubenswrapper[4793]: I0217 20:25:47.040925 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21f7d935-51e8-4a7d-90fe-1248b48d8361-operator-scripts\") pod \"watcher-a574-account-create-update-vdv59\" (UID: \"21f7d935-51e8-4a7d-90fe-1248b48d8361\") " pod="openstack/watcher-a574-account-create-update-vdv59"
Feb 17 20:25:47 crc kubenswrapper[4793]: I0217 20:25:47.062876 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88nwr\" (UniqueName: \"kubernetes.io/projected/21f7d935-51e8-4a7d-90fe-1248b48d8361-kube-api-access-88nwr\") pod \"watcher-a574-account-create-update-vdv59\" (UID: \"21f7d935-51e8-4a7d-90fe-1248b48d8361\") " pod="openstack/watcher-a574-account-create-update-vdv59"
Feb 17 20:25:47 crc kubenswrapper[4793]: I0217 20:25:47.074723 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"489cc350-87d7-42d2-ba31-b4bbc29b5b80","Type":"ContainerStarted","Data":"dd47aea6262d47e964c05121ff5cb519a9f88297bbcd330c67374c6c404f8f4a"}
Feb 17 20:25:47 crc kubenswrapper[4793]: I0217 20:25:47.120301 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-msx9t"
Feb 17 20:25:47 crc kubenswrapper[4793]: I0217 20:25:47.137943 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-a574-account-create-update-vdv59"
Feb 17 20:25:47 crc kubenswrapper[4793]: I0217 20:25:47.213942 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-eac5-account-create-update-gmxcc"]
Feb 17 20:25:47 crc kubenswrapper[4793]: W0217 20:25:47.223594 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c5c72e3_f686_46fa_ac5d_684106630bd6.slice/crio-62a95516d2144d153020dda08edc830b74d4d1b58461a1f6ad5e768ba6ecda1c WatchSource:0}: Error finding container 62a95516d2144d153020dda08edc830b74d4d1b58461a1f6ad5e768ba6ecda1c: Status 404 returned error can't find the container with id 62a95516d2144d153020dda08edc830b74d4d1b58461a1f6ad5e768ba6ecda1c
Feb 17 20:25:47 crc kubenswrapper[4793]: I0217 20:25:47.266051 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-pc6kf"]
Feb 17 20:25:47 crc kubenswrapper[4793]: I0217 20:25:47.357204 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-jkj8g"]
Feb 17 20:25:47 crc kubenswrapper[4793]: I0217 20:25:47.369518 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-0ff3-account-create-update-bnxqh"]
Feb 17 20:25:47 crc kubenswrapper[4793]: W0217 20:25:47.377407 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9f641fd_f904_45da_9da3_6e0d1545fc8c.slice/crio-5efe482e172b3e69c7302fe49cee3847085b75f4a811914bf732bdf98c3b98f3 WatchSource:0}: Error finding container 5efe482e172b3e69c7302fe49cee3847085b75f4a811914bf732bdf98c3b98f3: Status 404 returned error can't find the container with id 5efe482e172b3e69c7302fe49cee3847085b75f4a811914bf732bdf98c3b98f3
Feb 17 20:25:47 crc kubenswrapper[4793]: W0217 20:25:47.382388 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1332e2d_624b_4d26_a53e_6b62e5bb3a86.slice/crio-f7a7deea0c1f220ed941b10d2fc02054459f04281c297a02f44db174f31daf1a WatchSource:0}: Error finding container f7a7deea0c1f220ed941b10d2fc02054459f04281c297a02f44db174f31daf1a: Status 404 returned error can't find the container with id f7a7deea0c1f220ed941b10d2fc02054459f04281c297a02f44db174f31daf1a
Feb 17 20:25:47 crc kubenswrapper[4793]: I0217 20:25:47.384374 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-dl2rg"]
Feb 17 20:25:47 crc kubenswrapper[4793]: I0217 20:25:47.393198 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Feb 17 20:25:47 crc kubenswrapper[4793]: I0217 20:25:47.393723 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Feb 17 20:25:47 crc kubenswrapper[4793]: I0217 20:25:47.585882 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-msx9t"]
Feb 17 20:25:47 crc kubenswrapper[4793]: W0217 20:25:47.597157 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47ca7f4a_8678_4fb0_ac5b_726a4cf6a674.slice/crio-87962386b93b817f8f30c18e951ee781e1c71a7829a4953a2619a7395b33a41e WatchSource:0}: Error finding container 87962386b93b817f8f30c18e951ee781e1c71a7829a4953a2619a7395b33a41e: Status 404 returned error can't find the container with id 87962386b93b817f8f30c18e951ee781e1c71a7829a4953a2619a7395b33a41e
Feb 17 20:25:47 crc kubenswrapper[4793]: I0217 20:25:47.707808 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-a574-account-create-update-vdv59"]
Feb 17 20:25:47 crc kubenswrapper[4793]: W0217 20:25:47.713505 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21f7d935_51e8_4a7d_90fe_1248b48d8361.slice/crio-2c40615067b15fabb1ddca8e0cc59a5ca539e9c95dc321dc3f32f0363d2bb295 WatchSource:0}: Error finding container 2c40615067b15fabb1ddca8e0cc59a5ca539e9c95dc321dc3f32f0363d2bb295: Status 404 returned error can't find the container with id 2c40615067b15fabb1ddca8e0cc59a5ca539e9c95dc321dc3f32f0363d2bb295
Feb 17 20:25:48 crc kubenswrapper[4793]: I0217 20:25:48.082416 4793 generic.go:334] "Generic (PLEG): container finished" podID="4c5c72e3-f686-46fa-ac5d-684106630bd6" containerID="187f81c72c174f486c5fba46522fd0de1b70c24afcae9b38970e55eb1f4c2593" exitCode=0
Feb 17 20:25:48 crc kubenswrapper[4793]: I0217 20:25:48.082801 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-eac5-account-create-update-gmxcc" event={"ID":"4c5c72e3-f686-46fa-ac5d-684106630bd6","Type":"ContainerDied","Data":"187f81c72c174f486c5fba46522fd0de1b70c24afcae9b38970e55eb1f4c2593"}
Feb 17 20:25:48 crc kubenswrapper[4793]: I0217 20:25:48.082830 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-eac5-account-create-update-gmxcc" event={"ID":"4c5c72e3-f686-46fa-ac5d-684106630bd6","Type":"ContainerStarted","Data":"62a95516d2144d153020dda08edc830b74d4d1b58461a1f6ad5e768ba6ecda1c"}
Feb 17 20:25:48 crc kubenswrapper[4793]: I0217 20:25:48.083920 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-a574-account-create-update-vdv59" event={"ID":"21f7d935-51e8-4a7d-90fe-1248b48d8361","Type":"ContainerStarted","Data":"2c40615067b15fabb1ddca8e0cc59a5ca539e9c95dc321dc3f32f0363d2bb295"}
Feb 17 20:25:48 crc kubenswrapper[4793]: I0217 20:25:48.085787 4793 generic.go:334] "Generic (PLEG): container finished" podID="b1332e2d-624b-4d26-a53e-6b62e5bb3a86" containerID="4d70da5e59c8508cfad293da6b5646c8fb31eb7c3b57da54eb5e5465e4c130d5" exitCode=0
Feb 17 20:25:48 crc kubenswrapper[4793]: I0217 20:25:48.085828 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0ff3-account-create-update-bnxqh" event={"ID":"b1332e2d-624b-4d26-a53e-6b62e5bb3a86","Type":"ContainerDied","Data":"4d70da5e59c8508cfad293da6b5646c8fb31eb7c3b57da54eb5e5465e4c130d5"}
Feb 17 20:25:48 crc kubenswrapper[4793]: I0217 20:25:48.085843 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0ff3-account-create-update-bnxqh" event={"ID":"b1332e2d-624b-4d26-a53e-6b62e5bb3a86","Type":"ContainerStarted","Data":"f7a7deea0c1f220ed941b10d2fc02054459f04281c297a02f44db174f31daf1a"}
Feb 17 20:25:48 crc kubenswrapper[4793]: I0217 20:25:48.087080 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-bvw9p" event={"ID":"3d5841af-e328-4ea6-a184-546676cce0a7","Type":"ContainerStarted","Data":"3529984962e14c01d2f0e5287ef4e18f109559c8868053d3ed73a283ce8a1491"}
Feb 17 20:25:48 crc kubenswrapper[4793]: I0217 20:25:48.089786 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-549c9b7879-jznsj" event={"ID":"fc67b702-9dc7-4333-94f4-df82b696021d","Type":"ContainerStarted","Data":"5b5ca4d4f57ba39bdf276a3f7c703dbdfb00b648f17d04bed7286154fb2db551"}
Feb 17 20:25:48 crc kubenswrapper[4793]: I0217 20:25:48.089939 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-549c9b7879-jznsj"
Feb 17 20:25:48 crc kubenswrapper[4793]: I0217 20:25:48.092014 4793 generic.go:334] "Generic (PLEG): container finished" podID="a9f641fd-f904-45da-9da3-6e0d1545fc8c" containerID="ad8a20613ce09836981a9d986bb395758f70c978f32ff17b9e81739c2a49e247" exitCode=0
Feb 17 20:25:48 crc kubenswrapper[4793]: I0217 20:25:48.092060 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-jkj8g" event={"ID":"a9f641fd-f904-45da-9da3-6e0d1545fc8c","Type":"ContainerDied","Data":"ad8a20613ce09836981a9d986bb395758f70c978f32ff17b9e81739c2a49e247"}
Feb 17 20:25:48 crc kubenswrapper[4793]: I0217 20:25:48.092080 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-jkj8g" event={"ID":"a9f641fd-f904-45da-9da3-6e0d1545fc8c","Type":"ContainerStarted","Data":"5efe482e172b3e69c7302fe49cee3847085b75f4a811914bf732bdf98c3b98f3"}
Feb 17 20:25:48 crc kubenswrapper[4793]: I0217 20:25:48.094359 4793 generic.go:334] "Generic (PLEG): container finished" podID="4adfee96-44d3-49a6-a577-222950b89117" containerID="c751b029ba524ea0fc5142dcb387c0ba1947526d2dc532adc9d449c609af33ec" exitCode=0
Feb 17 20:25:48 crc kubenswrapper[4793]: I0217 20:25:48.094436 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-pc6kf" event={"ID":"4adfee96-44d3-49a6-a577-222950b89117","Type":"ContainerDied","Data":"c751b029ba524ea0fc5142dcb387c0ba1947526d2dc532adc9d449c609af33ec"}
Feb 17 20:25:48 crc kubenswrapper[4793]: I0217 20:25:48.094459 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-pc6kf" event={"ID":"4adfee96-44d3-49a6-a577-222950b89117","Type":"ContainerStarted","Data":"450d68798cb35e477cfc849a8fd281e3dd164218502f5d77cca52ebdfa92fd59"}
Feb 17 20:25:48 crc kubenswrapper[4793]: I0217 20:25:48.100879 4793 generic.go:334] "Generic (PLEG): container finished" podID="47ca7f4a-8678-4fb0-ac5b-726a4cf6a674" containerID="25ab5d7f290182373d631df51b57c68c7fcf8d834539b10055f1d9c7d2d91048" exitCode=0
Feb 17 20:25:48 crc kubenswrapper[4793]: I0217 20:25:48.100919 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-msx9t" event={"ID":"47ca7f4a-8678-4fb0-ac5b-726a4cf6a674","Type":"ContainerDied","Data":"25ab5d7f290182373d631df51b57c68c7fcf8d834539b10055f1d9c7d2d91048"}
Feb 17 20:25:48 crc kubenswrapper[4793]: I0217 20:25:48.101012 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-msx9t" event={"ID":"47ca7f4a-8678-4fb0-ac5b-726a4cf6a674","Type":"ContainerStarted","Data":"87962386b93b817f8f30c18e951ee781e1c71a7829a4953a2619a7395b33a41e"}
Feb 17 20:25:48 crc kubenswrapper[4793]: I0217 20:25:48.102894 4793 generic.go:334] "Generic (PLEG): container finished" podID="9d5e1df8-743b-47af-aba6-59f7ec4d3c42" containerID="2335d7716b13f7b157210fe0281a7ffc7ec1484fb983b9f4cf3bab2645d5ae73" exitCode=0
Feb 17 20:25:48 crc kubenswrapper[4793]: I0217 20:25:48.102926 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dl2rg" event={"ID":"9d5e1df8-743b-47af-aba6-59f7ec4d3c42","Type":"ContainerDied","Data":"2335d7716b13f7b157210fe0281a7ffc7ec1484fb983b9f4cf3bab2645d5ae73"}
Feb 17 20:25:48 crc kubenswrapper[4793]: I0217 20:25:48.102949 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dl2rg" event={"ID":"9d5e1df8-743b-47af-aba6-59f7ec4d3c42","Type":"ContainerStarted","Data":"826b5273114706eca2cceb46293006529895243ee0b2b92c02dc6c6899926008"}
Feb 17 20:25:48 crc kubenswrapper[4793]: I0217 20:25:48.136050 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-bvw9p" podStartSLOduration=2.194640546 podStartE2EDuration="10.136034832s" podCreationTimestamp="2026-02-17 20:25:38 +0000 UTC" firstStartedPulling="2026-02-17 20:25:39.282443508 +0000 UTC m=+1014.574141829" lastFinishedPulling="2026-02-17 20:25:47.223837764 +0000 UTC m=+1022.515536115" observedRunningTime="2026-02-17 20:25:48.131787416 +0000 UTC m=+1023.423485727" watchObservedRunningTime="2026-02-17 20:25:48.136034832 +0000 UTC m=+1023.427733153"
Feb 17 20:25:48 crc kubenswrapper[4793]: I0217 20:25:48.155201 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-549c9b7879-jznsj" podStartSLOduration=12.155183667 podStartE2EDuration="12.155183667s" podCreationTimestamp="2026-02-17 20:25:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:25:48.151006554 +0000 UTC m=+1023.442704865" watchObservedRunningTime="2026-02-17 20:25:48.155183667 +0000 UTC m=+1023.446881978"
Feb 17 20:25:49 crc kubenswrapper[4793]: I0217 20:25:49.111147 4793 generic.go:334] "Generic (PLEG): container finished" podID="7d868632-904a-4ba2-8d3a-4e3d0d8de4b0" containerID="9b4628138b328b45c35a6bf8ded74846dae7395f2d9f207ec21a7442dd0fefe9" exitCode=0
Feb 17 20:25:49 crc kubenswrapper[4793]: I0217 20:25:49.111217 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"7d868632-904a-4ba2-8d3a-4e3d0d8de4b0","Type":"ContainerDied","Data":"9b4628138b328b45c35a6bf8ded74846dae7395f2d9f207ec21a7442dd0fefe9"}
Feb 17 20:25:49 crc kubenswrapper[4793]: I0217 20:25:49.114416 4793 generic.go:334] "Generic (PLEG): container finished" podID="9eaaf278-e1ca-4fbe-ab46-478d8846293d" containerID="2e3789ad83d9b9654744085f451c48d2166e66a20fc309b47ee01c24138e1c41" exitCode=0
Feb 17 20:25:49 crc kubenswrapper[4793]: I0217 20:25:49.114520 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9eaaf278-e1ca-4fbe-ab46-478d8846293d","Type":"ContainerDied","Data":"2e3789ad83d9b9654744085f451c48d2166e66a20fc309b47ee01c24138e1c41"}
Feb 17 20:25:49 crc kubenswrapper[4793]: I0217 20:25:49.116675 4793 generic.go:334] "Generic (PLEG): container finished" podID="21f7d935-51e8-4a7d-90fe-1248b48d8361" containerID="e9d085884c8427a8fb435f34196daf568e48df87c54b36466f2362bad177edfc" exitCode=0
Feb 17 20:25:49 crc kubenswrapper[4793]: I0217 20:25:49.116959 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-a574-account-create-update-vdv59" event={"ID":"21f7d935-51e8-4a7d-90fe-1248b48d8361","Type":"ContainerDied","Data":"e9d085884c8427a8fb435f34196daf568e48df87c54b36466f2362bad177edfc"}
Feb 17 20:25:49 crc kubenswrapper[4793]: I0217 20:25:49.454402 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-b7fqg"]
Feb 17 20:25:49 crc kubenswrapper[4793]: I0217 20:25:49.456437 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-b7fqg"
Feb 17 20:25:49 crc kubenswrapper[4793]: I0217 20:25:49.462584 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-b7fqg"]
Feb 17 20:25:49 crc kubenswrapper[4793]: I0217 20:25:49.564271 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-2080-account-create-update-f57dq"]
Feb 17 20:25:49 crc kubenswrapper[4793]: I0217 20:25:49.565218 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-2080-account-create-update-f57dq"
Feb 17 20:25:49 crc kubenswrapper[4793]: I0217 20:25:49.567239 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Feb 17 20:25:49 crc kubenswrapper[4793]: I0217 20:25:49.577238 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-2080-account-create-update-f57dq"]
Feb 17 20:25:49 crc kubenswrapper[4793]: I0217 20:25:49.587668 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mn97\" (UniqueName: \"kubernetes.io/projected/2814a987-de0c-4a65-a799-2fe73e19f35a-kube-api-access-8mn97\") pod \"glance-db-create-b7fqg\" (UID: \"2814a987-de0c-4a65-a799-2fe73e19f35a\") " pod="openstack/glance-db-create-b7fqg"
Feb 17 20:25:49 crc kubenswrapper[4793]: I0217 20:25:49.587789 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2814a987-de0c-4a65-a799-2fe73e19f35a-operator-scripts\") pod \"glance-db-create-b7fqg\" (UID: \"2814a987-de0c-4a65-a799-2fe73e19f35a\") " pod="openstack/glance-db-create-b7fqg"
Feb 17 20:25:49 crc kubenswrapper[4793]: I0217 20:25:49.689856 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2814a987-de0c-4a65-a799-2fe73e19f35a-operator-scripts\") pod \"glance-db-create-b7fqg\" (UID: \"2814a987-de0c-4a65-a799-2fe73e19f35a\") " pod="openstack/glance-db-create-b7fqg"
Feb 17 20:25:49 crc kubenswrapper[4793]: I0217 20:25:49.689944 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3cff25a-643c-4a27-9959-9e0b8602ea29-operator-scripts\") pod \"glance-2080-account-create-update-f57dq\" (UID: \"b3cff25a-643c-4a27-9959-9e0b8602ea29\") " pod="openstack/glance-2080-account-create-update-f57dq"
Feb 17 20:25:49 crc kubenswrapper[4793]: I0217 20:25:49.690051 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx25s\" (UniqueName: \"kubernetes.io/projected/b3cff25a-643c-4a27-9959-9e0b8602ea29-kube-api-access-dx25s\") pod \"glance-2080-account-create-update-f57dq\" (UID: \"b3cff25a-643c-4a27-9959-9e0b8602ea29\") " pod="openstack/glance-2080-account-create-update-f57dq"
Feb 17 20:25:49 crc kubenswrapper[4793]: I0217 20:25:49.690095 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mn97\" (UniqueName: \"kubernetes.io/projected/2814a987-de0c-4a65-a799-2fe73e19f35a-kube-api-access-8mn97\") pod \"glance-db-create-b7fqg\" (UID: \"2814a987-de0c-4a65-a799-2fe73e19f35a\") " pod="openstack/glance-db-create-b7fqg"
Feb 17 20:25:49 crc kubenswrapper[4793]: I0217 20:25:49.691709 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2814a987-de0c-4a65-a799-2fe73e19f35a-operator-scripts\") pod \"glance-db-create-b7fqg\" (UID: \"2814a987-de0c-4a65-a799-2fe73e19f35a\") " pod="openstack/glance-db-create-b7fqg"
Feb 17 20:25:49 crc kubenswrapper[4793]: I0217 20:25:49.734422 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mn97\" (UniqueName: \"kubernetes.io/projected/2814a987-de0c-4a65-a799-2fe73e19f35a-kube-api-access-8mn97\") pod \"glance-db-create-b7fqg\" (UID: \"2814a987-de0c-4a65-a799-2fe73e19f35a\") " pod="openstack/glance-db-create-b7fqg"
Feb 17 20:25:49 crc kubenswrapper[4793]: I0217 20:25:49.737817 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Feb 17 20:25:49 crc kubenswrapper[4793]: I0217 20:25:49.777180 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-b7fqg"
Feb 17 20:25:49 crc kubenswrapper[4793]: I0217 20:25:49.793148 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx25s\" (UniqueName: \"kubernetes.io/projected/b3cff25a-643c-4a27-9959-9e0b8602ea29-kube-api-access-dx25s\") pod \"glance-2080-account-create-update-f57dq\" (UID: \"b3cff25a-643c-4a27-9959-9e0b8602ea29\") " pod="openstack/glance-2080-account-create-update-f57dq"
Feb 17 20:25:49 crc kubenswrapper[4793]: I0217 20:25:49.793404 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3cff25a-643c-4a27-9959-9e0b8602ea29-operator-scripts\") pod \"glance-2080-account-create-update-f57dq\" (UID: \"b3cff25a-643c-4a27-9959-9e0b8602ea29\") " pod="openstack/glance-2080-account-create-update-f57dq"
Feb 17 20:25:49 crc kubenswrapper[4793]: I0217 20:25:49.794844 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3cff25a-643c-4a27-9959-9e0b8602ea29-operator-scripts\") pod \"glance-2080-account-create-update-f57dq\" (UID: \"b3cff25a-643c-4a27-9959-9e0b8602ea29\") " pod="openstack/glance-2080-account-create-update-f57dq"
Feb 17 20:25:49 crc kubenswrapper[4793]: I0217 20:25:49.818263 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx25s\"
(UniqueName: \"kubernetes.io/projected/b3cff25a-643c-4a27-9959-9e0b8602ea29-kube-api-access-dx25s\") pod \"glance-2080-account-create-update-f57dq\" (UID: \"b3cff25a-643c-4a27-9959-9e0b8602ea29\") " pod="openstack/glance-2080-account-create-update-f57dq" Feb 17 20:25:49 crc kubenswrapper[4793]: I0217 20:25:49.889368 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-2080-account-create-update-f57dq" Feb 17 20:25:50 crc kubenswrapper[4793]: I0217 20:25:50.441023 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-pc6kf" Feb 17 20:25:50 crc kubenswrapper[4793]: I0217 20:25:50.507508 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4adfee96-44d3-49a6-a577-222950b89117-operator-scripts\") pod \"4adfee96-44d3-49a6-a577-222950b89117\" (UID: \"4adfee96-44d3-49a6-a577-222950b89117\") " Feb 17 20:25:50 crc kubenswrapper[4793]: I0217 20:25:50.507578 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w47nn\" (UniqueName: \"kubernetes.io/projected/4adfee96-44d3-49a6-a577-222950b89117-kube-api-access-w47nn\") pod \"4adfee96-44d3-49a6-a577-222950b89117\" (UID: \"4adfee96-44d3-49a6-a577-222950b89117\") " Feb 17 20:25:50 crc kubenswrapper[4793]: I0217 20:25:50.509063 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-jkj8g" Feb 17 20:25:50 crc kubenswrapper[4793]: I0217 20:25:50.509868 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4adfee96-44d3-49a6-a577-222950b89117-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4adfee96-44d3-49a6-a577-222950b89117" (UID: "4adfee96-44d3-49a6-a577-222950b89117"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:25:50 crc kubenswrapper[4793]: I0217 20:25:50.517907 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4adfee96-44d3-49a6-a577-222950b89117-kube-api-access-w47nn" (OuterVolumeSpecName: "kube-api-access-w47nn") pod "4adfee96-44d3-49a6-a577-222950b89117" (UID: "4adfee96-44d3-49a6-a577-222950b89117"). InnerVolumeSpecName "kube-api-access-w47nn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:25:50 crc kubenswrapper[4793]: I0217 20:25:50.520281 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-msx9t" Feb 17 20:25:50 crc kubenswrapper[4793]: I0217 20:25:50.554679 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-eac5-account-create-update-gmxcc" Feb 17 20:25:50 crc kubenswrapper[4793]: I0217 20:25:50.593377 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0ff3-account-create-update-bnxqh" Feb 17 20:25:50 crc kubenswrapper[4793]: I0217 20:25:50.594055 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-dl2rg" Feb 17 20:25:50 crc kubenswrapper[4793]: I0217 20:25:50.605552 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-a574-account-create-update-vdv59" Feb 17 20:25:50 crc kubenswrapper[4793]: I0217 20:25:50.610388 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjzgp\" (UniqueName: \"kubernetes.io/projected/a9f641fd-f904-45da-9da3-6e0d1545fc8c-kube-api-access-gjzgp\") pod \"a9f641fd-f904-45da-9da3-6e0d1545fc8c\" (UID: \"a9f641fd-f904-45da-9da3-6e0d1545fc8c\") " Feb 17 20:25:50 crc kubenswrapper[4793]: I0217 20:25:50.610557 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9f641fd-f904-45da-9da3-6e0d1545fc8c-operator-scripts\") pod \"a9f641fd-f904-45da-9da3-6e0d1545fc8c\" (UID: \"a9f641fd-f904-45da-9da3-6e0d1545fc8c\") " Feb 17 20:25:50 crc kubenswrapper[4793]: I0217 20:25:50.610603 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-td7jt\" (UniqueName: \"kubernetes.io/projected/47ca7f4a-8678-4fb0-ac5b-726a4cf6a674-kube-api-access-td7jt\") pod \"47ca7f4a-8678-4fb0-ac5b-726a4cf6a674\" (UID: \"47ca7f4a-8678-4fb0-ac5b-726a4cf6a674\") " Feb 17 20:25:50 crc kubenswrapper[4793]: I0217 20:25:50.610729 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47ca7f4a-8678-4fb0-ac5b-726a4cf6a674-operator-scripts\") pod \"47ca7f4a-8678-4fb0-ac5b-726a4cf6a674\" (UID: \"47ca7f4a-8678-4fb0-ac5b-726a4cf6a674\") " Feb 17 20:25:50 crc kubenswrapper[4793]: I0217 20:25:50.611142 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4adfee96-44d3-49a6-a577-222950b89117-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 20:25:50 crc kubenswrapper[4793]: I0217 20:25:50.611161 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w47nn\" (UniqueName: 
\"kubernetes.io/projected/4adfee96-44d3-49a6-a577-222950b89117-kube-api-access-w47nn\") on node \"crc\" DevicePath \"\"" Feb 17 20:25:50 crc kubenswrapper[4793]: I0217 20:25:50.614851 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9f641fd-f904-45da-9da3-6e0d1545fc8c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a9f641fd-f904-45da-9da3-6e0d1545fc8c" (UID: "a9f641fd-f904-45da-9da3-6e0d1545fc8c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:25:50 crc kubenswrapper[4793]: I0217 20:25:50.615462 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47ca7f4a-8678-4fb0-ac5b-726a4cf6a674-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "47ca7f4a-8678-4fb0-ac5b-726a4cf6a674" (UID: "47ca7f4a-8678-4fb0-ac5b-726a4cf6a674"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:25:50 crc kubenswrapper[4793]: I0217 20:25:50.618647 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9f641fd-f904-45da-9da3-6e0d1545fc8c-kube-api-access-gjzgp" (OuterVolumeSpecName: "kube-api-access-gjzgp") pod "a9f641fd-f904-45da-9da3-6e0d1545fc8c" (UID: "a9f641fd-f904-45da-9da3-6e0d1545fc8c"). InnerVolumeSpecName "kube-api-access-gjzgp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:25:50 crc kubenswrapper[4793]: I0217 20:25:50.619305 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47ca7f4a-8678-4fb0-ac5b-726a4cf6a674-kube-api-access-td7jt" (OuterVolumeSpecName: "kube-api-access-td7jt") pod "47ca7f4a-8678-4fb0-ac5b-726a4cf6a674" (UID: "47ca7f4a-8678-4fb0-ac5b-726a4cf6a674"). InnerVolumeSpecName "kube-api-access-td7jt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:25:50 crc kubenswrapper[4793]: I0217 20:25:50.711920 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1332e2d-624b-4d26-a53e-6b62e5bb3a86-operator-scripts\") pod \"b1332e2d-624b-4d26-a53e-6b62e5bb3a86\" (UID: \"b1332e2d-624b-4d26-a53e-6b62e5bb3a86\") " Feb 17 20:25:50 crc kubenswrapper[4793]: I0217 20:25:50.711957 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21f7d935-51e8-4a7d-90fe-1248b48d8361-operator-scripts\") pod \"21f7d935-51e8-4a7d-90fe-1248b48d8361\" (UID: \"21f7d935-51e8-4a7d-90fe-1248b48d8361\") " Feb 17 20:25:50 crc kubenswrapper[4793]: I0217 20:25:50.712033 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qv9sn\" (UniqueName: \"kubernetes.io/projected/4c5c72e3-f686-46fa-ac5d-684106630bd6-kube-api-access-qv9sn\") pod \"4c5c72e3-f686-46fa-ac5d-684106630bd6\" (UID: \"4c5c72e3-f686-46fa-ac5d-684106630bd6\") " Feb 17 20:25:50 crc kubenswrapper[4793]: I0217 20:25:50.712089 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlqs9\" (UniqueName: \"kubernetes.io/projected/9d5e1df8-743b-47af-aba6-59f7ec4d3c42-kube-api-access-wlqs9\") pod \"9d5e1df8-743b-47af-aba6-59f7ec4d3c42\" (UID: \"9d5e1df8-743b-47af-aba6-59f7ec4d3c42\") " Feb 17 20:25:50 crc kubenswrapper[4793]: I0217 20:25:50.712119 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88nwr\" (UniqueName: \"kubernetes.io/projected/21f7d935-51e8-4a7d-90fe-1248b48d8361-kube-api-access-88nwr\") pod \"21f7d935-51e8-4a7d-90fe-1248b48d8361\" (UID: \"21f7d935-51e8-4a7d-90fe-1248b48d8361\") " Feb 17 20:25:50 crc kubenswrapper[4793]: I0217 20:25:50.712224 4793 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c5c72e3-f686-46fa-ac5d-684106630bd6-operator-scripts\") pod \"4c5c72e3-f686-46fa-ac5d-684106630bd6\" (UID: \"4c5c72e3-f686-46fa-ac5d-684106630bd6\") " Feb 17 20:25:50 crc kubenswrapper[4793]: I0217 20:25:50.712250 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d5e1df8-743b-47af-aba6-59f7ec4d3c42-operator-scripts\") pod \"9d5e1df8-743b-47af-aba6-59f7ec4d3c42\" (UID: \"9d5e1df8-743b-47af-aba6-59f7ec4d3c42\") " Feb 17 20:25:50 crc kubenswrapper[4793]: I0217 20:25:50.712276 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cf55\" (UniqueName: \"kubernetes.io/projected/b1332e2d-624b-4d26-a53e-6b62e5bb3a86-kube-api-access-6cf55\") pod \"b1332e2d-624b-4d26-a53e-6b62e5bb3a86\" (UID: \"b1332e2d-624b-4d26-a53e-6b62e5bb3a86\") " Feb 17 20:25:50 crc kubenswrapper[4793]: I0217 20:25:50.712572 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9f641fd-f904-45da-9da3-6e0d1545fc8c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 20:25:50 crc kubenswrapper[4793]: I0217 20:25:50.712584 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-td7jt\" (UniqueName: \"kubernetes.io/projected/47ca7f4a-8678-4fb0-ac5b-726a4cf6a674-kube-api-access-td7jt\") on node \"crc\" DevicePath \"\"" Feb 17 20:25:50 crc kubenswrapper[4793]: I0217 20:25:50.712593 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47ca7f4a-8678-4fb0-ac5b-726a4cf6a674-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 20:25:50 crc kubenswrapper[4793]: I0217 20:25:50.712602 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjzgp\" (UniqueName: 
\"kubernetes.io/projected/a9f641fd-f904-45da-9da3-6e0d1545fc8c-kube-api-access-gjzgp\") on node \"crc\" DevicePath \"\"" Feb 17 20:25:50 crc kubenswrapper[4793]: I0217 20:25:50.712718 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1332e2d-624b-4d26-a53e-6b62e5bb3a86-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b1332e2d-624b-4d26-a53e-6b62e5bb3a86" (UID: "b1332e2d-624b-4d26-a53e-6b62e5bb3a86"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:25:50 crc kubenswrapper[4793]: I0217 20:25:50.713310 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21f7d935-51e8-4a7d-90fe-1248b48d8361-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "21f7d935-51e8-4a7d-90fe-1248b48d8361" (UID: "21f7d935-51e8-4a7d-90fe-1248b48d8361"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:25:50 crc kubenswrapper[4793]: I0217 20:25:50.713621 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d5e1df8-743b-47af-aba6-59f7ec4d3c42-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9d5e1df8-743b-47af-aba6-59f7ec4d3c42" (UID: "9d5e1df8-743b-47af-aba6-59f7ec4d3c42"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:25:50 crc kubenswrapper[4793]: I0217 20:25:50.713621 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c5c72e3-f686-46fa-ac5d-684106630bd6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4c5c72e3-f686-46fa-ac5d-684106630bd6" (UID: "4c5c72e3-f686-46fa-ac5d-684106630bd6"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:25:50 crc kubenswrapper[4793]: I0217 20:25:50.717191 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c5c72e3-f686-46fa-ac5d-684106630bd6-kube-api-access-qv9sn" (OuterVolumeSpecName: "kube-api-access-qv9sn") pod "4c5c72e3-f686-46fa-ac5d-684106630bd6" (UID: "4c5c72e3-f686-46fa-ac5d-684106630bd6"). InnerVolumeSpecName "kube-api-access-qv9sn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:25:50 crc kubenswrapper[4793]: I0217 20:25:50.722860 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d5e1df8-743b-47af-aba6-59f7ec4d3c42-kube-api-access-wlqs9" (OuterVolumeSpecName: "kube-api-access-wlqs9") pod "9d5e1df8-743b-47af-aba6-59f7ec4d3c42" (UID: "9d5e1df8-743b-47af-aba6-59f7ec4d3c42"). InnerVolumeSpecName "kube-api-access-wlqs9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:25:50 crc kubenswrapper[4793]: I0217 20:25:50.722986 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1332e2d-624b-4d26-a53e-6b62e5bb3a86-kube-api-access-6cf55" (OuterVolumeSpecName: "kube-api-access-6cf55") pod "b1332e2d-624b-4d26-a53e-6b62e5bb3a86" (UID: "b1332e2d-624b-4d26-a53e-6b62e5bb3a86"). InnerVolumeSpecName "kube-api-access-6cf55". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:25:50 crc kubenswrapper[4793]: I0217 20:25:50.726170 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21f7d935-51e8-4a7d-90fe-1248b48d8361-kube-api-access-88nwr" (OuterVolumeSpecName: "kube-api-access-88nwr") pod "21f7d935-51e8-4a7d-90fe-1248b48d8361" (UID: "21f7d935-51e8-4a7d-90fe-1248b48d8361"). InnerVolumeSpecName "kube-api-access-88nwr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:25:50 crc kubenswrapper[4793]: I0217 20:25:50.815070 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c5c72e3-f686-46fa-ac5d-684106630bd6-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 20:25:50 crc kubenswrapper[4793]: I0217 20:25:50.815101 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d5e1df8-743b-47af-aba6-59f7ec4d3c42-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 20:25:50 crc kubenswrapper[4793]: I0217 20:25:50.815110 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cf55\" (UniqueName: \"kubernetes.io/projected/b1332e2d-624b-4d26-a53e-6b62e5bb3a86-kube-api-access-6cf55\") on node \"crc\" DevicePath \"\"" Feb 17 20:25:50 crc kubenswrapper[4793]: I0217 20:25:50.815119 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1332e2d-624b-4d26-a53e-6b62e5bb3a86-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 20:25:50 crc kubenswrapper[4793]: I0217 20:25:50.815127 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21f7d935-51e8-4a7d-90fe-1248b48d8361-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 20:25:50 crc kubenswrapper[4793]: I0217 20:25:50.815136 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qv9sn\" (UniqueName: \"kubernetes.io/projected/4c5c72e3-f686-46fa-ac5d-684106630bd6-kube-api-access-qv9sn\") on node \"crc\" DevicePath \"\"" Feb 17 20:25:50 crc kubenswrapper[4793]: I0217 20:25:50.815144 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlqs9\" (UniqueName: \"kubernetes.io/projected/9d5e1df8-743b-47af-aba6-59f7ec4d3c42-kube-api-access-wlqs9\") on node \"crc\" DevicePath \"\"" Feb 17 
20:25:50 crc kubenswrapper[4793]: I0217 20:25:50.815152 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88nwr\" (UniqueName: \"kubernetes.io/projected/21f7d935-51e8-4a7d-90fe-1248b48d8361-kube-api-access-88nwr\") on node \"crc\" DevicePath \"\"" Feb 17 20:25:50 crc kubenswrapper[4793]: I0217 20:25:50.867840 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-b7fqg"] Feb 17 20:25:50 crc kubenswrapper[4793]: I0217 20:25:50.873868 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-2080-account-create-update-f57dq"] Feb 17 20:25:51 crc kubenswrapper[4793]: I0217 20:25:51.134456 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-a574-account-create-update-vdv59" event={"ID":"21f7d935-51e8-4a7d-90fe-1248b48d8361","Type":"ContainerDied","Data":"2c40615067b15fabb1ddca8e0cc59a5ca539e9c95dc321dc3f32f0363d2bb295"} Feb 17 20:25:51 crc kubenswrapper[4793]: I0217 20:25:51.134482 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-a574-account-create-update-vdv59" Feb 17 20:25:51 crc kubenswrapper[4793]: I0217 20:25:51.134495 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c40615067b15fabb1ddca8e0cc59a5ca539e9c95dc321dc3f32f0363d2bb295" Feb 17 20:25:51 crc kubenswrapper[4793]: I0217 20:25:51.136548 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2080-account-create-update-f57dq" event={"ID":"b3cff25a-643c-4a27-9959-9e0b8602ea29","Type":"ContainerStarted","Data":"cb2fd5a33d659fe5cebb14ee4f364176346655f9310daaba79d67ff9d031eabb"} Feb 17 20:25:51 crc kubenswrapper[4793]: I0217 20:25:51.136613 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2080-account-create-update-f57dq" event={"ID":"b3cff25a-643c-4a27-9959-9e0b8602ea29","Type":"ContainerStarted","Data":"c3c19d291006de06aa8b7228b3458c8b442a472643ea00d1bc70ea419250c10c"} Feb 17 20:25:51 crc kubenswrapper[4793]: I0217 20:25:51.138617 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-eac5-account-create-update-gmxcc" event={"ID":"4c5c72e3-f686-46fa-ac5d-684106630bd6","Type":"ContainerDied","Data":"62a95516d2144d153020dda08edc830b74d4d1b58461a1f6ad5e768ba6ecda1c"} Feb 17 20:25:51 crc kubenswrapper[4793]: I0217 20:25:51.138634 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-eac5-account-create-update-gmxcc" Feb 17 20:25:51 crc kubenswrapper[4793]: I0217 20:25:51.138662 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62a95516d2144d153020dda08edc830b74d4d1b58461a1f6ad5e768ba6ecda1c" Feb 17 20:25:51 crc kubenswrapper[4793]: I0217 20:25:51.144809 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9eaaf278-e1ca-4fbe-ab46-478d8846293d","Type":"ContainerStarted","Data":"2ca6abb240fde91e626ada749897f0a392e721389d81e92af50c16587b06850c"} Feb 17 20:25:51 crc kubenswrapper[4793]: I0217 20:25:51.145586 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 17 20:25:51 crc kubenswrapper[4793]: I0217 20:25:51.147331 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-pc6kf" event={"ID":"4adfee96-44d3-49a6-a577-222950b89117","Type":"ContainerDied","Data":"450d68798cb35e477cfc849a8fd281e3dd164218502f5d77cca52ebdfa92fd59"} Feb 17 20:25:51 crc kubenswrapper[4793]: I0217 20:25:51.147355 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="450d68798cb35e477cfc849a8fd281e3dd164218502f5d77cca52ebdfa92fd59" Feb 17 20:25:51 crc kubenswrapper[4793]: I0217 20:25:51.147405 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-pc6kf" Feb 17 20:25:51 crc kubenswrapper[4793]: I0217 20:25:51.151679 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-msx9t" event={"ID":"47ca7f4a-8678-4fb0-ac5b-726a4cf6a674","Type":"ContainerDied","Data":"87962386b93b817f8f30c18e951ee781e1c71a7829a4953a2619a7395b33a41e"} Feb 17 20:25:51 crc kubenswrapper[4793]: I0217 20:25:51.151752 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87962386b93b817f8f30c18e951ee781e1c71a7829a4953a2619a7395b33a41e" Feb 17 20:25:51 crc kubenswrapper[4793]: I0217 20:25:51.151882 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-msx9t" Feb 17 20:25:51 crc kubenswrapper[4793]: I0217 20:25:51.153230 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0ff3-account-create-update-bnxqh" event={"ID":"b1332e2d-624b-4d26-a53e-6b62e5bb3a86","Type":"ContainerDied","Data":"f7a7deea0c1f220ed941b10d2fc02054459f04281c297a02f44db174f31daf1a"} Feb 17 20:25:51 crc kubenswrapper[4793]: I0217 20:25:51.153278 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7a7deea0c1f220ed941b10d2fc02054459f04281c297a02f44db174f31daf1a" Feb 17 20:25:51 crc kubenswrapper[4793]: I0217 20:25:51.153352 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-0ff3-account-create-update-bnxqh" Feb 17 20:25:51 crc kubenswrapper[4793]: I0217 20:25:51.161681 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-b7fqg" event={"ID":"2814a987-de0c-4a65-a799-2fe73e19f35a","Type":"ContainerStarted","Data":"386862e9b36019e5d1d19f28c86a138dac428b0c379b31d351267b58510492ce"} Feb 17 20:25:51 crc kubenswrapper[4793]: I0217 20:25:51.161768 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-b7fqg" event={"ID":"2814a987-de0c-4a65-a799-2fe73e19f35a","Type":"ContainerStarted","Data":"efca7e9da9437239eccb2645db2c083b7bc6dde200efee502acef696cce4b1d0"} Feb 17 20:25:51 crc kubenswrapper[4793]: I0217 20:25:51.163518 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-2080-account-create-update-f57dq" podStartSLOduration=2.163499931 podStartE2EDuration="2.163499931s" podCreationTimestamp="2026-02-17 20:25:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:25:51.160318982 +0000 UTC m=+1026.452017293" watchObservedRunningTime="2026-02-17 20:25:51.163499931 +0000 UTC m=+1026.455198242" Feb 17 20:25:51 crc kubenswrapper[4793]: I0217 20:25:51.165644 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dl2rg" event={"ID":"9d5e1df8-743b-47af-aba6-59f7ec4d3c42","Type":"ContainerDied","Data":"826b5273114706eca2cceb46293006529895243ee0b2b92c02dc6c6899926008"} Feb 17 20:25:51 crc kubenswrapper[4793]: I0217 20:25:51.165682 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="826b5273114706eca2cceb46293006529895243ee0b2b92c02dc6c6899926008" Feb 17 20:25:51 crc kubenswrapper[4793]: I0217 20:25:51.165742 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-dl2rg" Feb 17 20:25:51 crc kubenswrapper[4793]: I0217 20:25:51.168986 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"7d868632-904a-4ba2-8d3a-4e3d0d8de4b0","Type":"ContainerStarted","Data":"bb926e0f23063670adc0befcb8f34a24854e72773f7152ec9999dd2be2fe36b0"} Feb 17 20:25:51 crc kubenswrapper[4793]: I0217 20:25:51.169817 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-notifications-server-0" Feb 17 20:25:51 crc kubenswrapper[4793]: I0217 20:25:51.172624 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"489cc350-87d7-42d2-ba31-b4bbc29b5b80","Type":"ContainerStarted","Data":"4da07f15eb1b352937a670b5d4d61ca288e3ed5f1cf539e10846631125528eb8"} Feb 17 20:25:51 crc kubenswrapper[4793]: I0217 20:25:51.178838 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-jkj8g" event={"ID":"a9f641fd-f904-45da-9da3-6e0d1545fc8c","Type":"ContainerDied","Data":"5efe482e172b3e69c7302fe49cee3847085b75f4a811914bf732bdf98c3b98f3"} Feb 17 20:25:51 crc kubenswrapper[4793]: I0217 20:25:51.178878 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5efe482e172b3e69c7302fe49cee3847085b75f4a811914bf732bdf98c3b98f3" Feb 17 20:25:51 crc kubenswrapper[4793]: I0217 20:25:51.178890 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-jkj8g" Feb 17 20:25:51 crc kubenswrapper[4793]: I0217 20:25:51.226842 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=47.045099343 podStartE2EDuration="1m2.226823095s" podCreationTimestamp="2026-02-17 20:24:49 +0000 UTC" firstStartedPulling="2026-02-17 20:25:00.534607888 +0000 UTC m=+975.826306199" lastFinishedPulling="2026-02-17 20:25:15.71633164 +0000 UTC m=+991.008029951" observedRunningTime="2026-02-17 20:25:51.211950885 +0000 UTC m=+1026.503649216" watchObservedRunningTime="2026-02-17 20:25:51.226823095 +0000 UTC m=+1026.518521406" Feb 17 20:25:51 crc kubenswrapper[4793]: I0217 20:25:51.242718 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=21.074985713 podStartE2EDuration="55.242676519s" podCreationTimestamp="2026-02-17 20:24:56 +0000 UTC" firstStartedPulling="2026-02-17 20:25:16.245416278 +0000 UTC m=+991.537114599" lastFinishedPulling="2026-02-17 20:25:50.413107084 +0000 UTC m=+1025.704805405" observedRunningTime="2026-02-17 20:25:51.235728006 +0000 UTC m=+1026.527426317" watchObservedRunningTime="2026-02-17 20:25:51.242676519 +0000 UTC m=+1026.534374830" Feb 17 20:25:51 crc kubenswrapper[4793]: I0217 20:25:51.262908 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-b7fqg" podStartSLOduration=2.262891381 podStartE2EDuration="2.262891381s" podCreationTimestamp="2026-02-17 20:25:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:25:51.260404259 +0000 UTC m=+1026.552102570" watchObservedRunningTime="2026-02-17 20:25:51.262891381 +0000 UTC m=+1026.554589692" Feb 17 20:25:51 crc kubenswrapper[4793]: I0217 20:25:51.289960 4793 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/rabbitmq-notifications-server-0" podStartSLOduration=62.115405726 podStartE2EDuration="1m2.289941523s" podCreationTimestamp="2026-02-17 20:24:49 +0000 UTC" firstStartedPulling="2026-02-17 20:25:15.595321033 +0000 UTC m=+990.887019374" lastFinishedPulling="2026-02-17 20:25:15.76985687 +0000 UTC m=+991.061555171" observedRunningTime="2026-02-17 20:25:51.279516774 +0000 UTC m=+1026.571215105" watchObservedRunningTime="2026-02-17 20:25:51.289941523 +0000 UTC m=+1026.581639834" Feb 17 20:25:52 crc kubenswrapper[4793]: I0217 20:25:52.032987 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-549c9b7879-jznsj" Feb 17 20:25:52 crc kubenswrapper[4793]: I0217 20:25:52.093652 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f5477558c-czdv8"] Feb 17 20:25:52 crc kubenswrapper[4793]: I0217 20:25:52.094096 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f5477558c-czdv8" podUID="475be634-cf46-4d25-922c-0908c735af65" containerName="dnsmasq-dns" containerID="cri-o://4ac9c298bfd406abb79d2b8c469d1e8333b40cb0382d611cceed86f5c9d87e0d" gracePeriod=10 Feb 17 20:25:52 crc kubenswrapper[4793]: I0217 20:25:52.186539 4793 generic.go:334] "Generic (PLEG): container finished" podID="b3cff25a-643c-4a27-9959-9e0b8602ea29" containerID="cb2fd5a33d659fe5cebb14ee4f364176346655f9310daaba79d67ff9d031eabb" exitCode=0 Feb 17 20:25:52 crc kubenswrapper[4793]: I0217 20:25:52.186598 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2080-account-create-update-f57dq" event={"ID":"b3cff25a-643c-4a27-9959-9e0b8602ea29","Type":"ContainerDied","Data":"cb2fd5a33d659fe5cebb14ee4f364176346655f9310daaba79d67ff9d031eabb"} Feb 17 20:25:52 crc kubenswrapper[4793]: I0217 20:25:52.191145 4793 generic.go:334] "Generic (PLEG): container finished" podID="2814a987-de0c-4a65-a799-2fe73e19f35a" 
containerID="386862e9b36019e5d1d19f28c86a138dac428b0c379b31d351267b58510492ce" exitCode=0 Feb 17 20:25:52 crc kubenswrapper[4793]: I0217 20:25:52.191206 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-b7fqg" event={"ID":"2814a987-de0c-4a65-a799-2fe73e19f35a","Type":"ContainerDied","Data":"386862e9b36019e5d1d19f28c86a138dac428b0c379b31d351267b58510492ce"} Feb 17 20:25:52 crc kubenswrapper[4793]: I0217 20:25:52.563757 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f5477558c-czdv8" Feb 17 20:25:52 crc kubenswrapper[4793]: I0217 20:25:52.752162 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/475be634-cf46-4d25-922c-0908c735af65-ovsdbserver-nb\") pod \"475be634-cf46-4d25-922c-0908c735af65\" (UID: \"475be634-cf46-4d25-922c-0908c735af65\") " Feb 17 20:25:52 crc kubenswrapper[4793]: I0217 20:25:52.752254 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cq5b5\" (UniqueName: \"kubernetes.io/projected/475be634-cf46-4d25-922c-0908c735af65-kube-api-access-cq5b5\") pod \"475be634-cf46-4d25-922c-0908c735af65\" (UID: \"475be634-cf46-4d25-922c-0908c735af65\") " Feb 17 20:25:52 crc kubenswrapper[4793]: I0217 20:25:52.752323 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/475be634-cf46-4d25-922c-0908c735af65-ovsdbserver-sb\") pod \"475be634-cf46-4d25-922c-0908c735af65\" (UID: \"475be634-cf46-4d25-922c-0908c735af65\") " Feb 17 20:25:52 crc kubenswrapper[4793]: I0217 20:25:52.752358 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/475be634-cf46-4d25-922c-0908c735af65-config\") pod \"475be634-cf46-4d25-922c-0908c735af65\" (UID: \"475be634-cf46-4d25-922c-0908c735af65\") " 
Feb 17 20:25:52 crc kubenswrapper[4793]: I0217 20:25:52.752381 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/475be634-cf46-4d25-922c-0908c735af65-dns-svc\") pod \"475be634-cf46-4d25-922c-0908c735af65\" (UID: \"475be634-cf46-4d25-922c-0908c735af65\") " Feb 17 20:25:52 crc kubenswrapper[4793]: I0217 20:25:52.760912 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/475be634-cf46-4d25-922c-0908c735af65-kube-api-access-cq5b5" (OuterVolumeSpecName: "kube-api-access-cq5b5") pod "475be634-cf46-4d25-922c-0908c735af65" (UID: "475be634-cf46-4d25-922c-0908c735af65"). InnerVolumeSpecName "kube-api-access-cq5b5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:25:52 crc kubenswrapper[4793]: I0217 20:25:52.795102 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/475be634-cf46-4d25-922c-0908c735af65-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "475be634-cf46-4d25-922c-0908c735af65" (UID: "475be634-cf46-4d25-922c-0908c735af65"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:25:52 crc kubenswrapper[4793]: I0217 20:25:52.801799 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/475be634-cf46-4d25-922c-0908c735af65-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "475be634-cf46-4d25-922c-0908c735af65" (UID: "475be634-cf46-4d25-922c-0908c735af65"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:25:52 crc kubenswrapper[4793]: I0217 20:25:52.809792 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/475be634-cf46-4d25-922c-0908c735af65-config" (OuterVolumeSpecName: "config") pod "475be634-cf46-4d25-922c-0908c735af65" (UID: "475be634-cf46-4d25-922c-0908c735af65"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:25:52 crc kubenswrapper[4793]: I0217 20:25:52.817906 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/475be634-cf46-4d25-922c-0908c735af65-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "475be634-cf46-4d25-922c-0908c735af65" (UID: "475be634-cf46-4d25-922c-0908c735af65"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:25:52 crc kubenswrapper[4793]: I0217 20:25:52.854216 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/475be634-cf46-4d25-922c-0908c735af65-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 20:25:52 crc kubenswrapper[4793]: I0217 20:25:52.854253 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cq5b5\" (UniqueName: \"kubernetes.io/projected/475be634-cf46-4d25-922c-0908c735af65-kube-api-access-cq5b5\") on node \"crc\" DevicePath \"\"" Feb 17 20:25:52 crc kubenswrapper[4793]: I0217 20:25:52.854267 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/475be634-cf46-4d25-922c-0908c735af65-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 20:25:52 crc kubenswrapper[4793]: I0217 20:25:52.854277 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/475be634-cf46-4d25-922c-0908c735af65-config\") on node \"crc\" DevicePath \"\"" Feb 17 20:25:52 crc kubenswrapper[4793]: I0217 20:25:52.854289 4793 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/475be634-cf46-4d25-922c-0908c735af65-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 20:25:52 crc kubenswrapper[4793]: I0217 20:25:52.914275 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/prometheus-metric-storage-0" Feb 17 20:25:53 crc kubenswrapper[4793]: I0217 20:25:53.200803 4793 generic.go:334] "Generic (PLEG): container finished" podID="475be634-cf46-4d25-922c-0908c735af65" containerID="4ac9c298bfd406abb79d2b8c469d1e8333b40cb0382d611cceed86f5c9d87e0d" exitCode=0 Feb 17 20:25:53 crc kubenswrapper[4793]: I0217 20:25:53.201862 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f5477558c-czdv8" Feb 17 20:25:53 crc kubenswrapper[4793]: I0217 20:25:53.206784 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f5477558c-czdv8" event={"ID":"475be634-cf46-4d25-922c-0908c735af65","Type":"ContainerDied","Data":"4ac9c298bfd406abb79d2b8c469d1e8333b40cb0382d611cceed86f5c9d87e0d"} Feb 17 20:25:53 crc kubenswrapper[4793]: I0217 20:25:53.206853 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f5477558c-czdv8" event={"ID":"475be634-cf46-4d25-922c-0908c735af65","Type":"ContainerDied","Data":"1a2c0b78bd716143a8b038e96b123b57fc8672d62ce615ddef7f08ce229f6bdf"} Feb 17 20:25:53 crc kubenswrapper[4793]: I0217 20:25:53.206876 4793 scope.go:117] "RemoveContainer" containerID="4ac9c298bfd406abb79d2b8c469d1e8333b40cb0382d611cceed86f5c9d87e0d" Feb 17 20:25:53 crc kubenswrapper[4793]: I0217 20:25:53.301951 4793 scope.go:117] "RemoveContainer" containerID="7e986052492cff3af73cc5f27e567634f0ff0dc0bc9e57ff757728ca4335588b" Feb 17 20:25:53 crc kubenswrapper[4793]: I0217 20:25:53.305021 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f5477558c-czdv8"] Feb 17 20:25:53 crc kubenswrapper[4793]: I0217 20:25:53.320227 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f5477558c-czdv8"] Feb 17 20:25:53 crc kubenswrapper[4793]: I0217 20:25:53.361598 4793 scope.go:117] "RemoveContainer" containerID="4ac9c298bfd406abb79d2b8c469d1e8333b40cb0382d611cceed86f5c9d87e0d" Feb 17 20:25:53 crc 
kubenswrapper[4793]: E0217 20:25:53.362128 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ac9c298bfd406abb79d2b8c469d1e8333b40cb0382d611cceed86f5c9d87e0d\": container with ID starting with 4ac9c298bfd406abb79d2b8c469d1e8333b40cb0382d611cceed86f5c9d87e0d not found: ID does not exist" containerID="4ac9c298bfd406abb79d2b8c469d1e8333b40cb0382d611cceed86f5c9d87e0d" Feb 17 20:25:53 crc kubenswrapper[4793]: I0217 20:25:53.362162 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ac9c298bfd406abb79d2b8c469d1e8333b40cb0382d611cceed86f5c9d87e0d"} err="failed to get container status \"4ac9c298bfd406abb79d2b8c469d1e8333b40cb0382d611cceed86f5c9d87e0d\": rpc error: code = NotFound desc = could not find container \"4ac9c298bfd406abb79d2b8c469d1e8333b40cb0382d611cceed86f5c9d87e0d\": container with ID starting with 4ac9c298bfd406abb79d2b8c469d1e8333b40cb0382d611cceed86f5c9d87e0d not found: ID does not exist" Feb 17 20:25:53 crc kubenswrapper[4793]: I0217 20:25:53.362189 4793 scope.go:117] "RemoveContainer" containerID="7e986052492cff3af73cc5f27e567634f0ff0dc0bc9e57ff757728ca4335588b" Feb 17 20:25:53 crc kubenswrapper[4793]: E0217 20:25:53.362573 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e986052492cff3af73cc5f27e567634f0ff0dc0bc9e57ff757728ca4335588b\": container with ID starting with 7e986052492cff3af73cc5f27e567634f0ff0dc0bc9e57ff757728ca4335588b not found: ID does not exist" containerID="7e986052492cff3af73cc5f27e567634f0ff0dc0bc9e57ff757728ca4335588b" Feb 17 20:25:53 crc kubenswrapper[4793]: I0217 20:25:53.362601 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e986052492cff3af73cc5f27e567634f0ff0dc0bc9e57ff757728ca4335588b"} err="failed to get container status 
\"7e986052492cff3af73cc5f27e567634f0ff0dc0bc9e57ff757728ca4335588b\": rpc error: code = NotFound desc = could not find container \"7e986052492cff3af73cc5f27e567634f0ff0dc0bc9e57ff757728ca4335588b\": container with ID starting with 7e986052492cff3af73cc5f27e567634f0ff0dc0bc9e57ff757728ca4335588b not found: ID does not exist" Feb 17 20:25:53 crc kubenswrapper[4793]: I0217 20:25:53.550930 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="475be634-cf46-4d25-922c-0908c735af65" path="/var/lib/kubelet/pods/475be634-cf46-4d25-922c-0908c735af65/volumes" Feb 17 20:25:53 crc kubenswrapper[4793]: I0217 20:25:53.668656 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-2080-account-create-update-f57dq" Feb 17 20:25:53 crc kubenswrapper[4793]: I0217 20:25:53.670404 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b1695ca3-290a-44c5-8771-146029a6054a-etc-swift\") pod \"swift-storage-0\" (UID: \"b1695ca3-290a-44c5-8771-146029a6054a\") " pod="openstack/swift-storage-0" Feb 17 20:25:53 crc kubenswrapper[4793]: E0217 20:25:53.670513 4793 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 17 20:25:53 crc kubenswrapper[4793]: E0217 20:25:53.670534 4793 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 17 20:25:53 crc kubenswrapper[4793]: E0217 20:25:53.670574 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b1695ca3-290a-44c5-8771-146029a6054a-etc-swift podName:b1695ca3-290a-44c5-8771-146029a6054a nodeName:}" failed. No retries permitted until 2026-02-17 20:26:09.67056069 +0000 UTC m=+1044.962259001 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b1695ca3-290a-44c5-8771-146029a6054a-etc-swift") pod "swift-storage-0" (UID: "b1695ca3-290a-44c5-8771-146029a6054a") : configmap "swift-ring-files" not found Feb 17 20:25:53 crc kubenswrapper[4793]: I0217 20:25:53.675233 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-b7fqg" Feb 17 20:25:53 crc kubenswrapper[4793]: I0217 20:25:53.773372 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dx25s\" (UniqueName: \"kubernetes.io/projected/b3cff25a-643c-4a27-9959-9e0b8602ea29-kube-api-access-dx25s\") pod \"b3cff25a-643c-4a27-9959-9e0b8602ea29\" (UID: \"b3cff25a-643c-4a27-9959-9e0b8602ea29\") " Feb 17 20:25:53 crc kubenswrapper[4793]: I0217 20:25:53.773468 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3cff25a-643c-4a27-9959-9e0b8602ea29-operator-scripts\") pod \"b3cff25a-643c-4a27-9959-9e0b8602ea29\" (UID: \"b3cff25a-643c-4a27-9959-9e0b8602ea29\") " Feb 17 20:25:53 crc kubenswrapper[4793]: I0217 20:25:53.773526 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mn97\" (UniqueName: \"kubernetes.io/projected/2814a987-de0c-4a65-a799-2fe73e19f35a-kube-api-access-8mn97\") pod \"2814a987-de0c-4a65-a799-2fe73e19f35a\" (UID: \"2814a987-de0c-4a65-a799-2fe73e19f35a\") " Feb 17 20:25:53 crc kubenswrapper[4793]: I0217 20:25:53.773547 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2814a987-de0c-4a65-a799-2fe73e19f35a-operator-scripts\") pod \"2814a987-de0c-4a65-a799-2fe73e19f35a\" (UID: \"2814a987-de0c-4a65-a799-2fe73e19f35a\") " Feb 17 20:25:53 crc kubenswrapper[4793]: I0217 20:25:53.774344 4793 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3cff25a-643c-4a27-9959-9e0b8602ea29-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b3cff25a-643c-4a27-9959-9e0b8602ea29" (UID: "b3cff25a-643c-4a27-9959-9e0b8602ea29"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:25:53 crc kubenswrapper[4793]: I0217 20:25:53.774399 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2814a987-de0c-4a65-a799-2fe73e19f35a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2814a987-de0c-4a65-a799-2fe73e19f35a" (UID: "2814a987-de0c-4a65-a799-2fe73e19f35a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:25:53 crc kubenswrapper[4793]: I0217 20:25:53.775733 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3cff25a-643c-4a27-9959-9e0b8602ea29-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 20:25:53 crc kubenswrapper[4793]: I0217 20:25:53.775753 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2814a987-de0c-4a65-a799-2fe73e19f35a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 20:25:53 crc kubenswrapper[4793]: I0217 20:25:53.777629 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2814a987-de0c-4a65-a799-2fe73e19f35a-kube-api-access-8mn97" (OuterVolumeSpecName: "kube-api-access-8mn97") pod "2814a987-de0c-4a65-a799-2fe73e19f35a" (UID: "2814a987-de0c-4a65-a799-2fe73e19f35a"). InnerVolumeSpecName "kube-api-access-8mn97". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:25:53 crc kubenswrapper[4793]: I0217 20:25:53.777739 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3cff25a-643c-4a27-9959-9e0b8602ea29-kube-api-access-dx25s" (OuterVolumeSpecName: "kube-api-access-dx25s") pod "b3cff25a-643c-4a27-9959-9e0b8602ea29" (UID: "b3cff25a-643c-4a27-9959-9e0b8602ea29"). InnerVolumeSpecName "kube-api-access-dx25s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:25:53 crc kubenswrapper[4793]: I0217 20:25:53.877226 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dx25s\" (UniqueName: \"kubernetes.io/projected/b3cff25a-643c-4a27-9959-9e0b8602ea29-kube-api-access-dx25s\") on node \"crc\" DevicePath \"\"" Feb 17 20:25:53 crc kubenswrapper[4793]: I0217 20:25:53.877515 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mn97\" (UniqueName: \"kubernetes.io/projected/2814a987-de0c-4a65-a799-2fe73e19f35a-kube-api-access-8mn97\") on node \"crc\" DevicePath \"\"" Feb 17 20:25:54 crc kubenswrapper[4793]: I0217 20:25:54.210954 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2080-account-create-update-f57dq" event={"ID":"b3cff25a-643c-4a27-9959-9e0b8602ea29","Type":"ContainerDied","Data":"c3c19d291006de06aa8b7228b3458c8b442a472643ea00d1bc70ea419250c10c"} Feb 17 20:25:54 crc kubenswrapper[4793]: I0217 20:25:54.210996 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3c19d291006de06aa8b7228b3458c8b442a472643ea00d1bc70ea419250c10c" Feb 17 20:25:54 crc kubenswrapper[4793]: I0217 20:25:54.211050 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-2080-account-create-update-f57dq" Feb 17 20:25:54 crc kubenswrapper[4793]: I0217 20:25:54.222077 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-b7fqg" event={"ID":"2814a987-de0c-4a65-a799-2fe73e19f35a","Type":"ContainerDied","Data":"efca7e9da9437239eccb2645db2c083b7bc6dde200efee502acef696cce4b1d0"} Feb 17 20:25:54 crc kubenswrapper[4793]: I0217 20:25:54.222120 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="efca7e9da9437239eccb2645db2c083b7bc6dde200efee502acef696cce4b1d0" Feb 17 20:25:54 crc kubenswrapper[4793]: I0217 20:25:54.222180 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-b7fqg" Feb 17 20:25:54 crc kubenswrapper[4793]: I0217 20:25:54.463511 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-9pnx5" podUID="330b8b28-b736-42a9-a430-40d75f6ec12d" containerName="ovn-controller" probeResult="failure" output=< Feb 17 20:25:54 crc kubenswrapper[4793]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 17 20:25:54 crc kubenswrapper[4793]: > Feb 17 20:25:54 crc kubenswrapper[4793]: I0217 20:25:54.479016 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-hz6qr" Feb 17 20:25:54 crc kubenswrapper[4793]: I0217 20:25:54.723505 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-jsvrv"] Feb 17 20:25:54 crc kubenswrapper[4793]: E0217 20:25:54.724742 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c5c72e3-f686-46fa-ac5d-684106630bd6" containerName="mariadb-account-create-update" Feb 17 20:25:54 crc kubenswrapper[4793]: I0217 20:25:54.724802 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c5c72e3-f686-46fa-ac5d-684106630bd6" containerName="mariadb-account-create-update" Feb 17 20:25:54 
crc kubenswrapper[4793]: E0217 20:25:54.724814 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2814a987-de0c-4a65-a799-2fe73e19f35a" containerName="mariadb-database-create" Feb 17 20:25:54 crc kubenswrapper[4793]: I0217 20:25:54.724821 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="2814a987-de0c-4a65-a799-2fe73e19f35a" containerName="mariadb-database-create" Feb 17 20:25:54 crc kubenswrapper[4793]: E0217 20:25:54.724874 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d5e1df8-743b-47af-aba6-59f7ec4d3c42" containerName="mariadb-account-create-update" Feb 17 20:25:54 crc kubenswrapper[4793]: I0217 20:25:54.724919 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d5e1df8-743b-47af-aba6-59f7ec4d3c42" containerName="mariadb-account-create-update" Feb 17 20:25:54 crc kubenswrapper[4793]: E0217 20:25:54.724930 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="475be634-cf46-4d25-922c-0908c735af65" containerName="init" Feb 17 20:25:54 crc kubenswrapper[4793]: I0217 20:25:54.724935 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="475be634-cf46-4d25-922c-0908c735af65" containerName="init" Feb 17 20:25:54 crc kubenswrapper[4793]: E0217 20:25:54.724988 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21f7d935-51e8-4a7d-90fe-1248b48d8361" containerName="mariadb-account-create-update" Feb 17 20:25:54 crc kubenswrapper[4793]: I0217 20:25:54.725045 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="21f7d935-51e8-4a7d-90fe-1248b48d8361" containerName="mariadb-account-create-update" Feb 17 20:25:54 crc kubenswrapper[4793]: E0217 20:25:54.725107 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1332e2d-624b-4d26-a53e-6b62e5bb3a86" containerName="mariadb-account-create-update" Feb 17 20:25:54 crc kubenswrapper[4793]: I0217 20:25:54.725115 4793 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b1332e2d-624b-4d26-a53e-6b62e5bb3a86" containerName="mariadb-account-create-update" Feb 17 20:25:54 crc kubenswrapper[4793]: E0217 20:25:54.725166 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3cff25a-643c-4a27-9959-9e0b8602ea29" containerName="mariadb-account-create-update" Feb 17 20:25:54 crc kubenswrapper[4793]: I0217 20:25:54.725173 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3cff25a-643c-4a27-9959-9e0b8602ea29" containerName="mariadb-account-create-update" Feb 17 20:25:54 crc kubenswrapper[4793]: E0217 20:25:54.725226 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9f641fd-f904-45da-9da3-6e0d1545fc8c" containerName="mariadb-database-create" Feb 17 20:25:54 crc kubenswrapper[4793]: I0217 20:25:54.725235 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9f641fd-f904-45da-9da3-6e0d1545fc8c" containerName="mariadb-database-create" Feb 17 20:25:54 crc kubenswrapper[4793]: E0217 20:25:54.725285 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="475be634-cf46-4d25-922c-0908c735af65" containerName="dnsmasq-dns" Feb 17 20:25:54 crc kubenswrapper[4793]: I0217 20:25:54.725293 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="475be634-cf46-4d25-922c-0908c735af65" containerName="dnsmasq-dns" Feb 17 20:25:54 crc kubenswrapper[4793]: E0217 20:25:54.725343 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4adfee96-44d3-49a6-a577-222950b89117" containerName="mariadb-database-create" Feb 17 20:25:54 crc kubenswrapper[4793]: I0217 20:25:54.725388 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4adfee96-44d3-49a6-a577-222950b89117" containerName="mariadb-database-create" Feb 17 20:25:54 crc kubenswrapper[4793]: E0217 20:25:54.725398 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47ca7f4a-8678-4fb0-ac5b-726a4cf6a674" containerName="mariadb-database-create" Feb 17 20:25:54 crc kubenswrapper[4793]: I0217 
20:25:54.725516 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="47ca7f4a-8678-4fb0-ac5b-726a4cf6a674" containerName="mariadb-database-create" Feb 17 20:25:54 crc kubenswrapper[4793]: I0217 20:25:54.725976 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="47ca7f4a-8678-4fb0-ac5b-726a4cf6a674" containerName="mariadb-database-create" Feb 17 20:25:54 crc kubenswrapper[4793]: I0217 20:25:54.726032 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9f641fd-f904-45da-9da3-6e0d1545fc8c" containerName="mariadb-database-create" Feb 17 20:25:54 crc kubenswrapper[4793]: I0217 20:25:54.726080 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c5c72e3-f686-46fa-ac5d-684106630bd6" containerName="mariadb-account-create-update" Feb 17 20:25:54 crc kubenswrapper[4793]: I0217 20:25:54.726089 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="4adfee96-44d3-49a6-a577-222950b89117" containerName="mariadb-database-create" Feb 17 20:25:54 crc kubenswrapper[4793]: I0217 20:25:54.726132 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1332e2d-624b-4d26-a53e-6b62e5bb3a86" containerName="mariadb-account-create-update" Feb 17 20:25:54 crc kubenswrapper[4793]: I0217 20:25:54.726141 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="475be634-cf46-4d25-922c-0908c735af65" containerName="dnsmasq-dns" Feb 17 20:25:54 crc kubenswrapper[4793]: I0217 20:25:54.726186 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3cff25a-643c-4a27-9959-9e0b8602ea29" containerName="mariadb-account-create-update" Feb 17 20:25:54 crc kubenswrapper[4793]: I0217 20:25:54.726197 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="2814a987-de0c-4a65-a799-2fe73e19f35a" containerName="mariadb-database-create" Feb 17 20:25:54 crc kubenswrapper[4793]: I0217 20:25:54.726204 4793 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="9d5e1df8-743b-47af-aba6-59f7ec4d3c42" containerName="mariadb-account-create-update" Feb 17 20:25:54 crc kubenswrapper[4793]: I0217 20:25:54.726253 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="21f7d935-51e8-4a7d-90fe-1248b48d8361" containerName="mariadb-account-create-update" Feb 17 20:25:54 crc kubenswrapper[4793]: I0217 20:25:54.728360 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-jsvrv" Feb 17 20:25:54 crc kubenswrapper[4793]: I0217 20:25:54.730635 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-c7vvf" Feb 17 20:25:54 crc kubenswrapper[4793]: I0217 20:25:54.734805 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 17 20:25:54 crc kubenswrapper[4793]: I0217 20:25:54.748487 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-jsvrv"] Feb 17 20:25:54 crc kubenswrapper[4793]: I0217 20:25:54.897381 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63c91f0e-72e7-49ec-b9ee-c68845dd7cf9-config-data\") pod \"glance-db-sync-jsvrv\" (UID: \"63c91f0e-72e7-49ec-b9ee-c68845dd7cf9\") " pod="openstack/glance-db-sync-jsvrv" Feb 17 20:25:54 crc kubenswrapper[4793]: I0217 20:25:54.897424 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/63c91f0e-72e7-49ec-b9ee-c68845dd7cf9-db-sync-config-data\") pod \"glance-db-sync-jsvrv\" (UID: \"63c91f0e-72e7-49ec-b9ee-c68845dd7cf9\") " pod="openstack/glance-db-sync-jsvrv" Feb 17 20:25:54 crc kubenswrapper[4793]: I0217 20:25:54.897446 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp9qg\" (UniqueName: 
\"kubernetes.io/projected/63c91f0e-72e7-49ec-b9ee-c68845dd7cf9-kube-api-access-jp9qg\") pod \"glance-db-sync-jsvrv\" (UID: \"63c91f0e-72e7-49ec-b9ee-c68845dd7cf9\") " pod="openstack/glance-db-sync-jsvrv" Feb 17 20:25:54 crc kubenswrapper[4793]: I0217 20:25:54.897476 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63c91f0e-72e7-49ec-b9ee-c68845dd7cf9-combined-ca-bundle\") pod \"glance-db-sync-jsvrv\" (UID: \"63c91f0e-72e7-49ec-b9ee-c68845dd7cf9\") " pod="openstack/glance-db-sync-jsvrv" Feb 17 20:25:54 crc kubenswrapper[4793]: I0217 20:25:54.999464 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/63c91f0e-72e7-49ec-b9ee-c68845dd7cf9-db-sync-config-data\") pod \"glance-db-sync-jsvrv\" (UID: \"63c91f0e-72e7-49ec-b9ee-c68845dd7cf9\") " pod="openstack/glance-db-sync-jsvrv" Feb 17 20:25:54 crc kubenswrapper[4793]: I0217 20:25:54.999507 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jp9qg\" (UniqueName: \"kubernetes.io/projected/63c91f0e-72e7-49ec-b9ee-c68845dd7cf9-kube-api-access-jp9qg\") pod \"glance-db-sync-jsvrv\" (UID: \"63c91f0e-72e7-49ec-b9ee-c68845dd7cf9\") " pod="openstack/glance-db-sync-jsvrv" Feb 17 20:25:54 crc kubenswrapper[4793]: I0217 20:25:54.999542 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63c91f0e-72e7-49ec-b9ee-c68845dd7cf9-combined-ca-bundle\") pod \"glance-db-sync-jsvrv\" (UID: \"63c91f0e-72e7-49ec-b9ee-c68845dd7cf9\") " pod="openstack/glance-db-sync-jsvrv" Feb 17 20:25:54 crc kubenswrapper[4793]: I0217 20:25:54.999664 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63c91f0e-72e7-49ec-b9ee-c68845dd7cf9-config-data\") pod 
\"glance-db-sync-jsvrv\" (UID: \"63c91f0e-72e7-49ec-b9ee-c68845dd7cf9\") " pod="openstack/glance-db-sync-jsvrv" Feb 17 20:25:55 crc kubenswrapper[4793]: I0217 20:25:55.005537 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63c91f0e-72e7-49ec-b9ee-c68845dd7cf9-combined-ca-bundle\") pod \"glance-db-sync-jsvrv\" (UID: \"63c91f0e-72e7-49ec-b9ee-c68845dd7cf9\") " pod="openstack/glance-db-sync-jsvrv" Feb 17 20:25:55 crc kubenswrapper[4793]: I0217 20:25:55.005584 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63c91f0e-72e7-49ec-b9ee-c68845dd7cf9-config-data\") pod \"glance-db-sync-jsvrv\" (UID: \"63c91f0e-72e7-49ec-b9ee-c68845dd7cf9\") " pod="openstack/glance-db-sync-jsvrv" Feb 17 20:25:55 crc kubenswrapper[4793]: I0217 20:25:55.016582 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/63c91f0e-72e7-49ec-b9ee-c68845dd7cf9-db-sync-config-data\") pod \"glance-db-sync-jsvrv\" (UID: \"63c91f0e-72e7-49ec-b9ee-c68845dd7cf9\") " pod="openstack/glance-db-sync-jsvrv" Feb 17 20:25:55 crc kubenswrapper[4793]: I0217 20:25:55.020292 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jp9qg\" (UniqueName: \"kubernetes.io/projected/63c91f0e-72e7-49ec-b9ee-c68845dd7cf9-kube-api-access-jp9qg\") pod \"glance-db-sync-jsvrv\" (UID: \"63c91f0e-72e7-49ec-b9ee-c68845dd7cf9\") " pod="openstack/glance-db-sync-jsvrv" Feb 17 20:25:55 crc kubenswrapper[4793]: I0217 20:25:55.110630 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-jsvrv" Feb 17 20:25:55 crc kubenswrapper[4793]: I0217 20:25:55.660402 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-jsvrv"] Feb 17 20:25:56 crc kubenswrapper[4793]: I0217 20:25:56.239989 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-jsvrv" event={"ID":"63c91f0e-72e7-49ec-b9ee-c68845dd7cf9","Type":"ContainerStarted","Data":"f9a86c3d3aab643985d64403cc4093b027b580a8ec02fd1b2b0d51a0182b2166"} Feb 17 20:25:56 crc kubenswrapper[4793]: I0217 20:25:56.242664 4793 generic.go:334] "Generic (PLEG): container finished" podID="3d5841af-e328-4ea6-a184-546676cce0a7" containerID="3529984962e14c01d2f0e5287ef4e18f109559c8868053d3ed73a283ce8a1491" exitCode=0 Feb 17 20:25:56 crc kubenswrapper[4793]: I0217 20:25:56.242710 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-bvw9p" event={"ID":"3d5841af-e328-4ea6-a184-546676cce0a7","Type":"ContainerDied","Data":"3529984962e14c01d2f0e5287ef4e18f109559c8868053d3ed73a283ce8a1491"} Feb 17 20:25:56 crc kubenswrapper[4793]: I0217 20:25:56.401512 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-dl2rg"] Feb 17 20:25:56 crc kubenswrapper[4793]: I0217 20:25:56.412719 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-dl2rg"] Feb 17 20:25:57 crc kubenswrapper[4793]: I0217 20:25:57.548975 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d5e1df8-743b-47af-aba6-59f7ec4d3c42" path="/var/lib/kubelet/pods/9d5e1df8-743b-47af-aba6-59f7ec4d3c42/volumes" Feb 17 20:25:57 crc kubenswrapper[4793]: I0217 20:25:57.613240 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-bvw9p" Feb 17 20:25:57 crc kubenswrapper[4793]: I0217 20:25:57.742283 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3d5841af-e328-4ea6-a184-546676cce0a7-dispersionconf\") pod \"3d5841af-e328-4ea6-a184-546676cce0a7\" (UID: \"3d5841af-e328-4ea6-a184-546676cce0a7\") " Feb 17 20:25:57 crc kubenswrapper[4793]: I0217 20:25:57.742642 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3d5841af-e328-4ea6-a184-546676cce0a7-swiftconf\") pod \"3d5841af-e328-4ea6-a184-546676cce0a7\" (UID: \"3d5841af-e328-4ea6-a184-546676cce0a7\") " Feb 17 20:25:57 crc kubenswrapper[4793]: I0217 20:25:57.742816 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3d5841af-e328-4ea6-a184-546676cce0a7-ring-data-devices\") pod \"3d5841af-e328-4ea6-a184-546676cce0a7\" (UID: \"3d5841af-e328-4ea6-a184-546676cce0a7\") " Feb 17 20:25:57 crc kubenswrapper[4793]: I0217 20:25:57.742854 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3d5841af-e328-4ea6-a184-546676cce0a7-etc-swift\") pod \"3d5841af-e328-4ea6-a184-546676cce0a7\" (UID: \"3d5841af-e328-4ea6-a184-546676cce0a7\") " Feb 17 20:25:57 crc kubenswrapper[4793]: I0217 20:25:57.742947 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d5841af-e328-4ea6-a184-546676cce0a7-combined-ca-bundle\") pod \"3d5841af-e328-4ea6-a184-546676cce0a7\" (UID: \"3d5841af-e328-4ea6-a184-546676cce0a7\") " Feb 17 20:25:57 crc kubenswrapper[4793]: I0217 20:25:57.743010 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/3d5841af-e328-4ea6-a184-546676cce0a7-scripts\") pod \"3d5841af-e328-4ea6-a184-546676cce0a7\" (UID: \"3d5841af-e328-4ea6-a184-546676cce0a7\") " Feb 17 20:25:57 crc kubenswrapper[4793]: I0217 20:25:57.743086 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbjdt\" (UniqueName: \"kubernetes.io/projected/3d5841af-e328-4ea6-a184-546676cce0a7-kube-api-access-jbjdt\") pod \"3d5841af-e328-4ea6-a184-546676cce0a7\" (UID: \"3d5841af-e328-4ea6-a184-546676cce0a7\") " Feb 17 20:25:57 crc kubenswrapper[4793]: I0217 20:25:57.743589 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d5841af-e328-4ea6-a184-546676cce0a7-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "3d5841af-e328-4ea6-a184-546676cce0a7" (UID: "3d5841af-e328-4ea6-a184-546676cce0a7"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:25:57 crc kubenswrapper[4793]: I0217 20:25:57.745515 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d5841af-e328-4ea6-a184-546676cce0a7-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "3d5841af-e328-4ea6-a184-546676cce0a7" (UID: "3d5841af-e328-4ea6-a184-546676cce0a7"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:25:57 crc kubenswrapper[4793]: I0217 20:25:57.748163 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d5841af-e328-4ea6-a184-546676cce0a7-kube-api-access-jbjdt" (OuterVolumeSpecName: "kube-api-access-jbjdt") pod "3d5841af-e328-4ea6-a184-546676cce0a7" (UID: "3d5841af-e328-4ea6-a184-546676cce0a7"). InnerVolumeSpecName "kube-api-access-jbjdt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:25:57 crc kubenswrapper[4793]: I0217 20:25:57.765510 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d5841af-e328-4ea6-a184-546676cce0a7-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "3d5841af-e328-4ea6-a184-546676cce0a7" (UID: "3d5841af-e328-4ea6-a184-546676cce0a7"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:25:57 crc kubenswrapper[4793]: I0217 20:25:57.767969 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d5841af-e328-4ea6-a184-546676cce0a7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d5841af-e328-4ea6-a184-546676cce0a7" (UID: "3d5841af-e328-4ea6-a184-546676cce0a7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:25:57 crc kubenswrapper[4793]: I0217 20:25:57.768443 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d5841af-e328-4ea6-a184-546676cce0a7-scripts" (OuterVolumeSpecName: "scripts") pod "3d5841af-e328-4ea6-a184-546676cce0a7" (UID: "3d5841af-e328-4ea6-a184-546676cce0a7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:25:57 crc kubenswrapper[4793]: I0217 20:25:57.770049 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d5841af-e328-4ea6-a184-546676cce0a7-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "3d5841af-e328-4ea6-a184-546676cce0a7" (UID: "3d5841af-e328-4ea6-a184-546676cce0a7"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:25:57 crc kubenswrapper[4793]: I0217 20:25:57.844866 4793 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3d5841af-e328-4ea6-a184-546676cce0a7-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 17 20:25:57 crc kubenswrapper[4793]: I0217 20:25:57.844896 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d5841af-e328-4ea6-a184-546676cce0a7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 20:25:57 crc kubenswrapper[4793]: I0217 20:25:57.844908 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3d5841af-e328-4ea6-a184-546676cce0a7-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 20:25:57 crc kubenswrapper[4793]: I0217 20:25:57.844917 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbjdt\" (UniqueName: \"kubernetes.io/projected/3d5841af-e328-4ea6-a184-546676cce0a7-kube-api-access-jbjdt\") on node \"crc\" DevicePath \"\"" Feb 17 20:25:57 crc kubenswrapper[4793]: I0217 20:25:57.844925 4793 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3d5841af-e328-4ea6-a184-546676cce0a7-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 17 20:25:57 crc kubenswrapper[4793]: I0217 20:25:57.844935 4793 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3d5841af-e328-4ea6-a184-546676cce0a7-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 17 20:25:57 crc kubenswrapper[4793]: I0217 20:25:57.844943 4793 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3d5841af-e328-4ea6-a184-546676cce0a7-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 17 20:25:57 crc kubenswrapper[4793]: I0217 20:25:57.914232 4793 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 17 20:25:57 crc kubenswrapper[4793]: I0217 20:25:57.915971 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 17 20:25:58 crc kubenswrapper[4793]: I0217 20:25:58.266205 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-bvw9p" Feb 17 20:25:58 crc kubenswrapper[4793]: I0217 20:25:58.266209 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-bvw9p" event={"ID":"3d5841af-e328-4ea6-a184-546676cce0a7","Type":"ContainerDied","Data":"b6749344053c36734687feb2613a91e178e9284b060d3301b2968900cf2473c8"} Feb 17 20:25:58 crc kubenswrapper[4793]: I0217 20:25:58.266263 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6749344053c36734687feb2613a91e178e9284b060d3301b2968900cf2473c8" Feb 17 20:25:58 crc kubenswrapper[4793]: I0217 20:25:58.267547 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 17 20:25:59 crc kubenswrapper[4793]: I0217 20:25:59.459337 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-9pnx5" podUID="330b8b28-b736-42a9-a430-40d75f6ec12d" containerName="ovn-controller" probeResult="failure" output=< Feb 17 20:25:59 crc kubenswrapper[4793]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 17 20:25:59 crc kubenswrapper[4793]: > Feb 17 20:25:59 crc kubenswrapper[4793]: I0217 20:25:59.484267 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-hz6qr" Feb 17 20:25:59 crc kubenswrapper[4793]: I0217 20:25:59.697261 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-9pnx5-config-xrqmv"] Feb 17 20:25:59 crc kubenswrapper[4793]: 
E0217 20:25:59.697605 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d5841af-e328-4ea6-a184-546676cce0a7" containerName="swift-ring-rebalance" Feb 17 20:25:59 crc kubenswrapper[4793]: I0217 20:25:59.697621 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d5841af-e328-4ea6-a184-546676cce0a7" containerName="swift-ring-rebalance" Feb 17 20:25:59 crc kubenswrapper[4793]: I0217 20:25:59.708232 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d5841af-e328-4ea6-a184-546676cce0a7" containerName="swift-ring-rebalance" Feb 17 20:25:59 crc kubenswrapper[4793]: I0217 20:25:59.708752 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9pnx5-config-xrqmv"] Feb 17 20:25:59 crc kubenswrapper[4793]: I0217 20:25:59.708835 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9pnx5-config-xrqmv" Feb 17 20:25:59 crc kubenswrapper[4793]: I0217 20:25:59.711119 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 17 20:25:59 crc kubenswrapper[4793]: I0217 20:25:59.884898 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f5fa7ba4-5c60-4fdd-bcf1-57641733410d-var-run-ovn\") pod \"ovn-controller-9pnx5-config-xrqmv\" (UID: \"f5fa7ba4-5c60-4fdd-bcf1-57641733410d\") " pod="openstack/ovn-controller-9pnx5-config-xrqmv" Feb 17 20:25:59 crc kubenswrapper[4793]: I0217 20:25:59.884982 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82kk8\" (UniqueName: \"kubernetes.io/projected/f5fa7ba4-5c60-4fdd-bcf1-57641733410d-kube-api-access-82kk8\") pod \"ovn-controller-9pnx5-config-xrqmv\" (UID: \"f5fa7ba4-5c60-4fdd-bcf1-57641733410d\") " pod="openstack/ovn-controller-9pnx5-config-xrqmv" Feb 17 20:25:59 crc kubenswrapper[4793]: I0217 
20:25:59.885153 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f5fa7ba4-5c60-4fdd-bcf1-57641733410d-var-run\") pod \"ovn-controller-9pnx5-config-xrqmv\" (UID: \"f5fa7ba4-5c60-4fdd-bcf1-57641733410d\") " pod="openstack/ovn-controller-9pnx5-config-xrqmv" Feb 17 20:25:59 crc kubenswrapper[4793]: I0217 20:25:59.885210 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f5fa7ba4-5c60-4fdd-bcf1-57641733410d-var-log-ovn\") pod \"ovn-controller-9pnx5-config-xrqmv\" (UID: \"f5fa7ba4-5c60-4fdd-bcf1-57641733410d\") " pod="openstack/ovn-controller-9pnx5-config-xrqmv" Feb 17 20:25:59 crc kubenswrapper[4793]: I0217 20:25:59.885244 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f5fa7ba4-5c60-4fdd-bcf1-57641733410d-additional-scripts\") pod \"ovn-controller-9pnx5-config-xrqmv\" (UID: \"f5fa7ba4-5c60-4fdd-bcf1-57641733410d\") " pod="openstack/ovn-controller-9pnx5-config-xrqmv" Feb 17 20:25:59 crc kubenswrapper[4793]: I0217 20:25:59.885279 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f5fa7ba4-5c60-4fdd-bcf1-57641733410d-scripts\") pod \"ovn-controller-9pnx5-config-xrqmv\" (UID: \"f5fa7ba4-5c60-4fdd-bcf1-57641733410d\") " pod="openstack/ovn-controller-9pnx5-config-xrqmv" Feb 17 20:25:59 crc kubenswrapper[4793]: I0217 20:25:59.987001 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f5fa7ba4-5c60-4fdd-bcf1-57641733410d-var-run-ovn\") pod \"ovn-controller-9pnx5-config-xrqmv\" (UID: \"f5fa7ba4-5c60-4fdd-bcf1-57641733410d\") " pod="openstack/ovn-controller-9pnx5-config-xrqmv" Feb 17 
20:25:59 crc kubenswrapper[4793]: I0217 20:25:59.987073 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82kk8\" (UniqueName: \"kubernetes.io/projected/f5fa7ba4-5c60-4fdd-bcf1-57641733410d-kube-api-access-82kk8\") pod \"ovn-controller-9pnx5-config-xrqmv\" (UID: \"f5fa7ba4-5c60-4fdd-bcf1-57641733410d\") " pod="openstack/ovn-controller-9pnx5-config-xrqmv" Feb 17 20:25:59 crc kubenswrapper[4793]: I0217 20:25:59.987204 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f5fa7ba4-5c60-4fdd-bcf1-57641733410d-var-run\") pod \"ovn-controller-9pnx5-config-xrqmv\" (UID: \"f5fa7ba4-5c60-4fdd-bcf1-57641733410d\") " pod="openstack/ovn-controller-9pnx5-config-xrqmv" Feb 17 20:25:59 crc kubenswrapper[4793]: I0217 20:25:59.987244 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f5fa7ba4-5c60-4fdd-bcf1-57641733410d-var-log-ovn\") pod \"ovn-controller-9pnx5-config-xrqmv\" (UID: \"f5fa7ba4-5c60-4fdd-bcf1-57641733410d\") " pod="openstack/ovn-controller-9pnx5-config-xrqmv" Feb 17 20:25:59 crc kubenswrapper[4793]: I0217 20:25:59.987270 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f5fa7ba4-5c60-4fdd-bcf1-57641733410d-additional-scripts\") pod \"ovn-controller-9pnx5-config-xrqmv\" (UID: \"f5fa7ba4-5c60-4fdd-bcf1-57641733410d\") " pod="openstack/ovn-controller-9pnx5-config-xrqmv" Feb 17 20:25:59 crc kubenswrapper[4793]: I0217 20:25:59.987293 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f5fa7ba4-5c60-4fdd-bcf1-57641733410d-scripts\") pod \"ovn-controller-9pnx5-config-xrqmv\" (UID: \"f5fa7ba4-5c60-4fdd-bcf1-57641733410d\") " pod="openstack/ovn-controller-9pnx5-config-xrqmv" Feb 17 20:25:59 crc 
kubenswrapper[4793]: I0217 20:25:59.987954 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f5fa7ba4-5c60-4fdd-bcf1-57641733410d-var-run-ovn\") pod \"ovn-controller-9pnx5-config-xrqmv\" (UID: \"f5fa7ba4-5c60-4fdd-bcf1-57641733410d\") " pod="openstack/ovn-controller-9pnx5-config-xrqmv" Feb 17 20:25:59 crc kubenswrapper[4793]: I0217 20:25:59.988268 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f5fa7ba4-5c60-4fdd-bcf1-57641733410d-var-run\") pod \"ovn-controller-9pnx5-config-xrqmv\" (UID: \"f5fa7ba4-5c60-4fdd-bcf1-57641733410d\") " pod="openstack/ovn-controller-9pnx5-config-xrqmv" Feb 17 20:25:59 crc kubenswrapper[4793]: I0217 20:25:59.989710 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f5fa7ba4-5c60-4fdd-bcf1-57641733410d-scripts\") pod \"ovn-controller-9pnx5-config-xrqmv\" (UID: \"f5fa7ba4-5c60-4fdd-bcf1-57641733410d\") " pod="openstack/ovn-controller-9pnx5-config-xrqmv" Feb 17 20:25:59 crc kubenswrapper[4793]: I0217 20:25:59.989825 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f5fa7ba4-5c60-4fdd-bcf1-57641733410d-var-log-ovn\") pod \"ovn-controller-9pnx5-config-xrqmv\" (UID: \"f5fa7ba4-5c60-4fdd-bcf1-57641733410d\") " pod="openstack/ovn-controller-9pnx5-config-xrqmv" Feb 17 20:25:59 crc kubenswrapper[4793]: I0217 20:25:59.990917 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f5fa7ba4-5c60-4fdd-bcf1-57641733410d-additional-scripts\") pod \"ovn-controller-9pnx5-config-xrqmv\" (UID: \"f5fa7ba4-5c60-4fdd-bcf1-57641733410d\") " pod="openstack/ovn-controller-9pnx5-config-xrqmv" Feb 17 20:26:00 crc kubenswrapper[4793]: I0217 20:26:00.029498 4793 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-82kk8\" (UniqueName: \"kubernetes.io/projected/f5fa7ba4-5c60-4fdd-bcf1-57641733410d-kube-api-access-82kk8\") pod \"ovn-controller-9pnx5-config-xrqmv\" (UID: \"f5fa7ba4-5c60-4fdd-bcf1-57641733410d\") " pod="openstack/ovn-controller-9pnx5-config-xrqmv" Feb 17 20:26:00 crc kubenswrapper[4793]: I0217 20:26:00.297914 4793 generic.go:334] "Generic (PLEG): container finished" podID="74e2a040-552e-4736-986f-2abac7315e6a" containerID="730d2c79651f19c8eaeeb42546c4e843d9972999df7621c641d7093cb41bc2f8" exitCode=0 Feb 17 20:26:00 crc kubenswrapper[4793]: I0217 20:26:00.298005 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"74e2a040-552e-4736-986f-2abac7315e6a","Type":"ContainerDied","Data":"730d2c79651f19c8eaeeb42546c4e843d9972999df7621c641d7093cb41bc2f8"} Feb 17 20:26:00 crc kubenswrapper[4793]: I0217 20:26:00.321984 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9pnx5-config-xrqmv" Feb 17 20:26:00 crc kubenswrapper[4793]: I0217 20:26:00.824856 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9pnx5-config-xrqmv"] Feb 17 20:26:00 crc kubenswrapper[4793]: W0217 20:26:00.838423 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5fa7ba4_5c60_4fdd_bcf1_57641733410d.slice/crio-34dd2379eee34e8224e3f2413d8b6d0072881f335db19f6d03229406f3fa4b83 WatchSource:0}: Error finding container 34dd2379eee34e8224e3f2413d8b6d0072881f335db19f6d03229406f3fa4b83: Status 404 returned error can't find the container with id 34dd2379eee34e8224e3f2413d8b6d0072881f335db19f6d03229406f3fa4b83 Feb 17 20:26:00 crc kubenswrapper[4793]: I0217 20:26:00.917533 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="9eaaf278-e1ca-4fbe-ab46-478d8846293d" containerName="rabbitmq" 
probeResult="failure" output="dial tcp 10.217.0.105:5671: connect: connection refused" Feb 17 20:26:01 crc kubenswrapper[4793]: I0217 20:26:01.219341 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 20:26:01 crc kubenswrapper[4793]: I0217 20:26:01.219646 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="489cc350-87d7-42d2-ba31-b4bbc29b5b80" containerName="prometheus" containerID="cri-o://08ff0128703ca9b6446666f3b9419eda34b25209e7662a1d489bf3053f971e40" gracePeriod=600 Feb 17 20:26:01 crc kubenswrapper[4793]: I0217 20:26:01.219723 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="489cc350-87d7-42d2-ba31-b4bbc29b5b80" containerName="thanos-sidecar" containerID="cri-o://4da07f15eb1b352937a670b5d4d61ca288e3ed5f1cf539e10846631125528eb8" gracePeriod=600 Feb 17 20:26:01 crc kubenswrapper[4793]: I0217 20:26:01.219734 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="489cc350-87d7-42d2-ba31-b4bbc29b5b80" containerName="config-reloader" containerID="cri-o://dd47aea6262d47e964c05121ff5cb519a9f88297bbcd330c67374c6c404f8f4a" gracePeriod=600 Feb 17 20:26:01 crc kubenswrapper[4793]: I0217 20:26:01.318522 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"74e2a040-552e-4736-986f-2abac7315e6a","Type":"ContainerStarted","Data":"e9f214418cb401df68682faa9adc2e5dfeea2a0cb1b5bd8cf604fbbd37e27f98"} Feb 17 20:26:01 crc kubenswrapper[4793]: I0217 20:26:01.319458 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 17 20:26:01 crc kubenswrapper[4793]: I0217 20:26:01.320871 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9pnx5-config-xrqmv" 
event={"ID":"f5fa7ba4-5c60-4fdd-bcf1-57641733410d","Type":"ContainerStarted","Data":"942d386456522ad7e41c6fb5683786d9090a3e85f41b3da169efc8ee8ba30c57"} Feb 17 20:26:01 crc kubenswrapper[4793]: I0217 20:26:01.320894 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9pnx5-config-xrqmv" event={"ID":"f5fa7ba4-5c60-4fdd-bcf1-57641733410d","Type":"ContainerStarted","Data":"34dd2379eee34e8224e3f2413d8b6d0072881f335db19f6d03229406f3fa4b83"} Feb 17 20:26:01 crc kubenswrapper[4793]: I0217 20:26:01.354658 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371964.50014 podStartE2EDuration="1m12.354636871s" podCreationTimestamp="2026-02-17 20:24:49 +0000 UTC" firstStartedPulling="2026-02-17 20:24:51.178089297 +0000 UTC m=+966.469787618" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:26:01.349984606 +0000 UTC m=+1036.641682917" watchObservedRunningTime="2026-02-17 20:26:01.354636871 +0000 UTC m=+1036.646335172" Feb 17 20:26:01 crc kubenswrapper[4793]: I0217 20:26:01.378482 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-9pnx5-config-xrqmv" podStartSLOduration=2.378462013 podStartE2EDuration="2.378462013s" podCreationTimestamp="2026-02-17 20:25:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:26:01.374391402 +0000 UTC m=+1036.666089723" watchObservedRunningTime="2026-02-17 20:26:01.378462013 +0000 UTC m=+1036.670160324" Feb 17 20:26:01 crc kubenswrapper[4793]: I0217 20:26:01.438658 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-notifications-server-0" podUID="7d868632-904a-4ba2-8d3a-4e3d0d8de4b0" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused" Feb 17 20:26:01 crc 
kubenswrapper[4793]: I0217 20:26:01.453813 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-p8wvm"] Feb 17 20:26:01 crc kubenswrapper[4793]: I0217 20:26:01.455039 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-p8wvm" Feb 17 20:26:01 crc kubenswrapper[4793]: I0217 20:26:01.456835 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 17 20:26:01 crc kubenswrapper[4793]: I0217 20:26:01.462692 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-p8wvm"] Feb 17 20:26:01 crc kubenswrapper[4793]: I0217 20:26:01.617307 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3f15a23-297d-442b-93aa-108559ddfa0d-operator-scripts\") pod \"root-account-create-update-p8wvm\" (UID: \"a3f15a23-297d-442b-93aa-108559ddfa0d\") " pod="openstack/root-account-create-update-p8wvm" Feb 17 20:26:01 crc kubenswrapper[4793]: I0217 20:26:01.617784 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh8fm\" (UniqueName: \"kubernetes.io/projected/a3f15a23-297d-442b-93aa-108559ddfa0d-kube-api-access-kh8fm\") pod \"root-account-create-update-p8wvm\" (UID: \"a3f15a23-297d-442b-93aa-108559ddfa0d\") " pod="openstack/root-account-create-update-p8wvm" Feb 17 20:26:01 crc kubenswrapper[4793]: I0217 20:26:01.719729 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3f15a23-297d-442b-93aa-108559ddfa0d-operator-scripts\") pod \"root-account-create-update-p8wvm\" (UID: \"a3f15a23-297d-442b-93aa-108559ddfa0d\") " pod="openstack/root-account-create-update-p8wvm" Feb 17 20:26:01 crc kubenswrapper[4793]: I0217 20:26:01.719824 4793 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kh8fm\" (UniqueName: \"kubernetes.io/projected/a3f15a23-297d-442b-93aa-108559ddfa0d-kube-api-access-kh8fm\") pod \"root-account-create-update-p8wvm\" (UID: \"a3f15a23-297d-442b-93aa-108559ddfa0d\") " pod="openstack/root-account-create-update-p8wvm" Feb 17 20:26:01 crc kubenswrapper[4793]: I0217 20:26:01.720824 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3f15a23-297d-442b-93aa-108559ddfa0d-operator-scripts\") pod \"root-account-create-update-p8wvm\" (UID: \"a3f15a23-297d-442b-93aa-108559ddfa0d\") " pod="openstack/root-account-create-update-p8wvm" Feb 17 20:26:01 crc kubenswrapper[4793]: I0217 20:26:01.738806 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kh8fm\" (UniqueName: \"kubernetes.io/projected/a3f15a23-297d-442b-93aa-108559ddfa0d-kube-api-access-kh8fm\") pod \"root-account-create-update-p8wvm\" (UID: \"a3f15a23-297d-442b-93aa-108559ddfa0d\") " pod="openstack/root-account-create-update-p8wvm" Feb 17 20:26:01 crc kubenswrapper[4793]: I0217 20:26:01.768922 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-p8wvm" Feb 17 20:26:02 crc kubenswrapper[4793]: I0217 20:26:02.333985 4793 generic.go:334] "Generic (PLEG): container finished" podID="489cc350-87d7-42d2-ba31-b4bbc29b5b80" containerID="4da07f15eb1b352937a670b5d4d61ca288e3ed5f1cf539e10846631125528eb8" exitCode=0 Feb 17 20:26:02 crc kubenswrapper[4793]: I0217 20:26:02.334013 4793 generic.go:334] "Generic (PLEG): container finished" podID="489cc350-87d7-42d2-ba31-b4bbc29b5b80" containerID="dd47aea6262d47e964c05121ff5cb519a9f88297bbcd330c67374c6c404f8f4a" exitCode=0 Feb 17 20:26:02 crc kubenswrapper[4793]: I0217 20:26:02.334022 4793 generic.go:334] "Generic (PLEG): container finished" podID="489cc350-87d7-42d2-ba31-b4bbc29b5b80" containerID="08ff0128703ca9b6446666f3b9419eda34b25209e7662a1d489bf3053f971e40" exitCode=0 Feb 17 20:26:02 crc kubenswrapper[4793]: I0217 20:26:02.334079 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"489cc350-87d7-42d2-ba31-b4bbc29b5b80","Type":"ContainerDied","Data":"4da07f15eb1b352937a670b5d4d61ca288e3ed5f1cf539e10846631125528eb8"} Feb 17 20:26:02 crc kubenswrapper[4793]: I0217 20:26:02.334177 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"489cc350-87d7-42d2-ba31-b4bbc29b5b80","Type":"ContainerDied","Data":"dd47aea6262d47e964c05121ff5cb519a9f88297bbcd330c67374c6c404f8f4a"} Feb 17 20:26:02 crc kubenswrapper[4793]: I0217 20:26:02.334212 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"489cc350-87d7-42d2-ba31-b4bbc29b5b80","Type":"ContainerDied","Data":"08ff0128703ca9b6446666f3b9419eda34b25209e7662a1d489bf3053f971e40"} Feb 17 20:26:02 crc kubenswrapper[4793]: I0217 20:26:02.335340 4793 generic.go:334] "Generic (PLEG): container finished" podID="f5fa7ba4-5c60-4fdd-bcf1-57641733410d" 
containerID="942d386456522ad7e41c6fb5683786d9090a3e85f41b3da169efc8ee8ba30c57" exitCode=0 Feb 17 20:26:02 crc kubenswrapper[4793]: I0217 20:26:02.335440 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9pnx5-config-xrqmv" event={"ID":"f5fa7ba4-5c60-4fdd-bcf1-57641733410d","Type":"ContainerDied","Data":"942d386456522ad7e41c6fb5683786d9090a3e85f41b3da169efc8ee8ba30c57"} Feb 17 20:26:02 crc kubenswrapper[4793]: I0217 20:26:02.915137 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="489cc350-87d7-42d2-ba31-b4bbc29b5b80" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.111:9090/-/ready\": dial tcp 10.217.0.111:9090: connect: connection refused" Feb 17 20:26:04 crc kubenswrapper[4793]: I0217 20:26:04.471007 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-9pnx5" Feb 17 20:26:07 crc kubenswrapper[4793]: I0217 20:26:07.914973 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="489cc350-87d7-42d2-ba31-b4bbc29b5b80" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.111:9090/-/ready\": dial tcp 10.217.0.111:9090: connect: connection refused" Feb 17 20:26:08 crc kubenswrapper[4793]: I0217 20:26:08.163629 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-9pnx5-config-xrqmv" Feb 17 20:26:08 crc kubenswrapper[4793]: I0217 20:26:08.353370 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f5fa7ba4-5c60-4fdd-bcf1-57641733410d-scripts\") pod \"f5fa7ba4-5c60-4fdd-bcf1-57641733410d\" (UID: \"f5fa7ba4-5c60-4fdd-bcf1-57641733410d\") " Feb 17 20:26:08 crc kubenswrapper[4793]: I0217 20:26:08.353724 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f5fa7ba4-5c60-4fdd-bcf1-57641733410d-var-log-ovn\") pod \"f5fa7ba4-5c60-4fdd-bcf1-57641733410d\" (UID: \"f5fa7ba4-5c60-4fdd-bcf1-57641733410d\") " Feb 17 20:26:08 crc kubenswrapper[4793]: I0217 20:26:08.353779 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f5fa7ba4-5c60-4fdd-bcf1-57641733410d-var-run\") pod \"f5fa7ba4-5c60-4fdd-bcf1-57641733410d\" (UID: \"f5fa7ba4-5c60-4fdd-bcf1-57641733410d\") " Feb 17 20:26:08 crc kubenswrapper[4793]: I0217 20:26:08.353777 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5fa7ba4-5c60-4fdd-bcf1-57641733410d-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "f5fa7ba4-5c60-4fdd-bcf1-57641733410d" (UID: "f5fa7ba4-5c60-4fdd-bcf1-57641733410d"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 20:26:08 crc kubenswrapper[4793]: I0217 20:26:08.353816 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5fa7ba4-5c60-4fdd-bcf1-57641733410d-var-run" (OuterVolumeSpecName: "var-run") pod "f5fa7ba4-5c60-4fdd-bcf1-57641733410d" (UID: "f5fa7ba4-5c60-4fdd-bcf1-57641733410d"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 20:26:08 crc kubenswrapper[4793]: I0217 20:26:08.353866 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f5fa7ba4-5c60-4fdd-bcf1-57641733410d-additional-scripts\") pod \"f5fa7ba4-5c60-4fdd-bcf1-57641733410d\" (UID: \"f5fa7ba4-5c60-4fdd-bcf1-57641733410d\") " Feb 17 20:26:08 crc kubenswrapper[4793]: I0217 20:26:08.353939 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82kk8\" (UniqueName: \"kubernetes.io/projected/f5fa7ba4-5c60-4fdd-bcf1-57641733410d-kube-api-access-82kk8\") pod \"f5fa7ba4-5c60-4fdd-bcf1-57641733410d\" (UID: \"f5fa7ba4-5c60-4fdd-bcf1-57641733410d\") " Feb 17 20:26:08 crc kubenswrapper[4793]: I0217 20:26:08.354055 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f5fa7ba4-5c60-4fdd-bcf1-57641733410d-var-run-ovn\") pod \"f5fa7ba4-5c60-4fdd-bcf1-57641733410d\" (UID: \"f5fa7ba4-5c60-4fdd-bcf1-57641733410d\") " Feb 17 20:26:08 crc kubenswrapper[4793]: I0217 20:26:08.354463 4793 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f5fa7ba4-5c60-4fdd-bcf1-57641733410d-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:08 crc kubenswrapper[4793]: I0217 20:26:08.354498 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5fa7ba4-5c60-4fdd-bcf1-57641733410d-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "f5fa7ba4-5c60-4fdd-bcf1-57641733410d" (UID: "f5fa7ba4-5c60-4fdd-bcf1-57641733410d"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 20:26:08 crc kubenswrapper[4793]: I0217 20:26:08.354508 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5fa7ba4-5c60-4fdd-bcf1-57641733410d-scripts" (OuterVolumeSpecName: "scripts") pod "f5fa7ba4-5c60-4fdd-bcf1-57641733410d" (UID: "f5fa7ba4-5c60-4fdd-bcf1-57641733410d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:26:08 crc kubenswrapper[4793]: I0217 20:26:08.354976 4793 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f5fa7ba4-5c60-4fdd-bcf1-57641733410d-var-run\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:08 crc kubenswrapper[4793]: I0217 20:26:08.359505 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5fa7ba4-5c60-4fdd-bcf1-57641733410d-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "f5fa7ba4-5c60-4fdd-bcf1-57641733410d" (UID: "f5fa7ba4-5c60-4fdd-bcf1-57641733410d"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:26:08 crc kubenswrapper[4793]: I0217 20:26:08.367578 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5fa7ba4-5c60-4fdd-bcf1-57641733410d-kube-api-access-82kk8" (OuterVolumeSpecName: "kube-api-access-82kk8") pod "f5fa7ba4-5c60-4fdd-bcf1-57641733410d" (UID: "f5fa7ba4-5c60-4fdd-bcf1-57641733410d"). InnerVolumeSpecName "kube-api-access-82kk8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:26:08 crc kubenswrapper[4793]: I0217 20:26:08.389979 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9pnx5-config-xrqmv" event={"ID":"f5fa7ba4-5c60-4fdd-bcf1-57641733410d","Type":"ContainerDied","Data":"34dd2379eee34e8224e3f2413d8b6d0072881f335db19f6d03229406f3fa4b83"} Feb 17 20:26:08 crc kubenswrapper[4793]: I0217 20:26:08.390019 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34dd2379eee34e8224e3f2413d8b6d0072881f335db19f6d03229406f3fa4b83" Feb 17 20:26:08 crc kubenswrapper[4793]: I0217 20:26:08.390081 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9pnx5-config-xrqmv" Feb 17 20:26:08 crc kubenswrapper[4793]: I0217 20:26:08.413815 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 17 20:26:08 crc kubenswrapper[4793]: I0217 20:26:08.451994 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-p8wvm"] Feb 17 20:26:08 crc kubenswrapper[4793]: I0217 20:26:08.456409 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f5fa7ba4-5c60-4fdd-bcf1-57641733410d-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:08 crc kubenswrapper[4793]: I0217 20:26:08.457828 4793 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f5fa7ba4-5c60-4fdd-bcf1-57641733410d-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:08 crc kubenswrapper[4793]: I0217 20:26:08.457848 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82kk8\" (UniqueName: \"kubernetes.io/projected/f5fa7ba4-5c60-4fdd-bcf1-57641733410d-kube-api-access-82kk8\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:08 crc kubenswrapper[4793]: 
I0217 20:26:08.457860 4793 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f5fa7ba4-5c60-4fdd-bcf1-57641733410d-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:08 crc kubenswrapper[4793]: I0217 20:26:08.558814 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/489cc350-87d7-42d2-ba31-b4bbc29b5b80-prometheus-metric-storage-rulefiles-2\") pod \"489cc350-87d7-42d2-ba31-b4bbc29b5b80\" (UID: \"489cc350-87d7-42d2-ba31-b4bbc29b5b80\") " Feb 17 20:26:08 crc kubenswrapper[4793]: I0217 20:26:08.559013 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e06ea14b-3004-4867-b11f-b167457cc525\") pod \"489cc350-87d7-42d2-ba31-b4bbc29b5b80\" (UID: \"489cc350-87d7-42d2-ba31-b4bbc29b5b80\") " Feb 17 20:26:08 crc kubenswrapper[4793]: I0217 20:26:08.559055 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/489cc350-87d7-42d2-ba31-b4bbc29b5b80-web-config\") pod \"489cc350-87d7-42d2-ba31-b4bbc29b5b80\" (UID: \"489cc350-87d7-42d2-ba31-b4bbc29b5b80\") " Feb 17 20:26:08 crc kubenswrapper[4793]: I0217 20:26:08.559080 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/489cc350-87d7-42d2-ba31-b4bbc29b5b80-thanos-prometheus-http-client-file\") pod \"489cc350-87d7-42d2-ba31-b4bbc29b5b80\" (UID: \"489cc350-87d7-42d2-ba31-b4bbc29b5b80\") " Feb 17 20:26:08 crc kubenswrapper[4793]: I0217 20:26:08.559106 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/489cc350-87d7-42d2-ba31-b4bbc29b5b80-tls-assets\") pod 
\"489cc350-87d7-42d2-ba31-b4bbc29b5b80\" (UID: \"489cc350-87d7-42d2-ba31-b4bbc29b5b80\") " Feb 17 20:26:08 crc kubenswrapper[4793]: I0217 20:26:08.559131 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gclb\" (UniqueName: \"kubernetes.io/projected/489cc350-87d7-42d2-ba31-b4bbc29b5b80-kube-api-access-4gclb\") pod \"489cc350-87d7-42d2-ba31-b4bbc29b5b80\" (UID: \"489cc350-87d7-42d2-ba31-b4bbc29b5b80\") " Feb 17 20:26:08 crc kubenswrapper[4793]: I0217 20:26:08.560188 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/489cc350-87d7-42d2-ba31-b4bbc29b5b80-prometheus-metric-storage-rulefiles-0\") pod \"489cc350-87d7-42d2-ba31-b4bbc29b5b80\" (UID: \"489cc350-87d7-42d2-ba31-b4bbc29b5b80\") " Feb 17 20:26:08 crc kubenswrapper[4793]: I0217 20:26:08.560241 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/489cc350-87d7-42d2-ba31-b4bbc29b5b80-config-out\") pod \"489cc350-87d7-42d2-ba31-b4bbc29b5b80\" (UID: \"489cc350-87d7-42d2-ba31-b4bbc29b5b80\") " Feb 17 20:26:08 crc kubenswrapper[4793]: I0217 20:26:08.560289 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/489cc350-87d7-42d2-ba31-b4bbc29b5b80-config\") pod \"489cc350-87d7-42d2-ba31-b4bbc29b5b80\" (UID: \"489cc350-87d7-42d2-ba31-b4bbc29b5b80\") " Feb 17 20:26:08 crc kubenswrapper[4793]: I0217 20:26:08.560319 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/489cc350-87d7-42d2-ba31-b4bbc29b5b80-prometheus-metric-storage-rulefiles-1\") pod \"489cc350-87d7-42d2-ba31-b4bbc29b5b80\" (UID: \"489cc350-87d7-42d2-ba31-b4bbc29b5b80\") " Feb 17 20:26:08 crc kubenswrapper[4793]: I0217 
20:26:08.561769 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/489cc350-87d7-42d2-ba31-b4bbc29b5b80-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "489cc350-87d7-42d2-ba31-b4bbc29b5b80" (UID: "489cc350-87d7-42d2-ba31-b4bbc29b5b80"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:26:08 crc kubenswrapper[4793]: I0217 20:26:08.561817 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/489cc350-87d7-42d2-ba31-b4bbc29b5b80-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "489cc350-87d7-42d2-ba31-b4bbc29b5b80" (UID: "489cc350-87d7-42d2-ba31-b4bbc29b5b80"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:26:08 crc kubenswrapper[4793]: I0217 20:26:08.564218 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/489cc350-87d7-42d2-ba31-b4bbc29b5b80-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "489cc350-87d7-42d2-ba31-b4bbc29b5b80" (UID: "489cc350-87d7-42d2-ba31-b4bbc29b5b80"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:26:08 crc kubenswrapper[4793]: I0217 20:26:08.566575 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/489cc350-87d7-42d2-ba31-b4bbc29b5b80-kube-api-access-4gclb" (OuterVolumeSpecName: "kube-api-access-4gclb") pod "489cc350-87d7-42d2-ba31-b4bbc29b5b80" (UID: "489cc350-87d7-42d2-ba31-b4bbc29b5b80"). InnerVolumeSpecName "kube-api-access-4gclb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:26:08 crc kubenswrapper[4793]: I0217 20:26:08.567913 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/489cc350-87d7-42d2-ba31-b4bbc29b5b80-config" (OuterVolumeSpecName: "config") pod "489cc350-87d7-42d2-ba31-b4bbc29b5b80" (UID: "489cc350-87d7-42d2-ba31-b4bbc29b5b80"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:26:08 crc kubenswrapper[4793]: I0217 20:26:08.567927 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/489cc350-87d7-42d2-ba31-b4bbc29b5b80-config-out" (OuterVolumeSpecName: "config-out") pod "489cc350-87d7-42d2-ba31-b4bbc29b5b80" (UID: "489cc350-87d7-42d2-ba31-b4bbc29b5b80"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:26:08 crc kubenswrapper[4793]: I0217 20:26:08.569637 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/489cc350-87d7-42d2-ba31-b4bbc29b5b80-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "489cc350-87d7-42d2-ba31-b4bbc29b5b80" (UID: "489cc350-87d7-42d2-ba31-b4bbc29b5b80"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:26:08 crc kubenswrapper[4793]: I0217 20:26:08.573526 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/489cc350-87d7-42d2-ba31-b4bbc29b5b80-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "489cc350-87d7-42d2-ba31-b4bbc29b5b80" (UID: "489cc350-87d7-42d2-ba31-b4bbc29b5b80"). InnerVolumeSpecName "thanos-prometheus-http-client-file". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:26:08 crc kubenswrapper[4793]: I0217 20:26:08.586093 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e06ea14b-3004-4867-b11f-b167457cc525" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "489cc350-87d7-42d2-ba31-b4bbc29b5b80" (UID: "489cc350-87d7-42d2-ba31-b4bbc29b5b80"). InnerVolumeSpecName "pvc-e06ea14b-3004-4867-b11f-b167457cc525". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 20:26:08 crc kubenswrapper[4793]: I0217 20:26:08.598194 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/489cc350-87d7-42d2-ba31-b4bbc29b5b80-web-config" (OuterVolumeSpecName: "web-config") pod "489cc350-87d7-42d2-ba31-b4bbc29b5b80" (UID: "489cc350-87d7-42d2-ba31-b4bbc29b5b80"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:26:08 crc kubenswrapper[4793]: I0217 20:26:08.662784 4793 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/489cc350-87d7-42d2-ba31-b4bbc29b5b80-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:08 crc kubenswrapper[4793]: I0217 20:26:08.662848 4793 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-e06ea14b-3004-4867-b11f-b167457cc525\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e06ea14b-3004-4867-b11f-b167457cc525\") on node \"crc\" " Feb 17 20:26:08 crc kubenswrapper[4793]: I0217 20:26:08.663343 4793 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/489cc350-87d7-42d2-ba31-b4bbc29b5b80-web-config\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:08 crc kubenswrapper[4793]: I0217 20:26:08.663372 4793 reconciler_common.go:293] "Volume detached for volume 
\"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/489cc350-87d7-42d2-ba31-b4bbc29b5b80-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:08 crc kubenswrapper[4793]: I0217 20:26:08.663384 4793 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/489cc350-87d7-42d2-ba31-b4bbc29b5b80-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:08 crc kubenswrapper[4793]: I0217 20:26:08.663416 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gclb\" (UniqueName: \"kubernetes.io/projected/489cc350-87d7-42d2-ba31-b4bbc29b5b80-kube-api-access-4gclb\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:08 crc kubenswrapper[4793]: I0217 20:26:08.663429 4793 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/489cc350-87d7-42d2-ba31-b4bbc29b5b80-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:08 crc kubenswrapper[4793]: I0217 20:26:08.663437 4793 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/489cc350-87d7-42d2-ba31-b4bbc29b5b80-config-out\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:08 crc kubenswrapper[4793]: I0217 20:26:08.663447 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/489cc350-87d7-42d2-ba31-b4bbc29b5b80-config\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:08 crc kubenswrapper[4793]: I0217 20:26:08.663457 4793 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/489cc350-87d7-42d2-ba31-b4bbc29b5b80-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:08 crc kubenswrapper[4793]: I0217 20:26:08.683159 4793 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice 
STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 17 20:26:08 crc kubenswrapper[4793]: I0217 20:26:08.683334 4793 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-e06ea14b-3004-4867-b11f-b167457cc525" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e06ea14b-3004-4867-b11f-b167457cc525") on node "crc" Feb 17 20:26:08 crc kubenswrapper[4793]: I0217 20:26:08.764859 4793 reconciler_common.go:293] "Volume detached for volume \"pvc-e06ea14b-3004-4867-b11f-b167457cc525\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e06ea14b-3004-4867-b11f-b167457cc525\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.276797 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-9pnx5-config-xrqmv"] Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.286778 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-9pnx5-config-xrqmv"] Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.398246 4793 generic.go:334] "Generic (PLEG): container finished" podID="a3f15a23-297d-442b-93aa-108559ddfa0d" containerID="7a47c7deeb58787c35d1a82b7c2e5c9e90e800212db7abff66d92911f77b088a" exitCode=0 Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.398352 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-p8wvm" event={"ID":"a3f15a23-297d-442b-93aa-108559ddfa0d","Type":"ContainerDied","Data":"7a47c7deeb58787c35d1a82b7c2e5c9e90e800212db7abff66d92911f77b088a"} Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.398385 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-p8wvm" event={"ID":"a3f15a23-297d-442b-93aa-108559ddfa0d","Type":"ContainerStarted","Data":"adcf16e4aae5d16622b02864019171049b0fcb5aee6fc42d5988863e81c75cc1"} Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.400081 4793 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/glance-db-sync-jsvrv" event={"ID":"63c91f0e-72e7-49ec-b9ee-c68845dd7cf9","Type":"ContainerStarted","Data":"5c1bf1303fb79e4b4261e4bcbde55dcd26ca7da82fa1c7d7eccb55bc6f0acdd0"} Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.403997 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"489cc350-87d7-42d2-ba31-b4bbc29b5b80","Type":"ContainerDied","Data":"5f07d0bf7a2d4cf925f9cf9c5e606370eb7af70c8665fb6f82fa6a1b70382248"} Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.404060 4793 scope.go:117] "RemoveContainer" containerID="4da07f15eb1b352937a670b5d4d61ca288e3ed5f1cf539e10846631125528eb8" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.404088 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.414775 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-9pnx5-config-2vw8k"] Feb 17 20:26:09 crc kubenswrapper[4793]: E0217 20:26:09.415639 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="489cc350-87d7-42d2-ba31-b4bbc29b5b80" containerName="init-config-reloader" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.415667 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="489cc350-87d7-42d2-ba31-b4bbc29b5b80" containerName="init-config-reloader" Feb 17 20:26:09 crc kubenswrapper[4793]: E0217 20:26:09.415723 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="489cc350-87d7-42d2-ba31-b4bbc29b5b80" containerName="thanos-sidecar" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.415734 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="489cc350-87d7-42d2-ba31-b4bbc29b5b80" containerName="thanos-sidecar" Feb 17 20:26:09 crc kubenswrapper[4793]: E0217 20:26:09.415754 4793 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f5fa7ba4-5c60-4fdd-bcf1-57641733410d" containerName="ovn-config" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.415765 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5fa7ba4-5c60-4fdd-bcf1-57641733410d" containerName="ovn-config" Feb 17 20:26:09 crc kubenswrapper[4793]: E0217 20:26:09.415785 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="489cc350-87d7-42d2-ba31-b4bbc29b5b80" containerName="config-reloader" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.415796 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="489cc350-87d7-42d2-ba31-b4bbc29b5b80" containerName="config-reloader" Feb 17 20:26:09 crc kubenswrapper[4793]: E0217 20:26:09.415809 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="489cc350-87d7-42d2-ba31-b4bbc29b5b80" containerName="prometheus" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.415816 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="489cc350-87d7-42d2-ba31-b4bbc29b5b80" containerName="prometheus" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.416048 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="489cc350-87d7-42d2-ba31-b4bbc29b5b80" containerName="thanos-sidecar" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.416081 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="489cc350-87d7-42d2-ba31-b4bbc29b5b80" containerName="config-reloader" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.416122 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="489cc350-87d7-42d2-ba31-b4bbc29b5b80" containerName="prometheus" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.416133 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5fa7ba4-5c60-4fdd-bcf1-57641733410d" containerName="ovn-config" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.417278 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-9pnx5-config-2vw8k" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.422032 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.428613 4793 scope.go:117] "RemoveContainer" containerID="dd47aea6262d47e964c05121ff5cb519a9f88297bbcd330c67374c6c404f8f4a" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.442280 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9pnx5-config-2vw8k"] Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.479662 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-jsvrv" podStartSLOduration=2.982481857 podStartE2EDuration="15.479645951s" podCreationTimestamp="2026-02-17 20:25:54 +0000 UTC" firstStartedPulling="2026-02-17 20:25:55.671111091 +0000 UTC m=+1030.962809402" lastFinishedPulling="2026-02-17 20:26:08.168275185 +0000 UTC m=+1043.459973496" observedRunningTime="2026-02-17 20:26:09.473313803 +0000 UTC m=+1044.765012114" watchObservedRunningTime="2026-02-17 20:26:09.479645951 +0000 UTC m=+1044.771344272" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.484299 4793 scope.go:117] "RemoveContainer" containerID="08ff0128703ca9b6446666f3b9419eda34b25209e7662a1d489bf3053f971e40" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.533793 4793 scope.go:117] "RemoveContainer" containerID="dbb094fbfdb518973e6d7d1f439a56874ad55a7c23ef160ff15219bf0d004333" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.535458 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.560281 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5fa7ba4-5c60-4fdd-bcf1-57641733410d" path="/var/lib/kubelet/pods/f5fa7ba4-5c60-4fdd-bcf1-57641733410d/volumes" Feb 17 
20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.560922 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.560951 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.563170 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.563305 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.571535 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.577792 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/89cb3ea8-62d4-4d08-954b-748164e318a2-var-run\") pod \"ovn-controller-9pnx5-config-2vw8k\" (UID: \"89cb3ea8-62d4-4d08-954b-748164e318a2\") " pod="openstack/ovn-controller-9pnx5-config-2vw8k" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.577841 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5ht5\" (UniqueName: \"kubernetes.io/projected/89cb3ea8-62d4-4d08-954b-748164e318a2-kube-api-access-v5ht5\") pod \"ovn-controller-9pnx5-config-2vw8k\" (UID: \"89cb3ea8-62d4-4d08-954b-748164e318a2\") " pod="openstack/ovn-controller-9pnx5-config-2vw8k" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.577864 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/89cb3ea8-62d4-4d08-954b-748164e318a2-additional-scripts\") pod 
\"ovn-controller-9pnx5-config-2vw8k\" (UID: \"89cb3ea8-62d4-4d08-954b-748164e318a2\") " pod="openstack/ovn-controller-9pnx5-config-2vw8k" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.577896 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/89cb3ea8-62d4-4d08-954b-748164e318a2-scripts\") pod \"ovn-controller-9pnx5-config-2vw8k\" (UID: \"89cb3ea8-62d4-4d08-954b-748164e318a2\") " pod="openstack/ovn-controller-9pnx5-config-2vw8k" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.577942 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/89cb3ea8-62d4-4d08-954b-748164e318a2-var-run-ovn\") pod \"ovn-controller-9pnx5-config-2vw8k\" (UID: \"89cb3ea8-62d4-4d08-954b-748164e318a2\") " pod="openstack/ovn-controller-9pnx5-config-2vw8k" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.577991 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/89cb3ea8-62d4-4d08-954b-748164e318a2-var-log-ovn\") pod \"ovn-controller-9pnx5-config-2vw8k\" (UID: \"89cb3ea8-62d4-4d08-954b-748164e318a2\") " pod="openstack/ovn-controller-9pnx5-config-2vw8k" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.582118 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.582192 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.582246 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.582269 4793 reflector.go:368] Caches 
populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.582126 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-8pbn7" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.582547 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.582609 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.584912 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.680256 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f41a37ae-4155-4b06-ad0b-46cfe53de634-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"f41a37ae-4155-4b06-ad0b-46cfe53de634\") " pod="openstack/prometheus-metric-storage-0" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.680345 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/89cb3ea8-62d4-4d08-954b-748164e318a2-var-run\") pod \"ovn-controller-9pnx5-config-2vw8k\" (UID: \"89cb3ea8-62d4-4d08-954b-748164e318a2\") " pod="openstack/ovn-controller-9pnx5-config-2vw8k" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.680380 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b1695ca3-290a-44c5-8771-146029a6054a-etc-swift\") pod \"swift-storage-0\" (UID: \"b1695ca3-290a-44c5-8771-146029a6054a\") " pod="openstack/swift-storage-0" Feb 17 20:26:09 crc 
kubenswrapper[4793]: I0217 20:26:09.680415 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/f41a37ae-4155-4b06-ad0b-46cfe53de634-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"f41a37ae-4155-4b06-ad0b-46cfe53de634\") " pod="openstack/prometheus-metric-storage-0" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.680441 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5ht5\" (UniqueName: \"kubernetes.io/projected/89cb3ea8-62d4-4d08-954b-748164e318a2-kube-api-access-v5ht5\") pod \"ovn-controller-9pnx5-config-2vw8k\" (UID: \"89cb3ea8-62d4-4d08-954b-748164e318a2\") " pod="openstack/ovn-controller-9pnx5-config-2vw8k" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.680467 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/89cb3ea8-62d4-4d08-954b-748164e318a2-additional-scripts\") pod \"ovn-controller-9pnx5-config-2vw8k\" (UID: \"89cb3ea8-62d4-4d08-954b-748164e318a2\") " pod="openstack/ovn-controller-9pnx5-config-2vw8k" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.680557 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f41a37ae-4155-4b06-ad0b-46cfe53de634-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"f41a37ae-4155-4b06-ad0b-46cfe53de634\") " pod="openstack/prometheus-metric-storage-0" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.680637 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/89cb3ea8-62d4-4d08-954b-748164e318a2-var-run\") pod 
\"ovn-controller-9pnx5-config-2vw8k\" (UID: \"89cb3ea8-62d4-4d08-954b-748164e318a2\") " pod="openstack/ovn-controller-9pnx5-config-2vw8k" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.680647 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/89cb3ea8-62d4-4d08-954b-748164e318a2-scripts\") pod \"ovn-controller-9pnx5-config-2vw8k\" (UID: \"89cb3ea8-62d4-4d08-954b-748164e318a2\") " pod="openstack/ovn-controller-9pnx5-config-2vw8k" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.680781 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f41a37ae-4155-4b06-ad0b-46cfe53de634-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"f41a37ae-4155-4b06-ad0b-46cfe53de634\") " pod="openstack/prometheus-metric-storage-0" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.680821 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f41a37ae-4155-4b06-ad0b-46cfe53de634-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"f41a37ae-4155-4b06-ad0b-46cfe53de634\") " pod="openstack/prometheus-metric-storage-0" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.680856 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f41a37ae-4155-4b06-ad0b-46cfe53de634-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"f41a37ae-4155-4b06-ad0b-46cfe53de634\") " pod="openstack/prometheus-metric-storage-0" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.680986 4793 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f41a37ae-4155-4b06-ad0b-46cfe53de634-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"f41a37ae-4155-4b06-ad0b-46cfe53de634\") " pod="openstack/prometheus-metric-storage-0" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.681038 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e06ea14b-3004-4867-b11f-b167457cc525\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e06ea14b-3004-4867-b11f-b167457cc525\") pod \"prometheus-metric-storage-0\" (UID: \"f41a37ae-4155-4b06-ad0b-46cfe53de634\") " pod="openstack/prometheus-metric-storage-0" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.681101 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/89cb3ea8-62d4-4d08-954b-748164e318a2-var-run-ovn\") pod \"ovn-controller-9pnx5-config-2vw8k\" (UID: \"89cb3ea8-62d4-4d08-954b-748164e318a2\") " pod="openstack/ovn-controller-9pnx5-config-2vw8k" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.681133 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/89cb3ea8-62d4-4d08-954b-748164e318a2-var-log-ovn\") pod \"ovn-controller-9pnx5-config-2vw8k\" (UID: \"89cb3ea8-62d4-4d08-954b-748164e318a2\") " pod="openstack/ovn-controller-9pnx5-config-2vw8k" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.681164 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hs6m\" (UniqueName: \"kubernetes.io/projected/f41a37ae-4155-4b06-ad0b-46cfe53de634-kube-api-access-5hs6m\") pod \"prometheus-metric-storage-0\" (UID: \"f41a37ae-4155-4b06-ad0b-46cfe53de634\") " 
pod="openstack/prometheus-metric-storage-0" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.681224 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/89cb3ea8-62d4-4d08-954b-748164e318a2-additional-scripts\") pod \"ovn-controller-9pnx5-config-2vw8k\" (UID: \"89cb3ea8-62d4-4d08-954b-748164e318a2\") " pod="openstack/ovn-controller-9pnx5-config-2vw8k" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.681225 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/89cb3ea8-62d4-4d08-954b-748164e318a2-var-run-ovn\") pod \"ovn-controller-9pnx5-config-2vw8k\" (UID: \"89cb3ea8-62d4-4d08-954b-748164e318a2\") " pod="openstack/ovn-controller-9pnx5-config-2vw8k" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.681261 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/89cb3ea8-62d4-4d08-954b-748164e318a2-var-log-ovn\") pod \"ovn-controller-9pnx5-config-2vw8k\" (UID: \"89cb3ea8-62d4-4d08-954b-748164e318a2\") " pod="openstack/ovn-controller-9pnx5-config-2vw8k" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.681262 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f41a37ae-4155-4b06-ad0b-46cfe53de634-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"f41a37ae-4155-4b06-ad0b-46cfe53de634\") " pod="openstack/prometheus-metric-storage-0" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.681328 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f41a37ae-4155-4b06-ad0b-46cfe53de634-config\") pod \"prometheus-metric-storage-0\" (UID: \"f41a37ae-4155-4b06-ad0b-46cfe53de634\") " pod="openstack/prometheus-metric-storage-0" 
Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.681380 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f41a37ae-4155-4b06-ad0b-46cfe53de634-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"f41a37ae-4155-4b06-ad0b-46cfe53de634\") " pod="openstack/prometheus-metric-storage-0" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.681407 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/f41a37ae-4155-4b06-ad0b-46cfe53de634-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"f41a37ae-4155-4b06-ad0b-46cfe53de634\") " pod="openstack/prometheus-metric-storage-0" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.685499 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/89cb3ea8-62d4-4d08-954b-748164e318a2-scripts\") pod \"ovn-controller-9pnx5-config-2vw8k\" (UID: \"89cb3ea8-62d4-4d08-954b-748164e318a2\") " pod="openstack/ovn-controller-9pnx5-config-2vw8k" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.697146 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5ht5\" (UniqueName: \"kubernetes.io/projected/89cb3ea8-62d4-4d08-954b-748164e318a2-kube-api-access-v5ht5\") pod \"ovn-controller-9pnx5-config-2vw8k\" (UID: \"89cb3ea8-62d4-4d08-954b-748164e318a2\") " pod="openstack/ovn-controller-9pnx5-config-2vw8k" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.697676 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b1695ca3-290a-44c5-8771-146029a6054a-etc-swift\") pod \"swift-storage-0\" (UID: \"b1695ca3-290a-44c5-8771-146029a6054a\") " 
pod="openstack/swift-storage-0" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.745136 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9pnx5-config-2vw8k" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.782530 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f41a37ae-4155-4b06-ad0b-46cfe53de634-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"f41a37ae-4155-4b06-ad0b-46cfe53de634\") " pod="openstack/prometheus-metric-storage-0" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.782589 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/f41a37ae-4155-4b06-ad0b-46cfe53de634-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"f41a37ae-4155-4b06-ad0b-46cfe53de634\") " pod="openstack/prometheus-metric-storage-0" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.782629 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f41a37ae-4155-4b06-ad0b-46cfe53de634-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"f41a37ae-4155-4b06-ad0b-46cfe53de634\") " pod="openstack/prometheus-metric-storage-0" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.782662 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f41a37ae-4155-4b06-ad0b-46cfe53de634-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"f41a37ae-4155-4b06-ad0b-46cfe53de634\") " pod="openstack/prometheus-metric-storage-0" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 
20:26:09.782680 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f41a37ae-4155-4b06-ad0b-46cfe53de634-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"f41a37ae-4155-4b06-ad0b-46cfe53de634\") " pod="openstack/prometheus-metric-storage-0" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.782728 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f41a37ae-4155-4b06-ad0b-46cfe53de634-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"f41a37ae-4155-4b06-ad0b-46cfe53de634\") " pod="openstack/prometheus-metric-storage-0" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.782749 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f41a37ae-4155-4b06-ad0b-46cfe53de634-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"f41a37ae-4155-4b06-ad0b-46cfe53de634\") " pod="openstack/prometheus-metric-storage-0" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.782768 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e06ea14b-3004-4867-b11f-b167457cc525\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e06ea14b-3004-4867-b11f-b167457cc525\") pod \"prometheus-metric-storage-0\" (UID: \"f41a37ae-4155-4b06-ad0b-46cfe53de634\") " pod="openstack/prometheus-metric-storage-0" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.782804 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hs6m\" (UniqueName: \"kubernetes.io/projected/f41a37ae-4155-4b06-ad0b-46cfe53de634-kube-api-access-5hs6m\") pod \"prometheus-metric-storage-0\" (UID: 
\"f41a37ae-4155-4b06-ad0b-46cfe53de634\") " pod="openstack/prometheus-metric-storage-0" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.782841 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f41a37ae-4155-4b06-ad0b-46cfe53de634-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"f41a37ae-4155-4b06-ad0b-46cfe53de634\") " pod="openstack/prometheus-metric-storage-0" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.782856 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f41a37ae-4155-4b06-ad0b-46cfe53de634-config\") pod \"prometheus-metric-storage-0\" (UID: \"f41a37ae-4155-4b06-ad0b-46cfe53de634\") " pod="openstack/prometheus-metric-storage-0" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.782875 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f41a37ae-4155-4b06-ad0b-46cfe53de634-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"f41a37ae-4155-4b06-ad0b-46cfe53de634\") " pod="openstack/prometheus-metric-storage-0" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.782891 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/f41a37ae-4155-4b06-ad0b-46cfe53de634-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"f41a37ae-4155-4b06-ad0b-46cfe53de634\") " pod="openstack/prometheus-metric-storage-0" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.783420 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/f41a37ae-4155-4b06-ad0b-46cfe53de634-prometheus-metric-storage-rulefiles-1\") pod 
\"prometheus-metric-storage-0\" (UID: \"f41a37ae-4155-4b06-ad0b-46cfe53de634\") " pod="openstack/prometheus-metric-storage-0" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.784350 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f41a37ae-4155-4b06-ad0b-46cfe53de634-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"f41a37ae-4155-4b06-ad0b-46cfe53de634\") " pod="openstack/prometheus-metric-storage-0" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.785138 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/f41a37ae-4155-4b06-ad0b-46cfe53de634-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"f41a37ae-4155-4b06-ad0b-46cfe53de634\") " pod="openstack/prometheus-metric-storage-0" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.792769 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f41a37ae-4155-4b06-ad0b-46cfe53de634-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"f41a37ae-4155-4b06-ad0b-46cfe53de634\") " pod="openstack/prometheus-metric-storage-0" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.798887 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f41a37ae-4155-4b06-ad0b-46cfe53de634-config\") pod \"prometheus-metric-storage-0\" (UID: \"f41a37ae-4155-4b06-ad0b-46cfe53de634\") " pod="openstack/prometheus-metric-storage-0" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.799449 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f41a37ae-4155-4b06-ad0b-46cfe53de634-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"f41a37ae-4155-4b06-ad0b-46cfe53de634\") " pod="openstack/prometheus-metric-storage-0" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.799572 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f41a37ae-4155-4b06-ad0b-46cfe53de634-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"f41a37ae-4155-4b06-ad0b-46cfe53de634\") " pod="openstack/prometheus-metric-storage-0" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.799849 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f41a37ae-4155-4b06-ad0b-46cfe53de634-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"f41a37ae-4155-4b06-ad0b-46cfe53de634\") " pod="openstack/prometheus-metric-storage-0" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.800115 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f41a37ae-4155-4b06-ad0b-46cfe53de634-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"f41a37ae-4155-4b06-ad0b-46cfe53de634\") " pod="openstack/prometheus-metric-storage-0" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.801131 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f41a37ae-4155-4b06-ad0b-46cfe53de634-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"f41a37ae-4155-4b06-ad0b-46cfe53de634\") " pod="openstack/prometheus-metric-storage-0" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.801164 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f41a37ae-4155-4b06-ad0b-46cfe53de634-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"f41a37ae-4155-4b06-ad0b-46cfe53de634\") " pod="openstack/prometheus-metric-storage-0" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.802471 4793 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.802511 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e06ea14b-3004-4867-b11f-b167457cc525\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e06ea14b-3004-4867-b11f-b167457cc525\") pod \"prometheus-metric-storage-0\" (UID: \"f41a37ae-4155-4b06-ad0b-46cfe53de634\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3174b7bcfd494c95de787fa7079d37ded4941cf895f579caff106d0384cba7de/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.806672 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hs6m\" (UniqueName: \"kubernetes.io/projected/f41a37ae-4155-4b06-ad0b-46cfe53de634-kube-api-access-5hs6m\") pod \"prometheus-metric-storage-0\" (UID: \"f41a37ae-4155-4b06-ad0b-46cfe53de634\") " pod="openstack/prometheus-metric-storage-0" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.837301 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e06ea14b-3004-4867-b11f-b167457cc525\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e06ea14b-3004-4867-b11f-b167457cc525\") pod \"prometheus-metric-storage-0\" (UID: \"f41a37ae-4155-4b06-ad0b-46cfe53de634\") " pod="openstack/prometheus-metric-storage-0" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.903251 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 17 20:26:09 crc kubenswrapper[4793]: I0217 20:26:09.904004 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 17 20:26:10 crc kubenswrapper[4793]: I0217 20:26:10.214855 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9pnx5-config-2vw8k"] Feb 17 20:26:10 crc kubenswrapper[4793]: W0217 20:26:10.258044 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89cb3ea8_62d4_4d08_954b_748164e318a2.slice/crio-01e1e0800404d8151162f5fe72995fc65bafecfe45691f447b2c3f7da97fca96 WatchSource:0}: Error finding container 01e1e0800404d8151162f5fe72995fc65bafecfe45691f447b2c3f7da97fca96: Status 404 returned error can't find the container with id 01e1e0800404d8151162f5fe72995fc65bafecfe45691f447b2c3f7da97fca96 Feb 17 20:26:10 crc kubenswrapper[4793]: I0217 20:26:10.406406 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 20:26:10 crc kubenswrapper[4793]: I0217 20:26:10.416275 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9pnx5-config-2vw8k" event={"ID":"89cb3ea8-62d4-4d08-954b-748164e318a2","Type":"ContainerStarted","Data":"01e1e0800404d8151162f5fe72995fc65bafecfe45691f447b2c3f7da97fca96"} Feb 17 20:26:10 crc kubenswrapper[4793]: I0217 20:26:10.636543 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="74e2a040-552e-4736-986f-2abac7315e6a" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.104:5671: connect: connection refused" Feb 17 20:26:10 crc kubenswrapper[4793]: I0217 20:26:10.666084 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-p8wvm" Feb 17 20:26:10 crc kubenswrapper[4793]: I0217 20:26:10.786999 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 17 20:26:10 crc kubenswrapper[4793]: W0217 20:26:10.794613 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1695ca3_290a_44c5_8771_146029a6054a.slice/crio-be39d597bc1fb1c54181b2de0b56a07a4fb63657e0c2f964a95a10ff1d849b0a WatchSource:0}: Error finding container be39d597bc1fb1c54181b2de0b56a07a4fb63657e0c2f964a95a10ff1d849b0a: Status 404 returned error can't find the container with id be39d597bc1fb1c54181b2de0b56a07a4fb63657e0c2f964a95a10ff1d849b0a Feb 17 20:26:10 crc kubenswrapper[4793]: I0217 20:26:10.801797 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kh8fm\" (UniqueName: \"kubernetes.io/projected/a3f15a23-297d-442b-93aa-108559ddfa0d-kube-api-access-kh8fm\") pod \"a3f15a23-297d-442b-93aa-108559ddfa0d\" (UID: \"a3f15a23-297d-442b-93aa-108559ddfa0d\") " Feb 17 20:26:10 crc kubenswrapper[4793]: I0217 20:26:10.802770 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3f15a23-297d-442b-93aa-108559ddfa0d-operator-scripts\") pod \"a3f15a23-297d-442b-93aa-108559ddfa0d\" (UID: \"a3f15a23-297d-442b-93aa-108559ddfa0d\") " Feb 17 20:26:10 crc kubenswrapper[4793]: I0217 20:26:10.803302 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3f15a23-297d-442b-93aa-108559ddfa0d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a3f15a23-297d-442b-93aa-108559ddfa0d" (UID: "a3f15a23-297d-442b-93aa-108559ddfa0d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:26:10 crc kubenswrapper[4793]: I0217 20:26:10.803568 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3f15a23-297d-442b-93aa-108559ddfa0d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:10 crc kubenswrapper[4793]: I0217 20:26:10.806803 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3f15a23-297d-442b-93aa-108559ddfa0d-kube-api-access-kh8fm" (OuterVolumeSpecName: "kube-api-access-kh8fm") pod "a3f15a23-297d-442b-93aa-108559ddfa0d" (UID: "a3f15a23-297d-442b-93aa-108559ddfa0d"). InnerVolumeSpecName "kube-api-access-kh8fm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:26:10 crc kubenswrapper[4793]: I0217 20:26:10.905001 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kh8fm\" (UniqueName: \"kubernetes.io/projected/a3f15a23-297d-442b-93aa-108559ddfa0d-kube-api-access-kh8fm\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:10 crc kubenswrapper[4793]: I0217 20:26:10.917869 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 17 20:26:11 crc kubenswrapper[4793]: I0217 20:26:11.441279 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-p8wvm" event={"ID":"a3f15a23-297d-442b-93aa-108559ddfa0d","Type":"ContainerDied","Data":"adcf16e4aae5d16622b02864019171049b0fcb5aee6fc42d5988863e81c75cc1"} Feb 17 20:26:11 crc kubenswrapper[4793]: I0217 20:26:11.441684 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adcf16e4aae5d16622b02864019171049b0fcb5aee6fc42d5988863e81c75cc1" Feb 17 20:26:11 crc kubenswrapper[4793]: I0217 20:26:11.441756 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-p8wvm" Feb 17 20:26:11 crc kubenswrapper[4793]: I0217 20:26:11.442794 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-notifications-server-0" Feb 17 20:26:11 crc kubenswrapper[4793]: I0217 20:26:11.458915 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f41a37ae-4155-4b06-ad0b-46cfe53de634","Type":"ContainerStarted","Data":"aedbc5bc83a6abb4fdcc78c65427fca2e61ce0b6eb24c0eb46ca3a70e60e5ec4"} Feb 17 20:26:11 crc kubenswrapper[4793]: I0217 20:26:11.464248 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b1695ca3-290a-44c5-8771-146029a6054a","Type":"ContainerStarted","Data":"be39d597bc1fb1c54181b2de0b56a07a4fb63657e0c2f964a95a10ff1d849b0a"} Feb 17 20:26:11 crc kubenswrapper[4793]: I0217 20:26:11.505174 4793 generic.go:334] "Generic (PLEG): container finished" podID="89cb3ea8-62d4-4d08-954b-748164e318a2" containerID="b88d0511f638204c93f87359a8ec83550d26331baa57ad7f6b3092e8d487d2b3" exitCode=0 Feb 17 20:26:11 crc kubenswrapper[4793]: I0217 20:26:11.505244 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9pnx5-config-2vw8k" event={"ID":"89cb3ea8-62d4-4d08-954b-748164e318a2","Type":"ContainerDied","Data":"b88d0511f638204c93f87359a8ec83550d26331baa57ad7f6b3092e8d487d2b3"} Feb 17 20:26:11 crc kubenswrapper[4793]: I0217 20:26:11.569604 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="489cc350-87d7-42d2-ba31-b4bbc29b5b80" path="/var/lib/kubelet/pods/489cc350-87d7-42d2-ba31-b4bbc29b5b80/volumes" Feb 17 20:26:12 crc kubenswrapper[4793]: I0217 20:26:12.515358 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b1695ca3-290a-44c5-8771-146029a6054a","Type":"ContainerStarted","Data":"6e20cf267f38646ef322d91fe4583a87fcff7c71c87ce96ac5d6dbb090101734"} 
Feb 17 20:26:12 crc kubenswrapper[4793]: I0217 20:26:12.517000 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b1695ca3-290a-44c5-8771-146029a6054a","Type":"ContainerStarted","Data":"e61ab6ede42925d3f04e86e87d18fbb6f6200ee5598f8a7068e1b3d22428af1d"} Feb 17 20:26:13 crc kubenswrapper[4793]: I0217 20:26:13.006597 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9pnx5-config-2vw8k" Feb 17 20:26:13 crc kubenswrapper[4793]: I0217 20:26:13.046276 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/89cb3ea8-62d4-4d08-954b-748164e318a2-var-run\") pod \"89cb3ea8-62d4-4d08-954b-748164e318a2\" (UID: \"89cb3ea8-62d4-4d08-954b-748164e318a2\") " Feb 17 20:26:13 crc kubenswrapper[4793]: I0217 20:26:13.046353 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/89cb3ea8-62d4-4d08-954b-748164e318a2-scripts\") pod \"89cb3ea8-62d4-4d08-954b-748164e318a2\" (UID: \"89cb3ea8-62d4-4d08-954b-748164e318a2\") " Feb 17 20:26:13 crc kubenswrapper[4793]: I0217 20:26:13.046460 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/89cb3ea8-62d4-4d08-954b-748164e318a2-var-log-ovn\") pod \"89cb3ea8-62d4-4d08-954b-748164e318a2\" (UID: \"89cb3ea8-62d4-4d08-954b-748164e318a2\") " Feb 17 20:26:13 crc kubenswrapper[4793]: I0217 20:26:13.046548 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89cb3ea8-62d4-4d08-954b-748164e318a2-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "89cb3ea8-62d4-4d08-954b-748164e318a2" (UID: "89cb3ea8-62d4-4d08-954b-748164e318a2"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 20:26:13 crc kubenswrapper[4793]: I0217 20:26:13.046600 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5ht5\" (UniqueName: \"kubernetes.io/projected/89cb3ea8-62d4-4d08-954b-748164e318a2-kube-api-access-v5ht5\") pod \"89cb3ea8-62d4-4d08-954b-748164e318a2\" (UID: \"89cb3ea8-62d4-4d08-954b-748164e318a2\") " Feb 17 20:26:13 crc kubenswrapper[4793]: I0217 20:26:13.046636 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/89cb3ea8-62d4-4d08-954b-748164e318a2-additional-scripts\") pod \"89cb3ea8-62d4-4d08-954b-748164e318a2\" (UID: \"89cb3ea8-62d4-4d08-954b-748164e318a2\") " Feb 17 20:26:13 crc kubenswrapper[4793]: I0217 20:26:13.047491 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89cb3ea8-62d4-4d08-954b-748164e318a2-scripts" (OuterVolumeSpecName: "scripts") pod "89cb3ea8-62d4-4d08-954b-748164e318a2" (UID: "89cb3ea8-62d4-4d08-954b-748164e318a2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:26:13 crc kubenswrapper[4793]: I0217 20:26:13.047655 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89cb3ea8-62d4-4d08-954b-748164e318a2-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "89cb3ea8-62d4-4d08-954b-748164e318a2" (UID: "89cb3ea8-62d4-4d08-954b-748164e318a2"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 20:26:13 crc kubenswrapper[4793]: I0217 20:26:13.047525 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89cb3ea8-62d4-4d08-954b-748164e318a2-var-run" (OuterVolumeSpecName: "var-run") pod "89cb3ea8-62d4-4d08-954b-748164e318a2" (UID: "89cb3ea8-62d4-4d08-954b-748164e318a2"). 
InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 20:26:13 crc kubenswrapper[4793]: I0217 20:26:13.047631 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/89cb3ea8-62d4-4d08-954b-748164e318a2-var-run-ovn\") pod \"89cb3ea8-62d4-4d08-954b-748164e318a2\" (UID: \"89cb3ea8-62d4-4d08-954b-748164e318a2\") " Feb 17 20:26:13 crc kubenswrapper[4793]: I0217 20:26:13.048142 4793 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/89cb3ea8-62d4-4d08-954b-748164e318a2-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:13 crc kubenswrapper[4793]: I0217 20:26:13.048155 4793 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/89cb3ea8-62d4-4d08-954b-748164e318a2-var-run\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:13 crc kubenswrapper[4793]: I0217 20:26:13.048164 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/89cb3ea8-62d4-4d08-954b-748164e318a2-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:13 crc kubenswrapper[4793]: I0217 20:26:13.048173 4793 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/89cb3ea8-62d4-4d08-954b-748164e318a2-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:13 crc kubenswrapper[4793]: I0217 20:26:13.055357 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89cb3ea8-62d4-4d08-954b-748164e318a2-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "89cb3ea8-62d4-4d08-954b-748164e318a2" (UID: "89cb3ea8-62d4-4d08-954b-748164e318a2"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:26:13 crc kubenswrapper[4793]: I0217 20:26:13.055554 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89cb3ea8-62d4-4d08-954b-748164e318a2-kube-api-access-v5ht5" (OuterVolumeSpecName: "kube-api-access-v5ht5") pod "89cb3ea8-62d4-4d08-954b-748164e318a2" (UID: "89cb3ea8-62d4-4d08-954b-748164e318a2"). InnerVolumeSpecName "kube-api-access-v5ht5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:26:13 crc kubenswrapper[4793]: I0217 20:26:13.150662 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5ht5\" (UniqueName: \"kubernetes.io/projected/89cb3ea8-62d4-4d08-954b-748164e318a2-kube-api-access-v5ht5\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:13 crc kubenswrapper[4793]: I0217 20:26:13.150715 4793 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/89cb3ea8-62d4-4d08-954b-748164e318a2-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:13 crc kubenswrapper[4793]: I0217 20:26:13.528160 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9pnx5-config-2vw8k" event={"ID":"89cb3ea8-62d4-4d08-954b-748164e318a2","Type":"ContainerDied","Data":"01e1e0800404d8151162f5fe72995fc65bafecfe45691f447b2c3f7da97fca96"} Feb 17 20:26:13 crc kubenswrapper[4793]: I0217 20:26:13.528196 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01e1e0800404d8151162f5fe72995fc65bafecfe45691f447b2c3f7da97fca96" Feb 17 20:26:13 crc kubenswrapper[4793]: I0217 20:26:13.528198 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-9pnx5-config-2vw8k" Feb 17 20:26:13 crc kubenswrapper[4793]: I0217 20:26:13.533407 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f41a37ae-4155-4b06-ad0b-46cfe53de634","Type":"ContainerStarted","Data":"1a3bfe788a89854a7a9ae225b19e70a66cdb4ed467d895eb9dc3143345c97b03"} Feb 17 20:26:13 crc kubenswrapper[4793]: I0217 20:26:13.548537 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b1695ca3-290a-44c5-8771-146029a6054a","Type":"ContainerStarted","Data":"42c5afc27093025a4fdbce99f29c0028a9be1a1063deff072fc354139adf6c83"} Feb 17 20:26:13 crc kubenswrapper[4793]: I0217 20:26:13.548593 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b1695ca3-290a-44c5-8771-146029a6054a","Type":"ContainerStarted","Data":"513c3db13c7aba28f4a4a4e3d2c099b37296d6908bbf355a7240976785fc7b20"} Feb 17 20:26:14 crc kubenswrapper[4793]: I0217 20:26:14.091284 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-9pnx5-config-2vw8k"] Feb 17 20:26:14 crc kubenswrapper[4793]: I0217 20:26:14.100870 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-9pnx5-config-2vw8k"] Feb 17 20:26:14 crc kubenswrapper[4793]: I0217 20:26:14.551066 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b1695ca3-290a-44c5-8771-146029a6054a","Type":"ContainerStarted","Data":"40a9a50d7157856663eaf119a7e245c53b2252eaf75e107cb815042e613ea2b6"} Feb 17 20:26:14 crc kubenswrapper[4793]: I0217 20:26:14.551393 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b1695ca3-290a-44c5-8771-146029a6054a","Type":"ContainerStarted","Data":"8ecc769b5debb88fd64b0122ecf65eaa1e76f98e21efb1fb36c9001f79dfe92d"} Feb 17 20:26:14 crc kubenswrapper[4793]: I0217 20:26:14.551405 
4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b1695ca3-290a-44c5-8771-146029a6054a","Type":"ContainerStarted","Data":"40d7e6c8be1508192c8ad5caf02d39bcf04ce17b96a6742d0a9e724399326627"} Feb 17 20:26:15 crc kubenswrapper[4793]: I0217 20:26:15.553085 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89cb3ea8-62d4-4d08-954b-748164e318a2" path="/var/lib/kubelet/pods/89cb3ea8-62d4-4d08-954b-748164e318a2/volumes" Feb 17 20:26:15 crc kubenswrapper[4793]: I0217 20:26:15.580425 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b1695ca3-290a-44c5-8771-146029a6054a","Type":"ContainerStarted","Data":"e7c19f8b7f738d2dd6268115976031efff0099a801ee0b889fc37994705422fb"} Feb 17 20:26:15 crc kubenswrapper[4793]: I0217 20:26:15.580476 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b1695ca3-290a-44c5-8771-146029a6054a","Type":"ContainerStarted","Data":"b1af898a67f7c5d918367bcfd01a947331f8d3fd3d09250e12b3690c140319d6"} Feb 17 20:26:16 crc kubenswrapper[4793]: I0217 20:26:16.592376 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b1695ca3-290a-44c5-8771-146029a6054a","Type":"ContainerStarted","Data":"0f2b59151a3c3e3d0c0cd66875081a788651abe59665a431bf887a736268c501"} Feb 17 20:26:16 crc kubenswrapper[4793]: I0217 20:26:16.592726 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b1695ca3-290a-44c5-8771-146029a6054a","Type":"ContainerStarted","Data":"5cca87deb5225248e7119055f3019fd7a314a1a870b127ead3f5fe67954757a9"} Feb 17 20:26:16 crc kubenswrapper[4793]: I0217 20:26:16.592743 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b1695ca3-290a-44c5-8771-146029a6054a","Type":"ContainerStarted","Data":"3857823f931b351b4091a61d6310e1527797ce84cd10d4653e6d61fe83391baa"} 
Feb 17 20:26:16 crc kubenswrapper[4793]: I0217 20:26:16.592753 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b1695ca3-290a-44c5-8771-146029a6054a","Type":"ContainerStarted","Data":"2ca25ced14d0327c4de2d238d9fe97b39d429cb48518ba1ee290a6cbc6183ffd"} Feb 17 20:26:16 crc kubenswrapper[4793]: I0217 20:26:16.592763 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b1695ca3-290a-44c5-8771-146029a6054a","Type":"ContainerStarted","Data":"4ed94fa42744b9a6ebe65bc67b84d77b8f6e29114f076ffaf4ec3a601e4f260c"} Feb 17 20:26:17 crc kubenswrapper[4793]: I0217 20:26:17.623455 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b1695ca3-290a-44c5-8771-146029a6054a","Type":"ContainerStarted","Data":"156f3e2829f1e2640a62c747440d99e7bd21efc03462054bb5879f5c024b4599"} Feb 17 20:26:17 crc kubenswrapper[4793]: I0217 20:26:17.629430 4793 generic.go:334] "Generic (PLEG): container finished" podID="63c91f0e-72e7-49ec-b9ee-c68845dd7cf9" containerID="5c1bf1303fb79e4b4261e4bcbde55dcd26ca7da82fa1c7d7eccb55bc6f0acdd0" exitCode=0 Feb 17 20:26:17 crc kubenswrapper[4793]: I0217 20:26:17.629578 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-jsvrv" event={"ID":"63c91f0e-72e7-49ec-b9ee-c68845dd7cf9","Type":"ContainerDied","Data":"5c1bf1303fb79e4b4261e4bcbde55dcd26ca7da82fa1c7d7eccb55bc6f0acdd0"} Feb 17 20:26:17 crc kubenswrapper[4793]: I0217 20:26:17.695267 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=37.249285992 podStartE2EDuration="41.695248081s" podCreationTimestamp="2026-02-17 20:25:36 +0000 UTC" firstStartedPulling="2026-02-17 20:26:10.797053457 +0000 UTC m=+1046.088751768" lastFinishedPulling="2026-02-17 20:26:15.243015536 +0000 UTC m=+1050.534713857" observedRunningTime="2026-02-17 20:26:17.680074224 +0000 UTC m=+1052.971772575" 
watchObservedRunningTime="2026-02-17 20:26:17.695248081 +0000 UTC m=+1052.986946412" Feb 17 20:26:17 crc kubenswrapper[4793]: I0217 20:26:17.998350 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d8bd6dff5-lwr7z"] Feb 17 20:26:17 crc kubenswrapper[4793]: E0217 20:26:17.998772 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89cb3ea8-62d4-4d08-954b-748164e318a2" containerName="ovn-config" Feb 17 20:26:17 crc kubenswrapper[4793]: I0217 20:26:17.998794 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="89cb3ea8-62d4-4d08-954b-748164e318a2" containerName="ovn-config" Feb 17 20:26:17 crc kubenswrapper[4793]: E0217 20:26:17.998829 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3f15a23-297d-442b-93aa-108559ddfa0d" containerName="mariadb-account-create-update" Feb 17 20:26:17 crc kubenswrapper[4793]: I0217 20:26:17.998839 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3f15a23-297d-442b-93aa-108559ddfa0d" containerName="mariadb-account-create-update" Feb 17 20:26:17 crc kubenswrapper[4793]: I0217 20:26:17.999054 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3f15a23-297d-442b-93aa-108559ddfa0d" containerName="mariadb-account-create-update" Feb 17 20:26:17 crc kubenswrapper[4793]: I0217 20:26:17.999074 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="89cb3ea8-62d4-4d08-954b-748164e318a2" containerName="ovn-config" Feb 17 20:26:18 crc kubenswrapper[4793]: I0217 20:26:18.000235 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d8bd6dff5-lwr7z" Feb 17 20:26:18 crc kubenswrapper[4793]: I0217 20:26:18.002793 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 17 20:26:18 crc kubenswrapper[4793]: I0217 20:26:18.015585 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d8bd6dff5-lwr7z"] Feb 17 20:26:18 crc kubenswrapper[4793]: I0217 20:26:18.034624 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f597f001-978e-45d1-a71d-f1fc89624792-dns-swift-storage-0\") pod \"dnsmasq-dns-5d8bd6dff5-lwr7z\" (UID: \"f597f001-978e-45d1-a71d-f1fc89624792\") " pod="openstack/dnsmasq-dns-5d8bd6dff5-lwr7z" Feb 17 20:26:18 crc kubenswrapper[4793]: I0217 20:26:18.034778 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f597f001-978e-45d1-a71d-f1fc89624792-dns-svc\") pod \"dnsmasq-dns-5d8bd6dff5-lwr7z\" (UID: \"f597f001-978e-45d1-a71d-f1fc89624792\") " pod="openstack/dnsmasq-dns-5d8bd6dff5-lwr7z" Feb 17 20:26:18 crc kubenswrapper[4793]: I0217 20:26:18.034886 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ctlj\" (UniqueName: \"kubernetes.io/projected/f597f001-978e-45d1-a71d-f1fc89624792-kube-api-access-6ctlj\") pod \"dnsmasq-dns-5d8bd6dff5-lwr7z\" (UID: \"f597f001-978e-45d1-a71d-f1fc89624792\") " pod="openstack/dnsmasq-dns-5d8bd6dff5-lwr7z" Feb 17 20:26:18 crc kubenswrapper[4793]: I0217 20:26:18.034926 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f597f001-978e-45d1-a71d-f1fc89624792-ovsdbserver-sb\") pod \"dnsmasq-dns-5d8bd6dff5-lwr7z\" (UID: \"f597f001-978e-45d1-a71d-f1fc89624792\") " 
pod="openstack/dnsmasq-dns-5d8bd6dff5-lwr7z" Feb 17 20:26:18 crc kubenswrapper[4793]: I0217 20:26:18.034993 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f597f001-978e-45d1-a71d-f1fc89624792-config\") pod \"dnsmasq-dns-5d8bd6dff5-lwr7z\" (UID: \"f597f001-978e-45d1-a71d-f1fc89624792\") " pod="openstack/dnsmasq-dns-5d8bd6dff5-lwr7z" Feb 17 20:26:18 crc kubenswrapper[4793]: I0217 20:26:18.035038 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f597f001-978e-45d1-a71d-f1fc89624792-ovsdbserver-nb\") pod \"dnsmasq-dns-5d8bd6dff5-lwr7z\" (UID: \"f597f001-978e-45d1-a71d-f1fc89624792\") " pod="openstack/dnsmasq-dns-5d8bd6dff5-lwr7z" Feb 17 20:26:18 crc kubenswrapper[4793]: I0217 20:26:18.135799 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f597f001-978e-45d1-a71d-f1fc89624792-dns-swift-storage-0\") pod \"dnsmasq-dns-5d8bd6dff5-lwr7z\" (UID: \"f597f001-978e-45d1-a71d-f1fc89624792\") " pod="openstack/dnsmasq-dns-5d8bd6dff5-lwr7z" Feb 17 20:26:18 crc kubenswrapper[4793]: I0217 20:26:18.135868 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f597f001-978e-45d1-a71d-f1fc89624792-dns-svc\") pod \"dnsmasq-dns-5d8bd6dff5-lwr7z\" (UID: \"f597f001-978e-45d1-a71d-f1fc89624792\") " pod="openstack/dnsmasq-dns-5d8bd6dff5-lwr7z" Feb 17 20:26:18 crc kubenswrapper[4793]: I0217 20:26:18.135923 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ctlj\" (UniqueName: \"kubernetes.io/projected/f597f001-978e-45d1-a71d-f1fc89624792-kube-api-access-6ctlj\") pod \"dnsmasq-dns-5d8bd6dff5-lwr7z\" (UID: \"f597f001-978e-45d1-a71d-f1fc89624792\") " 
pod="openstack/dnsmasq-dns-5d8bd6dff5-lwr7z" Feb 17 20:26:18 crc kubenswrapper[4793]: I0217 20:26:18.135942 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f597f001-978e-45d1-a71d-f1fc89624792-ovsdbserver-sb\") pod \"dnsmasq-dns-5d8bd6dff5-lwr7z\" (UID: \"f597f001-978e-45d1-a71d-f1fc89624792\") " pod="openstack/dnsmasq-dns-5d8bd6dff5-lwr7z" Feb 17 20:26:18 crc kubenswrapper[4793]: I0217 20:26:18.135983 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f597f001-978e-45d1-a71d-f1fc89624792-config\") pod \"dnsmasq-dns-5d8bd6dff5-lwr7z\" (UID: \"f597f001-978e-45d1-a71d-f1fc89624792\") " pod="openstack/dnsmasq-dns-5d8bd6dff5-lwr7z" Feb 17 20:26:18 crc kubenswrapper[4793]: I0217 20:26:18.136015 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f597f001-978e-45d1-a71d-f1fc89624792-ovsdbserver-nb\") pod \"dnsmasq-dns-5d8bd6dff5-lwr7z\" (UID: \"f597f001-978e-45d1-a71d-f1fc89624792\") " pod="openstack/dnsmasq-dns-5d8bd6dff5-lwr7z" Feb 17 20:26:18 crc kubenswrapper[4793]: I0217 20:26:18.136932 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f597f001-978e-45d1-a71d-f1fc89624792-config\") pod \"dnsmasq-dns-5d8bd6dff5-lwr7z\" (UID: \"f597f001-978e-45d1-a71d-f1fc89624792\") " pod="openstack/dnsmasq-dns-5d8bd6dff5-lwr7z" Feb 17 20:26:18 crc kubenswrapper[4793]: I0217 20:26:18.136995 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f597f001-978e-45d1-a71d-f1fc89624792-dns-swift-storage-0\") pod \"dnsmasq-dns-5d8bd6dff5-lwr7z\" (UID: \"f597f001-978e-45d1-a71d-f1fc89624792\") " pod="openstack/dnsmasq-dns-5d8bd6dff5-lwr7z" Feb 17 20:26:18 crc kubenswrapper[4793]: I0217 
20:26:18.137176 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f597f001-978e-45d1-a71d-f1fc89624792-ovsdbserver-nb\") pod \"dnsmasq-dns-5d8bd6dff5-lwr7z\" (UID: \"f597f001-978e-45d1-a71d-f1fc89624792\") " pod="openstack/dnsmasq-dns-5d8bd6dff5-lwr7z" Feb 17 20:26:18 crc kubenswrapper[4793]: I0217 20:26:18.137386 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f597f001-978e-45d1-a71d-f1fc89624792-ovsdbserver-sb\") pod \"dnsmasq-dns-5d8bd6dff5-lwr7z\" (UID: \"f597f001-978e-45d1-a71d-f1fc89624792\") " pod="openstack/dnsmasq-dns-5d8bd6dff5-lwr7z" Feb 17 20:26:18 crc kubenswrapper[4793]: I0217 20:26:18.137638 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f597f001-978e-45d1-a71d-f1fc89624792-dns-svc\") pod \"dnsmasq-dns-5d8bd6dff5-lwr7z\" (UID: \"f597f001-978e-45d1-a71d-f1fc89624792\") " pod="openstack/dnsmasq-dns-5d8bd6dff5-lwr7z" Feb 17 20:26:18 crc kubenswrapper[4793]: I0217 20:26:18.157138 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ctlj\" (UniqueName: \"kubernetes.io/projected/f597f001-978e-45d1-a71d-f1fc89624792-kube-api-access-6ctlj\") pod \"dnsmasq-dns-5d8bd6dff5-lwr7z\" (UID: \"f597f001-978e-45d1-a71d-f1fc89624792\") " pod="openstack/dnsmasq-dns-5d8bd6dff5-lwr7z" Feb 17 20:26:18 crc kubenswrapper[4793]: I0217 20:26:18.332957 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d8bd6dff5-lwr7z" Feb 17 20:26:18 crc kubenswrapper[4793]: I0217 20:26:18.601263 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d8bd6dff5-lwr7z"] Feb 17 20:26:18 crc kubenswrapper[4793]: W0217 20:26:18.631551 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf597f001_978e_45d1_a71d_f1fc89624792.slice/crio-7788594b47f164ed6f6a01a00541a8f5e3a84af2b8756580995fc9b614de4582 WatchSource:0}: Error finding container 7788594b47f164ed6f6a01a00541a8f5e3a84af2b8756580995fc9b614de4582: Status 404 returned error can't find the container with id 7788594b47f164ed6f6a01a00541a8f5e3a84af2b8756580995fc9b614de4582 Feb 17 20:26:18 crc kubenswrapper[4793]: I0217 20:26:18.641464 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d8bd6dff5-lwr7z" event={"ID":"f597f001-978e-45d1-a71d-f1fc89624792","Type":"ContainerStarted","Data":"7788594b47f164ed6f6a01a00541a8f5e3a84af2b8756580995fc9b614de4582"} Feb 17 20:26:18 crc kubenswrapper[4793]: I0217 20:26:18.981553 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-jsvrv" Feb 17 20:26:19 crc kubenswrapper[4793]: I0217 20:26:19.048829 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63c91f0e-72e7-49ec-b9ee-c68845dd7cf9-config-data\") pod \"63c91f0e-72e7-49ec-b9ee-c68845dd7cf9\" (UID: \"63c91f0e-72e7-49ec-b9ee-c68845dd7cf9\") " Feb 17 20:26:19 crc kubenswrapper[4793]: I0217 20:26:19.048936 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jp9qg\" (UniqueName: \"kubernetes.io/projected/63c91f0e-72e7-49ec-b9ee-c68845dd7cf9-kube-api-access-jp9qg\") pod \"63c91f0e-72e7-49ec-b9ee-c68845dd7cf9\" (UID: \"63c91f0e-72e7-49ec-b9ee-c68845dd7cf9\") " Feb 17 20:26:19 crc kubenswrapper[4793]: I0217 20:26:19.048978 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/63c91f0e-72e7-49ec-b9ee-c68845dd7cf9-db-sync-config-data\") pod \"63c91f0e-72e7-49ec-b9ee-c68845dd7cf9\" (UID: \"63c91f0e-72e7-49ec-b9ee-c68845dd7cf9\") " Feb 17 20:26:19 crc kubenswrapper[4793]: I0217 20:26:19.049078 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63c91f0e-72e7-49ec-b9ee-c68845dd7cf9-combined-ca-bundle\") pod \"63c91f0e-72e7-49ec-b9ee-c68845dd7cf9\" (UID: \"63c91f0e-72e7-49ec-b9ee-c68845dd7cf9\") " Feb 17 20:26:19 crc kubenswrapper[4793]: I0217 20:26:19.054633 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63c91f0e-72e7-49ec-b9ee-c68845dd7cf9-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "63c91f0e-72e7-49ec-b9ee-c68845dd7cf9" (UID: "63c91f0e-72e7-49ec-b9ee-c68845dd7cf9"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:26:19 crc kubenswrapper[4793]: I0217 20:26:19.058192 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63c91f0e-72e7-49ec-b9ee-c68845dd7cf9-kube-api-access-jp9qg" (OuterVolumeSpecName: "kube-api-access-jp9qg") pod "63c91f0e-72e7-49ec-b9ee-c68845dd7cf9" (UID: "63c91f0e-72e7-49ec-b9ee-c68845dd7cf9"). InnerVolumeSpecName "kube-api-access-jp9qg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:26:19 crc kubenswrapper[4793]: I0217 20:26:19.077226 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63c91f0e-72e7-49ec-b9ee-c68845dd7cf9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "63c91f0e-72e7-49ec-b9ee-c68845dd7cf9" (UID: "63c91f0e-72e7-49ec-b9ee-c68845dd7cf9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:26:19 crc kubenswrapper[4793]: I0217 20:26:19.109462 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63c91f0e-72e7-49ec-b9ee-c68845dd7cf9-config-data" (OuterVolumeSpecName: "config-data") pod "63c91f0e-72e7-49ec-b9ee-c68845dd7cf9" (UID: "63c91f0e-72e7-49ec-b9ee-c68845dd7cf9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:26:19 crc kubenswrapper[4793]: I0217 20:26:19.152794 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63c91f0e-72e7-49ec-b9ee-c68845dd7cf9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:19 crc kubenswrapper[4793]: I0217 20:26:19.152829 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63c91f0e-72e7-49ec-b9ee-c68845dd7cf9-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:19 crc kubenswrapper[4793]: I0217 20:26:19.152839 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jp9qg\" (UniqueName: \"kubernetes.io/projected/63c91f0e-72e7-49ec-b9ee-c68845dd7cf9-kube-api-access-jp9qg\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:19 crc kubenswrapper[4793]: I0217 20:26:19.152851 4793 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/63c91f0e-72e7-49ec-b9ee-c68845dd7cf9-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:19 crc kubenswrapper[4793]: I0217 20:26:19.656097 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-jsvrv" Feb 17 20:26:19 crc kubenswrapper[4793]: I0217 20:26:19.656128 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-jsvrv" event={"ID":"63c91f0e-72e7-49ec-b9ee-c68845dd7cf9","Type":"ContainerDied","Data":"f9a86c3d3aab643985d64403cc4093b027b580a8ec02fd1b2b0d51a0182b2166"} Feb 17 20:26:19 crc kubenswrapper[4793]: I0217 20:26:19.656656 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9a86c3d3aab643985d64403cc4093b027b580a8ec02fd1b2b0d51a0182b2166" Feb 17 20:26:19 crc kubenswrapper[4793]: I0217 20:26:19.659197 4793 generic.go:334] "Generic (PLEG): container finished" podID="f597f001-978e-45d1-a71d-f1fc89624792" containerID="be7106335aebcce6e4c3cfa511bb6f2b284ff2e167a4c85d70297c7fee7d34d4" exitCode=0 Feb 17 20:26:19 crc kubenswrapper[4793]: I0217 20:26:19.659266 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d8bd6dff5-lwr7z" event={"ID":"f597f001-978e-45d1-a71d-f1fc89624792","Type":"ContainerDied","Data":"be7106335aebcce6e4c3cfa511bb6f2b284ff2e167a4c85d70297c7fee7d34d4"} Feb 17 20:26:20 crc kubenswrapper[4793]: I0217 20:26:20.071400 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d8bd6dff5-lwr7z"] Feb 17 20:26:20 crc kubenswrapper[4793]: I0217 20:26:20.098129 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78895f69c7-4gh4d"] Feb 17 20:26:20 crc kubenswrapper[4793]: E0217 20:26:20.098549 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63c91f0e-72e7-49ec-b9ee-c68845dd7cf9" containerName="glance-db-sync" Feb 17 20:26:20 crc kubenswrapper[4793]: I0217 20:26:20.098569 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="63c91f0e-72e7-49ec-b9ee-c68845dd7cf9" containerName="glance-db-sync" Feb 17 20:26:20 crc kubenswrapper[4793]: I0217 20:26:20.098779 4793 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="63c91f0e-72e7-49ec-b9ee-c68845dd7cf9" containerName="glance-db-sync" Feb 17 20:26:20 crc kubenswrapper[4793]: I0217 20:26:20.101603 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 20:26:20 crc kubenswrapper[4793]: I0217 20:26:20.101657 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 20:26:20 crc kubenswrapper[4793]: I0217 20:26:20.106549 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78895f69c7-4gh4d" Feb 17 20:26:20 crc kubenswrapper[4793]: I0217 20:26:20.110626 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78895f69c7-4gh4d"] Feb 17 20:26:20 crc kubenswrapper[4793]: I0217 20:26:20.176216 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f096f2c-d917-4ddc-92eb-3628f9d1cd73-dns-swift-storage-0\") pod \"dnsmasq-dns-78895f69c7-4gh4d\" (UID: \"7f096f2c-d917-4ddc-92eb-3628f9d1cd73\") " pod="openstack/dnsmasq-dns-78895f69c7-4gh4d" Feb 17 20:26:20 crc kubenswrapper[4793]: I0217 20:26:20.176271 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f096f2c-d917-4ddc-92eb-3628f9d1cd73-dns-svc\") pod \"dnsmasq-dns-78895f69c7-4gh4d\" (UID: \"7f096f2c-d917-4ddc-92eb-3628f9d1cd73\") " pod="openstack/dnsmasq-dns-78895f69c7-4gh4d" Feb 17 20:26:20 
crc kubenswrapper[4793]: I0217 20:26:20.176291 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f096f2c-d917-4ddc-92eb-3628f9d1cd73-config\") pod \"dnsmasq-dns-78895f69c7-4gh4d\" (UID: \"7f096f2c-d917-4ddc-92eb-3628f9d1cd73\") " pod="openstack/dnsmasq-dns-78895f69c7-4gh4d" Feb 17 20:26:20 crc kubenswrapper[4793]: I0217 20:26:20.176309 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg4bc\" (UniqueName: \"kubernetes.io/projected/7f096f2c-d917-4ddc-92eb-3628f9d1cd73-kube-api-access-qg4bc\") pod \"dnsmasq-dns-78895f69c7-4gh4d\" (UID: \"7f096f2c-d917-4ddc-92eb-3628f9d1cd73\") " pod="openstack/dnsmasq-dns-78895f69c7-4gh4d" Feb 17 20:26:20 crc kubenswrapper[4793]: I0217 20:26:20.176367 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f096f2c-d917-4ddc-92eb-3628f9d1cd73-ovsdbserver-sb\") pod \"dnsmasq-dns-78895f69c7-4gh4d\" (UID: \"7f096f2c-d917-4ddc-92eb-3628f9d1cd73\") " pod="openstack/dnsmasq-dns-78895f69c7-4gh4d" Feb 17 20:26:20 crc kubenswrapper[4793]: I0217 20:26:20.176386 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f096f2c-d917-4ddc-92eb-3628f9d1cd73-ovsdbserver-nb\") pod \"dnsmasq-dns-78895f69c7-4gh4d\" (UID: \"7f096f2c-d917-4ddc-92eb-3628f9d1cd73\") " pod="openstack/dnsmasq-dns-78895f69c7-4gh4d" Feb 17 20:26:20 crc kubenswrapper[4793]: I0217 20:26:20.278192 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f096f2c-d917-4ddc-92eb-3628f9d1cd73-dns-swift-storage-0\") pod \"dnsmasq-dns-78895f69c7-4gh4d\" (UID: \"7f096f2c-d917-4ddc-92eb-3628f9d1cd73\") " 
pod="openstack/dnsmasq-dns-78895f69c7-4gh4d"
Feb 17 20:26:20 crc kubenswrapper[4793]: I0217 20:26:20.278242 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f096f2c-d917-4ddc-92eb-3628f9d1cd73-dns-svc\") pod \"dnsmasq-dns-78895f69c7-4gh4d\" (UID: \"7f096f2c-d917-4ddc-92eb-3628f9d1cd73\") " pod="openstack/dnsmasq-dns-78895f69c7-4gh4d"
Feb 17 20:26:20 crc kubenswrapper[4793]: I0217 20:26:20.278261 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f096f2c-d917-4ddc-92eb-3628f9d1cd73-config\") pod \"dnsmasq-dns-78895f69c7-4gh4d\" (UID: \"7f096f2c-d917-4ddc-92eb-3628f9d1cd73\") " pod="openstack/dnsmasq-dns-78895f69c7-4gh4d"
Feb 17 20:26:20 crc kubenswrapper[4793]: I0217 20:26:20.278282 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qg4bc\" (UniqueName: \"kubernetes.io/projected/7f096f2c-d917-4ddc-92eb-3628f9d1cd73-kube-api-access-qg4bc\") pod \"dnsmasq-dns-78895f69c7-4gh4d\" (UID: \"7f096f2c-d917-4ddc-92eb-3628f9d1cd73\") " pod="openstack/dnsmasq-dns-78895f69c7-4gh4d"
Feb 17 20:26:20 crc kubenswrapper[4793]: I0217 20:26:20.278345 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f096f2c-d917-4ddc-92eb-3628f9d1cd73-ovsdbserver-sb\") pod \"dnsmasq-dns-78895f69c7-4gh4d\" (UID: \"7f096f2c-d917-4ddc-92eb-3628f9d1cd73\") " pod="openstack/dnsmasq-dns-78895f69c7-4gh4d"
Feb 17 20:26:20 crc kubenswrapper[4793]: I0217 20:26:20.278366 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f096f2c-d917-4ddc-92eb-3628f9d1cd73-ovsdbserver-nb\") pod \"dnsmasq-dns-78895f69c7-4gh4d\" (UID: \"7f096f2c-d917-4ddc-92eb-3628f9d1cd73\") " pod="openstack/dnsmasq-dns-78895f69c7-4gh4d"
Feb 17 20:26:20 crc kubenswrapper[4793]: I0217 20:26:20.279291 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f096f2c-d917-4ddc-92eb-3628f9d1cd73-dns-svc\") pod \"dnsmasq-dns-78895f69c7-4gh4d\" (UID: \"7f096f2c-d917-4ddc-92eb-3628f9d1cd73\") " pod="openstack/dnsmasq-dns-78895f69c7-4gh4d"
Feb 17 20:26:20 crc kubenswrapper[4793]: I0217 20:26:20.279303 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f096f2c-d917-4ddc-92eb-3628f9d1cd73-ovsdbserver-nb\") pod \"dnsmasq-dns-78895f69c7-4gh4d\" (UID: \"7f096f2c-d917-4ddc-92eb-3628f9d1cd73\") " pod="openstack/dnsmasq-dns-78895f69c7-4gh4d"
Feb 17 20:26:20 crc kubenswrapper[4793]: I0217 20:26:20.279405 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f096f2c-d917-4ddc-92eb-3628f9d1cd73-dns-swift-storage-0\") pod \"dnsmasq-dns-78895f69c7-4gh4d\" (UID: \"7f096f2c-d917-4ddc-92eb-3628f9d1cd73\") " pod="openstack/dnsmasq-dns-78895f69c7-4gh4d"
Feb 17 20:26:20 crc kubenswrapper[4793]: I0217 20:26:20.279404 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f096f2c-d917-4ddc-92eb-3628f9d1cd73-ovsdbserver-sb\") pod \"dnsmasq-dns-78895f69c7-4gh4d\" (UID: \"7f096f2c-d917-4ddc-92eb-3628f9d1cd73\") " pod="openstack/dnsmasq-dns-78895f69c7-4gh4d"
Feb 17 20:26:20 crc kubenswrapper[4793]: I0217 20:26:20.280109 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f096f2c-d917-4ddc-92eb-3628f9d1cd73-config\") pod \"dnsmasq-dns-78895f69c7-4gh4d\" (UID: \"7f096f2c-d917-4ddc-92eb-3628f9d1cd73\") " pod="openstack/dnsmasq-dns-78895f69c7-4gh4d"
Feb 17 20:26:20 crc kubenswrapper[4793]: I0217 20:26:20.298831 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qg4bc\" (UniqueName: \"kubernetes.io/projected/7f096f2c-d917-4ddc-92eb-3628f9d1cd73-kube-api-access-qg4bc\") pod \"dnsmasq-dns-78895f69c7-4gh4d\" (UID: \"7f096f2c-d917-4ddc-92eb-3628f9d1cd73\") " pod="openstack/dnsmasq-dns-78895f69c7-4gh4d"
Feb 17 20:26:20 crc kubenswrapper[4793]: I0217 20:26:20.431131 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78895f69c7-4gh4d"
Feb 17 20:26:20 crc kubenswrapper[4793]: I0217 20:26:20.630862 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Feb 17 20:26:20 crc kubenswrapper[4793]: I0217 20:26:20.673552 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d8bd6dff5-lwr7z" event={"ID":"f597f001-978e-45d1-a71d-f1fc89624792","Type":"ContainerStarted","Data":"73d496942675b6ce4ae5ca6ed6a0e9d8bdc04ff158f3be2f6594a13f83c69e19"}
Feb 17 20:26:20 crc kubenswrapper[4793]: I0217 20:26:20.673882 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d8bd6dff5-lwr7z"
Feb 17 20:26:20 crc kubenswrapper[4793]: I0217 20:26:20.681020 4793 generic.go:334] "Generic (PLEG): container finished" podID="f41a37ae-4155-4b06-ad0b-46cfe53de634" containerID="1a3bfe788a89854a7a9ae225b19e70a66cdb4ed467d895eb9dc3143345c97b03" exitCode=0
Feb 17 20:26:20 crc kubenswrapper[4793]: I0217 20:26:20.681064 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f41a37ae-4155-4b06-ad0b-46cfe53de634","Type":"ContainerDied","Data":"1a3bfe788a89854a7a9ae225b19e70a66cdb4ed467d895eb9dc3143345c97b03"}
Feb 17 20:26:20 crc kubenswrapper[4793]: I0217 20:26:20.703137 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d8bd6dff5-lwr7z" podStartSLOduration=3.703121164 podStartE2EDuration="3.703121164s" podCreationTimestamp="2026-02-17 20:26:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:26:20.697874724 +0000 UTC m=+1055.989573035" watchObservedRunningTime="2026-02-17 20:26:20.703121164 +0000 UTC m=+1055.994819475"
Feb 17 20:26:20 crc kubenswrapper[4793]: I0217 20:26:20.956571 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78895f69c7-4gh4d"]
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.031164 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-c9fsh"]
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.032247 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-c9fsh"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.040736 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-c9fsh"]
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.145772 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-054f-account-create-update-fh4mx"]
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.147224 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-054f-account-create-update-fh4mx"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.157987 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.183540 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-054f-account-create-update-fh4mx"]
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.204298 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw4sk\" (UniqueName: \"kubernetes.io/projected/1dc87fa8-cc3f-4e13-8449-ad8338311cf5-kube-api-access-vw4sk\") pod \"cinder-db-create-c9fsh\" (UID: \"1dc87fa8-cc3f-4e13-8449-ad8338311cf5\") " pod="openstack/cinder-db-create-c9fsh"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.204581 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1dc87fa8-cc3f-4e13-8449-ad8338311cf5-operator-scripts\") pod \"cinder-db-create-c9fsh\" (UID: \"1dc87fa8-cc3f-4e13-8449-ad8338311cf5\") " pod="openstack/cinder-db-create-c9fsh"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.306522 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4beafcc-5a98-4860-8527-7c85e85b6eb5-operator-scripts\") pod \"cinder-054f-account-create-update-fh4mx\" (UID: \"e4beafcc-5a98-4860-8527-7c85e85b6eb5\") " pod="openstack/cinder-054f-account-create-update-fh4mx"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.306817 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vw4sk\" (UniqueName: \"kubernetes.io/projected/1dc87fa8-cc3f-4e13-8449-ad8338311cf5-kube-api-access-vw4sk\") pod \"cinder-db-create-c9fsh\" (UID: \"1dc87fa8-cc3f-4e13-8449-ad8338311cf5\") " pod="openstack/cinder-db-create-c9fsh"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.306964 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dphb4\" (UniqueName: \"kubernetes.io/projected/e4beafcc-5a98-4860-8527-7c85e85b6eb5-kube-api-access-dphb4\") pod \"cinder-054f-account-create-update-fh4mx\" (UID: \"e4beafcc-5a98-4860-8527-7c85e85b6eb5\") " pod="openstack/cinder-054f-account-create-update-fh4mx"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.307083 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1dc87fa8-cc3f-4e13-8449-ad8338311cf5-operator-scripts\") pod \"cinder-db-create-c9fsh\" (UID: \"1dc87fa8-cc3f-4e13-8449-ad8338311cf5\") " pod="openstack/cinder-db-create-c9fsh"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.307751 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1dc87fa8-cc3f-4e13-8449-ad8338311cf5-operator-scripts\") pod \"cinder-db-create-c9fsh\" (UID: \"1dc87fa8-cc3f-4e13-8449-ad8338311cf5\") " pod="openstack/cinder-db-create-c9fsh"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.311589 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-sh9tg"]
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.312723 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-sh9tg"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.324989 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-sh9tg"]
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.334416 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vw4sk\" (UniqueName: \"kubernetes.io/projected/1dc87fa8-cc3f-4e13-8449-ad8338311cf5-kube-api-access-vw4sk\") pod \"cinder-db-create-c9fsh\" (UID: \"1dc87fa8-cc3f-4e13-8449-ad8338311cf5\") " pod="openstack/cinder-db-create-c9fsh"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.378898 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-srrw9"]
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.379852 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-srrw9"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.384082 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-c9fsh"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.394058 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.394337 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.398330 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-srrw9"]
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.405466 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-92f9d"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.405725 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.410913 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4beafcc-5a98-4860-8527-7c85e85b6eb5-operator-scripts\") pod \"cinder-054f-account-create-update-fh4mx\" (UID: \"e4beafcc-5a98-4860-8527-7c85e85b6eb5\") " pod="openstack/cinder-054f-account-create-update-fh4mx"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.411025 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4dq2\" (UniqueName: \"kubernetes.io/projected/5156dde4-196e-492f-a7a0-5c35b403b79c-kube-api-access-s4dq2\") pod \"barbican-db-create-sh9tg\" (UID: \"5156dde4-196e-492f-a7a0-5c35b403b79c\") " pod="openstack/barbican-db-create-sh9tg"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.411055 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5156dde4-196e-492f-a7a0-5c35b403b79c-operator-scripts\") pod \"barbican-db-create-sh9tg\" (UID: \"5156dde4-196e-492f-a7a0-5c35b403b79c\") " pod="openstack/barbican-db-create-sh9tg"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.411085 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dphb4\" (UniqueName: \"kubernetes.io/projected/e4beafcc-5a98-4860-8527-7c85e85b6eb5-kube-api-access-dphb4\") pod \"cinder-054f-account-create-update-fh4mx\" (UID: \"e4beafcc-5a98-4860-8527-7c85e85b6eb5\") " pod="openstack/cinder-054f-account-create-update-fh4mx"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.412112 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4beafcc-5a98-4860-8527-7c85e85b6eb5-operator-scripts\") pod \"cinder-054f-account-create-update-fh4mx\" (UID: \"e4beafcc-5a98-4860-8527-7c85e85b6eb5\") " pod="openstack/cinder-054f-account-create-update-fh4mx"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.429770 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-sync-crnqr"]
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.431229 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-crnqr"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.433513 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-config-data"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.433823 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-zntbz"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.470060 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dphb4\" (UniqueName: \"kubernetes.io/projected/e4beafcc-5a98-4860-8527-7c85e85b6eb5-kube-api-access-dphb4\") pod \"cinder-054f-account-create-update-fh4mx\" (UID: \"e4beafcc-5a98-4860-8527-7c85e85b6eb5\") " pod="openstack/cinder-054f-account-create-update-fh4mx"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.481351 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-2df3-account-create-update-jksfn"]
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.482657 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-2df3-account-create-update-jksfn"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.484284 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.490753 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-crnqr"]
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.509152 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-2df3-account-create-update-jksfn"]
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.518523 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvpzg\" (UniqueName: \"kubernetes.io/projected/685042b7-f2e2-4163-9ee8-7e0dc67d9ec2-kube-api-access-wvpzg\") pod \"watcher-db-sync-crnqr\" (UID: \"685042b7-f2e2-4163-9ee8-7e0dc67d9ec2\") " pod="openstack/watcher-db-sync-crnqr"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.518811 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/685042b7-f2e2-4163-9ee8-7e0dc67d9ec2-db-sync-config-data\") pod \"watcher-db-sync-crnqr\" (UID: \"685042b7-f2e2-4163-9ee8-7e0dc67d9ec2\") " pod="openstack/watcher-db-sync-crnqr"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.518917 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb5d2\" (UniqueName: \"kubernetes.io/projected/01675bbf-5d1b-4461-917e-65af0112b569-kube-api-access-vb5d2\") pod \"keystone-db-sync-srrw9\" (UID: \"01675bbf-5d1b-4461-917e-65af0112b569\") " pod="openstack/keystone-db-sync-srrw9"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.519022 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4dq2\" (UniqueName: \"kubernetes.io/projected/5156dde4-196e-492f-a7a0-5c35b403b79c-kube-api-access-s4dq2\") pod \"barbican-db-create-sh9tg\" (UID: \"5156dde4-196e-492f-a7a0-5c35b403b79c\") " pod="openstack/barbican-db-create-sh9tg"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.519105 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5156dde4-196e-492f-a7a0-5c35b403b79c-operator-scripts\") pod \"barbican-db-create-sh9tg\" (UID: \"5156dde4-196e-492f-a7a0-5c35b403b79c\") " pod="openstack/barbican-db-create-sh9tg"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.519186 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01675bbf-5d1b-4461-917e-65af0112b569-combined-ca-bundle\") pod \"keystone-db-sync-srrw9\" (UID: \"01675bbf-5d1b-4461-917e-65af0112b569\") " pod="openstack/keystone-db-sync-srrw9"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.519273 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01675bbf-5d1b-4461-917e-65af0112b569-config-data\") pod \"keystone-db-sync-srrw9\" (UID: \"01675bbf-5d1b-4461-917e-65af0112b569\") " pod="openstack/keystone-db-sync-srrw9"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.519353 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/685042b7-f2e2-4163-9ee8-7e0dc67d9ec2-combined-ca-bundle\") pod \"watcher-db-sync-crnqr\" (UID: \"685042b7-f2e2-4163-9ee8-7e0dc67d9ec2\") " pod="openstack/watcher-db-sync-crnqr"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.519440 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/685042b7-f2e2-4163-9ee8-7e0dc67d9ec2-config-data\") pod \"watcher-db-sync-crnqr\" (UID: \"685042b7-f2e2-4163-9ee8-7e0dc67d9ec2\") " pod="openstack/watcher-db-sync-crnqr"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.520099 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5156dde4-196e-492f-a7a0-5c35b403b79c-operator-scripts\") pod \"barbican-db-create-sh9tg\" (UID: \"5156dde4-196e-492f-a7a0-5c35b403b79c\") " pod="openstack/barbican-db-create-sh9tg"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.547482 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4dq2\" (UniqueName: \"kubernetes.io/projected/5156dde4-196e-492f-a7a0-5c35b403b79c-kube-api-access-s4dq2\") pod \"barbican-db-create-sh9tg\" (UID: \"5156dde4-196e-492f-a7a0-5c35b403b79c\") " pod="openstack/barbican-db-create-sh9tg"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.620413 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-jjc56"]
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.621372 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb5d2\" (UniqueName: \"kubernetes.io/projected/01675bbf-5d1b-4461-917e-65af0112b569-kube-api-access-vb5d2\") pod \"keystone-db-sync-srrw9\" (UID: \"01675bbf-5d1b-4461-917e-65af0112b569\") " pod="openstack/keystone-db-sync-srrw9"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.621458 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl6gk\" (UniqueName: \"kubernetes.io/projected/ce31ed9b-5e96-435c-bda8-ab78e42c647f-kube-api-access-rl6gk\") pod \"barbican-2df3-account-create-update-jksfn\" (UID: \"ce31ed9b-5e96-435c-bda8-ab78e42c647f\") " pod="openstack/barbican-2df3-account-create-update-jksfn"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.621484 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01675bbf-5d1b-4461-917e-65af0112b569-combined-ca-bundle\") pod \"keystone-db-sync-srrw9\" (UID: \"01675bbf-5d1b-4461-917e-65af0112b569\") " pod="openstack/keystone-db-sync-srrw9"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.621517 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-jjc56"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.621518 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01675bbf-5d1b-4461-917e-65af0112b569-config-data\") pod \"keystone-db-sync-srrw9\" (UID: \"01675bbf-5d1b-4461-917e-65af0112b569\") " pod="openstack/keystone-db-sync-srrw9"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.621755 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/685042b7-f2e2-4163-9ee8-7e0dc67d9ec2-combined-ca-bundle\") pod \"watcher-db-sync-crnqr\" (UID: \"685042b7-f2e2-4163-9ee8-7e0dc67d9ec2\") " pod="openstack/watcher-db-sync-crnqr"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.621797 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/685042b7-f2e2-4163-9ee8-7e0dc67d9ec2-config-data\") pod \"watcher-db-sync-crnqr\" (UID: \"685042b7-f2e2-4163-9ee8-7e0dc67d9ec2\") " pod="openstack/watcher-db-sync-crnqr"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.621848 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvpzg\" (UniqueName: \"kubernetes.io/projected/685042b7-f2e2-4163-9ee8-7e0dc67d9ec2-kube-api-access-wvpzg\") pod \"watcher-db-sync-crnqr\" (UID: \"685042b7-f2e2-4163-9ee8-7e0dc67d9ec2\") " pod="openstack/watcher-db-sync-crnqr"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.621940 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce31ed9b-5e96-435c-bda8-ab78e42c647f-operator-scripts\") pod \"barbican-2df3-account-create-update-jksfn\" (UID: \"ce31ed9b-5e96-435c-bda8-ab78e42c647f\") " pod="openstack/barbican-2df3-account-create-update-jksfn"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.621989 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/685042b7-f2e2-4163-9ee8-7e0dc67d9ec2-db-sync-config-data\") pod \"watcher-db-sync-crnqr\" (UID: \"685042b7-f2e2-4163-9ee8-7e0dc67d9ec2\") " pod="openstack/watcher-db-sync-crnqr"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.630352 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01675bbf-5d1b-4461-917e-65af0112b569-combined-ca-bundle\") pod \"keystone-db-sync-srrw9\" (UID: \"01675bbf-5d1b-4461-917e-65af0112b569\") " pod="openstack/keystone-db-sync-srrw9"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.630382 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/685042b7-f2e2-4163-9ee8-7e0dc67d9ec2-combined-ca-bundle\") pod \"watcher-db-sync-crnqr\" (UID: \"685042b7-f2e2-4163-9ee8-7e0dc67d9ec2\") " pod="openstack/watcher-db-sync-crnqr"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.630536 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/685042b7-f2e2-4163-9ee8-7e0dc67d9ec2-db-sync-config-data\") pod \"watcher-db-sync-crnqr\" (UID: \"685042b7-f2e2-4163-9ee8-7e0dc67d9ec2\") " pod="openstack/watcher-db-sync-crnqr"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.645358 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/685042b7-f2e2-4163-9ee8-7e0dc67d9ec2-config-data\") pod \"watcher-db-sync-crnqr\" (UID: \"685042b7-f2e2-4163-9ee8-7e0dc67d9ec2\") " pod="openstack/watcher-db-sync-crnqr"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.647995 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-jjc56"]
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.651198 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb5d2\" (UniqueName: \"kubernetes.io/projected/01675bbf-5d1b-4461-917e-65af0112b569-kube-api-access-vb5d2\") pod \"keystone-db-sync-srrw9\" (UID: \"01675bbf-5d1b-4461-917e-65af0112b569\") " pod="openstack/keystone-db-sync-srrw9"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.658603 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvpzg\" (UniqueName: \"kubernetes.io/projected/685042b7-f2e2-4163-9ee8-7e0dc67d9ec2-kube-api-access-wvpzg\") pod \"watcher-db-sync-crnqr\" (UID: \"685042b7-f2e2-4163-9ee8-7e0dc67d9ec2\") " pod="openstack/watcher-db-sync-crnqr"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.684643 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-0bda-account-create-update-x2lw8"]
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.685857 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0bda-account-create-update-x2lw8"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.689052 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.701202 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f41a37ae-4155-4b06-ad0b-46cfe53de634","Type":"ContainerStarted","Data":"d0268907a70a8bad9d57750f7d0ad47eebe8a552f0a8c4bd7cfe1d692c245b58"}
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.703431 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01675bbf-5d1b-4461-917e-65af0112b569-config-data\") pod \"keystone-db-sync-srrw9\" (UID: \"01675bbf-5d1b-4461-917e-65af0112b569\") " pod="openstack/keystone-db-sync-srrw9"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.703955 4793 generic.go:334] "Generic (PLEG): container finished" podID="7f096f2c-d917-4ddc-92eb-3628f9d1cd73" containerID="2609bcb55ef88bb319a5e37ef336a482455bbbf9c6cd8ff5295ce82901e7ec35" exitCode=0
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.704518 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78895f69c7-4gh4d" event={"ID":"7f096f2c-d917-4ddc-92eb-3628f9d1cd73","Type":"ContainerDied","Data":"2609bcb55ef88bb319a5e37ef336a482455bbbf9c6cd8ff5295ce82901e7ec35"}
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.704552 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78895f69c7-4gh4d" event={"ID":"7f096f2c-d917-4ddc-92eb-3628f9d1cd73","Type":"ContainerStarted","Data":"93bfaebcf67102f187096aa7032edc244fd34f858a3a308e300a2a615aa3783f"}
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.704681 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d8bd6dff5-lwr7z" podUID="f597f001-978e-45d1-a71d-f1fc89624792" containerName="dnsmasq-dns" containerID="cri-o://73d496942675b6ce4ae5ca6ed6a0e9d8bdc04ff158f3be2f6594a13f83c69e19" gracePeriod=10
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.717904 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-0bda-account-create-update-x2lw8"]
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.723494 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpgx7\" (UniqueName: \"kubernetes.io/projected/888c0279-526c-49b5-a292-bb66ff8be459-kube-api-access-bpgx7\") pod \"neutron-db-create-jjc56\" (UID: \"888c0279-526c-49b5-a292-bb66ff8be459\") " pod="openstack/neutron-db-create-jjc56"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.723626 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce31ed9b-5e96-435c-bda8-ab78e42c647f-operator-scripts\") pod \"barbican-2df3-account-create-update-jksfn\" (UID: \"ce31ed9b-5e96-435c-bda8-ab78e42c647f\") " pod="openstack/barbican-2df3-account-create-update-jksfn"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.723711 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/888c0279-526c-49b5-a292-bb66ff8be459-operator-scripts\") pod \"neutron-db-create-jjc56\" (UID: \"888c0279-526c-49b5-a292-bb66ff8be459\") " pod="openstack/neutron-db-create-jjc56"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.723783 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl6gk\" (UniqueName: \"kubernetes.io/projected/ce31ed9b-5e96-435c-bda8-ab78e42c647f-kube-api-access-rl6gk\") pod \"barbican-2df3-account-create-update-jksfn\" (UID: \"ce31ed9b-5e96-435c-bda8-ab78e42c647f\") " pod="openstack/barbican-2df3-account-create-update-jksfn"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.725769 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce31ed9b-5e96-435c-bda8-ab78e42c647f-operator-scripts\") pod \"barbican-2df3-account-create-update-jksfn\" (UID: \"ce31ed9b-5e96-435c-bda8-ab78e42c647f\") " pod="openstack/barbican-2df3-account-create-update-jksfn"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.753830 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl6gk\" (UniqueName: \"kubernetes.io/projected/ce31ed9b-5e96-435c-bda8-ab78e42c647f-kube-api-access-rl6gk\") pod \"barbican-2df3-account-create-update-jksfn\" (UID: \"ce31ed9b-5e96-435c-bda8-ab78e42c647f\") " pod="openstack/barbican-2df3-account-create-update-jksfn"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.765629 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-054f-account-create-update-fh4mx"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.825667 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/888c0279-526c-49b5-a292-bb66ff8be459-operator-scripts\") pod \"neutron-db-create-jjc56\" (UID: \"888c0279-526c-49b5-a292-bb66ff8be459\") " pod="openstack/neutron-db-create-jjc56"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.826123 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b543bb0-a68e-4940-a4dd-ebfd8736d2fe-operator-scripts\") pod \"neutron-0bda-account-create-update-x2lw8\" (UID: \"0b543bb0-a68e-4940-a4dd-ebfd8736d2fe\") " pod="openstack/neutron-0bda-account-create-update-x2lw8"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.826225 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngvxq\" (UniqueName: \"kubernetes.io/projected/0b543bb0-a68e-4940-a4dd-ebfd8736d2fe-kube-api-access-ngvxq\") pod \"neutron-0bda-account-create-update-x2lw8\" (UID: \"0b543bb0-a68e-4940-a4dd-ebfd8736d2fe\") " pod="openstack/neutron-0bda-account-create-update-x2lw8"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.826270 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpgx7\" (UniqueName: \"kubernetes.io/projected/888c0279-526c-49b5-a292-bb66ff8be459-kube-api-access-bpgx7\") pod \"neutron-db-create-jjc56\" (UID: \"888c0279-526c-49b5-a292-bb66ff8be459\") " pod="openstack/neutron-db-create-jjc56"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.827983 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/888c0279-526c-49b5-a292-bb66ff8be459-operator-scripts\") pod \"neutron-db-create-jjc56\" (UID: \"888c0279-526c-49b5-a292-bb66ff8be459\") " pod="openstack/neutron-db-create-jjc56"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.836497 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-sh9tg"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.847094 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpgx7\" (UniqueName: \"kubernetes.io/projected/888c0279-526c-49b5-a292-bb66ff8be459-kube-api-access-bpgx7\") pod \"neutron-db-create-jjc56\" (UID: \"888c0279-526c-49b5-a292-bb66ff8be459\") " pod="openstack/neutron-db-create-jjc56"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.893988 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-srrw9"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.898353 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-crnqr"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.910785 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-2df3-account-create-update-jksfn"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.928821 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngvxq\" (UniqueName: \"kubernetes.io/projected/0b543bb0-a68e-4940-a4dd-ebfd8736d2fe-kube-api-access-ngvxq\") pod \"neutron-0bda-account-create-update-x2lw8\" (UID: \"0b543bb0-a68e-4940-a4dd-ebfd8736d2fe\") " pod="openstack/neutron-0bda-account-create-update-x2lw8"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.928939 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b543bb0-a68e-4940-a4dd-ebfd8736d2fe-operator-scripts\") pod \"neutron-0bda-account-create-update-x2lw8\" (UID: \"0b543bb0-a68e-4940-a4dd-ebfd8736d2fe\") " pod="openstack/neutron-0bda-account-create-update-x2lw8"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.929537 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b543bb0-a68e-4940-a4dd-ebfd8736d2fe-operator-scripts\") pod \"neutron-0bda-account-create-update-x2lw8\" (UID: \"0b543bb0-a68e-4940-a4dd-ebfd8736d2fe\") " pod="openstack/neutron-0bda-account-create-update-x2lw8"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.955548 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngvxq\" (UniqueName: \"kubernetes.io/projected/0b543bb0-a68e-4940-a4dd-ebfd8736d2fe-kube-api-access-ngvxq\") pod \"neutron-0bda-account-create-update-x2lw8\" (UID: \"0b543bb0-a68e-4940-a4dd-ebfd8736d2fe\") " pod="openstack/neutron-0bda-account-create-update-x2lw8"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.964088 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-jjc56"
Feb 17 20:26:21 crc kubenswrapper[4793]: I0217 20:26:21.986650 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-c9fsh"]
Feb 17 20:26:22 crc kubenswrapper[4793]: I0217 20:26:22.009074 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0bda-account-create-update-x2lw8"
Feb 17 20:26:22 crc kubenswrapper[4793]: I0217 20:26:22.545191 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d8bd6dff5-lwr7z"
Feb 17 20:26:22 crc kubenswrapper[4793]: I0217 20:26:22.638737 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f597f001-978e-45d1-a71d-f1fc89624792-dns-swift-storage-0\") pod \"f597f001-978e-45d1-a71d-f1fc89624792\" (UID: \"f597f001-978e-45d1-a71d-f1fc89624792\") "
Feb 17 20:26:22 crc kubenswrapper[4793]: I0217 20:26:22.638873 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ctlj\" (UniqueName: \"kubernetes.io/projected/f597f001-978e-45d1-a71d-f1fc89624792-kube-api-access-6ctlj\") pod \"f597f001-978e-45d1-a71d-f1fc89624792\" (UID: \"f597f001-978e-45d1-a71d-f1fc89624792\") "
Feb 17 20:26:22 crc kubenswrapper[4793]: I0217 20:26:22.638965 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f597f001-978e-45d1-a71d-f1fc89624792-ovsdbserver-nb\") pod \"f597f001-978e-45d1-a71d-f1fc89624792\" (UID: \"f597f001-978e-45d1-a71d-f1fc89624792\") "
Feb 17 20:26:22 crc kubenswrapper[4793]: I0217 20:26:22.639002 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f597f001-978e-45d1-a71d-f1fc89624792-config\") pod \"f597f001-978e-45d1-a71d-f1fc89624792\" (UID: \"f597f001-978e-45d1-a71d-f1fc89624792\") "
Feb 17 20:26:22 crc kubenswrapper[4793]: I0217 20:26:22.639062 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f597f001-978e-45d1-a71d-f1fc89624792-ovsdbserver-sb\") pod \"f597f001-978e-45d1-a71d-f1fc89624792\" (UID: \"f597f001-978e-45d1-a71d-f1fc89624792\") "
Feb 17 20:26:22 crc kubenswrapper[4793]: I0217 20:26:22.639115 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f597f001-978e-45d1-a71d-f1fc89624792-dns-svc\") pod \"f597f001-978e-45d1-a71d-f1fc89624792\" (UID: \"f597f001-978e-45d1-a71d-f1fc89624792\") "
Feb 17 20:26:22 crc kubenswrapper[4793]: I0217 20:26:22.716786 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-c9fsh" event={"ID":"1dc87fa8-cc3f-4e13-8449-ad8338311cf5","Type":"ContainerStarted","Data":"01d848bb2e204e7dac2fe16a6cd7599a5442134574c7cbe89247eecea662ef9d"}
Feb 17 20:26:22 crc kubenswrapper[4793]: I0217 20:26:22.716839 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-c9fsh" event={"ID":"1dc87fa8-cc3f-4e13-8449-ad8338311cf5","Type":"ContainerStarted","Data":"1d9c4239757e9143e46122ae7f0efe9f0863bfc5095628ac35a9392509e84df0"}
Feb 17 20:26:22 crc kubenswrapper[4793]: I0217 20:26:22.720486 4793 generic.go:334] "Generic (PLEG): container finished" podID="f597f001-978e-45d1-a71d-f1fc89624792" containerID="73d496942675b6ce4ae5ca6ed6a0e9d8bdc04ff158f3be2f6594a13f83c69e19" exitCode=0
Feb 17 20:26:22 crc kubenswrapper[4793]: I0217 20:26:22.720540 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d8bd6dff5-lwr7z" event={"ID":"f597f001-978e-45d1-a71d-f1fc89624792","Type":"ContainerDied","Data":"73d496942675b6ce4ae5ca6ed6a0e9d8bdc04ff158f3be2f6594a13f83c69e19"}
Feb 17 20:26:22 crc kubenswrapper[4793]: I0217 20:26:22.720567 4793
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d8bd6dff5-lwr7z" event={"ID":"f597f001-978e-45d1-a71d-f1fc89624792","Type":"ContainerDied","Data":"7788594b47f164ed6f6a01a00541a8f5e3a84af2b8756580995fc9b614de4582"} Feb 17 20:26:22 crc kubenswrapper[4793]: I0217 20:26:22.720583 4793 scope.go:117] "RemoveContainer" containerID="73d496942675b6ce4ae5ca6ed6a0e9d8bdc04ff158f3be2f6594a13f83c69e19" Feb 17 20:26:22 crc kubenswrapper[4793]: I0217 20:26:22.720711 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d8bd6dff5-lwr7z" Feb 17 20:26:22 crc kubenswrapper[4793]: I0217 20:26:22.723773 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78895f69c7-4gh4d" event={"ID":"7f096f2c-d917-4ddc-92eb-3628f9d1cd73","Type":"ContainerStarted","Data":"ccb9d33bf3a4285fa2d6f7326460bae6ced65f4fdca00d48f3a80dc8096c9e08"} Feb 17 20:26:22 crc kubenswrapper[4793]: I0217 20:26:22.723843 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-78895f69c7-4gh4d" Feb 17 20:26:22 crc kubenswrapper[4793]: I0217 20:26:22.744280 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-c9fsh" podStartSLOduration=1.744263734 podStartE2EDuration="1.744263734s" podCreationTimestamp="2026-02-17 20:26:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:26:22.742396558 +0000 UTC m=+1058.034094879" watchObservedRunningTime="2026-02-17 20:26:22.744263734 +0000 UTC m=+1058.035962045" Feb 17 20:26:22 crc kubenswrapper[4793]: I0217 20:26:22.773485 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-78895f69c7-4gh4d" podStartSLOduration=2.77346498 podStartE2EDuration="2.77346498s" podCreationTimestamp="2026-02-17 20:26:20 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:26:22.767134942 +0000 UTC m=+1058.058833253" watchObservedRunningTime="2026-02-17 20:26:22.77346498 +0000 UTC m=+1058.065163291" Feb 17 20:26:22 crc kubenswrapper[4793]: I0217 20:26:22.776203 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f597f001-978e-45d1-a71d-f1fc89624792-kube-api-access-6ctlj" (OuterVolumeSpecName: "kube-api-access-6ctlj") pod "f597f001-978e-45d1-a71d-f1fc89624792" (UID: "f597f001-978e-45d1-a71d-f1fc89624792"). InnerVolumeSpecName "kube-api-access-6ctlj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:26:22 crc kubenswrapper[4793]: I0217 20:26:22.844794 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ctlj\" (UniqueName: \"kubernetes.io/projected/f597f001-978e-45d1-a71d-f1fc89624792-kube-api-access-6ctlj\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:22 crc kubenswrapper[4793]: I0217 20:26:22.886539 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f597f001-978e-45d1-a71d-f1fc89624792-config" (OuterVolumeSpecName: "config") pod "f597f001-978e-45d1-a71d-f1fc89624792" (UID: "f597f001-978e-45d1-a71d-f1fc89624792"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:26:22 crc kubenswrapper[4793]: I0217 20:26:22.946124 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f597f001-978e-45d1-a71d-f1fc89624792-config\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:22 crc kubenswrapper[4793]: I0217 20:26:22.996314 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f597f001-978e-45d1-a71d-f1fc89624792-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f597f001-978e-45d1-a71d-f1fc89624792" (UID: "f597f001-978e-45d1-a71d-f1fc89624792"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:26:23 crc kubenswrapper[4793]: I0217 20:26:23.048285 4793 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f597f001-978e-45d1-a71d-f1fc89624792-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:23 crc kubenswrapper[4793]: I0217 20:26:23.108947 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f597f001-978e-45d1-a71d-f1fc89624792-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f597f001-978e-45d1-a71d-f1fc89624792" (UID: "f597f001-978e-45d1-a71d-f1fc89624792"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:26:23 crc kubenswrapper[4793]: I0217 20:26:23.149572 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f597f001-978e-45d1-a71d-f1fc89624792-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:23 crc kubenswrapper[4793]: I0217 20:26:23.207016 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f597f001-978e-45d1-a71d-f1fc89624792-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f597f001-978e-45d1-a71d-f1fc89624792" (UID: "f597f001-978e-45d1-a71d-f1fc89624792"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:26:23 crc kubenswrapper[4793]: I0217 20:26:23.233589 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-054f-account-create-update-fh4mx"] Feb 17 20:26:23 crc kubenswrapper[4793]: I0217 20:26:23.242822 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-crnqr"] Feb 17 20:26:23 crc kubenswrapper[4793]: I0217 20:26:23.251196 4793 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f597f001-978e-45d1-a71d-f1fc89624792-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:23 crc kubenswrapper[4793]: I0217 20:26:23.254138 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-sh9tg"] Feb 17 20:26:23 crc kubenswrapper[4793]: W0217 20:26:23.256189 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5156dde4_196e_492f_a7a0_5c35b403b79c.slice/crio-5293f250509364f5c095ac3ccd35bd7e656c867d66dc88b1ab3830e8f4c106ee WatchSource:0}: Error finding container 5293f250509364f5c095ac3ccd35bd7e656c867d66dc88b1ab3830e8f4c106ee: Status 404 returned error can't find the container with id 
5293f250509364f5c095ac3ccd35bd7e656c867d66dc88b1ab3830e8f4c106ee Feb 17 20:26:23 crc kubenswrapper[4793]: I0217 20:26:23.270189 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f597f001-978e-45d1-a71d-f1fc89624792-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f597f001-978e-45d1-a71d-f1fc89624792" (UID: "f597f001-978e-45d1-a71d-f1fc89624792"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:26:23 crc kubenswrapper[4793]: I0217 20:26:23.355412 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f597f001-978e-45d1-a71d-f1fc89624792-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:23 crc kubenswrapper[4793]: I0217 20:26:23.395835 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-2df3-account-create-update-jksfn"] Feb 17 20:26:23 crc kubenswrapper[4793]: I0217 20:26:23.403081 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-0bda-account-create-update-x2lw8"] Feb 17 20:26:23 crc kubenswrapper[4793]: I0217 20:26:23.414043 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-jjc56"] Feb 17 20:26:23 crc kubenswrapper[4793]: W0217 20:26:23.418588 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod888c0279_526c_49b5_a292_bb66ff8be459.slice/crio-e93bc4c556ebf234c79da1f631eb4d79f6a9693a4a7766d3d9d1e1c0867d36b3 WatchSource:0}: Error finding container e93bc4c556ebf234c79da1f631eb4d79f6a9693a4a7766d3d9d1e1c0867d36b3: Status 404 returned error can't find the container with id e93bc4c556ebf234c79da1f631eb4d79f6a9693a4a7766d3d9d1e1c0867d36b3 Feb 17 20:26:23 crc kubenswrapper[4793]: I0217 20:26:23.423053 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-srrw9"] Feb 17 
20:26:23 crc kubenswrapper[4793]: W0217 20:26:23.435317 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b543bb0_a68e_4940_a4dd_ebfd8736d2fe.slice/crio-fb7e25f1bcdb738a6321239d3bb4a4ef1455a3025981d3af2991203d0e9dbe8b WatchSource:0}: Error finding container fb7e25f1bcdb738a6321239d3bb4a4ef1455a3025981d3af2991203d0e9dbe8b: Status 404 returned error can't find the container with id fb7e25f1bcdb738a6321239d3bb4a4ef1455a3025981d3af2991203d0e9dbe8b Feb 17 20:26:23 crc kubenswrapper[4793]: W0217 20:26:23.436492 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce31ed9b_5e96_435c_bda8_ab78e42c647f.slice/crio-10532a10da9a1eaaff666bf7b8291bf477cd1666d9b0b7c2f7850735ecb87c84 WatchSource:0}: Error finding container 10532a10da9a1eaaff666bf7b8291bf477cd1666d9b0b7c2f7850735ecb87c84: Status 404 returned error can't find the container with id 10532a10da9a1eaaff666bf7b8291bf477cd1666d9b0b7c2f7850735ecb87c84 Feb 17 20:26:23 crc kubenswrapper[4793]: W0217 20:26:23.444866 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01675bbf_5d1b_4461_917e_65af0112b569.slice/crio-2e4d026ceca0b180270e31672c427317e34b2fb2a0d2ee675fb67420ebffdfe0 WatchSource:0}: Error finding container 2e4d026ceca0b180270e31672c427317e34b2fb2a0d2ee675fb67420ebffdfe0: Status 404 returned error can't find the container with id 2e4d026ceca0b180270e31672c427317e34b2fb2a0d2ee675fb67420ebffdfe0 Feb 17 20:26:23 crc kubenswrapper[4793]: I0217 20:26:23.516769 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d8bd6dff5-lwr7z"] Feb 17 20:26:23 crc kubenswrapper[4793]: I0217 20:26:23.525386 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d8bd6dff5-lwr7z"] Feb 17 20:26:23 crc kubenswrapper[4793]: I0217 
20:26:23.549581 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f597f001-978e-45d1-a71d-f1fc89624792" path="/var/lib/kubelet/pods/f597f001-978e-45d1-a71d-f1fc89624792/volumes" Feb 17 20:26:23 crc kubenswrapper[4793]: I0217 20:26:23.605732 4793 scope.go:117] "RemoveContainer" containerID="be7106335aebcce6e4c3cfa511bb6f2b284ff2e167a4c85d70297c7fee7d34d4" Feb 17 20:26:23 crc kubenswrapper[4793]: I0217 20:26:23.747041 4793 scope.go:117] "RemoveContainer" containerID="73d496942675b6ce4ae5ca6ed6a0e9d8bdc04ff158f3be2f6594a13f83c69e19" Feb 17 20:26:23 crc kubenswrapper[4793]: E0217 20:26:23.749941 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73d496942675b6ce4ae5ca6ed6a0e9d8bdc04ff158f3be2f6594a13f83c69e19\": container with ID starting with 73d496942675b6ce4ae5ca6ed6a0e9d8bdc04ff158f3be2f6594a13f83c69e19 not found: ID does not exist" containerID="73d496942675b6ce4ae5ca6ed6a0e9d8bdc04ff158f3be2f6594a13f83c69e19" Feb 17 20:26:23 crc kubenswrapper[4793]: I0217 20:26:23.754263 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73d496942675b6ce4ae5ca6ed6a0e9d8bdc04ff158f3be2f6594a13f83c69e19"} err="failed to get container status \"73d496942675b6ce4ae5ca6ed6a0e9d8bdc04ff158f3be2f6594a13f83c69e19\": rpc error: code = NotFound desc = could not find container \"73d496942675b6ce4ae5ca6ed6a0e9d8bdc04ff158f3be2f6594a13f83c69e19\": container with ID starting with 73d496942675b6ce4ae5ca6ed6a0e9d8bdc04ff158f3be2f6594a13f83c69e19 not found: ID does not exist" Feb 17 20:26:23 crc kubenswrapper[4793]: I0217 20:26:23.754333 4793 scope.go:117] "RemoveContainer" containerID="be7106335aebcce6e4c3cfa511bb6f2b284ff2e167a4c85d70297c7fee7d34d4" Feb 17 20:26:23 crc kubenswrapper[4793]: E0217 20:26:23.756960 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"be7106335aebcce6e4c3cfa511bb6f2b284ff2e167a4c85d70297c7fee7d34d4\": container with ID starting with be7106335aebcce6e4c3cfa511bb6f2b284ff2e167a4c85d70297c7fee7d34d4 not found: ID does not exist" containerID="be7106335aebcce6e4c3cfa511bb6f2b284ff2e167a4c85d70297c7fee7d34d4" Feb 17 20:26:23 crc kubenswrapper[4793]: I0217 20:26:23.757001 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be7106335aebcce6e4c3cfa511bb6f2b284ff2e167a4c85d70297c7fee7d34d4"} err="failed to get container status \"be7106335aebcce6e4c3cfa511bb6f2b284ff2e167a4c85d70297c7fee7d34d4\": rpc error: code = NotFound desc = could not find container \"be7106335aebcce6e4c3cfa511bb6f2b284ff2e167a4c85d70297c7fee7d34d4\": container with ID starting with be7106335aebcce6e4c3cfa511bb6f2b284ff2e167a4c85d70297c7fee7d34d4 not found: ID does not exist" Feb 17 20:26:23 crc kubenswrapper[4793]: I0217 20:26:23.779508 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2df3-account-create-update-jksfn" event={"ID":"ce31ed9b-5e96-435c-bda8-ab78e42c647f","Type":"ContainerStarted","Data":"10532a10da9a1eaaff666bf7b8291bf477cd1666d9b0b7c2f7850735ecb87c84"} Feb 17 20:26:23 crc kubenswrapper[4793]: I0217 20:26:23.782459 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0bda-account-create-update-x2lw8" event={"ID":"0b543bb0-a68e-4940-a4dd-ebfd8736d2fe","Type":"ContainerStarted","Data":"fb7e25f1bcdb738a6321239d3bb4a4ef1455a3025981d3af2991203d0e9dbe8b"} Feb 17 20:26:23 crc kubenswrapper[4793]: I0217 20:26:23.784524 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-054f-account-create-update-fh4mx" event={"ID":"e4beafcc-5a98-4860-8527-7c85e85b6eb5","Type":"ContainerStarted","Data":"989b8662a7c18bd29023077f6440beeba73d460768d5fcf7bd045776fb5ddec7"} Feb 17 20:26:23 crc kubenswrapper[4793]: I0217 20:26:23.786929 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-db-sync-srrw9" event={"ID":"01675bbf-5d1b-4461-917e-65af0112b569","Type":"ContainerStarted","Data":"2e4d026ceca0b180270e31672c427317e34b2fb2a0d2ee675fb67420ebffdfe0"} Feb 17 20:26:23 crc kubenswrapper[4793]: I0217 20:26:23.792777 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-sh9tg" event={"ID":"5156dde4-196e-492f-a7a0-5c35b403b79c","Type":"ContainerStarted","Data":"67f7f8cc1245d022eb9d60d08432a9805fcd4fd8f34e1a98516f8b6f5e3b4f63"} Feb 17 20:26:23 crc kubenswrapper[4793]: I0217 20:26:23.792826 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-sh9tg" event={"ID":"5156dde4-196e-492f-a7a0-5c35b403b79c","Type":"ContainerStarted","Data":"5293f250509364f5c095ac3ccd35bd7e656c867d66dc88b1ab3830e8f4c106ee"} Feb 17 20:26:23 crc kubenswrapper[4793]: I0217 20:26:23.804593 4793 generic.go:334] "Generic (PLEG): container finished" podID="1dc87fa8-cc3f-4e13-8449-ad8338311cf5" containerID="01d848bb2e204e7dac2fe16a6cd7599a5442134574c7cbe89247eecea662ef9d" exitCode=0 Feb 17 20:26:23 crc kubenswrapper[4793]: I0217 20:26:23.804709 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-c9fsh" event={"ID":"1dc87fa8-cc3f-4e13-8449-ad8338311cf5","Type":"ContainerDied","Data":"01d848bb2e204e7dac2fe16a6cd7599a5442134574c7cbe89247eecea662ef9d"} Feb 17 20:26:23 crc kubenswrapper[4793]: I0217 20:26:23.810259 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f41a37ae-4155-4b06-ad0b-46cfe53de634","Type":"ContainerStarted","Data":"b2feaae4275e2e0d141351268d38c63c2709e4516883b8ed35f4ba14e21b7eb3"} Feb 17 20:26:23 crc kubenswrapper[4793]: I0217 20:26:23.820357 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-jjc56" event={"ID":"888c0279-526c-49b5-a292-bb66ff8be459","Type":"ContainerStarted","Data":"e93bc4c556ebf234c79da1f631eb4d79f6a9693a4a7766d3d9d1e1c0867d36b3"} 
Feb 17 20:26:23 crc kubenswrapper[4793]: I0217 20:26:23.823078 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-crnqr" event={"ID":"685042b7-f2e2-4163-9ee8-7e0dc67d9ec2","Type":"ContainerStarted","Data":"0b92ec33f84fecfe00669a3a85575b66e808cdacaa64db2cf140c73505be7144"} Feb 17 20:26:23 crc kubenswrapper[4793]: I0217 20:26:23.835531 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-sh9tg" podStartSLOduration=2.835513561 podStartE2EDuration="2.835513561s" podCreationTimestamp="2026-02-17 20:26:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:26:23.819316628 +0000 UTC m=+1059.111014949" watchObservedRunningTime="2026-02-17 20:26:23.835513561 +0000 UTC m=+1059.127211872" Feb 17 20:26:24 crc kubenswrapper[4793]: I0217 20:26:24.844636 4793 generic.go:334] "Generic (PLEG): container finished" podID="5156dde4-196e-492f-a7a0-5c35b403b79c" containerID="67f7f8cc1245d022eb9d60d08432a9805fcd4fd8f34e1a98516f8b6f5e3b4f63" exitCode=0 Feb 17 20:26:24 crc kubenswrapper[4793]: I0217 20:26:24.844700 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-sh9tg" event={"ID":"5156dde4-196e-492f-a7a0-5c35b403b79c","Type":"ContainerDied","Data":"67f7f8cc1245d022eb9d60d08432a9805fcd4fd8f34e1a98516f8b6f5e3b4f63"} Feb 17 20:26:24 crc kubenswrapper[4793]: I0217 20:26:24.848006 4793 generic.go:334] "Generic (PLEG): container finished" podID="ce31ed9b-5e96-435c-bda8-ab78e42c647f" containerID="4e6cf63000667a6a635cc3fdbcef536b5596f36e0ba6ef8eb2d21e81a544b629" exitCode=0 Feb 17 20:26:24 crc kubenswrapper[4793]: I0217 20:26:24.848076 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2df3-account-create-update-jksfn" 
event={"ID":"ce31ed9b-5e96-435c-bda8-ab78e42c647f","Type":"ContainerDied","Data":"4e6cf63000667a6a635cc3fdbcef536b5596f36e0ba6ef8eb2d21e81a544b629"} Feb 17 20:26:24 crc kubenswrapper[4793]: I0217 20:26:24.849867 4793 generic.go:334] "Generic (PLEG): container finished" podID="0b543bb0-a68e-4940-a4dd-ebfd8736d2fe" containerID="196d9afcf3b6d8aa7f632b287c0b087c79252793b083c97bb5c4862d3c549e9d" exitCode=0 Feb 17 20:26:24 crc kubenswrapper[4793]: I0217 20:26:24.849963 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0bda-account-create-update-x2lw8" event={"ID":"0b543bb0-a68e-4940-a4dd-ebfd8736d2fe","Type":"ContainerDied","Data":"196d9afcf3b6d8aa7f632b287c0b087c79252793b083c97bb5c4862d3c549e9d"} Feb 17 20:26:24 crc kubenswrapper[4793]: I0217 20:26:24.851404 4793 generic.go:334] "Generic (PLEG): container finished" podID="e4beafcc-5a98-4860-8527-7c85e85b6eb5" containerID="6d83704960d64a7d3ccb7d477f200a421c42f331ea2e66af0b58a42b944ada70" exitCode=0 Feb 17 20:26:24 crc kubenswrapper[4793]: I0217 20:26:24.851463 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-054f-account-create-update-fh4mx" event={"ID":"e4beafcc-5a98-4860-8527-7c85e85b6eb5","Type":"ContainerDied","Data":"6d83704960d64a7d3ccb7d477f200a421c42f331ea2e66af0b58a42b944ada70"} Feb 17 20:26:24 crc kubenswrapper[4793]: I0217 20:26:24.862524 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f41a37ae-4155-4b06-ad0b-46cfe53de634","Type":"ContainerStarted","Data":"36b635d0e7aef42543c4d3c3bc97321b326e5bacc195f1823a591c65d32ec688"} Feb 17 20:26:24 crc kubenswrapper[4793]: I0217 20:26:24.869879 4793 generic.go:334] "Generic (PLEG): container finished" podID="888c0279-526c-49b5-a292-bb66ff8be459" containerID="a049ade8c134dfeda3750b794896000259c0ff61abe8dec2610e7ec2bf1fe0c9" exitCode=0 Feb 17 20:26:24 crc kubenswrapper[4793]: I0217 20:26:24.869974 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-db-create-jjc56" event={"ID":"888c0279-526c-49b5-a292-bb66ff8be459","Type":"ContainerDied","Data":"a049ade8c134dfeda3750b794896000259c0ff61abe8dec2610e7ec2bf1fe0c9"} Feb 17 20:26:24 crc kubenswrapper[4793]: I0217 20:26:24.903477 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 17 20:26:24 crc kubenswrapper[4793]: I0217 20:26:24.903517 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 17 20:26:24 crc kubenswrapper[4793]: I0217 20:26:24.909387 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 17 20:26:24 crc kubenswrapper[4793]: I0217 20:26:24.935868 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=15.935846923 podStartE2EDuration="15.935846923s" podCreationTimestamp="2026-02-17 20:26:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:26:24.923819825 +0000 UTC m=+1060.215518146" watchObservedRunningTime="2026-02-17 20:26:24.935846923 +0000 UTC m=+1060.227545234" Feb 17 20:26:25 crc kubenswrapper[4793]: I0217 20:26:25.269497 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-c9fsh" Feb 17 20:26:25 crc kubenswrapper[4793]: I0217 20:26:25.415362 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1dc87fa8-cc3f-4e13-8449-ad8338311cf5-operator-scripts\") pod \"1dc87fa8-cc3f-4e13-8449-ad8338311cf5\" (UID: \"1dc87fa8-cc3f-4e13-8449-ad8338311cf5\") " Feb 17 20:26:25 crc kubenswrapper[4793]: I0217 20:26:25.415403 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vw4sk\" (UniqueName: \"kubernetes.io/projected/1dc87fa8-cc3f-4e13-8449-ad8338311cf5-kube-api-access-vw4sk\") pod \"1dc87fa8-cc3f-4e13-8449-ad8338311cf5\" (UID: \"1dc87fa8-cc3f-4e13-8449-ad8338311cf5\") " Feb 17 20:26:25 crc kubenswrapper[4793]: I0217 20:26:25.415888 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dc87fa8-cc3f-4e13-8449-ad8338311cf5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1dc87fa8-cc3f-4e13-8449-ad8338311cf5" (UID: "1dc87fa8-cc3f-4e13-8449-ad8338311cf5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:26:25 crc kubenswrapper[4793]: I0217 20:26:25.429174 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dc87fa8-cc3f-4e13-8449-ad8338311cf5-kube-api-access-vw4sk" (OuterVolumeSpecName: "kube-api-access-vw4sk") pod "1dc87fa8-cc3f-4e13-8449-ad8338311cf5" (UID: "1dc87fa8-cc3f-4e13-8449-ad8338311cf5"). InnerVolumeSpecName "kube-api-access-vw4sk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:26:25 crc kubenswrapper[4793]: I0217 20:26:25.517599 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1dc87fa8-cc3f-4e13-8449-ad8338311cf5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:25 crc kubenswrapper[4793]: I0217 20:26:25.517630 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vw4sk\" (UniqueName: \"kubernetes.io/projected/1dc87fa8-cc3f-4e13-8449-ad8338311cf5-kube-api-access-vw4sk\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:25 crc kubenswrapper[4793]: I0217 20:26:25.886828 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-c9fsh" Feb 17 20:26:25 crc kubenswrapper[4793]: I0217 20:26:25.886880 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-c9fsh" event={"ID":"1dc87fa8-cc3f-4e13-8449-ad8338311cf5","Type":"ContainerDied","Data":"1d9c4239757e9143e46122ae7f0efe9f0863bfc5095628ac35a9392509e84df0"} Feb 17 20:26:25 crc kubenswrapper[4793]: I0217 20:26:25.886908 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d9c4239757e9143e46122ae7f0efe9f0863bfc5095628ac35a9392509e84df0" Feb 17 20:26:25 crc kubenswrapper[4793]: I0217 20:26:25.895403 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 17 20:26:28 crc kubenswrapper[4793]: I0217 20:26:28.938320 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2df3-account-create-update-jksfn" event={"ID":"ce31ed9b-5e96-435c-bda8-ab78e42c647f","Type":"ContainerDied","Data":"10532a10da9a1eaaff666bf7b8291bf477cd1666d9b0b7c2f7850735ecb87c84"} Feb 17 20:26:28 crc kubenswrapper[4793]: I0217 20:26:28.938851 4793 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="10532a10da9a1eaaff666bf7b8291bf477cd1666d9b0b7c2f7850735ecb87c84" Feb 17 20:26:28 crc kubenswrapper[4793]: I0217 20:26:28.986061 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-054f-account-create-update-fh4mx" event={"ID":"e4beafcc-5a98-4860-8527-7c85e85b6eb5","Type":"ContainerDied","Data":"989b8662a7c18bd29023077f6440beeba73d460768d5fcf7bd045776fb5ddec7"} Feb 17 20:26:28 crc kubenswrapper[4793]: I0217 20:26:28.986101 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="989b8662a7c18bd29023077f6440beeba73d460768d5fcf7bd045776fb5ddec7" Feb 17 20:26:29 crc kubenswrapper[4793]: I0217 20:26:29.025533 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-sh9tg" event={"ID":"5156dde4-196e-492f-a7a0-5c35b403b79c","Type":"ContainerDied","Data":"5293f250509364f5c095ac3ccd35bd7e656c867d66dc88b1ab3830e8f4c106ee"} Feb 17 20:26:29 crc kubenswrapper[4793]: I0217 20:26:29.025568 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5293f250509364f5c095ac3ccd35bd7e656c867d66dc88b1ab3830e8f4c106ee" Feb 17 20:26:29 crc kubenswrapper[4793]: I0217 20:26:29.037120 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-054f-account-create-update-fh4mx" Feb 17 20:26:29 crc kubenswrapper[4793]: I0217 20:26:29.039656 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-2df3-account-create-update-jksfn" Feb 17 20:26:29 crc kubenswrapper[4793]: I0217 20:26:29.050523 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-sh9tg" Feb 17 20:26:29 crc kubenswrapper[4793]: I0217 20:26:29.061190 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-0bda-account-create-update-x2lw8" Feb 17 20:26:29 crc kubenswrapper[4793]: I0217 20:26:29.204376 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dphb4\" (UniqueName: \"kubernetes.io/projected/e4beafcc-5a98-4860-8527-7c85e85b6eb5-kube-api-access-dphb4\") pod \"e4beafcc-5a98-4860-8527-7c85e85b6eb5\" (UID: \"e4beafcc-5a98-4860-8527-7c85e85b6eb5\") " Feb 17 20:26:29 crc kubenswrapper[4793]: I0217 20:26:29.204460 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rl6gk\" (UniqueName: \"kubernetes.io/projected/ce31ed9b-5e96-435c-bda8-ab78e42c647f-kube-api-access-rl6gk\") pod \"ce31ed9b-5e96-435c-bda8-ab78e42c647f\" (UID: \"ce31ed9b-5e96-435c-bda8-ab78e42c647f\") " Feb 17 20:26:29 crc kubenswrapper[4793]: I0217 20:26:29.204501 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b543bb0-a68e-4940-a4dd-ebfd8736d2fe-operator-scripts\") pod \"0b543bb0-a68e-4940-a4dd-ebfd8736d2fe\" (UID: \"0b543bb0-a68e-4940-a4dd-ebfd8736d2fe\") " Feb 17 20:26:29 crc kubenswrapper[4793]: I0217 20:26:29.204531 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce31ed9b-5e96-435c-bda8-ab78e42c647f-operator-scripts\") pod \"ce31ed9b-5e96-435c-bda8-ab78e42c647f\" (UID: \"ce31ed9b-5e96-435c-bda8-ab78e42c647f\") " Feb 17 20:26:29 crc kubenswrapper[4793]: I0217 20:26:29.204643 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4dq2\" (UniqueName: \"kubernetes.io/projected/5156dde4-196e-492f-a7a0-5c35b403b79c-kube-api-access-s4dq2\") pod \"5156dde4-196e-492f-a7a0-5c35b403b79c\" (UID: \"5156dde4-196e-492f-a7a0-5c35b403b79c\") " Feb 17 20:26:29 crc kubenswrapper[4793]: I0217 20:26:29.204706 4793 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvxq\" (UniqueName: \"kubernetes.io/projected/0b543bb0-a68e-4940-a4dd-ebfd8736d2fe-kube-api-access-ngvxq\") pod \"0b543bb0-a68e-4940-a4dd-ebfd8736d2fe\" (UID: \"0b543bb0-a68e-4940-a4dd-ebfd8736d2fe\") " Feb 17 20:26:29 crc kubenswrapper[4793]: I0217 20:26:29.204794 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4beafcc-5a98-4860-8527-7c85e85b6eb5-operator-scripts\") pod \"e4beafcc-5a98-4860-8527-7c85e85b6eb5\" (UID: \"e4beafcc-5a98-4860-8527-7c85e85b6eb5\") " Feb 17 20:26:29 crc kubenswrapper[4793]: I0217 20:26:29.204818 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5156dde4-196e-492f-a7a0-5c35b403b79c-operator-scripts\") pod \"5156dde4-196e-492f-a7a0-5c35b403b79c\" (UID: \"5156dde4-196e-492f-a7a0-5c35b403b79c\") " Feb 17 20:26:29 crc kubenswrapper[4793]: I0217 20:26:29.205231 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b543bb0-a68e-4940-a4dd-ebfd8736d2fe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0b543bb0-a68e-4940-a4dd-ebfd8736d2fe" (UID: "0b543bb0-a68e-4940-a4dd-ebfd8736d2fe"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:26:29 crc kubenswrapper[4793]: I0217 20:26:29.205571 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b543bb0-a68e-4940-a4dd-ebfd8736d2fe-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:29 crc kubenswrapper[4793]: I0217 20:26:29.205992 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5156dde4-196e-492f-a7a0-5c35b403b79c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5156dde4-196e-492f-a7a0-5c35b403b79c" (UID: "5156dde4-196e-492f-a7a0-5c35b403b79c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:26:29 crc kubenswrapper[4793]: I0217 20:26:29.206260 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce31ed9b-5e96-435c-bda8-ab78e42c647f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ce31ed9b-5e96-435c-bda8-ab78e42c647f" (UID: "ce31ed9b-5e96-435c-bda8-ab78e42c647f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:26:29 crc kubenswrapper[4793]: I0217 20:26:29.206261 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4beafcc-5a98-4860-8527-7c85e85b6eb5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e4beafcc-5a98-4860-8527-7c85e85b6eb5" (UID: "e4beafcc-5a98-4860-8527-7c85e85b6eb5"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:26:29 crc kubenswrapper[4793]: I0217 20:26:29.270521 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b543bb0-a68e-4940-a4dd-ebfd8736d2fe-kube-api-access-ngvxq" (OuterVolumeSpecName: "kube-api-access-ngvxq") pod "0b543bb0-a68e-4940-a4dd-ebfd8736d2fe" (UID: "0b543bb0-a68e-4940-a4dd-ebfd8736d2fe"). InnerVolumeSpecName "kube-api-access-ngvxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:26:29 crc kubenswrapper[4793]: I0217 20:26:29.270643 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5156dde4-196e-492f-a7a0-5c35b403b79c-kube-api-access-s4dq2" (OuterVolumeSpecName: "kube-api-access-s4dq2") pod "5156dde4-196e-492f-a7a0-5c35b403b79c" (UID: "5156dde4-196e-492f-a7a0-5c35b403b79c"). InnerVolumeSpecName "kube-api-access-s4dq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:26:29 crc kubenswrapper[4793]: I0217 20:26:29.270706 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce31ed9b-5e96-435c-bda8-ab78e42c647f-kube-api-access-rl6gk" (OuterVolumeSpecName: "kube-api-access-rl6gk") pod "ce31ed9b-5e96-435c-bda8-ab78e42c647f" (UID: "ce31ed9b-5e96-435c-bda8-ab78e42c647f"). InnerVolumeSpecName "kube-api-access-rl6gk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:26:29 crc kubenswrapper[4793]: I0217 20:26:29.273892 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4beafcc-5a98-4860-8527-7c85e85b6eb5-kube-api-access-dphb4" (OuterVolumeSpecName: "kube-api-access-dphb4") pod "e4beafcc-5a98-4860-8527-7c85e85b6eb5" (UID: "e4beafcc-5a98-4860-8527-7c85e85b6eb5"). InnerVolumeSpecName "kube-api-access-dphb4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:26:29 crc kubenswrapper[4793]: I0217 20:26:29.307345 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rl6gk\" (UniqueName: \"kubernetes.io/projected/ce31ed9b-5e96-435c-bda8-ab78e42c647f-kube-api-access-rl6gk\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:29 crc kubenswrapper[4793]: I0217 20:26:29.307380 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce31ed9b-5e96-435c-bda8-ab78e42c647f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:29 crc kubenswrapper[4793]: I0217 20:26:29.307391 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4dq2\" (UniqueName: \"kubernetes.io/projected/5156dde4-196e-492f-a7a0-5c35b403b79c-kube-api-access-s4dq2\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:29 crc kubenswrapper[4793]: I0217 20:26:29.307404 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvxq\" (UniqueName: \"kubernetes.io/projected/0b543bb0-a68e-4940-a4dd-ebfd8736d2fe-kube-api-access-ngvxq\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:29 crc kubenswrapper[4793]: I0217 20:26:29.307416 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4beafcc-5a98-4860-8527-7c85e85b6eb5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:29 crc kubenswrapper[4793]: I0217 20:26:29.307426 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5156dde4-196e-492f-a7a0-5c35b403b79c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:29 crc kubenswrapper[4793]: I0217 20:26:29.307437 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dphb4\" (UniqueName: \"kubernetes.io/projected/e4beafcc-5a98-4860-8527-7c85e85b6eb5-kube-api-access-dphb4\") on node \"crc\" DevicePath \"\"" 
Feb 17 20:26:30 crc kubenswrapper[4793]: I0217 20:26:30.035664 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-sh9tg" Feb 17 20:26:30 crc kubenswrapper[4793]: I0217 20:26:30.035670 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0bda-account-create-update-x2lw8" event={"ID":"0b543bb0-a68e-4940-a4dd-ebfd8736d2fe","Type":"ContainerDied","Data":"fb7e25f1bcdb738a6321239d3bb4a4ef1455a3025981d3af2991203d0e9dbe8b"} Feb 17 20:26:30 crc kubenswrapper[4793]: I0217 20:26:30.035748 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb7e25f1bcdb738a6321239d3bb4a4ef1455a3025981d3af2991203d0e9dbe8b" Feb 17 20:26:30 crc kubenswrapper[4793]: I0217 20:26:30.035758 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0bda-account-create-update-x2lw8" Feb 17 20:26:30 crc kubenswrapper[4793]: I0217 20:26:30.035833 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-054f-account-create-update-fh4mx" Feb 17 20:26:30 crc kubenswrapper[4793]: I0217 20:26:30.036214 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-2df3-account-create-update-jksfn" Feb 17 20:26:30 crc kubenswrapper[4793]: I0217 20:26:30.433884 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-78895f69c7-4gh4d" Feb 17 20:26:30 crc kubenswrapper[4793]: I0217 20:26:30.511876 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-549c9b7879-jznsj"] Feb 17 20:26:30 crc kubenswrapper[4793]: I0217 20:26:30.512229 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-549c9b7879-jznsj" podUID="fc67b702-9dc7-4333-94f4-df82b696021d" containerName="dnsmasq-dns" containerID="cri-o://5b5ca4d4f57ba39bdf276a3f7c703dbdfb00b648f17d04bed7286154fb2db551" gracePeriod=10 Feb 17 20:26:31 crc kubenswrapper[4793]: I0217 20:26:31.047163 4793 generic.go:334] "Generic (PLEG): container finished" podID="fc67b702-9dc7-4333-94f4-df82b696021d" containerID="5b5ca4d4f57ba39bdf276a3f7c703dbdfb00b648f17d04bed7286154fb2db551" exitCode=0 Feb 17 20:26:31 crc kubenswrapper[4793]: I0217 20:26:31.047253 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-549c9b7879-jznsj" event={"ID":"fc67b702-9dc7-4333-94f4-df82b696021d","Type":"ContainerDied","Data":"5b5ca4d4f57ba39bdf276a3f7c703dbdfb00b648f17d04bed7286154fb2db551"} Feb 17 20:26:32 crc kubenswrapper[4793]: I0217 20:26:32.029331 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-549c9b7879-jznsj" podUID="fc67b702-9dc7-4333-94f4-df82b696021d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.120:5353: connect: connection refused" Feb 17 20:26:32 crc kubenswrapper[4793]: I0217 20:26:32.955926 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-jjc56" Feb 17 20:26:33 crc kubenswrapper[4793]: I0217 20:26:33.064169 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-jjc56" event={"ID":"888c0279-526c-49b5-a292-bb66ff8be459","Type":"ContainerDied","Data":"e93bc4c556ebf234c79da1f631eb4d79f6a9693a4a7766d3d9d1e1c0867d36b3"} Feb 17 20:26:33 crc kubenswrapper[4793]: I0217 20:26:33.064209 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e93bc4c556ebf234c79da1f631eb4d79f6a9693a4a7766d3d9d1e1c0867d36b3" Feb 17 20:26:33 crc kubenswrapper[4793]: I0217 20:26:33.064228 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-jjc56" Feb 17 20:26:33 crc kubenswrapper[4793]: I0217 20:26:33.079239 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/888c0279-526c-49b5-a292-bb66ff8be459-operator-scripts\") pod \"888c0279-526c-49b5-a292-bb66ff8be459\" (UID: \"888c0279-526c-49b5-a292-bb66ff8be459\") " Feb 17 20:26:33 crc kubenswrapper[4793]: I0217 20:26:33.079968 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/888c0279-526c-49b5-a292-bb66ff8be459-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "888c0279-526c-49b5-a292-bb66ff8be459" (UID: "888c0279-526c-49b5-a292-bb66ff8be459"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:26:33 crc kubenswrapper[4793]: I0217 20:26:33.080111 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpgx7\" (UniqueName: \"kubernetes.io/projected/888c0279-526c-49b5-a292-bb66ff8be459-kube-api-access-bpgx7\") pod \"888c0279-526c-49b5-a292-bb66ff8be459\" (UID: \"888c0279-526c-49b5-a292-bb66ff8be459\") " Feb 17 20:26:33 crc kubenswrapper[4793]: I0217 20:26:33.080762 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/888c0279-526c-49b5-a292-bb66ff8be459-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:33 crc kubenswrapper[4793]: I0217 20:26:33.105130 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/888c0279-526c-49b5-a292-bb66ff8be459-kube-api-access-bpgx7" (OuterVolumeSpecName: "kube-api-access-bpgx7") pod "888c0279-526c-49b5-a292-bb66ff8be459" (UID: "888c0279-526c-49b5-a292-bb66ff8be459"). InnerVolumeSpecName "kube-api-access-bpgx7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:26:33 crc kubenswrapper[4793]: I0217 20:26:33.182614 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpgx7\" (UniqueName: \"kubernetes.io/projected/888c0279-526c-49b5-a292-bb66ff8be459-kube-api-access-bpgx7\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:33 crc kubenswrapper[4793]: I0217 20:26:33.746736 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-549c9b7879-jznsj" Feb 17 20:26:33 crc kubenswrapper[4793]: I0217 20:26:33.816203 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fc67b702-9dc7-4333-94f4-df82b696021d-ovsdbserver-nb\") pod \"fc67b702-9dc7-4333-94f4-df82b696021d\" (UID: \"fc67b702-9dc7-4333-94f4-df82b696021d\") " Feb 17 20:26:33 crc kubenswrapper[4793]: I0217 20:26:33.816330 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc67b702-9dc7-4333-94f4-df82b696021d-dns-svc\") pod \"fc67b702-9dc7-4333-94f4-df82b696021d\" (UID: \"fc67b702-9dc7-4333-94f4-df82b696021d\") " Feb 17 20:26:33 crc kubenswrapper[4793]: I0217 20:26:33.816362 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fc67b702-9dc7-4333-94f4-df82b696021d-ovsdbserver-sb\") pod \"fc67b702-9dc7-4333-94f4-df82b696021d\" (UID: \"fc67b702-9dc7-4333-94f4-df82b696021d\") " Feb 17 20:26:33 crc kubenswrapper[4793]: I0217 20:26:33.816976 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc67b702-9dc7-4333-94f4-df82b696021d-config\") pod \"fc67b702-9dc7-4333-94f4-df82b696021d\" (UID: \"fc67b702-9dc7-4333-94f4-df82b696021d\") " Feb 17 20:26:33 crc kubenswrapper[4793]: I0217 20:26:33.817071 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vth64\" (UniqueName: \"kubernetes.io/projected/fc67b702-9dc7-4333-94f4-df82b696021d-kube-api-access-vth64\") pod \"fc67b702-9dc7-4333-94f4-df82b696021d\" (UID: \"fc67b702-9dc7-4333-94f4-df82b696021d\") " Feb 17 20:26:33 crc kubenswrapper[4793]: I0217 20:26:33.820783 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/fc67b702-9dc7-4333-94f4-df82b696021d-kube-api-access-vth64" (OuterVolumeSpecName: "kube-api-access-vth64") pod "fc67b702-9dc7-4333-94f4-df82b696021d" (UID: "fc67b702-9dc7-4333-94f4-df82b696021d"). InnerVolumeSpecName "kube-api-access-vth64". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:26:33 crc kubenswrapper[4793]: I0217 20:26:33.861348 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc67b702-9dc7-4333-94f4-df82b696021d-config" (OuterVolumeSpecName: "config") pod "fc67b702-9dc7-4333-94f4-df82b696021d" (UID: "fc67b702-9dc7-4333-94f4-df82b696021d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:26:33 crc kubenswrapper[4793]: I0217 20:26:33.868254 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc67b702-9dc7-4333-94f4-df82b696021d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fc67b702-9dc7-4333-94f4-df82b696021d" (UID: "fc67b702-9dc7-4333-94f4-df82b696021d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:26:33 crc kubenswrapper[4793]: I0217 20:26:33.868552 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc67b702-9dc7-4333-94f4-df82b696021d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fc67b702-9dc7-4333-94f4-df82b696021d" (UID: "fc67b702-9dc7-4333-94f4-df82b696021d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:26:33 crc kubenswrapper[4793]: I0217 20:26:33.881159 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc67b702-9dc7-4333-94f4-df82b696021d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fc67b702-9dc7-4333-94f4-df82b696021d" (UID: "fc67b702-9dc7-4333-94f4-df82b696021d"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:26:33 crc kubenswrapper[4793]: I0217 20:26:33.917911 4793 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc67b702-9dc7-4333-94f4-df82b696021d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:33 crc kubenswrapper[4793]: I0217 20:26:33.917945 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fc67b702-9dc7-4333-94f4-df82b696021d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:33 crc kubenswrapper[4793]: I0217 20:26:33.917956 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc67b702-9dc7-4333-94f4-df82b696021d-config\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:33 crc kubenswrapper[4793]: I0217 20:26:33.917965 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vth64\" (UniqueName: \"kubernetes.io/projected/fc67b702-9dc7-4333-94f4-df82b696021d-kube-api-access-vth64\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:33 crc kubenswrapper[4793]: I0217 20:26:33.917974 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fc67b702-9dc7-4333-94f4-df82b696021d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:34 crc kubenswrapper[4793]: I0217 20:26:34.075848 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-srrw9" event={"ID":"01675bbf-5d1b-4461-917e-65af0112b569","Type":"ContainerStarted","Data":"add472df2c083efb00cce6d30514c33b695ed0807db4f72429a61ecc15ff6ae8"} Feb 17 20:26:34 crc kubenswrapper[4793]: I0217 20:26:34.081056 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-549c9b7879-jznsj" 
event={"ID":"fc67b702-9dc7-4333-94f4-df82b696021d","Type":"ContainerDied","Data":"fee2df938a1bf4d5e8c084bdd81308358f475326afc6e1c781157f8c50bf0db0"} Feb 17 20:26:34 crc kubenswrapper[4793]: I0217 20:26:34.081121 4793 scope.go:117] "RemoveContainer" containerID="5b5ca4d4f57ba39bdf276a3f7c703dbdfb00b648f17d04bed7286154fb2db551" Feb 17 20:26:34 crc kubenswrapper[4793]: I0217 20:26:34.081282 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-549c9b7879-jznsj" Feb 17 20:26:34 crc kubenswrapper[4793]: I0217 20:26:34.098537 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-crnqr" event={"ID":"685042b7-f2e2-4163-9ee8-7e0dc67d9ec2","Type":"ContainerStarted","Data":"7251798803b3ed112c5600803202520e70a3272aff8dd4ebd6a365d6e3ffbf53"} Feb 17 20:26:34 crc kubenswrapper[4793]: I0217 20:26:34.101630 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-srrw9" podStartSLOduration=3.124148453 podStartE2EDuration="13.101612604s" podCreationTimestamp="2026-02-17 20:26:21 +0000 UTC" firstStartedPulling="2026-02-17 20:26:23.452063512 +0000 UTC m=+1058.743761823" lastFinishedPulling="2026-02-17 20:26:33.429527663 +0000 UTC m=+1068.721225974" observedRunningTime="2026-02-17 20:26:34.094576379 +0000 UTC m=+1069.386274700" watchObservedRunningTime="2026-02-17 20:26:34.101612604 +0000 UTC m=+1069.393310925" Feb 17 20:26:34 crc kubenswrapper[4793]: I0217 20:26:34.123550 4793 scope.go:117] "RemoveContainer" containerID="8a5d4be5dbed970881e3100d84aae684886c4f8adcb6fc3dd563152bf6737c83" Feb 17 20:26:34 crc kubenswrapper[4793]: I0217 20:26:34.128820 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-sync-crnqr" podStartSLOduration=2.877857114 podStartE2EDuration="13.12880126s" podCreationTimestamp="2026-02-17 20:26:21 +0000 UTC" firstStartedPulling="2026-02-17 20:26:23.267037765 +0000 UTC m=+1058.558736076" 
lastFinishedPulling="2026-02-17 20:26:33.517981911 +0000 UTC m=+1068.809680222" observedRunningTime="2026-02-17 20:26:34.114242448 +0000 UTC m=+1069.405940789" watchObservedRunningTime="2026-02-17 20:26:34.12880126 +0000 UTC m=+1069.420499591" Feb 17 20:26:34 crc kubenswrapper[4793]: I0217 20:26:34.151396 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-549c9b7879-jznsj"] Feb 17 20:26:34 crc kubenswrapper[4793]: I0217 20:26:34.162574 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-549c9b7879-jznsj"] Feb 17 20:26:35 crc kubenswrapper[4793]: I0217 20:26:35.561975 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc67b702-9dc7-4333-94f4-df82b696021d" path="/var/lib/kubelet/pods/fc67b702-9dc7-4333-94f4-df82b696021d/volumes" Feb 17 20:26:38 crc kubenswrapper[4793]: I0217 20:26:38.136341 4793 generic.go:334] "Generic (PLEG): container finished" podID="685042b7-f2e2-4163-9ee8-7e0dc67d9ec2" containerID="7251798803b3ed112c5600803202520e70a3272aff8dd4ebd6a365d6e3ffbf53" exitCode=0 Feb 17 20:26:38 crc kubenswrapper[4793]: I0217 20:26:38.136637 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-crnqr" event={"ID":"685042b7-f2e2-4163-9ee8-7e0dc67d9ec2","Type":"ContainerDied","Data":"7251798803b3ed112c5600803202520e70a3272aff8dd4ebd6a365d6e3ffbf53"} Feb 17 20:26:39 crc kubenswrapper[4793]: I0217 20:26:39.146630 4793 generic.go:334] "Generic (PLEG): container finished" podID="01675bbf-5d1b-4461-917e-65af0112b569" containerID="add472df2c083efb00cce6d30514c33b695ed0807db4f72429a61ecc15ff6ae8" exitCode=0 Feb 17 20:26:39 crc kubenswrapper[4793]: I0217 20:26:39.146875 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-srrw9" event={"ID":"01675bbf-5d1b-4461-917e-65af0112b569","Type":"ContainerDied","Data":"add472df2c083efb00cce6d30514c33b695ed0807db4f72429a61ecc15ff6ae8"} Feb 17 20:26:39 crc kubenswrapper[4793]: I0217 
20:26:39.528238 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-crnqr" Feb 17 20:26:39 crc kubenswrapper[4793]: I0217 20:26:39.714653 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/685042b7-f2e2-4163-9ee8-7e0dc67d9ec2-db-sync-config-data\") pod \"685042b7-f2e2-4163-9ee8-7e0dc67d9ec2\" (UID: \"685042b7-f2e2-4163-9ee8-7e0dc67d9ec2\") " Feb 17 20:26:39 crc kubenswrapper[4793]: I0217 20:26:39.714740 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/685042b7-f2e2-4163-9ee8-7e0dc67d9ec2-combined-ca-bundle\") pod \"685042b7-f2e2-4163-9ee8-7e0dc67d9ec2\" (UID: \"685042b7-f2e2-4163-9ee8-7e0dc67d9ec2\") " Feb 17 20:26:39 crc kubenswrapper[4793]: I0217 20:26:39.714803 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvpzg\" (UniqueName: \"kubernetes.io/projected/685042b7-f2e2-4163-9ee8-7e0dc67d9ec2-kube-api-access-wvpzg\") pod \"685042b7-f2e2-4163-9ee8-7e0dc67d9ec2\" (UID: \"685042b7-f2e2-4163-9ee8-7e0dc67d9ec2\") " Feb 17 20:26:39 crc kubenswrapper[4793]: I0217 20:26:39.715464 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/685042b7-f2e2-4163-9ee8-7e0dc67d9ec2-config-data\") pod \"685042b7-f2e2-4163-9ee8-7e0dc67d9ec2\" (UID: \"685042b7-f2e2-4163-9ee8-7e0dc67d9ec2\") " Feb 17 20:26:39 crc kubenswrapper[4793]: I0217 20:26:39.726864 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/685042b7-f2e2-4163-9ee8-7e0dc67d9ec2-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "685042b7-f2e2-4163-9ee8-7e0dc67d9ec2" (UID: "685042b7-f2e2-4163-9ee8-7e0dc67d9ec2"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:26:39 crc kubenswrapper[4793]: I0217 20:26:39.726970 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/685042b7-f2e2-4163-9ee8-7e0dc67d9ec2-kube-api-access-wvpzg" (OuterVolumeSpecName: "kube-api-access-wvpzg") pod "685042b7-f2e2-4163-9ee8-7e0dc67d9ec2" (UID: "685042b7-f2e2-4163-9ee8-7e0dc67d9ec2"). InnerVolumeSpecName "kube-api-access-wvpzg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:26:39 crc kubenswrapper[4793]: I0217 20:26:39.743136 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/685042b7-f2e2-4163-9ee8-7e0dc67d9ec2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "685042b7-f2e2-4163-9ee8-7e0dc67d9ec2" (UID: "685042b7-f2e2-4163-9ee8-7e0dc67d9ec2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:26:39 crc kubenswrapper[4793]: I0217 20:26:39.789614 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/685042b7-f2e2-4163-9ee8-7e0dc67d9ec2-config-data" (OuterVolumeSpecName: "config-data") pod "685042b7-f2e2-4163-9ee8-7e0dc67d9ec2" (UID: "685042b7-f2e2-4163-9ee8-7e0dc67d9ec2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:26:39 crc kubenswrapper[4793]: I0217 20:26:39.817158 4793 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/685042b7-f2e2-4163-9ee8-7e0dc67d9ec2-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:39 crc kubenswrapper[4793]: I0217 20:26:39.817200 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/685042b7-f2e2-4163-9ee8-7e0dc67d9ec2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:39 crc kubenswrapper[4793]: I0217 20:26:39.817212 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvpzg\" (UniqueName: \"kubernetes.io/projected/685042b7-f2e2-4163-9ee8-7e0dc67d9ec2-kube-api-access-wvpzg\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:39 crc kubenswrapper[4793]: I0217 20:26:39.817226 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/685042b7-f2e2-4163-9ee8-7e0dc67d9ec2-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:40 crc kubenswrapper[4793]: I0217 20:26:40.160342 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-crnqr" event={"ID":"685042b7-f2e2-4163-9ee8-7e0dc67d9ec2","Type":"ContainerDied","Data":"0b92ec33f84fecfe00669a3a85575b66e808cdacaa64db2cf140c73505be7144"} Feb 17 20:26:40 crc kubenswrapper[4793]: I0217 20:26:40.160394 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b92ec33f84fecfe00669a3a85575b66e808cdacaa64db2cf140c73505be7144" Feb 17 20:26:40 crc kubenswrapper[4793]: I0217 20:26:40.160411 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-crnqr" Feb 17 20:26:40 crc kubenswrapper[4793]: I0217 20:26:40.581451 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-srrw9" Feb 17 20:26:40 crc kubenswrapper[4793]: I0217 20:26:40.736121 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vb5d2\" (UniqueName: \"kubernetes.io/projected/01675bbf-5d1b-4461-917e-65af0112b569-kube-api-access-vb5d2\") pod \"01675bbf-5d1b-4461-917e-65af0112b569\" (UID: \"01675bbf-5d1b-4461-917e-65af0112b569\") " Feb 17 20:26:40 crc kubenswrapper[4793]: I0217 20:26:40.736263 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01675bbf-5d1b-4461-917e-65af0112b569-config-data\") pod \"01675bbf-5d1b-4461-917e-65af0112b569\" (UID: \"01675bbf-5d1b-4461-917e-65af0112b569\") " Feb 17 20:26:40 crc kubenswrapper[4793]: I0217 20:26:40.736331 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01675bbf-5d1b-4461-917e-65af0112b569-combined-ca-bundle\") pod \"01675bbf-5d1b-4461-917e-65af0112b569\" (UID: \"01675bbf-5d1b-4461-917e-65af0112b569\") " Feb 17 20:26:40 crc kubenswrapper[4793]: I0217 20:26:40.740943 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01675bbf-5d1b-4461-917e-65af0112b569-kube-api-access-vb5d2" (OuterVolumeSpecName: "kube-api-access-vb5d2") pod "01675bbf-5d1b-4461-917e-65af0112b569" (UID: "01675bbf-5d1b-4461-917e-65af0112b569"). InnerVolumeSpecName "kube-api-access-vb5d2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:26:40 crc kubenswrapper[4793]: I0217 20:26:40.771681 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01675bbf-5d1b-4461-917e-65af0112b569-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "01675bbf-5d1b-4461-917e-65af0112b569" (UID: "01675bbf-5d1b-4461-917e-65af0112b569"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:26:40 crc kubenswrapper[4793]: I0217 20:26:40.786219 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01675bbf-5d1b-4461-917e-65af0112b569-config-data" (OuterVolumeSpecName: "config-data") pod "01675bbf-5d1b-4461-917e-65af0112b569" (UID: "01675bbf-5d1b-4461-917e-65af0112b569"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:26:40 crc kubenswrapper[4793]: I0217 20:26:40.838307 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01675bbf-5d1b-4461-917e-65af0112b569-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:40 crc kubenswrapper[4793]: I0217 20:26:40.838354 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01675bbf-5d1b-4461-917e-65af0112b569-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:40 crc kubenswrapper[4793]: I0217 20:26:40.838378 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vb5d2\" (UniqueName: \"kubernetes.io/projected/01675bbf-5d1b-4461-917e-65af0112b569-kube-api-access-vb5d2\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.171653 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-srrw9" event={"ID":"01675bbf-5d1b-4461-917e-65af0112b569","Type":"ContainerDied","Data":"2e4d026ceca0b180270e31672c427317e34b2fb2a0d2ee675fb67420ebffdfe0"} Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.171987 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e4d026ceca0b180270e31672c427317e34b2fb2a0d2ee675fb67420ebffdfe0" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.171744 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-srrw9" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.440633 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b649df669-vnfk4"] Feb 17 20:26:41 crc kubenswrapper[4793]: E0217 20:26:41.442529 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc67b702-9dc7-4333-94f4-df82b696021d" containerName="init" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.442555 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc67b702-9dc7-4333-94f4-df82b696021d" containerName="init" Feb 17 20:26:41 crc kubenswrapper[4793]: E0217 20:26:41.442583 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc67b702-9dc7-4333-94f4-df82b696021d" containerName="dnsmasq-dns" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.442593 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc67b702-9dc7-4333-94f4-df82b696021d" containerName="dnsmasq-dns" Feb 17 20:26:41 crc kubenswrapper[4793]: E0217 20:26:41.442606 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="685042b7-f2e2-4163-9ee8-7e0dc67d9ec2" containerName="watcher-db-sync" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.442612 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="685042b7-f2e2-4163-9ee8-7e0dc67d9ec2" containerName="watcher-db-sync" Feb 17 20:26:41 crc kubenswrapper[4793]: E0217 20:26:41.442630 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f597f001-978e-45d1-a71d-f1fc89624792" containerName="dnsmasq-dns" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.442640 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f597f001-978e-45d1-a71d-f1fc89624792" containerName="dnsmasq-dns" Feb 17 20:26:41 crc kubenswrapper[4793]: E0217 20:26:41.442652 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4beafcc-5a98-4860-8527-7c85e85b6eb5" containerName="mariadb-account-create-update" Feb 17 
20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.442661 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4beafcc-5a98-4860-8527-7c85e85b6eb5" containerName="mariadb-account-create-update" Feb 17 20:26:41 crc kubenswrapper[4793]: E0217 20:26:41.442679 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01675bbf-5d1b-4461-917e-65af0112b569" containerName="keystone-db-sync" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.442701 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="01675bbf-5d1b-4461-917e-65af0112b569" containerName="keystone-db-sync" Feb 17 20:26:41 crc kubenswrapper[4793]: E0217 20:26:41.442715 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5156dde4-196e-492f-a7a0-5c35b403b79c" containerName="mariadb-database-create" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.442721 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5156dde4-196e-492f-a7a0-5c35b403b79c" containerName="mariadb-database-create" Feb 17 20:26:41 crc kubenswrapper[4793]: E0217 20:26:41.442735 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="888c0279-526c-49b5-a292-bb66ff8be459" containerName="mariadb-database-create" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.442741 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="888c0279-526c-49b5-a292-bb66ff8be459" containerName="mariadb-database-create" Feb 17 20:26:41 crc kubenswrapper[4793]: E0217 20:26:41.442751 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b543bb0-a68e-4940-a4dd-ebfd8736d2fe" containerName="mariadb-account-create-update" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.442757 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b543bb0-a68e-4940-a4dd-ebfd8736d2fe" containerName="mariadb-account-create-update" Feb 17 20:26:41 crc kubenswrapper[4793]: E0217 20:26:41.442768 4793 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ce31ed9b-5e96-435c-bda8-ab78e42c647f" containerName="mariadb-account-create-update" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.442774 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce31ed9b-5e96-435c-bda8-ab78e42c647f" containerName="mariadb-account-create-update" Feb 17 20:26:41 crc kubenswrapper[4793]: E0217 20:26:41.442784 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f597f001-978e-45d1-a71d-f1fc89624792" containerName="init" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.442790 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f597f001-978e-45d1-a71d-f1fc89624792" containerName="init" Feb 17 20:26:41 crc kubenswrapper[4793]: E0217 20:26:41.442806 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dc87fa8-cc3f-4e13-8449-ad8338311cf5" containerName="mariadb-database-create" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.442812 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dc87fa8-cc3f-4e13-8449-ad8338311cf5" containerName="mariadb-database-create" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.443150 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4beafcc-5a98-4860-8527-7c85e85b6eb5" containerName="mariadb-account-create-update" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.443181 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="685042b7-f2e2-4163-9ee8-7e0dc67d9ec2" containerName="watcher-db-sync" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.443212 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b543bb0-a68e-4940-a4dd-ebfd8736d2fe" containerName="mariadb-account-create-update" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.443227 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce31ed9b-5e96-435c-bda8-ab78e42c647f" containerName="mariadb-account-create-update" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 
20:26:41.443248 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="888c0279-526c-49b5-a292-bb66ff8be459" containerName="mariadb-database-create" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.443268 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dc87fa8-cc3f-4e13-8449-ad8338311cf5" containerName="mariadb-database-create" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.443283 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="5156dde4-196e-492f-a7a0-5c35b403b79c" containerName="mariadb-database-create" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.443303 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f597f001-978e-45d1-a71d-f1fc89624792" containerName="dnsmasq-dns" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.443323 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="01675bbf-5d1b-4461-917e-65af0112b569" containerName="keystone-db-sync" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.443344 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc67b702-9dc7-4333-94f4-df82b696021d" containerName="dnsmasq-dns" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.444716 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b649df669-vnfk4" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.492349 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-8xslw"] Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.493452 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-8xslw" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.496740 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.496912 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.497007 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-92f9d" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.497189 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.498468 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.515774 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b649df669-vnfk4"] Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.537833 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-8xslw"] Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.552189 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e573ed33-28a6-4a91-a8ca-e5d1c87b2f60-ovsdbserver-sb\") pod \"dnsmasq-dns-b649df669-vnfk4\" (UID: \"e573ed33-28a6-4a91-a8ca-e5d1c87b2f60\") " pod="openstack/dnsmasq-dns-b649df669-vnfk4" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.552248 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e573ed33-28a6-4a91-a8ca-e5d1c87b2f60-config\") pod \"dnsmasq-dns-b649df669-vnfk4\" (UID: \"e573ed33-28a6-4a91-a8ca-e5d1c87b2f60\") " pod="openstack/dnsmasq-dns-b649df669-vnfk4" 
Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.552272 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e573ed33-28a6-4a91-a8ca-e5d1c87b2f60-dns-svc\") pod \"dnsmasq-dns-b649df669-vnfk4\" (UID: \"e573ed33-28a6-4a91-a8ca-e5d1c87b2f60\") " pod="openstack/dnsmasq-dns-b649df669-vnfk4" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.552297 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e573ed33-28a6-4a91-a8ca-e5d1c87b2f60-dns-swift-storage-0\") pod \"dnsmasq-dns-b649df669-vnfk4\" (UID: \"e573ed33-28a6-4a91-a8ca-e5d1c87b2f60\") " pod="openstack/dnsmasq-dns-b649df669-vnfk4" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.552319 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt892\" (UniqueName: \"kubernetes.io/projected/e573ed33-28a6-4a91-a8ca-e5d1c87b2f60-kube-api-access-pt892\") pod \"dnsmasq-dns-b649df669-vnfk4\" (UID: \"e573ed33-28a6-4a91-a8ca-e5d1c87b2f60\") " pod="openstack/dnsmasq-dns-b649df669-vnfk4" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.552361 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f7aed618-93d7-4bc0-a99d-e4d03be8fbfd-credential-keys\") pod \"keystone-bootstrap-8xslw\" (UID: \"f7aed618-93d7-4bc0-a99d-e4d03be8fbfd\") " pod="openstack/keystone-bootstrap-8xslw" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.552381 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7aed618-93d7-4bc0-a99d-e4d03be8fbfd-combined-ca-bundle\") pod \"keystone-bootstrap-8xslw\" (UID: \"f7aed618-93d7-4bc0-a99d-e4d03be8fbfd\") " 
pod="openstack/keystone-bootstrap-8xslw" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.552403 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlxw9\" (UniqueName: \"kubernetes.io/projected/f7aed618-93d7-4bc0-a99d-e4d03be8fbfd-kube-api-access-tlxw9\") pod \"keystone-bootstrap-8xslw\" (UID: \"f7aed618-93d7-4bc0-a99d-e4d03be8fbfd\") " pod="openstack/keystone-bootstrap-8xslw" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.552430 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f7aed618-93d7-4bc0-a99d-e4d03be8fbfd-fernet-keys\") pod \"keystone-bootstrap-8xslw\" (UID: \"f7aed618-93d7-4bc0-a99d-e4d03be8fbfd\") " pod="openstack/keystone-bootstrap-8xslw" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.552447 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e573ed33-28a6-4a91-a8ca-e5d1c87b2f60-ovsdbserver-nb\") pod \"dnsmasq-dns-b649df669-vnfk4\" (UID: \"e573ed33-28a6-4a91-a8ca-e5d1c87b2f60\") " pod="openstack/dnsmasq-dns-b649df669-vnfk4" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.552474 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7aed618-93d7-4bc0-a99d-e4d03be8fbfd-config-data\") pod \"keystone-bootstrap-8xslw\" (UID: \"f7aed618-93d7-4bc0-a99d-e4d03be8fbfd\") " pod="openstack/keystone-bootstrap-8xslw" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.552505 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7aed618-93d7-4bc0-a99d-e4d03be8fbfd-scripts\") pod \"keystone-bootstrap-8xslw\" (UID: \"f7aed618-93d7-4bc0-a99d-e4d03be8fbfd\") " 
pod="openstack/keystone-bootstrap-8xslw" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.608747 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.610012 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.614856 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-zntbz" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.614967 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.625839 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.627153 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.634301 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.654178 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.654833 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlxw9\" (UniqueName: \"kubernetes.io/projected/f7aed618-93d7-4bc0-a99d-e4d03be8fbfd-kube-api-access-tlxw9\") pod \"keystone-bootstrap-8xslw\" (UID: \"f7aed618-93d7-4bc0-a99d-e4d03be8fbfd\") " pod="openstack/keystone-bootstrap-8xslw" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.654884 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/f7aed618-93d7-4bc0-a99d-e4d03be8fbfd-fernet-keys\") pod \"keystone-bootstrap-8xslw\" (UID: \"f7aed618-93d7-4bc0-a99d-e4d03be8fbfd\") " pod="openstack/keystone-bootstrap-8xslw" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.654914 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e573ed33-28a6-4a91-a8ca-e5d1c87b2f60-ovsdbserver-nb\") pod \"dnsmasq-dns-b649df669-vnfk4\" (UID: \"e573ed33-28a6-4a91-a8ca-e5d1c87b2f60\") " pod="openstack/dnsmasq-dns-b649df669-vnfk4" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.654952 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7aed618-93d7-4bc0-a99d-e4d03be8fbfd-config-data\") pod \"keystone-bootstrap-8xslw\" (UID: \"f7aed618-93d7-4bc0-a99d-e4d03be8fbfd\") " pod="openstack/keystone-bootstrap-8xslw" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.654993 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7aed618-93d7-4bc0-a99d-e4d03be8fbfd-scripts\") pod \"keystone-bootstrap-8xslw\" (UID: \"f7aed618-93d7-4bc0-a99d-e4d03be8fbfd\") " pod="openstack/keystone-bootstrap-8xslw" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.655025 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e573ed33-28a6-4a91-a8ca-e5d1c87b2f60-ovsdbserver-sb\") pod \"dnsmasq-dns-b649df669-vnfk4\" (UID: \"e573ed33-28a6-4a91-a8ca-e5d1c87b2f60\") " pod="openstack/dnsmasq-dns-b649df669-vnfk4" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.655056 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e573ed33-28a6-4a91-a8ca-e5d1c87b2f60-config\") pod \"dnsmasq-dns-b649df669-vnfk4\" (UID: 
\"e573ed33-28a6-4a91-a8ca-e5d1c87b2f60\") " pod="openstack/dnsmasq-dns-b649df669-vnfk4" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.655078 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e573ed33-28a6-4a91-a8ca-e5d1c87b2f60-dns-svc\") pod \"dnsmasq-dns-b649df669-vnfk4\" (UID: \"e573ed33-28a6-4a91-a8ca-e5d1c87b2f60\") " pod="openstack/dnsmasq-dns-b649df669-vnfk4" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.655103 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e573ed33-28a6-4a91-a8ca-e5d1c87b2f60-dns-swift-storage-0\") pod \"dnsmasq-dns-b649df669-vnfk4\" (UID: \"e573ed33-28a6-4a91-a8ca-e5d1c87b2f60\") " pod="openstack/dnsmasq-dns-b649df669-vnfk4" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.655126 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt892\" (UniqueName: \"kubernetes.io/projected/e573ed33-28a6-4a91-a8ca-e5d1c87b2f60-kube-api-access-pt892\") pod \"dnsmasq-dns-b649df669-vnfk4\" (UID: \"e573ed33-28a6-4a91-a8ca-e5d1c87b2f60\") " pod="openstack/dnsmasq-dns-b649df669-vnfk4" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.655152 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f7aed618-93d7-4bc0-a99d-e4d03be8fbfd-credential-keys\") pod \"keystone-bootstrap-8xslw\" (UID: \"f7aed618-93d7-4bc0-a99d-e4d03be8fbfd\") " pod="openstack/keystone-bootstrap-8xslw" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.655172 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7aed618-93d7-4bc0-a99d-e4d03be8fbfd-combined-ca-bundle\") pod \"keystone-bootstrap-8xslw\" (UID: \"f7aed618-93d7-4bc0-a99d-e4d03be8fbfd\") " 
pod="openstack/keystone-bootstrap-8xslw" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.657518 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e573ed33-28a6-4a91-a8ca-e5d1c87b2f60-dns-svc\") pod \"dnsmasq-dns-b649df669-vnfk4\" (UID: \"e573ed33-28a6-4a91-a8ca-e5d1c87b2f60\") " pod="openstack/dnsmasq-dns-b649df669-vnfk4" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.657655 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e573ed33-28a6-4a91-a8ca-e5d1c87b2f60-ovsdbserver-sb\") pod \"dnsmasq-dns-b649df669-vnfk4\" (UID: \"e573ed33-28a6-4a91-a8ca-e5d1c87b2f60\") " pod="openstack/dnsmasq-dns-b649df669-vnfk4" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.658040 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e573ed33-28a6-4a91-a8ca-e5d1c87b2f60-config\") pod \"dnsmasq-dns-b649df669-vnfk4\" (UID: \"e573ed33-28a6-4a91-a8ca-e5d1c87b2f60\") " pod="openstack/dnsmasq-dns-b649df669-vnfk4" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.658398 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e573ed33-28a6-4a91-a8ca-e5d1c87b2f60-dns-swift-storage-0\") pod \"dnsmasq-dns-b649df669-vnfk4\" (UID: \"e573ed33-28a6-4a91-a8ca-e5d1c87b2f60\") " pod="openstack/dnsmasq-dns-b649df669-vnfk4" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.658978 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e573ed33-28a6-4a91-a8ca-e5d1c87b2f60-ovsdbserver-nb\") pod \"dnsmasq-dns-b649df669-vnfk4\" (UID: \"e573ed33-28a6-4a91-a8ca-e5d1c87b2f60\") " pod="openstack/dnsmasq-dns-b649df669-vnfk4" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.665063 4793 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.667716 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7aed618-93d7-4bc0-a99d-e4d03be8fbfd-combined-ca-bundle\") pod \"keystone-bootstrap-8xslw\" (UID: \"f7aed618-93d7-4bc0-a99d-e4d03be8fbfd\") " pod="openstack/keystone-bootstrap-8xslw" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.673394 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f7aed618-93d7-4bc0-a99d-e4d03be8fbfd-fernet-keys\") pod \"keystone-bootstrap-8xslw\" (UID: \"f7aed618-93d7-4bc0-a99d-e4d03be8fbfd\") " pod="openstack/keystone-bootstrap-8xslw" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.675174 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.676588 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.678056 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7aed618-93d7-4bc0-a99d-e4d03be8fbfd-scripts\") pod \"keystone-bootstrap-8xslw\" (UID: \"f7aed618-93d7-4bc0-a99d-e4d03be8fbfd\") " pod="openstack/keystone-bootstrap-8xslw" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.679770 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.693891 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.708599 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7aed618-93d7-4bc0-a99d-e4d03be8fbfd-config-data\") pod \"keystone-bootstrap-8xslw\" (UID: \"f7aed618-93d7-4bc0-a99d-e4d03be8fbfd\") " pod="openstack/keystone-bootstrap-8xslw" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.709716 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f7aed618-93d7-4bc0-a99d-e4d03be8fbfd-credential-keys\") pod \"keystone-bootstrap-8xslw\" (UID: \"f7aed618-93d7-4bc0-a99d-e4d03be8fbfd\") " pod="openstack/keystone-bootstrap-8xslw" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.712752 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6c8484b4ff-dd8mt"] Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.716773 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6c8484b4ff-dd8mt" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.719305 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlxw9\" (UniqueName: \"kubernetes.io/projected/f7aed618-93d7-4bc0-a99d-e4d03be8fbfd-kube-api-access-tlxw9\") pod \"keystone-bootstrap-8xslw\" (UID: \"f7aed618-93d7-4bc0-a99d-e4d03be8fbfd\") " pod="openstack/keystone-bootstrap-8xslw" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.720260 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.720434 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.720716 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-66kcg" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.720829 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.724427 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pt892\" (UniqueName: \"kubernetes.io/projected/e573ed33-28a6-4a91-a8ca-e5d1c87b2f60-kube-api-access-pt892\") pod \"dnsmasq-dns-b649df669-vnfk4\" (UID: \"e573ed33-28a6-4a91-a8ca-e5d1c87b2f60\") " pod="openstack/dnsmasq-dns-b649df669-vnfk4" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.730735 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6c8484b4ff-dd8mt"] Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.761523 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06ecbc8e-aa9f-4025-883d-65e4c000d986-config-data\") pod \"watcher-applier-0\" (UID: 
\"06ecbc8e-aa9f-4025-883d-65e4c000d986\") " pod="openstack/watcher-applier-0" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.761768 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmhm9\" (UniqueName: \"kubernetes.io/projected/06ecbc8e-aa9f-4025-883d-65e4c000d986-kube-api-access-nmhm9\") pod \"watcher-applier-0\" (UID: \"06ecbc8e-aa9f-4025-883d-65e4c000d986\") " pod="openstack/watcher-applier-0" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.761959 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06ecbc8e-aa9f-4025-883d-65e4c000d986-logs\") pod \"watcher-applier-0\" (UID: \"06ecbc8e-aa9f-4025-883d-65e4c000d986\") " pod="openstack/watcher-applier-0" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.762074 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b958eb77-11cb-4049-a3db-11e838dfa0f5-config-data\") pod \"watcher-decision-engine-0\" (UID: \"b958eb77-11cb-4049-a3db-11e838dfa0f5\") " pod="openstack/watcher-decision-engine-0" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.762141 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnzth\" (UniqueName: \"kubernetes.io/projected/b958eb77-11cb-4049-a3db-11e838dfa0f5-kube-api-access-gnzth\") pod \"watcher-decision-engine-0\" (UID: \"b958eb77-11cb-4049-a3db-11e838dfa0f5\") " pod="openstack/watcher-decision-engine-0" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.762299 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b958eb77-11cb-4049-a3db-11e838dfa0f5-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: 
\"b958eb77-11cb-4049-a3db-11e838dfa0f5\") " pod="openstack/watcher-decision-engine-0" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.762385 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06ecbc8e-aa9f-4025-883d-65e4c000d986-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"06ecbc8e-aa9f-4025-883d-65e4c000d986\") " pod="openstack/watcher-applier-0" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.762464 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b958eb77-11cb-4049-a3db-11e838dfa0f5-logs\") pod \"watcher-decision-engine-0\" (UID: \"b958eb77-11cb-4049-a3db-11e838dfa0f5\") " pod="openstack/watcher-decision-engine-0" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.762562 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b958eb77-11cb-4049-a3db-11e838dfa0f5-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"b958eb77-11cb-4049-a3db-11e838dfa0f5\") " pod="openstack/watcher-decision-engine-0" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.777533 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b649df669-vnfk4" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.812119 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8xslw" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.815829 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-v2gvx"] Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.816946 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-v2gvx" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.822136 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.825505 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-qf6ld" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.825745 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.866671 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90727286-0f2a-4930-8993-4b139e519e19-logs\") pod \"watcher-api-0\" (UID: \"90727286-0f2a-4930-8993-4b139e519e19\") " pod="openstack/watcher-api-0" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.866746 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/052d4a12-074f-4d37-b5d8-71cbe27550bb-horizon-secret-key\") pod \"horizon-6c8484b4ff-dd8mt\" (UID: \"052d4a12-074f-4d37-b5d8-71cbe27550bb\") " pod="openstack/horizon-6c8484b4ff-dd8mt" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.866779 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06ecbc8e-aa9f-4025-883d-65e4c000d986-logs\") pod \"watcher-applier-0\" (UID: \"06ecbc8e-aa9f-4025-883d-65e4c000d986\") " pod="openstack/watcher-applier-0" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.866809 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/90727286-0f2a-4930-8993-4b139e519e19-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: 
\"90727286-0f2a-4930-8993-4b139e519e19\") " pod="openstack/watcher-api-0" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.866831 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0af0e3fd-140e-472e-8438-03bfd116c17f-config\") pod \"neutron-db-sync-v2gvx\" (UID: \"0af0e3fd-140e-472e-8438-03bfd116c17f\") " pod="openstack/neutron-db-sync-v2gvx" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.866847 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b958eb77-11cb-4049-a3db-11e838dfa0f5-config-data\") pod \"watcher-decision-engine-0\" (UID: \"b958eb77-11cb-4049-a3db-11e838dfa0f5\") " pod="openstack/watcher-decision-engine-0" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.866868 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnzth\" (UniqueName: \"kubernetes.io/projected/b958eb77-11cb-4049-a3db-11e838dfa0f5-kube-api-access-gnzth\") pod \"watcher-decision-engine-0\" (UID: \"b958eb77-11cb-4049-a3db-11e838dfa0f5\") " pod="openstack/watcher-decision-engine-0" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.866891 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/052d4a12-074f-4d37-b5d8-71cbe27550bb-logs\") pod \"horizon-6c8484b4ff-dd8mt\" (UID: \"052d4a12-074f-4d37-b5d8-71cbe27550bb\") " pod="openstack/horizon-6c8484b4ff-dd8mt" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.866912 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/052d4a12-074f-4d37-b5d8-71cbe27550bb-config-data\") pod \"horizon-6c8484b4ff-dd8mt\" (UID: \"052d4a12-074f-4d37-b5d8-71cbe27550bb\") " pod="openstack/horizon-6c8484b4ff-dd8mt" Feb 
17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.866937 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90727286-0f2a-4930-8993-4b139e519e19-config-data\") pod \"watcher-api-0\" (UID: \"90727286-0f2a-4930-8993-4b139e519e19\") " pod="openstack/watcher-api-0" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.866964 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzspp\" (UniqueName: \"kubernetes.io/projected/052d4a12-074f-4d37-b5d8-71cbe27550bb-kube-api-access-wzspp\") pod \"horizon-6c8484b4ff-dd8mt\" (UID: \"052d4a12-074f-4d37-b5d8-71cbe27550bb\") " pod="openstack/horizon-6c8484b4ff-dd8mt" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.866982 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0af0e3fd-140e-472e-8438-03bfd116c17f-combined-ca-bundle\") pod \"neutron-db-sync-v2gvx\" (UID: \"0af0e3fd-140e-472e-8438-03bfd116c17f\") " pod="openstack/neutron-db-sync-v2gvx" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.866998 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b958eb77-11cb-4049-a3db-11e838dfa0f5-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"b958eb77-11cb-4049-a3db-11e838dfa0f5\") " pod="openstack/watcher-decision-engine-0" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.867019 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06ecbc8e-aa9f-4025-883d-65e4c000d986-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"06ecbc8e-aa9f-4025-883d-65e4c000d986\") " pod="openstack/watcher-applier-0" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 
20:26:41.867049 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b958eb77-11cb-4049-a3db-11e838dfa0f5-logs\") pod \"watcher-decision-engine-0\" (UID: \"b958eb77-11cb-4049-a3db-11e838dfa0f5\") " pod="openstack/watcher-decision-engine-0" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.867071 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/052d4a12-074f-4d37-b5d8-71cbe27550bb-scripts\") pod \"horizon-6c8484b4ff-dd8mt\" (UID: \"052d4a12-074f-4d37-b5d8-71cbe27550bb\") " pod="openstack/horizon-6c8484b4ff-dd8mt" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.867090 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b958eb77-11cb-4049-a3db-11e838dfa0f5-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"b958eb77-11cb-4049-a3db-11e838dfa0f5\") " pod="openstack/watcher-decision-engine-0" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.867109 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7l64\" (UniqueName: \"kubernetes.io/projected/90727286-0f2a-4930-8993-4b139e519e19-kube-api-access-m7l64\") pod \"watcher-api-0\" (UID: \"90727286-0f2a-4930-8993-4b139e519e19\") " pod="openstack/watcher-api-0" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.867141 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06ecbc8e-aa9f-4025-883d-65e4c000d986-config-data\") pod \"watcher-applier-0\" (UID: \"06ecbc8e-aa9f-4025-883d-65e4c000d986\") " pod="openstack/watcher-applier-0" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.867160 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-skvb9\" (UniqueName: \"kubernetes.io/projected/0af0e3fd-140e-472e-8438-03bfd116c17f-kube-api-access-skvb9\") pod \"neutron-db-sync-v2gvx\" (UID: \"0af0e3fd-140e-472e-8438-03bfd116c17f\") " pod="openstack/neutron-db-sync-v2gvx" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.867182 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmhm9\" (UniqueName: \"kubernetes.io/projected/06ecbc8e-aa9f-4025-883d-65e4c000d986-kube-api-access-nmhm9\") pod \"watcher-applier-0\" (UID: \"06ecbc8e-aa9f-4025-883d-65e4c000d986\") " pod="openstack/watcher-applier-0" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.867198 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90727286-0f2a-4930-8993-4b139e519e19-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"90727286-0f2a-4930-8993-4b139e519e19\") " pod="openstack/watcher-api-0" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.867878 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06ecbc8e-aa9f-4025-883d-65e4c000d986-logs\") pod \"watcher-applier-0\" (UID: \"06ecbc8e-aa9f-4025-883d-65e4c000d986\") " pod="openstack/watcher-applier-0" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.873563 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b958eb77-11cb-4049-a3db-11e838dfa0f5-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"b958eb77-11cb-4049-a3db-11e838dfa0f5\") " pod="openstack/watcher-decision-engine-0" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.874961 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b958eb77-11cb-4049-a3db-11e838dfa0f5-config-data\") pod 
\"watcher-decision-engine-0\" (UID: \"b958eb77-11cb-4049-a3db-11e838dfa0f5\") " pod="openstack/watcher-decision-engine-0" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.875630 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b958eb77-11cb-4049-a3db-11e838dfa0f5-logs\") pod \"watcher-decision-engine-0\" (UID: \"b958eb77-11cb-4049-a3db-11e838dfa0f5\") " pod="openstack/watcher-decision-engine-0" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.875424 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06ecbc8e-aa9f-4025-883d-65e4c000d986-config-data\") pod \"watcher-applier-0\" (UID: \"06ecbc8e-aa9f-4025-883d-65e4c000d986\") " pod="openstack/watcher-applier-0" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.879611 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06ecbc8e-aa9f-4025-883d-65e4c000d986-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"06ecbc8e-aa9f-4025-883d-65e4c000d986\") " pod="openstack/watcher-applier-0" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.886914 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b958eb77-11cb-4049-a3db-11e838dfa0f5-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"b958eb77-11cb-4049-a3db-11e838dfa0f5\") " pod="openstack/watcher-decision-engine-0" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.909599 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmhm9\" (UniqueName: \"kubernetes.io/projected/06ecbc8e-aa9f-4025-883d-65e4c000d986-kube-api-access-nmhm9\") pod \"watcher-applier-0\" (UID: \"06ecbc8e-aa9f-4025-883d-65e4c000d986\") " pod="openstack/watcher-applier-0" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 
20:26:41.929383 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnzth\" (UniqueName: \"kubernetes.io/projected/b958eb77-11cb-4049-a3db-11e838dfa0f5-kube-api-access-gnzth\") pod \"watcher-decision-engine-0\" (UID: \"b958eb77-11cb-4049-a3db-11e838dfa0f5\") " pod="openstack/watcher-decision-engine-0" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.937352 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.939400 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.942899 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.947348 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.950062 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.950918 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-v2gvx"] Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.959815 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-t9mdr"] Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.960814 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-t9mdr" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.961849 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.962952 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-l2xdk" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.963092 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.966966 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-t9mdr"] Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.968392 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skvb9\" (UniqueName: \"kubernetes.io/projected/0af0e3fd-140e-472e-8438-03bfd116c17f-kube-api-access-skvb9\") pod \"neutron-db-sync-v2gvx\" (UID: \"0af0e3fd-140e-472e-8438-03bfd116c17f\") " pod="openstack/neutron-db-sync-v2gvx" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.968429 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90727286-0f2a-4930-8993-4b139e519e19-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"90727286-0f2a-4930-8993-4b139e519e19\") " pod="openstack/watcher-api-0" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.968465 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90727286-0f2a-4930-8993-4b139e519e19-logs\") pod \"watcher-api-0\" (UID: \"90727286-0f2a-4930-8993-4b139e519e19\") " pod="openstack/watcher-api-0" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.968483 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/052d4a12-074f-4d37-b5d8-71cbe27550bb-horizon-secret-key\") pod \"horizon-6c8484b4ff-dd8mt\" (UID: \"052d4a12-074f-4d37-b5d8-71cbe27550bb\") " 
pod="openstack/horizon-6c8484b4ff-dd8mt" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.968524 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/90727286-0f2a-4930-8993-4b139e519e19-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"90727286-0f2a-4930-8993-4b139e519e19\") " pod="openstack/watcher-api-0" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.968543 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0af0e3fd-140e-472e-8438-03bfd116c17f-config\") pod \"neutron-db-sync-v2gvx\" (UID: \"0af0e3fd-140e-472e-8438-03bfd116c17f\") " pod="openstack/neutron-db-sync-v2gvx" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.968571 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/052d4a12-074f-4d37-b5d8-71cbe27550bb-logs\") pod \"horizon-6c8484b4ff-dd8mt\" (UID: \"052d4a12-074f-4d37-b5d8-71cbe27550bb\") " pod="openstack/horizon-6c8484b4ff-dd8mt" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.968597 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/052d4a12-074f-4d37-b5d8-71cbe27550bb-config-data\") pod \"horizon-6c8484b4ff-dd8mt\" (UID: \"052d4a12-074f-4d37-b5d8-71cbe27550bb\") " pod="openstack/horizon-6c8484b4ff-dd8mt" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.968654 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90727286-0f2a-4930-8993-4b139e519e19-config-data\") pod \"watcher-api-0\" (UID: \"90727286-0f2a-4930-8993-4b139e519e19\") " pod="openstack/watcher-api-0" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.968712 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-wzspp\" (UniqueName: \"kubernetes.io/projected/052d4a12-074f-4d37-b5d8-71cbe27550bb-kube-api-access-wzspp\") pod \"horizon-6c8484b4ff-dd8mt\" (UID: \"052d4a12-074f-4d37-b5d8-71cbe27550bb\") " pod="openstack/horizon-6c8484b4ff-dd8mt" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.968735 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0af0e3fd-140e-472e-8438-03bfd116c17f-combined-ca-bundle\") pod \"neutron-db-sync-v2gvx\" (UID: \"0af0e3fd-140e-472e-8438-03bfd116c17f\") " pod="openstack/neutron-db-sync-v2gvx" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.968771 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/052d4a12-074f-4d37-b5d8-71cbe27550bb-scripts\") pod \"horizon-6c8484b4ff-dd8mt\" (UID: \"052d4a12-074f-4d37-b5d8-71cbe27550bb\") " pod="openstack/horizon-6c8484b4ff-dd8mt" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.968792 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7l64\" (UniqueName: \"kubernetes.io/projected/90727286-0f2a-4930-8993-4b139e519e19-kube-api-access-m7l64\") pod \"watcher-api-0\" (UID: \"90727286-0f2a-4930-8993-4b139e519e19\") " pod="openstack/watcher-api-0" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.969922 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.970540 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/052d4a12-074f-4d37-b5d8-71cbe27550bb-logs\") pod \"horizon-6c8484b4ff-dd8mt\" (UID: \"052d4a12-074f-4d37-b5d8-71cbe27550bb\") " pod="openstack/horizon-6c8484b4ff-dd8mt" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.972547 4793 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/052d4a12-074f-4d37-b5d8-71cbe27550bb-scripts\") pod \"horizon-6c8484b4ff-dd8mt\" (UID: \"052d4a12-074f-4d37-b5d8-71cbe27550bb\") " pod="openstack/horizon-6c8484b4ff-dd8mt" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.973260 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90727286-0f2a-4930-8993-4b139e519e19-logs\") pod \"watcher-api-0\" (UID: \"90727286-0f2a-4930-8993-4b139e519e19\") " pod="openstack/watcher-api-0" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.975061 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/052d4a12-074f-4d37-b5d8-71cbe27550bb-config-data\") pod \"horizon-6c8484b4ff-dd8mt\" (UID: \"052d4a12-074f-4d37-b5d8-71cbe27550bb\") " pod="openstack/horizon-6c8484b4ff-dd8mt" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.981404 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-v8sgz"] Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.984124 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-v8sgz" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.986726 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/90727286-0f2a-4930-8993-4b139e519e19-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"90727286-0f2a-4930-8993-4b139e519e19\") " pod="openstack/watcher-api-0" Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.991825 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 20:26:41 crc kubenswrapper[4793]: I0217 20:26:41.994895 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90727286-0f2a-4930-8993-4b139e519e19-config-data\") pod \"watcher-api-0\" (UID: \"90727286-0f2a-4930-8993-4b139e519e19\") " pod="openstack/watcher-api-0" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.004654 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/052d4a12-074f-4d37-b5d8-71cbe27550bb-horizon-secret-key\") pod \"horizon-6c8484b4ff-dd8mt\" (UID: \"052d4a12-074f-4d37-b5d8-71cbe27550bb\") " pod="openstack/horizon-6c8484b4ff-dd8mt" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.005196 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0af0e3fd-140e-472e-8438-03bfd116c17f-combined-ca-bundle\") pod \"neutron-db-sync-v2gvx\" (UID: \"0af0e3fd-140e-472e-8438-03bfd116c17f\") " pod="openstack/neutron-db-sync-v2gvx" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.006293 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90727286-0f2a-4930-8993-4b139e519e19-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"90727286-0f2a-4930-8993-4b139e519e19\") " 
pod="openstack/watcher-api-0" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.006746 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.006747 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0af0e3fd-140e-472e-8438-03bfd116c17f-config\") pod \"neutron-db-sync-v2gvx\" (UID: \"0af0e3fd-140e-472e-8438-03bfd116c17f\") " pod="openstack/neutron-db-sync-v2gvx" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.006870 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-64fgn" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.016369 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6d49f49cc7-24rvs"] Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.020518 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6d49f49cc7-24rvs" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.032920 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-v8sgz"] Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.035277 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzspp\" (UniqueName: \"kubernetes.io/projected/052d4a12-074f-4d37-b5d8-71cbe27550bb-kube-api-access-wzspp\") pod \"horizon-6c8484b4ff-dd8mt\" (UID: \"052d4a12-074f-4d37-b5d8-71cbe27550bb\") " pod="openstack/horizon-6c8484b4ff-dd8mt" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.035721 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7l64\" (UniqueName: \"kubernetes.io/projected/90727286-0f2a-4930-8993-4b139e519e19-kube-api-access-m7l64\") pod \"watcher-api-0\" (UID: \"90727286-0f2a-4930-8993-4b139e519e19\") " pod="openstack/watcher-api-0" Feb 17 20:26:42 crc 
kubenswrapper[4793]: I0217 20:26:42.039959 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skvb9\" (UniqueName: \"kubernetes.io/projected/0af0e3fd-140e-472e-8438-03bfd116c17f-kube-api-access-skvb9\") pod \"neutron-db-sync-v2gvx\" (UID: \"0af0e3fd-140e-472e-8438-03bfd116c17f\") " pod="openstack/neutron-db-sync-v2gvx" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.045253 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6d49f49cc7-24rvs"] Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.059947 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-t8llz"] Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.061305 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-t8llz" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.065001 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.065143 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-hbw8s" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.065385 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.070465 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/95da6bd5-17d8-4402-bb8a-87b0c03feebf-db-sync-config-data\") pod \"cinder-db-sync-t9mdr\" (UID: \"95da6bd5-17d8-4402-bb8a-87b0c03feebf\") " pod="openstack/cinder-db-sync-t9mdr" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.070497 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/78ebba06-604c-4fb6-91b1-3727324ca4a8-log-httpd\") pod \"ceilometer-0\" (UID: \"78ebba06-604c-4fb6-91b1-3727324ca4a8\") " pod="openstack/ceilometer-0" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.070517 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v98mr\" (UniqueName: \"kubernetes.io/projected/95da6bd5-17d8-4402-bb8a-87b0c03feebf-kube-api-access-v98mr\") pod \"cinder-db-sync-t9mdr\" (UID: \"95da6bd5-17d8-4402-bb8a-87b0c03feebf\") " pod="openstack/cinder-db-sync-t9mdr" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.070540 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78ebba06-604c-4fb6-91b1-3727324ca4a8-run-httpd\") pod \"ceilometer-0\" (UID: \"78ebba06-604c-4fb6-91b1-3727324ca4a8\") " pod="openstack/ceilometer-0" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.070557 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78ebba06-604c-4fb6-91b1-3727324ca4a8-config-data\") pod \"ceilometer-0\" (UID: \"78ebba06-604c-4fb6-91b1-3727324ca4a8\") " pod="openstack/ceilometer-0" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.070719 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/95da6bd5-17d8-4402-bb8a-87b0c03feebf-etc-machine-id\") pod \"cinder-db-sync-t9mdr\" (UID: \"95da6bd5-17d8-4402-bb8a-87b0c03feebf\") " pod="openstack/cinder-db-sync-t9mdr" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.070736 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95da6bd5-17d8-4402-bb8a-87b0c03feebf-scripts\") pod \"cinder-db-sync-t9mdr\" 
(UID: \"95da6bd5-17d8-4402-bb8a-87b0c03feebf\") " pod="openstack/cinder-db-sync-t9mdr" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.070767 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78ebba06-604c-4fb6-91b1-3727324ca4a8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"78ebba06-604c-4fb6-91b1-3727324ca4a8\") " pod="openstack/ceilometer-0" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.071150 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/78ebba06-604c-4fb6-91b1-3727324ca4a8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"78ebba06-604c-4fb6-91b1-3727324ca4a8\") " pod="openstack/ceilometer-0" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.071181 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95da6bd5-17d8-4402-bb8a-87b0c03feebf-combined-ca-bundle\") pod \"cinder-db-sync-t9mdr\" (UID: \"95da6bd5-17d8-4402-bb8a-87b0c03feebf\") " pod="openstack/cinder-db-sync-t9mdr" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.071195 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvq2m\" (UniqueName: \"kubernetes.io/projected/78ebba06-604c-4fb6-91b1-3727324ca4a8-kube-api-access-rvq2m\") pod \"ceilometer-0\" (UID: \"78ebba06-604c-4fb6-91b1-3727324ca4a8\") " pod="openstack/ceilometer-0" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.071214 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95da6bd5-17d8-4402-bb8a-87b0c03feebf-config-data\") pod \"cinder-db-sync-t9mdr\" (UID: \"95da6bd5-17d8-4402-bb8a-87b0c03feebf\") " 
pod="openstack/cinder-db-sync-t9mdr"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.071228 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78ebba06-604c-4fb6-91b1-3727324ca4a8-scripts\") pod \"ceilometer-0\" (UID: \"78ebba06-604c-4fb6-91b1-3727324ca4a8\") " pod="openstack/ceilometer-0"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.071427 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b649df669-vnfk4"]
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.177329 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0003e927-ea9c-49fb-83e9-5bfc8cd90f46-logs\") pod \"placement-db-sync-t8llz\" (UID: \"0003e927-ea9c-49fb-83e9-5bfc8cd90f46\") " pod="openstack/placement-db-sync-t8llz"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.177386 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v98mr\" (UniqueName: \"kubernetes.io/projected/95da6bd5-17d8-4402-bb8a-87b0c03feebf-kube-api-access-v98mr\") pod \"cinder-db-sync-t9mdr\" (UID: \"95da6bd5-17d8-4402-bb8a-87b0c03feebf\") " pod="openstack/cinder-db-sync-t9mdr"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.177428 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0003e927-ea9c-49fb-83e9-5bfc8cd90f46-scripts\") pod \"placement-db-sync-t8llz\" (UID: \"0003e927-ea9c-49fb-83e9-5bfc8cd90f46\") " pod="openstack/placement-db-sync-t8llz"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.177494 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0003e927-ea9c-49fb-83e9-5bfc8cd90f46-config-data\") pod \"placement-db-sync-t8llz\" (UID: \"0003e927-ea9c-49fb-83e9-5bfc8cd90f46\") " pod="openstack/placement-db-sync-t8llz"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.177520 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c5b5195f-6b92-4704-a57d-d308ac7e8b28-config-data\") pod \"horizon-6d49f49cc7-24rvs\" (UID: \"c5b5195f-6b92-4704-a57d-d308ac7e8b28\") " pod="openstack/horizon-6d49f49cc7-24rvs"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.177550 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78ebba06-604c-4fb6-91b1-3727324ca4a8-run-httpd\") pod \"ceilometer-0\" (UID: \"78ebba06-604c-4fb6-91b1-3727324ca4a8\") " pod="openstack/ceilometer-0"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.177584 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78ebba06-604c-4fb6-91b1-3727324ca4a8-config-data\") pod \"ceilometer-0\" (UID: \"78ebba06-604c-4fb6-91b1-3727324ca4a8\") " pod="openstack/ceilometer-0"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.177603 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6a587d11-ed16-44ac-a36e-dfd7a7ed3f6a-db-sync-config-data\") pod \"barbican-db-sync-v8sgz\" (UID: \"6a587d11-ed16-44ac-a36e-dfd7a7ed3f6a\") " pod="openstack/barbican-db-sync-v8sgz"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.177623 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h2n8\" (UniqueName: \"kubernetes.io/projected/c5b5195f-6b92-4704-a57d-d308ac7e8b28-kube-api-access-7h2n8\") pod \"horizon-6d49f49cc7-24rvs\" (UID: \"c5b5195f-6b92-4704-a57d-d308ac7e8b28\") " pod="openstack/horizon-6d49f49cc7-24rvs"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.177705 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a587d11-ed16-44ac-a36e-dfd7a7ed3f6a-combined-ca-bundle\") pod \"barbican-db-sync-v8sgz\" (UID: \"6a587d11-ed16-44ac-a36e-dfd7a7ed3f6a\") " pod="openstack/barbican-db-sync-v8sgz"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.177759 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/95da6bd5-17d8-4402-bb8a-87b0c03feebf-etc-machine-id\") pod \"cinder-db-sync-t9mdr\" (UID: \"95da6bd5-17d8-4402-bb8a-87b0c03feebf\") " pod="openstack/cinder-db-sync-t9mdr"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.177822 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95da6bd5-17d8-4402-bb8a-87b0c03feebf-scripts\") pod \"cinder-db-sync-t9mdr\" (UID: \"95da6bd5-17d8-4402-bb8a-87b0c03feebf\") " pod="openstack/cinder-db-sync-t9mdr"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.177952 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78ebba06-604c-4fb6-91b1-3727324ca4a8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"78ebba06-604c-4fb6-91b1-3727324ca4a8\") " pod="openstack/ceilometer-0"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.178023 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c5b5195f-6b92-4704-a57d-d308ac7e8b28-horizon-secret-key\") pod \"horizon-6d49f49cc7-24rvs\" (UID: \"c5b5195f-6b92-4704-a57d-d308ac7e8b28\") " pod="openstack/horizon-6d49f49cc7-24rvs"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.178057 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/78ebba06-604c-4fb6-91b1-3727324ca4a8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"78ebba06-604c-4fb6-91b1-3727324ca4a8\") " pod="openstack/ceilometer-0"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.178089 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v274v\" (UniqueName: \"kubernetes.io/projected/6a587d11-ed16-44ac-a36e-dfd7a7ed3f6a-kube-api-access-v274v\") pod \"barbican-db-sync-v8sgz\" (UID: \"6a587d11-ed16-44ac-a36e-dfd7a7ed3f6a\") " pod="openstack/barbican-db-sync-v8sgz"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.178108 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c5b5195f-6b92-4704-a57d-d308ac7e8b28-scripts\") pod \"horizon-6d49f49cc7-24rvs\" (UID: \"c5b5195f-6b92-4704-a57d-d308ac7e8b28\") " pod="openstack/horizon-6d49f49cc7-24rvs"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.178149 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95da6bd5-17d8-4402-bb8a-87b0c03feebf-combined-ca-bundle\") pod \"cinder-db-sync-t9mdr\" (UID: \"95da6bd5-17d8-4402-bb8a-87b0c03feebf\") " pod="openstack/cinder-db-sync-t9mdr"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.178166 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvq2m\" (UniqueName: \"kubernetes.io/projected/78ebba06-604c-4fb6-91b1-3727324ca4a8-kube-api-access-rvq2m\") pod \"ceilometer-0\" (UID: \"78ebba06-604c-4fb6-91b1-3727324ca4a8\") " pod="openstack/ceilometer-0"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.178215 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95da6bd5-17d8-4402-bb8a-87b0c03feebf-config-data\") pod \"cinder-db-sync-t9mdr\" (UID: \"95da6bd5-17d8-4402-bb8a-87b0c03feebf\") " pod="openstack/cinder-db-sync-t9mdr"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.178232 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5b5195f-6b92-4704-a57d-d308ac7e8b28-logs\") pod \"horizon-6d49f49cc7-24rvs\" (UID: \"c5b5195f-6b92-4704-a57d-d308ac7e8b28\") " pod="openstack/horizon-6d49f49cc7-24rvs"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.178258 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78ebba06-604c-4fb6-91b1-3727324ca4a8-scripts\") pod \"ceilometer-0\" (UID: \"78ebba06-604c-4fb6-91b1-3727324ca4a8\") " pod="openstack/ceilometer-0"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.178293 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0003e927-ea9c-49fb-83e9-5bfc8cd90f46-combined-ca-bundle\") pod \"placement-db-sync-t8llz\" (UID: \"0003e927-ea9c-49fb-83e9-5bfc8cd90f46\") " pod="openstack/placement-db-sync-t8llz"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.178715 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7hpm\" (UniqueName: \"kubernetes.io/projected/0003e927-ea9c-49fb-83e9-5bfc8cd90f46-kube-api-access-q7hpm\") pod \"placement-db-sync-t8llz\" (UID: \"0003e927-ea9c-49fb-83e9-5bfc8cd90f46\") " pod="openstack/placement-db-sync-t8llz"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.178806 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/95da6bd5-17d8-4402-bb8a-87b0c03feebf-db-sync-config-data\") pod \"cinder-db-sync-t9mdr\" (UID: \"95da6bd5-17d8-4402-bb8a-87b0c03feebf\") " pod="openstack/cinder-db-sync-t9mdr"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.183409 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78ebba06-604c-4fb6-91b1-3727324ca4a8-log-httpd\") pod \"ceilometer-0\" (UID: \"78ebba06-604c-4fb6-91b1-3727324ca4a8\") " pod="openstack/ceilometer-0"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.185742 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/95da6bd5-17d8-4402-bb8a-87b0c03feebf-db-sync-config-data\") pod \"cinder-db-sync-t9mdr\" (UID: \"95da6bd5-17d8-4402-bb8a-87b0c03feebf\") " pod="openstack/cinder-db-sync-t9mdr"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.187204 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/95da6bd5-17d8-4402-bb8a-87b0c03feebf-etc-machine-id\") pod \"cinder-db-sync-t9mdr\" (UID: \"95da6bd5-17d8-4402-bb8a-87b0c03feebf\") " pod="openstack/cinder-db-sync-t9mdr"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.189946 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78ebba06-604c-4fb6-91b1-3727324ca4a8-log-httpd\") pod \"ceilometer-0\" (UID: \"78ebba06-604c-4fb6-91b1-3727324ca4a8\") " pod="openstack/ceilometer-0"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.191748 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78ebba06-604c-4fb6-91b1-3727324ca4a8-run-httpd\") pod \"ceilometer-0\" (UID: \"78ebba06-604c-4fb6-91b1-3727324ca4a8\") " pod="openstack/ceilometer-0"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.194088 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78ebba06-604c-4fb6-91b1-3727324ca4a8-config-data\") pod \"ceilometer-0\" (UID: \"78ebba06-604c-4fb6-91b1-3727324ca4a8\") " pod="openstack/ceilometer-0"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.195205 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95da6bd5-17d8-4402-bb8a-87b0c03feebf-combined-ca-bundle\") pod \"cinder-db-sync-t9mdr\" (UID: \"95da6bd5-17d8-4402-bb8a-87b0c03feebf\") " pod="openstack/cinder-db-sync-t9mdr"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.195263 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78ebba06-604c-4fb6-91b1-3727324ca4a8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"78ebba06-604c-4fb6-91b1-3727324ca4a8\") " pod="openstack/ceilometer-0"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.196379 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95da6bd5-17d8-4402-bb8a-87b0c03feebf-scripts\") pod \"cinder-db-sync-t9mdr\" (UID: \"95da6bd5-17d8-4402-bb8a-87b0c03feebf\") " pod="openstack/cinder-db-sync-t9mdr"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.201670 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95da6bd5-17d8-4402-bb8a-87b0c03feebf-config-data\") pod \"cinder-db-sync-t9mdr\" (UID: \"95da6bd5-17d8-4402-bb8a-87b0c03feebf\") " pod="openstack/cinder-db-sync-t9mdr"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.202464 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/78ebba06-604c-4fb6-91b1-3727324ca4a8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"78ebba06-604c-4fb6-91b1-3727324ca4a8\") " pod="openstack/ceilometer-0"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.222828 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78ebba06-604c-4fb6-91b1-3727324ca4a8-scripts\") pod \"ceilometer-0\" (UID: \"78ebba06-604c-4fb6-91b1-3727324ca4a8\") " pod="openstack/ceilometer-0"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.226151 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v98mr\" (UniqueName: \"kubernetes.io/projected/95da6bd5-17d8-4402-bb8a-87b0c03feebf-kube-api-access-v98mr\") pod \"cinder-db-sync-t9mdr\" (UID: \"95da6bd5-17d8-4402-bb8a-87b0c03feebf\") " pod="openstack/cinder-db-sync-t9mdr"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.227199 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-t8llz"]
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.229799 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvq2m\" (UniqueName: \"kubernetes.io/projected/78ebba06-604c-4fb6-91b1-3727324ca4a8-kube-api-access-rvq2m\") pod \"ceilometer-0\" (UID: \"78ebba06-604c-4fb6-91b1-3727324ca4a8\") " pod="openstack/ceilometer-0"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.251411 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.278019 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6c8484b4ff-dd8mt"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.281132 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59df4fdd5-c42m5"]
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.283445 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59df4fdd5-c42m5"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.295501 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0003e927-ea9c-49fb-83e9-5bfc8cd90f46-logs\") pod \"placement-db-sync-t8llz\" (UID: \"0003e927-ea9c-49fb-83e9-5bfc8cd90f46\") " pod="openstack/placement-db-sync-t8llz"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.295575 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0003e927-ea9c-49fb-83e9-5bfc8cd90f46-scripts\") pod \"placement-db-sync-t8llz\" (UID: \"0003e927-ea9c-49fb-83e9-5bfc8cd90f46\") " pod="openstack/placement-db-sync-t8llz"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.295602 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0003e927-ea9c-49fb-83e9-5bfc8cd90f46-config-data\") pod \"placement-db-sync-t8llz\" (UID: \"0003e927-ea9c-49fb-83e9-5bfc8cd90f46\") " pod="openstack/placement-db-sync-t8llz"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.295623 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c5b5195f-6b92-4704-a57d-d308ac7e8b28-config-data\") pod \"horizon-6d49f49cc7-24rvs\" (UID: \"c5b5195f-6b92-4704-a57d-d308ac7e8b28\") " pod="openstack/horizon-6d49f49cc7-24rvs"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.295734 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6a587d11-ed16-44ac-a36e-dfd7a7ed3f6a-db-sync-config-data\") pod \"barbican-db-sync-v8sgz\" (UID: \"6a587d11-ed16-44ac-a36e-dfd7a7ed3f6a\") " pod="openstack/barbican-db-sync-v8sgz"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.295753 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h2n8\" (UniqueName: \"kubernetes.io/projected/c5b5195f-6b92-4704-a57d-d308ac7e8b28-kube-api-access-7h2n8\") pod \"horizon-6d49f49cc7-24rvs\" (UID: \"c5b5195f-6b92-4704-a57d-d308ac7e8b28\") " pod="openstack/horizon-6d49f49cc7-24rvs"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.295801 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a587d11-ed16-44ac-a36e-dfd7a7ed3f6a-combined-ca-bundle\") pod \"barbican-db-sync-v8sgz\" (UID: \"6a587d11-ed16-44ac-a36e-dfd7a7ed3f6a\") " pod="openstack/barbican-db-sync-v8sgz"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.296012 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0003e927-ea9c-49fb-83e9-5bfc8cd90f46-logs\") pod \"placement-db-sync-t8llz\" (UID: \"0003e927-ea9c-49fb-83e9-5bfc8cd90f46\") " pod="openstack/placement-db-sync-t8llz"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.299517 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0003e927-ea9c-49fb-83e9-5bfc8cd90f46-config-data\") pod \"placement-db-sync-t8llz\" (UID: \"0003e927-ea9c-49fb-83e9-5bfc8cd90f46\") " pod="openstack/placement-db-sync-t8llz"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.300532 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c5b5195f-6b92-4704-a57d-d308ac7e8b28-horizon-secret-key\") pod \"horizon-6d49f49cc7-24rvs\" (UID: \"c5b5195f-6b92-4704-a57d-d308ac7e8b28\") " pod="openstack/horizon-6d49f49cc7-24rvs"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.300600 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v274v\" (UniqueName: \"kubernetes.io/projected/6a587d11-ed16-44ac-a36e-dfd7a7ed3f6a-kube-api-access-v274v\") pod \"barbican-db-sync-v8sgz\" (UID: \"6a587d11-ed16-44ac-a36e-dfd7a7ed3f6a\") " pod="openstack/barbican-db-sync-v8sgz"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.300620 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c5b5195f-6b92-4704-a57d-d308ac7e8b28-scripts\") pod \"horizon-6d49f49cc7-24rvs\" (UID: \"c5b5195f-6b92-4704-a57d-d308ac7e8b28\") " pod="openstack/horizon-6d49f49cc7-24rvs"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.300698 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5b5195f-6b92-4704-a57d-d308ac7e8b28-logs\") pod \"horizon-6d49f49cc7-24rvs\" (UID: \"c5b5195f-6b92-4704-a57d-d308ac7e8b28\") " pod="openstack/horizon-6d49f49cc7-24rvs"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.300746 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0003e927-ea9c-49fb-83e9-5bfc8cd90f46-combined-ca-bundle\") pod \"placement-db-sync-t8llz\" (UID: \"0003e927-ea9c-49fb-83e9-5bfc8cd90f46\") " pod="openstack/placement-db-sync-t8llz"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.300791 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7hpm\" (UniqueName: \"kubernetes.io/projected/0003e927-ea9c-49fb-83e9-5bfc8cd90f46-kube-api-access-q7hpm\") pod \"placement-db-sync-t8llz\" (UID: \"0003e927-ea9c-49fb-83e9-5bfc8cd90f46\") " pod="openstack/placement-db-sync-t8llz"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.303017 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c5b5195f-6b92-4704-a57d-d308ac7e8b28-scripts\") pod \"horizon-6d49f49cc7-24rvs\" (UID: \"c5b5195f-6b92-4704-a57d-d308ac7e8b28\") " pod="openstack/horizon-6d49f49cc7-24rvs"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.303267 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5b5195f-6b92-4704-a57d-d308ac7e8b28-logs\") pod \"horizon-6d49f49cc7-24rvs\" (UID: \"c5b5195f-6b92-4704-a57d-d308ac7e8b28\") " pod="openstack/horizon-6d49f49cc7-24rvs"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.304772 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c5b5195f-6b92-4704-a57d-d308ac7e8b28-config-data\") pod \"horizon-6d49f49cc7-24rvs\" (UID: \"c5b5195f-6b92-4704-a57d-d308ac7e8b28\") " pod="openstack/horizon-6d49f49cc7-24rvs"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.312272 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-v2gvx"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.312297 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a587d11-ed16-44ac-a36e-dfd7a7ed3f6a-combined-ca-bundle\") pod \"barbican-db-sync-v8sgz\" (UID: \"6a587d11-ed16-44ac-a36e-dfd7a7ed3f6a\") " pod="openstack/barbican-db-sync-v8sgz"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.315894 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6a587d11-ed16-44ac-a36e-dfd7a7ed3f6a-db-sync-config-data\") pod \"barbican-db-sync-v8sgz\" (UID: \"6a587d11-ed16-44ac-a36e-dfd7a7ed3f6a\") " pod="openstack/barbican-db-sync-v8sgz"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.316587 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0003e927-ea9c-49fb-83e9-5bfc8cd90f46-combined-ca-bundle\") pod \"placement-db-sync-t8llz\" (UID: \"0003e927-ea9c-49fb-83e9-5bfc8cd90f46\") " pod="openstack/placement-db-sync-t8llz"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.316985 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c5b5195f-6b92-4704-a57d-d308ac7e8b28-horizon-secret-key\") pod \"horizon-6d49f49cc7-24rvs\" (UID: \"c5b5195f-6b92-4704-a57d-d308ac7e8b28\") " pod="openstack/horizon-6d49f49cc7-24rvs"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.316988 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0003e927-ea9c-49fb-83e9-5bfc8cd90f46-scripts\") pod \"placement-db-sync-t8llz\" (UID: \"0003e927-ea9c-49fb-83e9-5bfc8cd90f46\") " pod="openstack/placement-db-sync-t8llz"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.322934 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7hpm\" (UniqueName: \"kubernetes.io/projected/0003e927-ea9c-49fb-83e9-5bfc8cd90f46-kube-api-access-q7hpm\") pod \"placement-db-sync-t8llz\" (UID: \"0003e927-ea9c-49fb-83e9-5bfc8cd90f46\") " pod="openstack/placement-db-sync-t8llz"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.329708 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v274v\" (UniqueName: \"kubernetes.io/projected/6a587d11-ed16-44ac-a36e-dfd7a7ed3f6a-kube-api-access-v274v\") pod \"barbican-db-sync-v8sgz\" (UID: \"6a587d11-ed16-44ac-a36e-dfd7a7ed3f6a\") " pod="openstack/barbican-db-sync-v8sgz"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.330741 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h2n8\" (UniqueName: \"kubernetes.io/projected/c5b5195f-6b92-4704-a57d-d308ac7e8b28-kube-api-access-7h2n8\") pod \"horizon-6d49f49cc7-24rvs\" (UID: \"c5b5195f-6b92-4704-a57d-d308ac7e8b28\") " pod="openstack/horizon-6d49f49cc7-24rvs"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.352905 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59df4fdd5-c42m5"]
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.368424 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-t9mdr"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.369028 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.406643 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69-config\") pod \"dnsmasq-dns-59df4fdd5-c42m5\" (UID: \"0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69\") " pod="openstack/dnsmasq-dns-59df4fdd5-c42m5"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.406736 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69-ovsdbserver-nb\") pod \"dnsmasq-dns-59df4fdd5-c42m5\" (UID: \"0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69\") " pod="openstack/dnsmasq-dns-59df4fdd5-c42m5"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.406884 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69-ovsdbserver-sb\") pod \"dnsmasq-dns-59df4fdd5-c42m5\" (UID: \"0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69\") " pod="openstack/dnsmasq-dns-59df4fdd5-c42m5"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.406951 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69-dns-svc\") pod \"dnsmasq-dns-59df4fdd5-c42m5\" (UID: \"0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69\") " pod="openstack/dnsmasq-dns-59df4fdd5-c42m5"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.406966 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg8t5\" (UniqueName: \"kubernetes.io/projected/0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69-kube-api-access-lg8t5\") pod \"dnsmasq-dns-59df4fdd5-c42m5\" (UID: \"0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69\") " pod="openstack/dnsmasq-dns-59df4fdd5-c42m5"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.407025 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69-dns-swift-storage-0\") pod \"dnsmasq-dns-59df4fdd5-c42m5\" (UID: \"0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69\") " pod="openstack/dnsmasq-dns-59df4fdd5-c42m5"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.410323 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-v8sgz"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.425805 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.427351 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.433675 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-c7vvf"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.434039 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.434160 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.434284 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.438672 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6d49f49cc7-24rvs"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.455395 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.488580 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.491054 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.494656 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.495014 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.497473 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.508774 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69-dns-swift-storage-0\") pod \"dnsmasq-dns-59df4fdd5-c42m5\" (UID: \"0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69\") " pod="openstack/dnsmasq-dns-59df4fdd5-c42m5"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.508862 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69-config\") pod \"dnsmasq-dns-59df4fdd5-c42m5\" (UID: \"0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69\") " pod="openstack/dnsmasq-dns-59df4fdd5-c42m5"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.508894 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69-ovsdbserver-nb\") pod \"dnsmasq-dns-59df4fdd5-c42m5\" (UID: \"0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69\") " pod="openstack/dnsmasq-dns-59df4fdd5-c42m5"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.508950 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69-ovsdbserver-sb\") pod \"dnsmasq-dns-59df4fdd5-c42m5\" (UID: \"0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69\") " pod="openstack/dnsmasq-dns-59df4fdd5-c42m5"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.508993 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69-dns-svc\") pod \"dnsmasq-dns-59df4fdd5-c42m5\" (UID: \"0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69\") " pod="openstack/dnsmasq-dns-59df4fdd5-c42m5"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.509012 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lg8t5\" (UniqueName: \"kubernetes.io/projected/0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69-kube-api-access-lg8t5\") pod \"dnsmasq-dns-59df4fdd5-c42m5\" (UID: \"0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69\") " pod="openstack/dnsmasq-dns-59df4fdd5-c42m5"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.510497 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69-dns-swift-storage-0\") pod \"dnsmasq-dns-59df4fdd5-c42m5\" (UID: \"0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69\") " pod="openstack/dnsmasq-dns-59df4fdd5-c42m5"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.510645 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-t8llz"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.512288 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69-config\") pod \"dnsmasq-dns-59df4fdd5-c42m5\" (UID: \"0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69\") " pod="openstack/dnsmasq-dns-59df4fdd5-c42m5"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.512877 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69-ovsdbserver-nb\") pod \"dnsmasq-dns-59df4fdd5-c42m5\" (UID: \"0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69\") " pod="openstack/dnsmasq-dns-59df4fdd5-c42m5"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.517133 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69-dns-svc\") pod \"dnsmasq-dns-59df4fdd5-c42m5\" (UID: \"0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69\") " pod="openstack/dnsmasq-dns-59df4fdd5-c42m5"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.517597 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69-ovsdbserver-sb\") pod \"dnsmasq-dns-59df4fdd5-c42m5\" (UID: \"0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69\") " pod="openstack/dnsmasq-dns-59df4fdd5-c42m5"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.534830 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lg8t5\" (UniqueName: \"kubernetes.io/projected/0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69-kube-api-access-lg8t5\") pod \"dnsmasq-dns-59df4fdd5-c42m5\" (UID: \"0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69\") " pod="openstack/dnsmasq-dns-59df4fdd5-c42m5"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.612969 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpqql\" (UniqueName: \"kubernetes.io/projected/c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b-kube-api-access-hpqql\") pod \"glance-default-external-api-0\" (UID: \"c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b\") " pod="openstack/glance-default-external-api-0"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.613333 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/becfe481-885b-4302-9265-d9de06b4a2c1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"becfe481-885b-4302-9265-d9de06b4a2c1\") " pod="openstack/glance-default-internal-api-0"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.613361 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/becfe481-885b-4302-9265-d9de06b4a2c1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"becfe481-885b-4302-9265-d9de06b4a2c1\") " pod="openstack/glance-default-internal-api-0"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.613381 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b-scripts\") pod \"glance-default-external-api-0\" (UID: \"c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b\") " pod="openstack/glance-default-external-api-0"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.613409 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/becfe481-885b-4302-9265-d9de06b4a2c1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"becfe481-885b-4302-9265-d9de06b4a2c1\") " pod="openstack/glance-default-internal-api-0"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.613425 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b\") " pod="openstack/glance-default-external-api-0"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.613461 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b\") " pod="openstack/glance-default-external-api-0"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.613498 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"becfe481-885b-4302-9265-d9de06b4a2c1\") " pod="openstack/glance-default-internal-api-0"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.613513 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b-logs\") pod \"glance-default-external-api-0\" (UID: \"c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b\") " pod="openstack/glance-default-external-api-0"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.613531 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/becfe481-885b-4302-9265-d9de06b4a2c1-logs\") pod \"glance-default-internal-api-0\" (UID: \"becfe481-885b-4302-9265-d9de06b4a2c1\") " pod="openstack/glance-default-internal-api-0"
Feb 17 20:26:42 crc kubenswrapper[4793]: I0217
20:26:42.613547 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/becfe481-885b-4302-9265-d9de06b4a2c1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"becfe481-885b-4302-9265-d9de06b4a2c1\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.613569 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc4fg\" (UniqueName: \"kubernetes.io/projected/becfe481-885b-4302-9265-d9de06b4a2c1-kube-api-access-pc4fg\") pod \"glance-default-internal-api-0\" (UID: \"becfe481-885b-4302-9265-d9de06b4a2c1\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.613598 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b\") " pod="openstack/glance-default-external-api-0" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.613613 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b-config-data\") pod \"glance-default-external-api-0\" (UID: \"c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b\") " pod="openstack/glance-default-external-api-0" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.613634 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b\") " 
pod="openstack/glance-default-external-api-0" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.613658 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/becfe481-885b-4302-9265-d9de06b4a2c1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"becfe481-885b-4302-9265-d9de06b4a2c1\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.624218 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59df4fdd5-c42m5" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.655535 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-8xslw"] Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.677842 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b649df669-vnfk4"] Feb 17 20:26:42 crc kubenswrapper[4793]: W0217 20:26:42.685285 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7aed618_93d7_4bc0_a99d_e4d03be8fbfd.slice/crio-1bc16530927bb7e0829eea72836645c43171cfc550e8f4e1b8dc7459dc35e882 WatchSource:0}: Error finding container 1bc16530927bb7e0829eea72836645c43171cfc550e8f4e1b8dc7459dc35e882: Status 404 returned error can't find the container with id 1bc16530927bb7e0829eea72836645c43171cfc550e8f4e1b8dc7459dc35e882 Feb 17 20:26:42 crc kubenswrapper[4793]: W0217 20:26:42.693804 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode573ed33_28a6_4a91_a8ca_e5d1c87b2f60.slice/crio-8551bcf13ca4a4783331c1bc323333b1a8a20a1189c7573f2c32c6f5dd13f76c WatchSource:0}: Error finding container 8551bcf13ca4a4783331c1bc323333b1a8a20a1189c7573f2c32c6f5dd13f76c: Status 404 returned error can't find the container with id 
8551bcf13ca4a4783331c1bc323333b1a8a20a1189c7573f2c32c6f5dd13f76c Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.715578 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpqql\" (UniqueName: \"kubernetes.io/projected/c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b-kube-api-access-hpqql\") pod \"glance-default-external-api-0\" (UID: \"c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b\") " pod="openstack/glance-default-external-api-0" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.715918 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/becfe481-885b-4302-9265-d9de06b4a2c1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"becfe481-885b-4302-9265-d9de06b4a2c1\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.716031 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/becfe481-885b-4302-9265-d9de06b4a2c1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"becfe481-885b-4302-9265-d9de06b4a2c1\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.716120 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b-scripts\") pod \"glance-default-external-api-0\" (UID: \"c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b\") " pod="openstack/glance-default-external-api-0" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.716332 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/becfe481-885b-4302-9265-d9de06b4a2c1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"becfe481-885b-4302-9265-d9de06b4a2c1\") " pod="openstack/glance-default-internal-api-0" Feb 
17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.716407 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b\") " pod="openstack/glance-default-external-api-0" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.716521 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b\") " pod="openstack/glance-default-external-api-0" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.717234 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"becfe481-885b-4302-9265-d9de06b4a2c1\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.717315 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b-logs\") pod \"glance-default-external-api-0\" (UID: \"c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b\") " pod="openstack/glance-default-external-api-0" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.717380 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/becfe481-885b-4302-9265-d9de06b4a2c1-logs\") pod \"glance-default-internal-api-0\" (UID: \"becfe481-885b-4302-9265-d9de06b4a2c1\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.717441 4793 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/becfe481-885b-4302-9265-d9de06b4a2c1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"becfe481-885b-4302-9265-d9de06b4a2c1\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.717510 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pc4fg\" (UniqueName: \"kubernetes.io/projected/becfe481-885b-4302-9265-d9de06b4a2c1-kube-api-access-pc4fg\") pod \"glance-default-internal-api-0\" (UID: \"becfe481-885b-4302-9265-d9de06b4a2c1\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.717603 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b\") " pod="openstack/glance-default-external-api-0" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.717668 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b-config-data\") pod \"glance-default-external-api-0\" (UID: \"c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b\") " pod="openstack/glance-default-external-api-0" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.717758 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b\") " pod="openstack/glance-default-external-api-0" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.717946 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/becfe481-885b-4302-9265-d9de06b4a2c1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"becfe481-885b-4302-9265-d9de06b4a2c1\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.720325 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"becfe481-885b-4302-9265-d9de06b4a2c1\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.722446 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/becfe481-885b-4302-9265-d9de06b4a2c1-logs\") pod \"glance-default-internal-api-0\" (UID: \"becfe481-885b-4302-9265-d9de06b4a2c1\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.722780 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b-logs\") pod \"glance-default-external-api-0\" (UID: \"c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b\") " pod="openstack/glance-default-external-api-0" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.722878 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.723166 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b-httpd-run\") 
pod \"glance-default-external-api-0\" (UID: \"c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b\") " pod="openstack/glance-default-external-api-0" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.723654 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/becfe481-885b-4302-9265-d9de06b4a2c1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"becfe481-885b-4302-9265-d9de06b4a2c1\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.756696 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b\") " pod="openstack/glance-default-external-api-0" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.778533 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b\") " pod="openstack/glance-default-external-api-0" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.779659 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/becfe481-885b-4302-9265-d9de06b4a2c1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"becfe481-885b-4302-9265-d9de06b4a2c1\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.780025 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/becfe481-885b-4302-9265-d9de06b4a2c1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"becfe481-885b-4302-9265-d9de06b4a2c1\") " 
pod="openstack/glance-default-internal-api-0" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.800077 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b-config-data\") pod \"glance-default-external-api-0\" (UID: \"c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b\") " pod="openstack/glance-default-external-api-0" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.800889 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/becfe481-885b-4302-9265-d9de06b4a2c1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"becfe481-885b-4302-9265-d9de06b4a2c1\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.809801 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b-scripts\") pod \"glance-default-external-api-0\" (UID: \"c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b\") " pod="openstack/glance-default-external-api-0" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.810455 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/becfe481-885b-4302-9265-d9de06b4a2c1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"becfe481-885b-4302-9265-d9de06b4a2c1\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.813531 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pc4fg\" (UniqueName: \"kubernetes.io/projected/becfe481-885b-4302-9265-d9de06b4a2c1-kube-api-access-pc4fg\") pod \"glance-default-internal-api-0\" (UID: \"becfe481-885b-4302-9265-d9de06b4a2c1\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 
20:26:42.820215 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpqql\" (UniqueName: \"kubernetes.io/projected/c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b-kube-api-access-hpqql\") pod \"glance-default-external-api-0\" (UID: \"c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b\") " pod="openstack/glance-default-external-api-0" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.820509 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b\") " pod="openstack/glance-default-external-api-0" Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.945114 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 17 20:26:42 crc kubenswrapper[4793]: I0217 20:26:42.947050 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 17 20:26:43 crc kubenswrapper[4793]: I0217 20:26:43.011157 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 17 20:26:43 crc kubenswrapper[4793]: I0217 20:26:43.013081 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"becfe481-885b-4302-9265-d9de06b4a2c1\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:26:43 crc kubenswrapper[4793]: I0217 20:26:43.060236 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 20:26:43 crc kubenswrapper[4793]: I0217 20:26:43.120203 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 20:26:43 crc kubenswrapper[4793]: I0217 20:26:43.278813 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"06ecbc8e-aa9f-4025-883d-65e4c000d986","Type":"ContainerStarted","Data":"b2292ac6d886eac4ab9a1d439a8f84c5aeab8ebf7f9d482054408ecb24a57950"} Feb 17 20:26:43 crc kubenswrapper[4793]: I0217 20:26:43.280288 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8xslw" event={"ID":"f7aed618-93d7-4bc0-a99d-e4d03be8fbfd","Type":"ContainerStarted","Data":"1bc16530927bb7e0829eea72836645c43171cfc550e8f4e1b8dc7459dc35e882"} Feb 17 20:26:43 crc kubenswrapper[4793]: I0217 20:26:43.287649 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"b958eb77-11cb-4049-a3db-11e838dfa0f5","Type":"ContainerStarted","Data":"3a0115b3f1f26b540658e4bb4020e3f8ee0458c6ac42b737c4c09c571f418567"} Feb 17 20:26:43 crc kubenswrapper[4793]: I0217 20:26:43.292210 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b649df669-vnfk4" event={"ID":"e573ed33-28a6-4a91-a8ca-e5d1c87b2f60","Type":"ContainerStarted","Data":"8551bcf13ca4a4783331c1bc323333b1a8a20a1189c7573f2c32c6f5dd13f76c"} Feb 17 20:26:43 crc kubenswrapper[4793]: I0217 20:26:43.304761 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"90727286-0f2a-4930-8993-4b139e519e19","Type":"ContainerStarted","Data":"17d936383a56fa895e5d60cb71ea42ade5ad47045fad072960eb72d48296ea85"} Feb 17 20:26:43 crc kubenswrapper[4793]: I0217 20:26:43.432725 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-v2gvx"] Feb 17 20:26:43 crc kubenswrapper[4793]: I0217 20:26:43.445325 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6c8484b4ff-dd8mt"] Feb 17 20:26:43 crc kubenswrapper[4793]: W0217 20:26:43.450296 4793 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0af0e3fd_140e_472e_8438_03bfd116c17f.slice/crio-5d52d84091cb9d369dddc44aa50f72152c356bb15b896943bff87308ca254dc7 WatchSource:0}: Error finding container 5d52d84091cb9d369dddc44aa50f72152c356bb15b896943bff87308ca254dc7: Status 404 returned error can't find the container with id 5d52d84091cb9d369dddc44aa50f72152c356bb15b896943bff87308ca254dc7 Feb 17 20:26:43 crc kubenswrapper[4793]: I0217 20:26:43.679841 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 20:26:43 crc kubenswrapper[4793]: I0217 20:26:43.713807 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59df4fdd5-c42m5"] Feb 17 20:26:43 crc kubenswrapper[4793]: I0217 20:26:43.722967 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6d49f49cc7-24rvs"] Feb 17 20:26:43 crc kubenswrapper[4793]: I0217 20:26:43.729248 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-t9mdr"] Feb 17 20:26:43 crc kubenswrapper[4793]: W0217 20:26:43.769228 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5b5195f_6b92_4704_a57d_d308ac7e8b28.slice/crio-3c5ebb8e9c27f0951a0238dac00c939f892fec0a2fe582dbd7e238947a909a76 WatchSource:0}: Error finding container 3c5ebb8e9c27f0951a0238dac00c939f892fec0a2fe582dbd7e238947a909a76: Status 404 returned error can't find the container with id 3c5ebb8e9c27f0951a0238dac00c939f892fec0a2fe582dbd7e238947a909a76 Feb 17 20:26:43 crc kubenswrapper[4793]: I0217 20:26:43.858195 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-v8sgz"] Feb 17 20:26:43 crc kubenswrapper[4793]: I0217 20:26:43.882618 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-t8llz"] Feb 17 20:26:43 crc kubenswrapper[4793]: 
W0217 20:26:43.920597 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a587d11_ed16_44ac_a36e_dfd7a7ed3f6a.slice/crio-dfa13ae911e704267f4bfddaa05c1053da680f1b60d3a7b32dce3c889602d3ea WatchSource:0}: Error finding container dfa13ae911e704267f4bfddaa05c1053da680f1b60d3a7b32dce3c889602d3ea: Status 404 returned error can't find the container with id dfa13ae911e704267f4bfddaa05c1053da680f1b60d3a7b32dce3c889602d3ea Feb 17 20:26:43 crc kubenswrapper[4793]: I0217 20:26:43.966102 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 20:26:44 crc kubenswrapper[4793]: W0217 20:26:44.024927 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc978d1c1_b49f_4d1d_9a7a_38ce96edbb0b.slice/crio-40dacfce2bd3dba0024f4b733f808bb00a85a078909ba74149354173430afbab WatchSource:0}: Error finding container 40dacfce2bd3dba0024f4b733f808bb00a85a078909ba74149354173430afbab: Status 404 returned error can't find the container with id 40dacfce2bd3dba0024f4b733f808bb00a85a078909ba74149354173430afbab Feb 17 20:26:44 crc kubenswrapper[4793]: I0217 20:26:44.081677 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 20:26:44 crc kubenswrapper[4793]: I0217 20:26:44.179329 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 20:26:44 crc kubenswrapper[4793]: I0217 20:26:44.201741 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 17 20:26:44 crc kubenswrapper[4793]: I0217 20:26:44.250397 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6c8484b4ff-dd8mt"] Feb 17 20:26:44 crc kubenswrapper[4793]: I0217 20:26:44.308277 4793 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/horizon-5cf7c98b49-qm95w"] Feb 17 20:26:44 crc kubenswrapper[4793]: I0217 20:26:44.311406 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5cf7c98b49-qm95w" Feb 17 20:26:44 crc kubenswrapper[4793]: I0217 20:26:44.322204 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5cf7c98b49-qm95w"] Feb 17 20:26:44 crc kubenswrapper[4793]: I0217 20:26:44.325672 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-t8llz" event={"ID":"0003e927-ea9c-49fb-83e9-5bfc8cd90f46","Type":"ContainerStarted","Data":"716d7284291fe21de90a0557e2d175bc5abb5810a4380df4d561abec6ed03c2a"} Feb 17 20:26:44 crc kubenswrapper[4793]: I0217 20:26:44.334154 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 20:26:44 crc kubenswrapper[4793]: I0217 20:26:44.363076 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"becfe481-885b-4302-9265-d9de06b4a2c1","Type":"ContainerStarted","Data":"236208c489cf1b790baa5406347f5edf4a13e08cb1db5739211a87748c89b2fe"} Feb 17 20:26:44 crc kubenswrapper[4793]: I0217 20:26:44.369122 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8xslw" event={"ID":"f7aed618-93d7-4bc0-a99d-e4d03be8fbfd","Type":"ContainerStarted","Data":"cbf9b0cb7ef08745f99cc490d56cd45134d57c061ca738f3e84c09bab9bb0a81"} Feb 17 20:26:44 crc kubenswrapper[4793]: I0217 20:26:44.390981 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-t9mdr" event={"ID":"95da6bd5-17d8-4402-bb8a-87b0c03feebf","Type":"ContainerStarted","Data":"88a2844af8ee5f653809a28e6f3870f34c40fb524f51c4d892095c2a5c11b5fc"} Feb 17 20:26:44 crc kubenswrapper[4793]: I0217 20:26:44.423468 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 20:26:44 crc kubenswrapper[4793]: I0217 
20:26:44.426950 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-v2gvx" event={"ID":"0af0e3fd-140e-472e-8438-03bfd116c17f","Type":"ContainerStarted","Data":"20ada7fb8a8b5356545c3e756df75b3748efbb914add04318e2cdcde250fd82e"} Feb 17 20:26:44 crc kubenswrapper[4793]: I0217 20:26:44.426999 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-v2gvx" event={"ID":"0af0e3fd-140e-472e-8438-03bfd116c17f","Type":"ContainerStarted","Data":"5d52d84091cb9d369dddc44aa50f72152c356bb15b896943bff87308ca254dc7"} Feb 17 20:26:44 crc kubenswrapper[4793]: I0217 20:26:44.439850 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-8xslw" podStartSLOduration=3.43983384 podStartE2EDuration="3.43983384s" podCreationTimestamp="2026-02-17 20:26:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:26:44.39436387 +0000 UTC m=+1079.686062181" watchObservedRunningTime="2026-02-17 20:26:44.43983384 +0000 UTC m=+1079.731532151" Feb 17 20:26:44 crc kubenswrapper[4793]: I0217 20:26:44.439995 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b","Type":"ContainerStarted","Data":"40dacfce2bd3dba0024f4b733f808bb00a85a078909ba74149354173430afbab"} Feb 17 20:26:44 crc kubenswrapper[4793]: I0217 20:26:44.460455 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-v2gvx" podStartSLOduration=3.460438712 podStartE2EDuration="3.460438712s" podCreationTimestamp="2026-02-17 20:26:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:26:44.457098959 +0000 UTC m=+1079.748797270" watchObservedRunningTime="2026-02-17 20:26:44.460438712 +0000 UTC 
m=+1079.752137023"
Feb 17 20:26:44 crc kubenswrapper[4793]: I0217 20:26:44.474245 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78ebba06-604c-4fb6-91b1-3727324ca4a8","Type":"ContainerStarted","Data":"51f22845a542142bbbf7600071ab8aab1e8c4edfce4b1fbf1e91c76863760137"}
Feb 17 20:26:44 crc kubenswrapper[4793]: I0217 20:26:44.482003 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c8484b4ff-dd8mt" event={"ID":"052d4a12-074f-4d37-b5d8-71cbe27550bb","Type":"ContainerStarted","Data":"adc36d29c4ae36e5b6950777076e12257d11ab1c92e1165996f6a9e455fddf11"}
Feb 17 20:26:44 crc kubenswrapper[4793]: I0217 20:26:44.485855 4793 generic.go:334] "Generic (PLEG): container finished" podID="e573ed33-28a6-4a91-a8ca-e5d1c87b2f60" containerID="5811f6d4534b134fa2c8da813ec53c989553d60cc76d3e068767280ad3013004" exitCode=0
Feb 17 20:26:44 crc kubenswrapper[4793]: I0217 20:26:44.485902 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b649df669-vnfk4" event={"ID":"e573ed33-28a6-4a91-a8ca-e5d1c87b2f60","Type":"ContainerDied","Data":"5811f6d4534b134fa2c8da813ec53c989553d60cc76d3e068767280ad3013004"}
Feb 17 20:26:44 crc kubenswrapper[4793]: I0217 20:26:44.491256 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"90727286-0f2a-4930-8993-4b139e519e19","Type":"ContainerStarted","Data":"600e49373332cf618231a2c856d3bfc16abc4c6010d6b26ba77e41c049d16227"}
Feb 17 20:26:44 crc kubenswrapper[4793]: I0217 20:26:44.491298 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"90727286-0f2a-4930-8993-4b139e519e19","Type":"ContainerStarted","Data":"91864bdf7ccdb24bad538fea0b70a8e388ea056e4f86c87da40c93466b9cc214"}
Feb 17 20:26:44 crc kubenswrapper[4793]: I0217 20:26:44.492205 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0"
Feb 17 20:26:44 crc kubenswrapper[4793]: I0217 20:26:44.498739 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-v8sgz" event={"ID":"6a587d11-ed16-44ac-a36e-dfd7a7ed3f6a","Type":"ContainerStarted","Data":"dfa13ae911e704267f4bfddaa05c1053da680f1b60d3a7b32dce3c889602d3ea"}
Feb 17 20:26:44 crc kubenswrapper[4793]: I0217 20:26:44.509205 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d172fd57-b658-484f-af67-0f73e9952560-horizon-secret-key\") pod \"horizon-5cf7c98b49-qm95w\" (UID: \"d172fd57-b658-484f-af67-0f73e9952560\") " pod="openstack/horizon-5cf7c98b49-qm95w"
Feb 17 20:26:44 crc kubenswrapper[4793]: I0217 20:26:44.509511 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsrzj\" (UniqueName: \"kubernetes.io/projected/d172fd57-b658-484f-af67-0f73e9952560-kube-api-access-gsrzj\") pod \"horizon-5cf7c98b49-qm95w\" (UID: \"d172fd57-b658-484f-af67-0f73e9952560\") " pod="openstack/horizon-5cf7c98b49-qm95w"
Feb 17 20:26:44 crc kubenswrapper[4793]: I0217 20:26:44.509539 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d172fd57-b658-484f-af67-0f73e9952560-config-data\") pod \"horizon-5cf7c98b49-qm95w\" (UID: \"d172fd57-b658-484f-af67-0f73e9952560\") " pod="openstack/horizon-5cf7c98b49-qm95w"
Feb 17 20:26:44 crc kubenswrapper[4793]: I0217 20:26:44.509594 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d172fd57-b658-484f-af67-0f73e9952560-logs\") pod \"horizon-5cf7c98b49-qm95w\" (UID: \"d172fd57-b658-484f-af67-0f73e9952560\") " pod="openstack/horizon-5cf7c98b49-qm95w"
Feb 17 20:26:44 crc kubenswrapper[4793]: I0217 20:26:44.509761 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d172fd57-b658-484f-af67-0f73e9952560-scripts\") pod \"horizon-5cf7c98b49-qm95w\" (UID: \"d172fd57-b658-484f-af67-0f73e9952560\") " pod="openstack/horizon-5cf7c98b49-qm95w"
Feb 17 20:26:44 crc kubenswrapper[4793]: I0217 20:26:44.527517 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d49f49cc7-24rvs" event={"ID":"c5b5195f-6b92-4704-a57d-d308ac7e8b28","Type":"ContainerStarted","Data":"3c5ebb8e9c27f0951a0238dac00c939f892fec0a2fe582dbd7e238947a909a76"}
Feb 17 20:26:44 crc kubenswrapper[4793]: I0217 20:26:44.536049 4793 generic.go:334] "Generic (PLEG): container finished" podID="0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69" containerID="31f8c5f8c7cd63b162cbbb935e917e592c0b5b552c23c9ea5192fea43bd09151" exitCode=0
Feb 17 20:26:44 crc kubenswrapper[4793]: I0217 20:26:44.536094 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59df4fdd5-c42m5" event={"ID":"0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69","Type":"ContainerDied","Data":"31f8c5f8c7cd63b162cbbb935e917e592c0b5b552c23c9ea5192fea43bd09151"}
Feb 17 20:26:44 crc kubenswrapper[4793]: I0217 20:26:44.536120 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59df4fdd5-c42m5" event={"ID":"0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69","Type":"ContainerStarted","Data":"33b83f8922d832d1fdcfc234047135d0d20f1be6696cae20f41abd13aff7959a"}
Feb 17 20:26:44 crc kubenswrapper[4793]: I0217 20:26:44.557608 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=3.557587066 podStartE2EDuration="3.557587066s" podCreationTimestamp="2026-02-17 20:26:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:26:44.518437413 +0000 UTC m=+1079.810135724" watchObservedRunningTime="2026-02-17 20:26:44.557587066 +0000 UTC m=+1079.849285377"
Feb 17 20:26:44 crc kubenswrapper[4793]: I0217 20:26:44.612082 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d172fd57-b658-484f-af67-0f73e9952560-scripts\") pod \"horizon-5cf7c98b49-qm95w\" (UID: \"d172fd57-b658-484f-af67-0f73e9952560\") " pod="openstack/horizon-5cf7c98b49-qm95w"
Feb 17 20:26:44 crc kubenswrapper[4793]: I0217 20:26:44.612242 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d172fd57-b658-484f-af67-0f73e9952560-horizon-secret-key\") pod \"horizon-5cf7c98b49-qm95w\" (UID: \"d172fd57-b658-484f-af67-0f73e9952560\") " pod="openstack/horizon-5cf7c98b49-qm95w"
Feb 17 20:26:44 crc kubenswrapper[4793]: I0217 20:26:44.612365 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsrzj\" (UniqueName: \"kubernetes.io/projected/d172fd57-b658-484f-af67-0f73e9952560-kube-api-access-gsrzj\") pod \"horizon-5cf7c98b49-qm95w\" (UID: \"d172fd57-b658-484f-af67-0f73e9952560\") " pod="openstack/horizon-5cf7c98b49-qm95w"
Feb 17 20:26:44 crc kubenswrapper[4793]: I0217 20:26:44.612387 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d172fd57-b658-484f-af67-0f73e9952560-config-data\") pod \"horizon-5cf7c98b49-qm95w\" (UID: \"d172fd57-b658-484f-af67-0f73e9952560\") " pod="openstack/horizon-5cf7c98b49-qm95w"
Feb 17 20:26:44 crc kubenswrapper[4793]: I0217 20:26:44.614661 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d172fd57-b658-484f-af67-0f73e9952560-logs\") pod \"horizon-5cf7c98b49-qm95w\" (UID: \"d172fd57-b658-484f-af67-0f73e9952560\") " pod="openstack/horizon-5cf7c98b49-qm95w"
Feb 17 20:26:44 crc kubenswrapper[4793]: I0217 20:26:44.617456 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d172fd57-b658-484f-af67-0f73e9952560-config-data\") pod \"horizon-5cf7c98b49-qm95w\" (UID: \"d172fd57-b658-484f-af67-0f73e9952560\") " pod="openstack/horizon-5cf7c98b49-qm95w"
Feb 17 20:26:44 crc kubenswrapper[4793]: I0217 20:26:44.617816 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d172fd57-b658-484f-af67-0f73e9952560-logs\") pod \"horizon-5cf7c98b49-qm95w\" (UID: \"d172fd57-b658-484f-af67-0f73e9952560\") " pod="openstack/horizon-5cf7c98b49-qm95w"
Feb 17 20:26:44 crc kubenswrapper[4793]: I0217 20:26:44.618889 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d172fd57-b658-484f-af67-0f73e9952560-scripts\") pod \"horizon-5cf7c98b49-qm95w\" (UID: \"d172fd57-b658-484f-af67-0f73e9952560\") " pod="openstack/horizon-5cf7c98b49-qm95w"
Feb 17 20:26:44 crc kubenswrapper[4793]: I0217 20:26:44.676782 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsrzj\" (UniqueName: \"kubernetes.io/projected/d172fd57-b658-484f-af67-0f73e9952560-kube-api-access-gsrzj\") pod \"horizon-5cf7c98b49-qm95w\" (UID: \"d172fd57-b658-484f-af67-0f73e9952560\") " pod="openstack/horizon-5cf7c98b49-qm95w"
Feb 17 20:26:44 crc kubenswrapper[4793]: I0217 20:26:44.677032 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d172fd57-b658-484f-af67-0f73e9952560-horizon-secret-key\") pod \"horizon-5cf7c98b49-qm95w\" (UID: \"d172fd57-b658-484f-af67-0f73e9952560\") " pod="openstack/horizon-5cf7c98b49-qm95w"
Feb 17 20:26:44 crc kubenswrapper[4793]: I0217 20:26:44.692957 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5cf7c98b49-qm95w"
Feb 17 20:26:44 crc kubenswrapper[4793]: I0217 20:26:44.981661 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b649df669-vnfk4"
Feb 17 20:26:45 crc kubenswrapper[4793]: I0217 20:26:45.026077 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e573ed33-28a6-4a91-a8ca-e5d1c87b2f60-config\") pod \"e573ed33-28a6-4a91-a8ca-e5d1c87b2f60\" (UID: \"e573ed33-28a6-4a91-a8ca-e5d1c87b2f60\") "
Feb 17 20:26:45 crc kubenswrapper[4793]: I0217 20:26:45.026113 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e573ed33-28a6-4a91-a8ca-e5d1c87b2f60-ovsdbserver-sb\") pod \"e573ed33-28a6-4a91-a8ca-e5d1c87b2f60\" (UID: \"e573ed33-28a6-4a91-a8ca-e5d1c87b2f60\") "
Feb 17 20:26:45 crc kubenswrapper[4793]: I0217 20:26:45.026201 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e573ed33-28a6-4a91-a8ca-e5d1c87b2f60-ovsdbserver-nb\") pod \"e573ed33-28a6-4a91-a8ca-e5d1c87b2f60\" (UID: \"e573ed33-28a6-4a91-a8ca-e5d1c87b2f60\") "
Feb 17 20:26:45 crc kubenswrapper[4793]: I0217 20:26:45.026265 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pt892\" (UniqueName: \"kubernetes.io/projected/e573ed33-28a6-4a91-a8ca-e5d1c87b2f60-kube-api-access-pt892\") pod \"e573ed33-28a6-4a91-a8ca-e5d1c87b2f60\" (UID: \"e573ed33-28a6-4a91-a8ca-e5d1c87b2f60\") "
Feb 17 20:26:45 crc kubenswrapper[4793]: I0217 20:26:45.026304 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e573ed33-28a6-4a91-a8ca-e5d1c87b2f60-dns-swift-storage-0\") pod \"e573ed33-28a6-4a91-a8ca-e5d1c87b2f60\" (UID: \"e573ed33-28a6-4a91-a8ca-e5d1c87b2f60\") "
Feb 17 20:26:45 crc kubenswrapper[4793]: I0217 20:26:45.026360 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e573ed33-28a6-4a91-a8ca-e5d1c87b2f60-dns-svc\") pod \"e573ed33-28a6-4a91-a8ca-e5d1c87b2f60\" (UID: \"e573ed33-28a6-4a91-a8ca-e5d1c87b2f60\") "
Feb 17 20:26:45 crc kubenswrapper[4793]: I0217 20:26:45.036939 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e573ed33-28a6-4a91-a8ca-e5d1c87b2f60-kube-api-access-pt892" (OuterVolumeSpecName: "kube-api-access-pt892") pod "e573ed33-28a6-4a91-a8ca-e5d1c87b2f60" (UID: "e573ed33-28a6-4a91-a8ca-e5d1c87b2f60"). InnerVolumeSpecName "kube-api-access-pt892". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 20:26:45 crc kubenswrapper[4793]: I0217 20:26:45.133394 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pt892\" (UniqueName: \"kubernetes.io/projected/e573ed33-28a6-4a91-a8ca-e5d1c87b2f60-kube-api-access-pt892\") on node \"crc\" DevicePath \"\""
Feb 17 20:26:45 crc kubenswrapper[4793]: I0217 20:26:45.195907 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e573ed33-28a6-4a91-a8ca-e5d1c87b2f60-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e573ed33-28a6-4a91-a8ca-e5d1c87b2f60" (UID: "e573ed33-28a6-4a91-a8ca-e5d1c87b2f60"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 20:26:45 crc kubenswrapper[4793]: I0217 20:26:45.213446 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e573ed33-28a6-4a91-a8ca-e5d1c87b2f60-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e573ed33-28a6-4a91-a8ca-e5d1c87b2f60" (UID: "e573ed33-28a6-4a91-a8ca-e5d1c87b2f60"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 20:26:45 crc kubenswrapper[4793]: I0217 20:26:45.228907 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e573ed33-28a6-4a91-a8ca-e5d1c87b2f60-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e573ed33-28a6-4a91-a8ca-e5d1c87b2f60" (UID: "e573ed33-28a6-4a91-a8ca-e5d1c87b2f60"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 20:26:45 crc kubenswrapper[4793]: I0217 20:26:45.235096 4793 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e573ed33-28a6-4a91-a8ca-e5d1c87b2f60-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 17 20:26:45 crc kubenswrapper[4793]: I0217 20:26:45.235142 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e573ed33-28a6-4a91-a8ca-e5d1c87b2f60-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 17 20:26:45 crc kubenswrapper[4793]: I0217 20:26:45.235158 4793 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e573ed33-28a6-4a91-a8ca-e5d1c87b2f60-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 17 20:26:45 crc kubenswrapper[4793]: I0217 20:26:45.237126 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e573ed33-28a6-4a91-a8ca-e5d1c87b2f60-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e573ed33-28a6-4a91-a8ca-e5d1c87b2f60" (UID: "e573ed33-28a6-4a91-a8ca-e5d1c87b2f60"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 20:26:45 crc kubenswrapper[4793]: I0217 20:26:45.252373 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e573ed33-28a6-4a91-a8ca-e5d1c87b2f60-config" (OuterVolumeSpecName: "config") pod "e573ed33-28a6-4a91-a8ca-e5d1c87b2f60" (UID: "e573ed33-28a6-4a91-a8ca-e5d1c87b2f60"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 20:26:45 crc kubenswrapper[4793]: I0217 20:26:45.336233 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e573ed33-28a6-4a91-a8ca-e5d1c87b2f60-config\") on node \"crc\" DevicePath \"\""
Feb 17 20:26:45 crc kubenswrapper[4793]: I0217 20:26:45.336509 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e573ed33-28a6-4a91-a8ca-e5d1c87b2f60-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 17 20:26:45 crc kubenswrapper[4793]: I0217 20:26:45.380603 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5cf7c98b49-qm95w"]
Feb 17 20:26:45 crc kubenswrapper[4793]: I0217 20:26:45.568116 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b649df669-vnfk4"
Feb 17 20:26:45 crc kubenswrapper[4793]: I0217 20:26:45.573833 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b649df669-vnfk4" event={"ID":"e573ed33-28a6-4a91-a8ca-e5d1c87b2f60","Type":"ContainerDied","Data":"8551bcf13ca4a4783331c1bc323333b1a8a20a1189c7573f2c32c6f5dd13f76c"}
Feb 17 20:26:45 crc kubenswrapper[4793]: I0217 20:26:45.573872 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"becfe481-885b-4302-9265-d9de06b4a2c1","Type":"ContainerStarted","Data":"4e5191d09f2416eb8e72133fda27377cb5545471e190170c54e3f71ff18442ec"}
Feb 17 20:26:45 crc kubenswrapper[4793]: I0217 20:26:45.573891 4793 scope.go:117] "RemoveContainer" containerID="5811f6d4534b134fa2c8da813ec53c989553d60cc76d3e068767280ad3013004"
Feb 17 20:26:45 crc kubenswrapper[4793]: I0217 20:26:45.581840 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="90727286-0f2a-4930-8993-4b139e519e19" containerName="watcher-api-log" containerID="cri-o://91864bdf7ccdb24bad538fea0b70a8e388ea056e4f86c87da40c93466b9cc214" gracePeriod=30
Feb 17 20:26:45 crc kubenswrapper[4793]: I0217 20:26:45.582900 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59df4fdd5-c42m5" event={"ID":"0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69","Type":"ContainerStarted","Data":"23618709e301f46657a386728873c800ad227f618b85cd69741013e8f4837dce"}
Feb 17 20:26:45 crc kubenswrapper[4793]: I0217 20:26:45.582937 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59df4fdd5-c42m5"
Feb 17 20:26:45 crc kubenswrapper[4793]: I0217 20:26:45.583680 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="90727286-0f2a-4930-8993-4b139e519e19" containerName="watcher-api" containerID="cri-o://600e49373332cf618231a2c856d3bfc16abc4c6010d6b26ba77e41c049d16227" gracePeriod=30
Feb 17 20:26:45 crc kubenswrapper[4793]: I0217 20:26:45.602107 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="90727286-0f2a-4930-8993-4b139e519e19" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.152:9322/\": EOF"
Feb 17 20:26:45 crc kubenswrapper[4793]: I0217 20:26:45.604391 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="90727286-0f2a-4930-8993-4b139e519e19" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.152:9322/\": EOF"
Feb 17 20:26:45 crc kubenswrapper[4793]: I0217 20:26:45.882811 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b649df669-vnfk4"]
Feb 17 20:26:45 crc kubenswrapper[4793]: I0217 20:26:45.889928 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b649df669-vnfk4"]
Feb 17 20:26:45 crc kubenswrapper[4793]: I0217 20:26:45.904345 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59df4fdd5-c42m5" podStartSLOduration=4.904326011 podStartE2EDuration="4.904326011s" podCreationTimestamp="2026-02-17 20:26:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:26:45.895110102 +0000 UTC m=+1081.186808413" watchObservedRunningTime="2026-02-17 20:26:45.904326011 +0000 UTC m=+1081.196024322"
Feb 17 20:26:46 crc kubenswrapper[4793]: I0217 20:26:46.600644 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5cf7c98b49-qm95w" event={"ID":"d172fd57-b658-484f-af67-0f73e9952560","Type":"ContainerStarted","Data":"c616c4b7f80a06297cbaf304fef44525b132cfb688b4d8c9133be39e8c527a5c"}
Feb 17 20:26:46 crc kubenswrapper[4793]: I0217 20:26:46.608650 4793 generic.go:334] "Generic (PLEG): container finished" podID="90727286-0f2a-4930-8993-4b139e519e19" containerID="91864bdf7ccdb24bad538fea0b70a8e388ea056e4f86c87da40c93466b9cc214" exitCode=143
Feb 17 20:26:46 crc kubenswrapper[4793]: I0217 20:26:46.608713 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"90727286-0f2a-4930-8993-4b139e519e19","Type":"ContainerDied","Data":"91864bdf7ccdb24bad538fea0b70a8e388ea056e4f86c87da40c93466b9cc214"}
Feb 17 20:26:46 crc kubenswrapper[4793]: I0217 20:26:46.610874 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b","Type":"ContainerStarted","Data":"e796ef1d0953326391a4b31a5b972750c4d7a3708a83d401aecc19347b17e540"}
Feb 17 20:26:47 crc kubenswrapper[4793]: I0217 20:26:47.255149 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0"
Feb 17 20:26:47 crc kubenswrapper[4793]: I0217 20:26:47.548120 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e573ed33-28a6-4a91-a8ca-e5d1c87b2f60" path="/var/lib/kubelet/pods/e573ed33-28a6-4a91-a8ca-e5d1c87b2f60/volumes"
Feb 17 20:26:48 crc kubenswrapper[4793]: I0217 20:26:48.351719 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="90727286-0f2a-4930-8993-4b139e519e19" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.152:9322/\": read tcp 10.217.0.2:59732->10.217.0.152:9322: read: connection reset by peer"
Feb 17 20:26:48 crc kubenswrapper[4793]: I0217 20:26:48.630071 4793 generic.go:334] "Generic (PLEG): container finished" podID="90727286-0f2a-4930-8993-4b139e519e19" containerID="600e49373332cf618231a2c856d3bfc16abc4c6010d6b26ba77e41c049d16227" exitCode=0
Feb 17 20:26:48 crc kubenswrapper[4793]: I0217 20:26:48.630140 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"90727286-0f2a-4930-8993-4b139e519e19","Type":"ContainerDied","Data":"600e49373332cf618231a2c856d3bfc16abc4c6010d6b26ba77e41c049d16227"}
Feb 17 20:26:50 crc kubenswrapper[4793]: I0217 20:26:50.102230 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 20:26:50 crc kubenswrapper[4793]: I0217 20:26:50.102670 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 20:26:50 crc kubenswrapper[4793]: I0217 20:26:50.759789 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6d49f49cc7-24rvs"]
Feb 17 20:26:50 crc kubenswrapper[4793]: I0217 20:26:50.795392 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-75d6fd885d-fw6ln"]
Feb 17 20:26:50 crc kubenswrapper[4793]: E0217 20:26:50.796072 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e573ed33-28a6-4a91-a8ca-e5d1c87b2f60" containerName="init"
Feb 17 20:26:50 crc kubenswrapper[4793]: I0217 20:26:50.796085 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e573ed33-28a6-4a91-a8ca-e5d1c87b2f60" containerName="init"
Feb 17 20:26:50 crc kubenswrapper[4793]: I0217 20:26:50.796262 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="e573ed33-28a6-4a91-a8ca-e5d1c87b2f60" containerName="init"
Feb 17 20:26:50 crc kubenswrapper[4793]: I0217 20:26:50.797644 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-75d6fd885d-fw6ln"
Feb 17 20:26:50 crc kubenswrapper[4793]: I0217 20:26:50.800551 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc"
Feb 17 20:26:50 crc kubenswrapper[4793]: I0217 20:26:50.807783 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-75d6fd885d-fw6ln"]
Feb 17 20:26:50 crc kubenswrapper[4793]: I0217 20:26:50.817894 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5cf7c98b49-qm95w"]
Feb 17 20:26:50 crc kubenswrapper[4793]: I0217 20:26:50.872643 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-75c8b5cf48-t8jmz"]
Feb 17 20:26:50 crc kubenswrapper[4793]: I0217 20:26:50.874070 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-75c8b5cf48-t8jmz"
Feb 17 20:26:50 crc kubenswrapper[4793]: I0217 20:26:50.880632 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-75c8b5cf48-t8jmz"]
Feb 17 20:26:50 crc kubenswrapper[4793]: I0217 20:26:50.971165 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjmtm\" (UniqueName: \"kubernetes.io/projected/6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf-kube-api-access-gjmtm\") pod \"horizon-75d6fd885d-fw6ln\" (UID: \"6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf\") " pod="openstack/horizon-75d6fd885d-fw6ln"
Feb 17 20:26:50 crc kubenswrapper[4793]: I0217 20:26:50.971261 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf-combined-ca-bundle\") pod \"horizon-75d6fd885d-fw6ln\" (UID: \"6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf\") " pod="openstack/horizon-75d6fd885d-fw6ln"
Feb 17 20:26:50 crc kubenswrapper[4793]: I0217 20:26:50.971631 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf-config-data\") pod \"horizon-75d6fd885d-fw6ln\" (UID: \"6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf\") " pod="openstack/horizon-75d6fd885d-fw6ln"
Feb 17 20:26:50 crc kubenswrapper[4793]: I0217 20:26:50.971670 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf-scripts\") pod \"horizon-75d6fd885d-fw6ln\" (UID: \"6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf\") " pod="openstack/horizon-75d6fd885d-fw6ln"
Feb 17 20:26:50 crc kubenswrapper[4793]: I0217 20:26:50.971726 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf-horizon-tls-certs\") pod \"horizon-75d6fd885d-fw6ln\" (UID: \"6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf\") " pod="openstack/horizon-75d6fd885d-fw6ln"
Feb 17 20:26:50 crc kubenswrapper[4793]: I0217 20:26:50.971767 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf-logs\") pod \"horizon-75d6fd885d-fw6ln\" (UID: \"6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf\") " pod="openstack/horizon-75d6fd885d-fw6ln"
Feb 17 20:26:50 crc kubenswrapper[4793]: I0217 20:26:50.971830 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf-horizon-secret-key\") pod \"horizon-75d6fd885d-fw6ln\" (UID: \"6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf\") " pod="openstack/horizon-75d6fd885d-fw6ln"
Feb 17 20:26:51 crc kubenswrapper[4793]: I0217 20:26:51.073680 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf-combined-ca-bundle\") pod \"horizon-75d6fd885d-fw6ln\" (UID: \"6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf\") " pod="openstack/horizon-75d6fd885d-fw6ln"
Feb 17 20:26:51 crc kubenswrapper[4793]: I0217 20:26:51.073736 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/066a6b1f-85a8-4015-9c17-a9eb27320040-logs\") pod \"horizon-75c8b5cf48-t8jmz\" (UID: \"066a6b1f-85a8-4015-9c17-a9eb27320040\") " pod="openstack/horizon-75c8b5cf48-t8jmz"
Feb 17 20:26:51 crc kubenswrapper[4793]: I0217 20:26:51.073794 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf-config-data\") pod \"horizon-75d6fd885d-fw6ln\" (UID: \"6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf\") " pod="openstack/horizon-75d6fd885d-fw6ln"
Feb 17 20:26:51 crc kubenswrapper[4793]: I0217 20:26:51.073823 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/066a6b1f-85a8-4015-9c17-a9eb27320040-config-data\") pod \"horizon-75c8b5cf48-t8jmz\" (UID: \"066a6b1f-85a8-4015-9c17-a9eb27320040\") " pod="openstack/horizon-75c8b5cf48-t8jmz"
Feb 17 20:26:51 crc kubenswrapper[4793]: I0217 20:26:51.073840 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/066a6b1f-85a8-4015-9c17-a9eb27320040-combined-ca-bundle\") pod \"horizon-75c8b5cf48-t8jmz\" (UID: \"066a6b1f-85a8-4015-9c17-a9eb27320040\") " pod="openstack/horizon-75c8b5cf48-t8jmz"
Feb 17 20:26:51 crc kubenswrapper[4793]: I0217 20:26:51.073874 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf-scripts\") pod \"horizon-75d6fd885d-fw6ln\" (UID: \"6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf\") " pod="openstack/horizon-75d6fd885d-fw6ln"
Feb 17 20:26:51 crc kubenswrapper[4793]: I0217 20:26:51.073918 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/066a6b1f-85a8-4015-9c17-a9eb27320040-scripts\") pod \"horizon-75c8b5cf48-t8jmz\" (UID: \"066a6b1f-85a8-4015-9c17-a9eb27320040\") " pod="openstack/horizon-75c8b5cf48-t8jmz"
Feb 17 20:26:51 crc kubenswrapper[4793]: I0217 20:26:51.073941 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf-horizon-tls-certs\") pod \"horizon-75d6fd885d-fw6ln\" (UID: \"6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf\") " pod="openstack/horizon-75d6fd885d-fw6ln"
Feb 17 20:26:51 crc kubenswrapper[4793]: I0217 20:26:51.074018 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf-logs\") pod \"horizon-75d6fd885d-fw6ln\" (UID: \"6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf\") " pod="openstack/horizon-75d6fd885d-fw6ln"
Feb 17 20:26:51 crc kubenswrapper[4793]: I0217 20:26:51.074057 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/066a6b1f-85a8-4015-9c17-a9eb27320040-horizon-secret-key\") pod \"horizon-75c8b5cf48-t8jmz\" (UID: \"066a6b1f-85a8-4015-9c17-a9eb27320040\") " pod="openstack/horizon-75c8b5cf48-t8jmz"
Feb 17 20:26:51 crc kubenswrapper[4793]: I0217 20:26:51.075104 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf-logs\") pod \"horizon-75d6fd885d-fw6ln\" (UID: \"6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf\") " pod="openstack/horizon-75d6fd885d-fw6ln"
Feb 17 20:26:51 crc kubenswrapper[4793]: I0217 20:26:51.075213 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf-scripts\") pod \"horizon-75d6fd885d-fw6ln\" (UID: \"6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf\") " pod="openstack/horizon-75d6fd885d-fw6ln"
Feb 17 20:26:51 crc kubenswrapper[4793]: I0217 20:26:51.075437 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf-config-data\") pod \"horizon-75d6fd885d-fw6ln\" (UID: \"6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf\") " pod="openstack/horizon-75d6fd885d-fw6ln"
Feb 17 20:26:51 crc kubenswrapper[4793]: I0217 20:26:51.075527 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/066a6b1f-85a8-4015-9c17-a9eb27320040-horizon-tls-certs\") pod \"horizon-75c8b5cf48-t8jmz\" (UID: \"066a6b1f-85a8-4015-9c17-a9eb27320040\") " pod="openstack/horizon-75c8b5cf48-t8jmz"
Feb 17 20:26:51 crc kubenswrapper[4793]: I0217 20:26:51.075557 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf-horizon-secret-key\") pod \"horizon-75d6fd885d-fw6ln\" (UID: \"6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf\") " pod="openstack/horizon-75d6fd885d-fw6ln"
Feb 17 20:26:51 crc kubenswrapper[4793]: I0217 20:26:51.075615 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjmtm\" (UniqueName: \"kubernetes.io/projected/6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf-kube-api-access-gjmtm\") pod \"horizon-75d6fd885d-fw6ln\" (UID: \"6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf\") " pod="openstack/horizon-75d6fd885d-fw6ln"
Feb 17 20:26:51 crc kubenswrapper[4793]: I0217 20:26:51.076026 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spjdd\" (UniqueName: \"kubernetes.io/projected/066a6b1f-85a8-4015-9c17-a9eb27320040-kube-api-access-spjdd\") pod \"horizon-75c8b5cf48-t8jmz\" (UID: \"066a6b1f-85a8-4015-9c17-a9eb27320040\") " pod="openstack/horizon-75c8b5cf48-t8jmz"
Feb 17 20:26:51 crc kubenswrapper[4793]: I0217 20:26:51.079306 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf-horizon-secret-key\") pod \"horizon-75d6fd885d-fw6ln\" (UID: \"6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf\") " pod="openstack/horizon-75d6fd885d-fw6ln"
Feb 17 20:26:51 crc kubenswrapper[4793]: I0217 20:26:51.087759 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf-horizon-tls-certs\") pod \"horizon-75d6fd885d-fw6ln\" (UID: \"6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf\") " pod="openstack/horizon-75d6fd885d-fw6ln"
Feb 17 20:26:51 crc kubenswrapper[4793]: I0217 20:26:51.090776 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf-combined-ca-bundle\") pod \"horizon-75d6fd885d-fw6ln\" (UID: \"6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf\") " pod="openstack/horizon-75d6fd885d-fw6ln"
Feb 17 20:26:51 crc kubenswrapper[4793]: I0217 20:26:51.095724 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjmtm\" (UniqueName: \"kubernetes.io/projected/6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf-kube-api-access-gjmtm\") pod \"horizon-75d6fd885d-fw6ln\" (UID: \"6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf\") " pod="openstack/horizon-75d6fd885d-fw6ln"
Feb 17 20:26:51 crc kubenswrapper[4793]: I0217 20:26:51.136015 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-75d6fd885d-fw6ln"
Feb 17 20:26:51 crc kubenswrapper[4793]: I0217 20:26:51.177189 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/066a6b1f-85a8-4015-9c17-a9eb27320040-horizon-secret-key\") pod \"horizon-75c8b5cf48-t8jmz\" (UID: \"066a6b1f-85a8-4015-9c17-a9eb27320040\") " pod="openstack/horizon-75c8b5cf48-t8jmz"
Feb 17 20:26:51 crc kubenswrapper[4793]: I0217 20:26:51.177258 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/066a6b1f-85a8-4015-9c17-a9eb27320040-horizon-tls-certs\") pod \"horizon-75c8b5cf48-t8jmz\" (UID: \"066a6b1f-85a8-4015-9c17-a9eb27320040\") " pod="openstack/horizon-75c8b5cf48-t8jmz"
Feb 17 20:26:51 crc kubenswrapper[4793]: I0217 20:26:51.177297 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spjdd\" (UniqueName: \"kubernetes.io/projected/066a6b1f-85a8-4015-9c17-a9eb27320040-kube-api-access-spjdd\") pod \"horizon-75c8b5cf48-t8jmz\" (UID: \"066a6b1f-85a8-4015-9c17-a9eb27320040\") " pod="openstack/horizon-75c8b5cf48-t8jmz"
Feb 17 20:26:51 crc kubenswrapper[4793]: I0217 20:26:51.177329 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/066a6b1f-85a8-4015-9c17-a9eb27320040-logs\") pod \"horizon-75c8b5cf48-t8jmz\" (UID: \"066a6b1f-85a8-4015-9c17-a9eb27320040\") " pod="openstack/horizon-75c8b5cf48-t8jmz"
Feb 17 20:26:51 crc kubenswrapper[4793]: I0217 20:26:51.177391 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/066a6b1f-85a8-4015-9c17-a9eb27320040-combined-ca-bundle\") pod \"horizon-75c8b5cf48-t8jmz\" (UID:
\"066a6b1f-85a8-4015-9c17-a9eb27320040\") " pod="openstack/horizon-75c8b5cf48-t8jmz" Feb 17 20:26:51 crc kubenswrapper[4793]: I0217 20:26:51.177425 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/066a6b1f-85a8-4015-9c17-a9eb27320040-config-data\") pod \"horizon-75c8b5cf48-t8jmz\" (UID: \"066a6b1f-85a8-4015-9c17-a9eb27320040\") " pod="openstack/horizon-75c8b5cf48-t8jmz" Feb 17 20:26:51 crc kubenswrapper[4793]: I0217 20:26:51.177941 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/066a6b1f-85a8-4015-9c17-a9eb27320040-logs\") pod \"horizon-75c8b5cf48-t8jmz\" (UID: \"066a6b1f-85a8-4015-9c17-a9eb27320040\") " pod="openstack/horizon-75c8b5cf48-t8jmz" Feb 17 20:26:51 crc kubenswrapper[4793]: I0217 20:26:51.177974 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/066a6b1f-85a8-4015-9c17-a9eb27320040-scripts\") pod \"horizon-75c8b5cf48-t8jmz\" (UID: \"066a6b1f-85a8-4015-9c17-a9eb27320040\") " pod="openstack/horizon-75c8b5cf48-t8jmz" Feb 17 20:26:51 crc kubenswrapper[4793]: I0217 20:26:51.178498 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/066a6b1f-85a8-4015-9c17-a9eb27320040-scripts\") pod \"horizon-75c8b5cf48-t8jmz\" (UID: \"066a6b1f-85a8-4015-9c17-a9eb27320040\") " pod="openstack/horizon-75c8b5cf48-t8jmz" Feb 17 20:26:51 crc kubenswrapper[4793]: I0217 20:26:51.178765 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/066a6b1f-85a8-4015-9c17-a9eb27320040-config-data\") pod \"horizon-75c8b5cf48-t8jmz\" (UID: \"066a6b1f-85a8-4015-9c17-a9eb27320040\") " pod="openstack/horizon-75c8b5cf48-t8jmz" Feb 17 20:26:51 crc kubenswrapper[4793]: I0217 20:26:51.182332 4793 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/066a6b1f-85a8-4015-9c17-a9eb27320040-horizon-secret-key\") pod \"horizon-75c8b5cf48-t8jmz\" (UID: \"066a6b1f-85a8-4015-9c17-a9eb27320040\") " pod="openstack/horizon-75c8b5cf48-t8jmz" Feb 17 20:26:51 crc kubenswrapper[4793]: I0217 20:26:51.183587 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/066a6b1f-85a8-4015-9c17-a9eb27320040-combined-ca-bundle\") pod \"horizon-75c8b5cf48-t8jmz\" (UID: \"066a6b1f-85a8-4015-9c17-a9eb27320040\") " pod="openstack/horizon-75c8b5cf48-t8jmz" Feb 17 20:26:51 crc kubenswrapper[4793]: I0217 20:26:51.183873 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/066a6b1f-85a8-4015-9c17-a9eb27320040-horizon-tls-certs\") pod \"horizon-75c8b5cf48-t8jmz\" (UID: \"066a6b1f-85a8-4015-9c17-a9eb27320040\") " pod="openstack/horizon-75c8b5cf48-t8jmz" Feb 17 20:26:51 crc kubenswrapper[4793]: I0217 20:26:51.200536 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spjdd\" (UniqueName: \"kubernetes.io/projected/066a6b1f-85a8-4015-9c17-a9eb27320040-kube-api-access-spjdd\") pod \"horizon-75c8b5cf48-t8jmz\" (UID: \"066a6b1f-85a8-4015-9c17-a9eb27320040\") " pod="openstack/horizon-75c8b5cf48-t8jmz" Feb 17 20:26:51 crc kubenswrapper[4793]: I0217 20:26:51.206550 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-75c8b5cf48-t8jmz" Feb 17 20:26:51 crc kubenswrapper[4793]: E0217 20:26:51.276465 4793 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode573ed33_28a6_4a91_a8ca_e5d1c87b2f60.slice\": RecentStats: unable to find data in memory cache]" Feb 17 20:26:51 crc kubenswrapper[4793]: I0217 20:26:51.669098 4793 generic.go:334] "Generic (PLEG): container finished" podID="f7aed618-93d7-4bc0-a99d-e4d03be8fbfd" containerID="cbf9b0cb7ef08745f99cc490d56cd45134d57c061ca738f3e84c09bab9bb0a81" exitCode=0 Feb 17 20:26:51 crc kubenswrapper[4793]: I0217 20:26:51.669109 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8xslw" event={"ID":"f7aed618-93d7-4bc0-a99d-e4d03be8fbfd","Type":"ContainerDied","Data":"cbf9b0cb7ef08745f99cc490d56cd45134d57c061ca738f3e84c09bab9bb0a81"} Feb 17 20:26:52 crc kubenswrapper[4793]: I0217 20:26:52.256375 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="90727286-0f2a-4930-8993-4b139e519e19" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.152:9322/\": dial tcp 10.217.0.152:9322: connect: connection refused" Feb 17 20:26:52 crc kubenswrapper[4793]: I0217 20:26:52.626874 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59df4fdd5-c42m5" Feb 17 20:26:52 crc kubenswrapper[4793]: I0217 20:26:52.714218 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78895f69c7-4gh4d"] Feb 17 20:26:52 crc kubenswrapper[4793]: I0217 20:26:52.714944 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-78895f69c7-4gh4d" podUID="7f096f2c-d917-4ddc-92eb-3628f9d1cd73" containerName="dnsmasq-dns" 
containerID="cri-o://ccb9d33bf3a4285fa2d6f7326460bae6ced65f4fdca00d48f3a80dc8096c9e08" gracePeriod=10 Feb 17 20:26:53 crc kubenswrapper[4793]: I0217 20:26:53.694143 4793 generic.go:334] "Generic (PLEG): container finished" podID="7f096f2c-d917-4ddc-92eb-3628f9d1cd73" containerID="ccb9d33bf3a4285fa2d6f7326460bae6ced65f4fdca00d48f3a80dc8096c9e08" exitCode=0 Feb 17 20:26:53 crc kubenswrapper[4793]: I0217 20:26:53.694185 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78895f69c7-4gh4d" event={"ID":"7f096f2c-d917-4ddc-92eb-3628f9d1cd73","Type":"ContainerDied","Data":"ccb9d33bf3a4285fa2d6f7326460bae6ced65f4fdca00d48f3a80dc8096c9e08"} Feb 17 20:26:55 crc kubenswrapper[4793]: I0217 20:26:55.442809 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-78895f69c7-4gh4d" podUID="7f096f2c-d917-4ddc-92eb-3628f9d1cd73" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.139:5353: connect: connection refused" Feb 17 20:26:55 crc kubenswrapper[4793]: I0217 20:26:55.557577 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 17 20:26:55 crc kubenswrapper[4793]: I0217 20:26:55.679874 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90727286-0f2a-4930-8993-4b139e519e19-combined-ca-bundle\") pod \"90727286-0f2a-4930-8993-4b139e519e19\" (UID: \"90727286-0f2a-4930-8993-4b139e519e19\") " Feb 17 20:26:55 crc kubenswrapper[4793]: I0217 20:26:55.680345 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/90727286-0f2a-4930-8993-4b139e519e19-custom-prometheus-ca\") pod \"90727286-0f2a-4930-8993-4b139e519e19\" (UID: \"90727286-0f2a-4930-8993-4b139e519e19\") " Feb 17 20:26:55 crc kubenswrapper[4793]: I0217 20:26:55.680440 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90727286-0f2a-4930-8993-4b139e519e19-config-data\") pod \"90727286-0f2a-4930-8993-4b139e519e19\" (UID: \"90727286-0f2a-4930-8993-4b139e519e19\") " Feb 17 20:26:55 crc kubenswrapper[4793]: I0217 20:26:55.680589 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7l64\" (UniqueName: \"kubernetes.io/projected/90727286-0f2a-4930-8993-4b139e519e19-kube-api-access-m7l64\") pod \"90727286-0f2a-4930-8993-4b139e519e19\" (UID: \"90727286-0f2a-4930-8993-4b139e519e19\") " Feb 17 20:26:55 crc kubenswrapper[4793]: I0217 20:26:55.680897 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90727286-0f2a-4930-8993-4b139e519e19-logs\") pod \"90727286-0f2a-4930-8993-4b139e519e19\" (UID: \"90727286-0f2a-4930-8993-4b139e519e19\") " Feb 17 20:26:55 crc kubenswrapper[4793]: I0217 20:26:55.685029 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/90727286-0f2a-4930-8993-4b139e519e19-logs" (OuterVolumeSpecName: "logs") pod "90727286-0f2a-4930-8993-4b139e519e19" (UID: "90727286-0f2a-4930-8993-4b139e519e19"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:26:55 crc kubenswrapper[4793]: I0217 20:26:55.692745 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90727286-0f2a-4930-8993-4b139e519e19-kube-api-access-m7l64" (OuterVolumeSpecName: "kube-api-access-m7l64") pod "90727286-0f2a-4930-8993-4b139e519e19" (UID: "90727286-0f2a-4930-8993-4b139e519e19"). InnerVolumeSpecName "kube-api-access-m7l64". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:26:55 crc kubenswrapper[4793]: I0217 20:26:55.750555 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90727286-0f2a-4930-8993-4b139e519e19-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "90727286-0f2a-4930-8993-4b139e519e19" (UID: "90727286-0f2a-4930-8993-4b139e519e19"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:26:55 crc kubenswrapper[4793]: I0217 20:26:55.774268 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90727286-0f2a-4930-8993-4b139e519e19-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "90727286-0f2a-4930-8993-4b139e519e19" (UID: "90727286-0f2a-4930-8993-4b139e519e19"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:26:55 crc kubenswrapper[4793]: I0217 20:26:55.791073 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90727286-0f2a-4930-8993-4b139e519e19-logs\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:55 crc kubenswrapper[4793]: I0217 20:26:55.791101 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90727286-0f2a-4930-8993-4b139e519e19-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:55 crc kubenswrapper[4793]: I0217 20:26:55.791111 4793 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/90727286-0f2a-4930-8993-4b139e519e19-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:55 crc kubenswrapper[4793]: I0217 20:26:55.791121 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7l64\" (UniqueName: \"kubernetes.io/projected/90727286-0f2a-4930-8993-4b139e519e19-kube-api-access-m7l64\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:55 crc kubenswrapper[4793]: I0217 20:26:55.802777 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"90727286-0f2a-4930-8993-4b139e519e19","Type":"ContainerDied","Data":"17d936383a56fa895e5d60cb71ea42ade5ad47045fad072960eb72d48296ea85"} Feb 17 20:26:55 crc kubenswrapper[4793]: I0217 20:26:55.802830 4793 scope.go:117] "RemoveContainer" containerID="600e49373332cf618231a2c856d3bfc16abc4c6010d6b26ba77e41c049d16227" Feb 17 20:26:55 crc kubenswrapper[4793]: I0217 20:26:55.802976 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 17 20:26:55 crc kubenswrapper[4793]: I0217 20:26:55.817789 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90727286-0f2a-4930-8993-4b139e519e19-config-data" (OuterVolumeSpecName: "config-data") pod "90727286-0f2a-4930-8993-4b139e519e19" (UID: "90727286-0f2a-4930-8993-4b139e519e19"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:26:55 crc kubenswrapper[4793]: I0217 20:26:55.893270 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90727286-0f2a-4930-8993-4b139e519e19-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 20:26:56 crc kubenswrapper[4793]: I0217 20:26:56.145648 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 17 20:26:56 crc kubenswrapper[4793]: I0217 20:26:56.165154 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Feb 17 20:26:56 crc kubenswrapper[4793]: I0217 20:26:56.177524 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Feb 17 20:26:56 crc kubenswrapper[4793]: E0217 20:26:56.178049 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90727286-0f2a-4930-8993-4b139e519e19" containerName="watcher-api-log" Feb 17 20:26:56 crc kubenswrapper[4793]: I0217 20:26:56.178074 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="90727286-0f2a-4930-8993-4b139e519e19" containerName="watcher-api-log" Feb 17 20:26:56 crc kubenswrapper[4793]: E0217 20:26:56.178106 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90727286-0f2a-4930-8993-4b139e519e19" containerName="watcher-api" Feb 17 20:26:56 crc kubenswrapper[4793]: I0217 20:26:56.178114 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="90727286-0f2a-4930-8993-4b139e519e19" containerName="watcher-api" Feb 17 20:26:56 crc 
kubenswrapper[4793]: I0217 20:26:56.178352 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="90727286-0f2a-4930-8993-4b139e519e19" containerName="watcher-api-log" Feb 17 20:26:56 crc kubenswrapper[4793]: I0217 20:26:56.178377 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="90727286-0f2a-4930-8993-4b139e519e19" containerName="watcher-api" Feb 17 20:26:56 crc kubenswrapper[4793]: I0217 20:26:56.179589 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 17 20:26:56 crc kubenswrapper[4793]: I0217 20:26:56.184418 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Feb 17 20:26:56 crc kubenswrapper[4793]: I0217 20:26:56.195619 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 17 20:26:56 crc kubenswrapper[4793]: I0217 20:26:56.298986 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6z7w\" (UniqueName: \"kubernetes.io/projected/5fdf7fdb-aef6-4c17-a051-ba5320d07ea7-kube-api-access-z6z7w\") pod \"watcher-api-0\" (UID: \"5fdf7fdb-aef6-4c17-a051-ba5320d07ea7\") " pod="openstack/watcher-api-0" Feb 17 20:26:56 crc kubenswrapper[4793]: I0217 20:26:56.299059 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5fdf7fdb-aef6-4c17-a051-ba5320d07ea7-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"5fdf7fdb-aef6-4c17-a051-ba5320d07ea7\") " pod="openstack/watcher-api-0" Feb 17 20:26:56 crc kubenswrapper[4793]: I0217 20:26:56.299118 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5fdf7fdb-aef6-4c17-a051-ba5320d07ea7-logs\") pod \"watcher-api-0\" (UID: \"5fdf7fdb-aef6-4c17-a051-ba5320d07ea7\") " 
pod="openstack/watcher-api-0" Feb 17 20:26:56 crc kubenswrapper[4793]: I0217 20:26:56.299156 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fdf7fdb-aef6-4c17-a051-ba5320d07ea7-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"5fdf7fdb-aef6-4c17-a051-ba5320d07ea7\") " pod="openstack/watcher-api-0" Feb 17 20:26:56 crc kubenswrapper[4793]: I0217 20:26:56.299208 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fdf7fdb-aef6-4c17-a051-ba5320d07ea7-config-data\") pod \"watcher-api-0\" (UID: \"5fdf7fdb-aef6-4c17-a051-ba5320d07ea7\") " pod="openstack/watcher-api-0" Feb 17 20:26:56 crc kubenswrapper[4793]: I0217 20:26:56.400771 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6z7w\" (UniqueName: \"kubernetes.io/projected/5fdf7fdb-aef6-4c17-a051-ba5320d07ea7-kube-api-access-z6z7w\") pod \"watcher-api-0\" (UID: \"5fdf7fdb-aef6-4c17-a051-ba5320d07ea7\") " pod="openstack/watcher-api-0" Feb 17 20:26:56 crc kubenswrapper[4793]: I0217 20:26:56.400826 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5fdf7fdb-aef6-4c17-a051-ba5320d07ea7-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"5fdf7fdb-aef6-4c17-a051-ba5320d07ea7\") " pod="openstack/watcher-api-0" Feb 17 20:26:56 crc kubenswrapper[4793]: I0217 20:26:56.400869 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5fdf7fdb-aef6-4c17-a051-ba5320d07ea7-logs\") pod \"watcher-api-0\" (UID: \"5fdf7fdb-aef6-4c17-a051-ba5320d07ea7\") " pod="openstack/watcher-api-0" Feb 17 20:26:56 crc kubenswrapper[4793]: I0217 20:26:56.400898 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fdf7fdb-aef6-4c17-a051-ba5320d07ea7-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"5fdf7fdb-aef6-4c17-a051-ba5320d07ea7\") " pod="openstack/watcher-api-0" Feb 17 20:26:56 crc kubenswrapper[4793]: I0217 20:26:56.400942 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fdf7fdb-aef6-4c17-a051-ba5320d07ea7-config-data\") pod \"watcher-api-0\" (UID: \"5fdf7fdb-aef6-4c17-a051-ba5320d07ea7\") " pod="openstack/watcher-api-0" Feb 17 20:26:56 crc kubenswrapper[4793]: I0217 20:26:56.401321 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5fdf7fdb-aef6-4c17-a051-ba5320d07ea7-logs\") pod \"watcher-api-0\" (UID: \"5fdf7fdb-aef6-4c17-a051-ba5320d07ea7\") " pod="openstack/watcher-api-0" Feb 17 20:26:56 crc kubenswrapper[4793]: I0217 20:26:56.407122 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fdf7fdb-aef6-4c17-a051-ba5320d07ea7-config-data\") pod \"watcher-api-0\" (UID: \"5fdf7fdb-aef6-4c17-a051-ba5320d07ea7\") " pod="openstack/watcher-api-0" Feb 17 20:26:56 crc kubenswrapper[4793]: I0217 20:26:56.407262 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fdf7fdb-aef6-4c17-a051-ba5320d07ea7-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"5fdf7fdb-aef6-4c17-a051-ba5320d07ea7\") " pod="openstack/watcher-api-0" Feb 17 20:26:56 crc kubenswrapper[4793]: I0217 20:26:56.411371 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5fdf7fdb-aef6-4c17-a051-ba5320d07ea7-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"5fdf7fdb-aef6-4c17-a051-ba5320d07ea7\") " pod="openstack/watcher-api-0" Feb 17 20:26:56 crc 
kubenswrapper[4793]: I0217 20:26:56.422587 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6z7w\" (UniqueName: \"kubernetes.io/projected/5fdf7fdb-aef6-4c17-a051-ba5320d07ea7-kube-api-access-z6z7w\") pod \"watcher-api-0\" (UID: \"5fdf7fdb-aef6-4c17-a051-ba5320d07ea7\") " pod="openstack/watcher-api-0" Feb 17 20:26:56 crc kubenswrapper[4793]: I0217 20:26:56.497310 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 17 20:26:57 crc kubenswrapper[4793]: I0217 20:26:57.569014 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90727286-0f2a-4930-8993-4b139e519e19" path="/var/lib/kubelet/pods/90727286-0f2a-4930-8993-4b139e519e19/volumes" Feb 17 20:27:00 crc kubenswrapper[4793]: I0217 20:27:00.432489 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-78895f69c7-4gh4d" podUID="7f096f2c-d917-4ddc-92eb-3628f9d1cd73" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.139:5353: connect: connection refused" Feb 17 20:27:01 crc kubenswrapper[4793]: E0217 20:27:01.539749 4793 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode573ed33_28a6_4a91_a8ca_e5d1c87b2f60.slice\": RecentStats: unable to find data in memory cache]" Feb 17 20:27:04 crc kubenswrapper[4793]: E0217 20:27:04.311808 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/podified-master-centos10/openstack-horizon:watcher_latest" Feb 17 20:27:04 crc kubenswrapper[4793]: E0217 20:27:04.312190 4793 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/podified-master-centos10/openstack-horizon:watcher_latest" Feb 17 20:27:04 crc 
kubenswrapper[4793]: E0217 20:27:04.312375 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.80:5001/podified-master-centos10/openstack-horizon:watcher_latest,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5cfhch568h665h5b9h557h65h569h546h8h685hddh5c5h567h5cfh7h678hd7h9bhc5hcfh5d8hd6h65ch7ch8fh57bh68h55bhfh55h64cq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wzspp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} 
start failed in pod horizon-6c8484b4ff-dd8mt_openstack(052d4a12-074f-4d37-b5d8-71cbe27550bb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 20:27:04 crc kubenswrapper[4793]: E0217 20:27:04.325110 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.80:5001/podified-master-centos10/openstack-horizon:watcher_latest\\\"\"]" pod="openstack/horizon-6c8484b4ff-dd8mt" podUID="052d4a12-074f-4d37-b5d8-71cbe27550bb" Feb 17 20:27:04 crc kubenswrapper[4793]: E0217 20:27:04.365809 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/podified-master-centos10/openstack-horizon:watcher_latest" Feb 17 20:27:04 crc kubenswrapper[4793]: E0217 20:27:04.365884 4793 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/podified-master-centos10/openstack-horizon:watcher_latest" Feb 17 20:27:04 crc kubenswrapper[4793]: E0217 20:27:04.366124 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.80:5001/podified-master-centos10/openstack-horizon:watcher_latest,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5b5h59fh5dh74h5d5h57bh8ch646hbdh597h5b7h557h685h564h67bh556h549h666hb6h59bh5ch88hddh5cdh64dh5c8hb5h689h98h67dh54dhc6q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7h2n8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6d49f49cc7-24rvs_openstack(c5b5195f-6b92-4704-a57d-d308ac7e8b28): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 20:27:04 crc kubenswrapper[4793]: E0217 
20:27:04.368448 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.80:5001/podified-master-centos10/openstack-horizon:watcher_latest\\\"\"]" pod="openstack/horizon-6d49f49cc7-24rvs" podUID="c5b5195f-6b92-4704-a57d-d308ac7e8b28" Feb 17 20:27:04 crc kubenswrapper[4793]: I0217 20:27:04.437314 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8xslw" Feb 17 20:27:04 crc kubenswrapper[4793]: I0217 20:27:04.520352 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7aed618-93d7-4bc0-a99d-e4d03be8fbfd-config-data\") pod \"f7aed618-93d7-4bc0-a99d-e4d03be8fbfd\" (UID: \"f7aed618-93d7-4bc0-a99d-e4d03be8fbfd\") " Feb 17 20:27:04 crc kubenswrapper[4793]: I0217 20:27:04.520399 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7aed618-93d7-4bc0-a99d-e4d03be8fbfd-combined-ca-bundle\") pod \"f7aed618-93d7-4bc0-a99d-e4d03be8fbfd\" (UID: \"f7aed618-93d7-4bc0-a99d-e4d03be8fbfd\") " Feb 17 20:27:04 crc kubenswrapper[4793]: I0217 20:27:04.520422 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f7aed618-93d7-4bc0-a99d-e4d03be8fbfd-credential-keys\") pod \"f7aed618-93d7-4bc0-a99d-e4d03be8fbfd\" (UID: \"f7aed618-93d7-4bc0-a99d-e4d03be8fbfd\") " Feb 17 20:27:04 crc kubenswrapper[4793]: I0217 20:27:04.520474 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlxw9\" (UniqueName: 
\"kubernetes.io/projected/f7aed618-93d7-4bc0-a99d-e4d03be8fbfd-kube-api-access-tlxw9\") pod \"f7aed618-93d7-4bc0-a99d-e4d03be8fbfd\" (UID: \"f7aed618-93d7-4bc0-a99d-e4d03be8fbfd\") " Feb 17 20:27:04 crc kubenswrapper[4793]: I0217 20:27:04.520491 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7aed618-93d7-4bc0-a99d-e4d03be8fbfd-scripts\") pod \"f7aed618-93d7-4bc0-a99d-e4d03be8fbfd\" (UID: \"f7aed618-93d7-4bc0-a99d-e4d03be8fbfd\") " Feb 17 20:27:04 crc kubenswrapper[4793]: I0217 20:27:04.520542 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f7aed618-93d7-4bc0-a99d-e4d03be8fbfd-fernet-keys\") pod \"f7aed618-93d7-4bc0-a99d-e4d03be8fbfd\" (UID: \"f7aed618-93d7-4bc0-a99d-e4d03be8fbfd\") " Feb 17 20:27:04 crc kubenswrapper[4793]: I0217 20:27:04.548912 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7aed618-93d7-4bc0-a99d-e4d03be8fbfd-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "f7aed618-93d7-4bc0-a99d-e4d03be8fbfd" (UID: "f7aed618-93d7-4bc0-a99d-e4d03be8fbfd"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:27:04 crc kubenswrapper[4793]: I0217 20:27:04.548950 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7aed618-93d7-4bc0-a99d-e4d03be8fbfd-kube-api-access-tlxw9" (OuterVolumeSpecName: "kube-api-access-tlxw9") pod "f7aed618-93d7-4bc0-a99d-e4d03be8fbfd" (UID: "f7aed618-93d7-4bc0-a99d-e4d03be8fbfd"). InnerVolumeSpecName "kube-api-access-tlxw9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:27:04 crc kubenswrapper[4793]: I0217 20:27:04.548955 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7aed618-93d7-4bc0-a99d-e4d03be8fbfd-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "f7aed618-93d7-4bc0-a99d-e4d03be8fbfd" (UID: "f7aed618-93d7-4bc0-a99d-e4d03be8fbfd"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:27:04 crc kubenswrapper[4793]: I0217 20:27:04.549021 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7aed618-93d7-4bc0-a99d-e4d03be8fbfd-scripts" (OuterVolumeSpecName: "scripts") pod "f7aed618-93d7-4bc0-a99d-e4d03be8fbfd" (UID: "f7aed618-93d7-4bc0-a99d-e4d03be8fbfd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:27:04 crc kubenswrapper[4793]: I0217 20:27:04.574857 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7aed618-93d7-4bc0-a99d-e4d03be8fbfd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f7aed618-93d7-4bc0-a99d-e4d03be8fbfd" (UID: "f7aed618-93d7-4bc0-a99d-e4d03be8fbfd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:27:04 crc kubenswrapper[4793]: I0217 20:27:04.575735 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7aed618-93d7-4bc0-a99d-e4d03be8fbfd-config-data" (OuterVolumeSpecName: "config-data") pod "f7aed618-93d7-4bc0-a99d-e4d03be8fbfd" (UID: "f7aed618-93d7-4bc0-a99d-e4d03be8fbfd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:27:04 crc kubenswrapper[4793]: I0217 20:27:04.621640 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7aed618-93d7-4bc0-a99d-e4d03be8fbfd-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:04 crc kubenswrapper[4793]: I0217 20:27:04.621675 4793 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f7aed618-93d7-4bc0-a99d-e4d03be8fbfd-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:04 crc kubenswrapper[4793]: I0217 20:27:04.621695 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7aed618-93d7-4bc0-a99d-e4d03be8fbfd-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:04 crc kubenswrapper[4793]: I0217 20:27:04.621704 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7aed618-93d7-4bc0-a99d-e4d03be8fbfd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:04 crc kubenswrapper[4793]: I0217 20:27:04.621714 4793 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f7aed618-93d7-4bc0-a99d-e4d03be8fbfd-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:04 crc kubenswrapper[4793]: I0217 20:27:04.621724 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlxw9\" (UniqueName: \"kubernetes.io/projected/f7aed618-93d7-4bc0-a99d-e4d03be8fbfd-kube-api-access-tlxw9\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:04 crc kubenswrapper[4793]: I0217 20:27:04.886371 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8xslw" event={"ID":"f7aed618-93d7-4bc0-a99d-e4d03be8fbfd","Type":"ContainerDied","Data":"1bc16530927bb7e0829eea72836645c43171cfc550e8f4e1b8dc7459dc35e882"} Feb 17 20:27:04 crc kubenswrapper[4793]: I0217 
20:27:04.886784 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bc16530927bb7e0829eea72836645c43171cfc550e8f4e1b8dc7459dc35e882" Feb 17 20:27:04 crc kubenswrapper[4793]: I0217 20:27:04.886484 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8xslw" Feb 17 20:27:05 crc kubenswrapper[4793]: E0217 20:27:05.265093 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/podified-master-centos10/openstack-barbican-api:watcher_latest" Feb 17 20:27:05 crc kubenswrapper[4793]: E0217 20:27:05.265133 4793 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/podified-master-centos10/openstack-barbican-api:watcher_latest" Feb 17 20:27:05 crc kubenswrapper[4793]: E0217 20:27:05.265240 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:38.102.83.80:5001/podified-master-centos10/openstack-barbican-api:watcher_latest,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v274v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-v8sgz_openstack(6a587d11-ed16-44ac-a36e-dfd7a7ed3f6a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 20:27:05 crc kubenswrapper[4793]: E0217 20:27:05.266596 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-v8sgz" 
podUID="6a587d11-ed16-44ac-a36e-dfd7a7ed3f6a" Feb 17 20:27:05 crc kubenswrapper[4793]: I0217 20:27:05.567663 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-8xslw"] Feb 17 20:27:05 crc kubenswrapper[4793]: I0217 20:27:05.572578 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-8xslw"] Feb 17 20:27:05 crc kubenswrapper[4793]: I0217 20:27:05.631335 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-pwpbc"] Feb 17 20:27:05 crc kubenswrapper[4793]: E0217 20:27:05.631714 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7aed618-93d7-4bc0-a99d-e4d03be8fbfd" containerName="keystone-bootstrap" Feb 17 20:27:05 crc kubenswrapper[4793]: I0217 20:27:05.631731 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7aed618-93d7-4bc0-a99d-e4d03be8fbfd" containerName="keystone-bootstrap" Feb 17 20:27:05 crc kubenswrapper[4793]: I0217 20:27:05.631898 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7aed618-93d7-4bc0-a99d-e4d03be8fbfd" containerName="keystone-bootstrap" Feb 17 20:27:05 crc kubenswrapper[4793]: I0217 20:27:05.632439 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-pwpbc" Feb 17 20:27:05 crc kubenswrapper[4793]: I0217 20:27:05.636555 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 17 20:27:05 crc kubenswrapper[4793]: I0217 20:27:05.636720 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 17 20:27:05 crc kubenswrapper[4793]: I0217 20:27:05.636820 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-92f9d" Feb 17 20:27:05 crc kubenswrapper[4793]: I0217 20:27:05.636914 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 17 20:27:05 crc kubenswrapper[4793]: I0217 20:27:05.637116 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 17 20:27:05 crc kubenswrapper[4793]: I0217 20:27:05.656782 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-pwpbc"] Feb 17 20:27:05 crc kubenswrapper[4793]: I0217 20:27:05.750766 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rspn8\" (UniqueName: \"kubernetes.io/projected/39911c8f-ceae-41a0-a891-fc8677d87ec3-kube-api-access-rspn8\") pod \"keystone-bootstrap-pwpbc\" (UID: \"39911c8f-ceae-41a0-a891-fc8677d87ec3\") " pod="openstack/keystone-bootstrap-pwpbc" Feb 17 20:27:05 crc kubenswrapper[4793]: I0217 20:27:05.750810 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39911c8f-ceae-41a0-a891-fc8677d87ec3-scripts\") pod \"keystone-bootstrap-pwpbc\" (UID: \"39911c8f-ceae-41a0-a891-fc8677d87ec3\") " pod="openstack/keystone-bootstrap-pwpbc" Feb 17 20:27:05 crc kubenswrapper[4793]: I0217 20:27:05.750832 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"credential-keys\" (UniqueName: \"kubernetes.io/secret/39911c8f-ceae-41a0-a891-fc8677d87ec3-credential-keys\") pod \"keystone-bootstrap-pwpbc\" (UID: \"39911c8f-ceae-41a0-a891-fc8677d87ec3\") " pod="openstack/keystone-bootstrap-pwpbc" Feb 17 20:27:05 crc kubenswrapper[4793]: I0217 20:27:05.750900 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/39911c8f-ceae-41a0-a891-fc8677d87ec3-fernet-keys\") pod \"keystone-bootstrap-pwpbc\" (UID: \"39911c8f-ceae-41a0-a891-fc8677d87ec3\") " pod="openstack/keystone-bootstrap-pwpbc" Feb 17 20:27:05 crc kubenswrapper[4793]: I0217 20:27:05.750917 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39911c8f-ceae-41a0-a891-fc8677d87ec3-combined-ca-bundle\") pod \"keystone-bootstrap-pwpbc\" (UID: \"39911c8f-ceae-41a0-a891-fc8677d87ec3\") " pod="openstack/keystone-bootstrap-pwpbc" Feb 17 20:27:05 crc kubenswrapper[4793]: I0217 20:27:05.750970 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39911c8f-ceae-41a0-a891-fc8677d87ec3-config-data\") pod \"keystone-bootstrap-pwpbc\" (UID: \"39911c8f-ceae-41a0-a891-fc8677d87ec3\") " pod="openstack/keystone-bootstrap-pwpbc" Feb 17 20:27:05 crc kubenswrapper[4793]: I0217 20:27:05.852534 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/39911c8f-ceae-41a0-a891-fc8677d87ec3-fernet-keys\") pod \"keystone-bootstrap-pwpbc\" (UID: \"39911c8f-ceae-41a0-a891-fc8677d87ec3\") " pod="openstack/keystone-bootstrap-pwpbc" Feb 17 20:27:05 crc kubenswrapper[4793]: I0217 20:27:05.852578 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/39911c8f-ceae-41a0-a891-fc8677d87ec3-combined-ca-bundle\") pod \"keystone-bootstrap-pwpbc\" (UID: \"39911c8f-ceae-41a0-a891-fc8677d87ec3\") " pod="openstack/keystone-bootstrap-pwpbc" Feb 17 20:27:05 crc kubenswrapper[4793]: I0217 20:27:05.852619 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39911c8f-ceae-41a0-a891-fc8677d87ec3-config-data\") pod \"keystone-bootstrap-pwpbc\" (UID: \"39911c8f-ceae-41a0-a891-fc8677d87ec3\") " pod="openstack/keystone-bootstrap-pwpbc" Feb 17 20:27:05 crc kubenswrapper[4793]: I0217 20:27:05.852708 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rspn8\" (UniqueName: \"kubernetes.io/projected/39911c8f-ceae-41a0-a891-fc8677d87ec3-kube-api-access-rspn8\") pod \"keystone-bootstrap-pwpbc\" (UID: \"39911c8f-ceae-41a0-a891-fc8677d87ec3\") " pod="openstack/keystone-bootstrap-pwpbc" Feb 17 20:27:05 crc kubenswrapper[4793]: I0217 20:27:05.852737 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39911c8f-ceae-41a0-a891-fc8677d87ec3-scripts\") pod \"keystone-bootstrap-pwpbc\" (UID: \"39911c8f-ceae-41a0-a891-fc8677d87ec3\") " pod="openstack/keystone-bootstrap-pwpbc" Feb 17 20:27:05 crc kubenswrapper[4793]: I0217 20:27:05.852754 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/39911c8f-ceae-41a0-a891-fc8677d87ec3-credential-keys\") pod \"keystone-bootstrap-pwpbc\" (UID: \"39911c8f-ceae-41a0-a891-fc8677d87ec3\") " pod="openstack/keystone-bootstrap-pwpbc" Feb 17 20:27:05 crc kubenswrapper[4793]: I0217 20:27:05.856977 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39911c8f-ceae-41a0-a891-fc8677d87ec3-combined-ca-bundle\") pod 
\"keystone-bootstrap-pwpbc\" (UID: \"39911c8f-ceae-41a0-a891-fc8677d87ec3\") " pod="openstack/keystone-bootstrap-pwpbc" Feb 17 20:27:05 crc kubenswrapper[4793]: I0217 20:27:05.857580 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39911c8f-ceae-41a0-a891-fc8677d87ec3-config-data\") pod \"keystone-bootstrap-pwpbc\" (UID: \"39911c8f-ceae-41a0-a891-fc8677d87ec3\") " pod="openstack/keystone-bootstrap-pwpbc" Feb 17 20:27:05 crc kubenswrapper[4793]: I0217 20:27:05.857746 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/39911c8f-ceae-41a0-a891-fc8677d87ec3-fernet-keys\") pod \"keystone-bootstrap-pwpbc\" (UID: \"39911c8f-ceae-41a0-a891-fc8677d87ec3\") " pod="openstack/keystone-bootstrap-pwpbc" Feb 17 20:27:05 crc kubenswrapper[4793]: I0217 20:27:05.864041 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39911c8f-ceae-41a0-a891-fc8677d87ec3-scripts\") pod \"keystone-bootstrap-pwpbc\" (UID: \"39911c8f-ceae-41a0-a891-fc8677d87ec3\") " pod="openstack/keystone-bootstrap-pwpbc" Feb 17 20:27:05 crc kubenswrapper[4793]: I0217 20:27:05.870737 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/39911c8f-ceae-41a0-a891-fc8677d87ec3-credential-keys\") pod \"keystone-bootstrap-pwpbc\" (UID: \"39911c8f-ceae-41a0-a891-fc8677d87ec3\") " pod="openstack/keystone-bootstrap-pwpbc" Feb 17 20:27:05 crc kubenswrapper[4793]: I0217 20:27:05.884526 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rspn8\" (UniqueName: \"kubernetes.io/projected/39911c8f-ceae-41a0-a891-fc8677d87ec3-kube-api-access-rspn8\") pod \"keystone-bootstrap-pwpbc\" (UID: \"39911c8f-ceae-41a0-a891-fc8677d87ec3\") " pod="openstack/keystone-bootstrap-pwpbc" Feb 17 20:27:05 crc 
kubenswrapper[4793]: E0217 20:27:05.901286 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.80:5001/podified-master-centos10/openstack-barbican-api:watcher_latest\\\"\"" pod="openstack/barbican-db-sync-v8sgz" podUID="6a587d11-ed16-44ac-a36e-dfd7a7ed3f6a" Feb 17 20:27:05 crc kubenswrapper[4793]: I0217 20:27:05.999023 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-pwpbc" Feb 17 20:27:07 crc kubenswrapper[4793]: I0217 20:27:07.551063 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7aed618-93d7-4bc0-a99d-e4d03be8fbfd" path="/var/lib/kubelet/pods/f7aed618-93d7-4bc0-a99d-e4d03be8fbfd/volumes" Feb 17 20:27:10 crc kubenswrapper[4793]: I0217 20:27:10.433071 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-78895f69c7-4gh4d" podUID="7f096f2c-d917-4ddc-92eb-3628f9d1cd73" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.139:5353: i/o timeout" Feb 17 20:27:10 crc kubenswrapper[4793]: I0217 20:27:10.434170 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-78895f69c7-4gh4d" Feb 17 20:27:11 crc kubenswrapper[4793]: E0217 20:27:11.754309 4793 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode573ed33_28a6_4a91_a8ca_e5d1c87b2f60.slice\": RecentStats: unable to find data in memory cache]" Feb 17 20:27:11 crc kubenswrapper[4793]: I0217 20:27:11.970334 4793 generic.go:334] "Generic (PLEG): container finished" podID="0af0e3fd-140e-472e-8438-03bfd116c17f" containerID="20ada7fb8a8b5356545c3e756df75b3748efbb914add04318e2cdcde250fd82e" exitCode=0 Feb 17 20:27:11 crc kubenswrapper[4793]: I0217 20:27:11.970381 4793 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/neutron-db-sync-v2gvx" event={"ID":"0af0e3fd-140e-472e-8438-03bfd116c17f","Type":"ContainerDied","Data":"20ada7fb8a8b5356545c3e756df75b3748efbb914add04318e2cdcde250fd82e"} Feb 17 20:27:14 crc kubenswrapper[4793]: E0217 20:27:14.051596 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/podified-master-centos10/openstack-horizon:watcher_latest" Feb 17 20:27:14 crc kubenswrapper[4793]: E0217 20:27:14.052046 4793 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/podified-master-centos10/openstack-horizon:watcher_latest" Feb 17 20:27:14 crc kubenswrapper[4793]: E0217 20:27:14.052158 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.80:5001/podified-master-centos10/openstack-horizon:watcher_latest,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nb8h6bh5f4hbdh598h687h8bh58fh586hbbhf9h65fh5b6h77h6dhd8h57h698h57ch59bh69h66ch5c8h6h5ddhf8h587h8dh657h599h66fh7q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gsrzj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-5cf7c98b49-qm95w_openstack(d172fd57-b658-484f-af67-0f73e9952560): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 20:27:14 crc kubenswrapper[4793]: E0217 
20:27:14.060256 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.80:5001/podified-master-centos10/openstack-horizon:watcher_latest\\\"\"]" pod="openstack/horizon-5cf7c98b49-qm95w" podUID="d172fd57-b658-484f-af67-0f73e9952560" Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.233816 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78895f69c7-4gh4d" Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.241249 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6c8484b4ff-dd8mt" Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.251937 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6d49f49cc7-24rvs" Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.262313 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-v2gvx" Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.329415 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c5b5195f-6b92-4704-a57d-d308ac7e8b28-horizon-secret-key\") pod \"c5b5195f-6b92-4704-a57d-d308ac7e8b28\" (UID: \"c5b5195f-6b92-4704-a57d-d308ac7e8b28\") " Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.329483 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg4bc\" (UniqueName: \"kubernetes.io/projected/7f096f2c-d917-4ddc-92eb-3628f9d1cd73-kube-api-access-qg4bc\") pod \"7f096f2c-d917-4ddc-92eb-3628f9d1cd73\" (UID: \"7f096f2c-d917-4ddc-92eb-3628f9d1cd73\") " Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.329514 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0af0e3fd-140e-472e-8438-03bfd116c17f-combined-ca-bundle\") pod \"0af0e3fd-140e-472e-8438-03bfd116c17f\" (UID: \"0af0e3fd-140e-472e-8438-03bfd116c17f\") " Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.329542 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f096f2c-d917-4ddc-92eb-3628f9d1cd73-dns-svc\") pod \"7f096f2c-d917-4ddc-92eb-3628f9d1cd73\" (UID: \"7f096f2c-d917-4ddc-92eb-3628f9d1cd73\") " Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.329607 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/052d4a12-074f-4d37-b5d8-71cbe27550bb-horizon-secret-key\") pod \"052d4a12-074f-4d37-b5d8-71cbe27550bb\" (UID: \"052d4a12-074f-4d37-b5d8-71cbe27550bb\") " Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.329644 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/c5b5195f-6b92-4704-a57d-d308ac7e8b28-scripts\") pod \"c5b5195f-6b92-4704-a57d-d308ac7e8b28\" (UID: \"c5b5195f-6b92-4704-a57d-d308ac7e8b28\") " Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.329738 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skvb9\" (UniqueName: \"kubernetes.io/projected/0af0e3fd-140e-472e-8438-03bfd116c17f-kube-api-access-skvb9\") pod \"0af0e3fd-140e-472e-8438-03bfd116c17f\" (UID: \"0af0e3fd-140e-472e-8438-03bfd116c17f\") " Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.329816 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f096f2c-d917-4ddc-92eb-3628f9d1cd73-config\") pod \"7f096f2c-d917-4ddc-92eb-3628f9d1cd73\" (UID: \"7f096f2c-d917-4ddc-92eb-3628f9d1cd73\") " Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.329846 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/052d4a12-074f-4d37-b5d8-71cbe27550bb-config-data\") pod \"052d4a12-074f-4d37-b5d8-71cbe27550bb\" (UID: \"052d4a12-074f-4d37-b5d8-71cbe27550bb\") " Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.329867 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/052d4a12-074f-4d37-b5d8-71cbe27550bb-scripts\") pod \"052d4a12-074f-4d37-b5d8-71cbe27550bb\" (UID: \"052d4a12-074f-4d37-b5d8-71cbe27550bb\") " Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.329904 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5b5195f-6b92-4704-a57d-d308ac7e8b28-logs\") pod \"c5b5195f-6b92-4704-a57d-d308ac7e8b28\" (UID: \"c5b5195f-6b92-4704-a57d-d308ac7e8b28\") " Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.329932 4793 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzspp\" (UniqueName: \"kubernetes.io/projected/052d4a12-074f-4d37-b5d8-71cbe27550bb-kube-api-access-wzspp\") pod \"052d4a12-074f-4d37-b5d8-71cbe27550bb\" (UID: \"052d4a12-074f-4d37-b5d8-71cbe27550bb\") " Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.329977 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/052d4a12-074f-4d37-b5d8-71cbe27550bb-logs\") pod \"052d4a12-074f-4d37-b5d8-71cbe27550bb\" (UID: \"052d4a12-074f-4d37-b5d8-71cbe27550bb\") " Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.330013 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c5b5195f-6b92-4704-a57d-d308ac7e8b28-config-data\") pod \"c5b5195f-6b92-4704-a57d-d308ac7e8b28\" (UID: \"c5b5195f-6b92-4704-a57d-d308ac7e8b28\") " Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.330036 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f096f2c-d917-4ddc-92eb-3628f9d1cd73-ovsdbserver-nb\") pod \"7f096f2c-d917-4ddc-92eb-3628f9d1cd73\" (UID: \"7f096f2c-d917-4ddc-92eb-3628f9d1cd73\") " Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.330060 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7h2n8\" (UniqueName: \"kubernetes.io/projected/c5b5195f-6b92-4704-a57d-d308ac7e8b28-kube-api-access-7h2n8\") pod \"c5b5195f-6b92-4704-a57d-d308ac7e8b28\" (UID: \"c5b5195f-6b92-4704-a57d-d308ac7e8b28\") " Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.330104 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0af0e3fd-140e-472e-8438-03bfd116c17f-config\") pod \"0af0e3fd-140e-472e-8438-03bfd116c17f\" (UID: 
\"0af0e3fd-140e-472e-8438-03bfd116c17f\") " Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.330157 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f096f2c-d917-4ddc-92eb-3628f9d1cd73-ovsdbserver-sb\") pod \"7f096f2c-d917-4ddc-92eb-3628f9d1cd73\" (UID: \"7f096f2c-d917-4ddc-92eb-3628f9d1cd73\") " Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.330186 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f096f2c-d917-4ddc-92eb-3628f9d1cd73-dns-swift-storage-0\") pod \"7f096f2c-d917-4ddc-92eb-3628f9d1cd73\" (UID: \"7f096f2c-d917-4ddc-92eb-3628f9d1cd73\") " Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.330886 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/052d4a12-074f-4d37-b5d8-71cbe27550bb-config-data" (OuterVolumeSpecName: "config-data") pod "052d4a12-074f-4d37-b5d8-71cbe27550bb" (UID: "052d4a12-074f-4d37-b5d8-71cbe27550bb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.336402 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5b5195f-6b92-4704-a57d-d308ac7e8b28-config-data" (OuterVolumeSpecName: "config-data") pod "c5b5195f-6b92-4704-a57d-d308ac7e8b28" (UID: "c5b5195f-6b92-4704-a57d-d308ac7e8b28"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.336743 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/052d4a12-074f-4d37-b5d8-71cbe27550bb-scripts" (OuterVolumeSpecName: "scripts") pod "052d4a12-074f-4d37-b5d8-71cbe27550bb" (UID: "052d4a12-074f-4d37-b5d8-71cbe27550bb"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.336923 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5b5195f-6b92-4704-a57d-d308ac7e8b28-logs" (OuterVolumeSpecName: "logs") pod "c5b5195f-6b92-4704-a57d-d308ac7e8b28" (UID: "c5b5195f-6b92-4704-a57d-d308ac7e8b28"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.336885 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f096f2c-d917-4ddc-92eb-3628f9d1cd73-kube-api-access-qg4bc" (OuterVolumeSpecName: "kube-api-access-qg4bc") pod "7f096f2c-d917-4ddc-92eb-3628f9d1cd73" (UID: "7f096f2c-d917-4ddc-92eb-3628f9d1cd73"). InnerVolumeSpecName "kube-api-access-qg4bc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.344853 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5b5195f-6b92-4704-a57d-d308ac7e8b28-scripts" (OuterVolumeSpecName: "scripts") pod "c5b5195f-6b92-4704-a57d-d308ac7e8b28" (UID: "c5b5195f-6b92-4704-a57d-d308ac7e8b28"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.344972 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5b5195f-6b92-4704-a57d-d308ac7e8b28-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "c5b5195f-6b92-4704-a57d-d308ac7e8b28" (UID: "c5b5195f-6b92-4704-a57d-d308ac7e8b28"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.345868 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/052d4a12-074f-4d37-b5d8-71cbe27550bb-logs" (OuterVolumeSpecName: "logs") pod "052d4a12-074f-4d37-b5d8-71cbe27550bb" (UID: "052d4a12-074f-4d37-b5d8-71cbe27550bb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.351569 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/052d4a12-074f-4d37-b5d8-71cbe27550bb-kube-api-access-wzspp" (OuterVolumeSpecName: "kube-api-access-wzspp") pod "052d4a12-074f-4d37-b5d8-71cbe27550bb" (UID: "052d4a12-074f-4d37-b5d8-71cbe27550bb"). InnerVolumeSpecName "kube-api-access-wzspp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.351625 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5b5195f-6b92-4704-a57d-d308ac7e8b28-kube-api-access-7h2n8" (OuterVolumeSpecName: "kube-api-access-7h2n8") pod "c5b5195f-6b92-4704-a57d-d308ac7e8b28" (UID: "c5b5195f-6b92-4704-a57d-d308ac7e8b28"). InnerVolumeSpecName "kube-api-access-7h2n8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.352438 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/052d4a12-074f-4d37-b5d8-71cbe27550bb-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "052d4a12-074f-4d37-b5d8-71cbe27550bb" (UID: "052d4a12-074f-4d37-b5d8-71cbe27550bb"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.352900 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0af0e3fd-140e-472e-8438-03bfd116c17f-kube-api-access-skvb9" (OuterVolumeSpecName: "kube-api-access-skvb9") pod "0af0e3fd-140e-472e-8438-03bfd116c17f" (UID: "0af0e3fd-140e-472e-8438-03bfd116c17f"). InnerVolumeSpecName "kube-api-access-skvb9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.366498 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0af0e3fd-140e-472e-8438-03bfd116c17f-config" (OuterVolumeSpecName: "config") pod "0af0e3fd-140e-472e-8438-03bfd116c17f" (UID: "0af0e3fd-140e-472e-8438-03bfd116c17f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.374346 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0af0e3fd-140e-472e-8438-03bfd116c17f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0af0e3fd-140e-472e-8438-03bfd116c17f" (UID: "0af0e3fd-140e-472e-8438-03bfd116c17f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.383493 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f096f2c-d917-4ddc-92eb-3628f9d1cd73-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7f096f2c-d917-4ddc-92eb-3628f9d1cd73" (UID: "7f096f2c-d917-4ddc-92eb-3628f9d1cd73"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.400100 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f096f2c-d917-4ddc-92eb-3628f9d1cd73-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7f096f2c-d917-4ddc-92eb-3628f9d1cd73" (UID: "7f096f2c-d917-4ddc-92eb-3628f9d1cd73"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.400895 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f096f2c-d917-4ddc-92eb-3628f9d1cd73-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7f096f2c-d917-4ddc-92eb-3628f9d1cd73" (UID: "7f096f2c-d917-4ddc-92eb-3628f9d1cd73"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.408245 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f096f2c-d917-4ddc-92eb-3628f9d1cd73-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7f096f2c-d917-4ddc-92eb-3628f9d1cd73" (UID: "7f096f2c-d917-4ddc-92eb-3628f9d1cd73"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.418715 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f096f2c-d917-4ddc-92eb-3628f9d1cd73-config" (OuterVolumeSpecName: "config") pod "7f096f2c-d917-4ddc-92eb-3628f9d1cd73" (UID: "7f096f2c-d917-4ddc-92eb-3628f9d1cd73"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.433268 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0af0e3fd-140e-472e-8438-03bfd116c17f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.433310 4793 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f096f2c-d917-4ddc-92eb-3628f9d1cd73-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.433322 4793 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/052d4a12-074f-4d37-b5d8-71cbe27550bb-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.433332 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c5b5195f-6b92-4704-a57d-d308ac7e8b28-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.433344 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skvb9\" (UniqueName: \"kubernetes.io/projected/0af0e3fd-140e-472e-8438-03bfd116c17f-kube-api-access-skvb9\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.433358 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f096f2c-d917-4ddc-92eb-3628f9d1cd73-config\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.433368 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/052d4a12-074f-4d37-b5d8-71cbe27550bb-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.433380 4793 reconciler_common.go:293] 
"Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/052d4a12-074f-4d37-b5d8-71cbe27550bb-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.433393 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5b5195f-6b92-4704-a57d-d308ac7e8b28-logs\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.433404 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzspp\" (UniqueName: \"kubernetes.io/projected/052d4a12-074f-4d37-b5d8-71cbe27550bb-kube-api-access-wzspp\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.433415 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/052d4a12-074f-4d37-b5d8-71cbe27550bb-logs\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.433425 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f096f2c-d917-4ddc-92eb-3628f9d1cd73-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.433436 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c5b5195f-6b92-4704-a57d-d308ac7e8b28-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.433446 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7h2n8\" (UniqueName: \"kubernetes.io/projected/c5b5195f-6b92-4704-a57d-d308ac7e8b28-kube-api-access-7h2n8\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.433496 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/0af0e3fd-140e-472e-8438-03bfd116c17f-config\") on node \"crc\" DevicePath \"\"" 
Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.433515 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f096f2c-d917-4ddc-92eb-3628f9d1cd73-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.433529 4793 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f096f2c-d917-4ddc-92eb-3628f9d1cd73-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.433540 4793 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c5b5195f-6b92-4704-a57d-d308ac7e8b28-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:14 crc kubenswrapper[4793]: I0217 20:27:14.433757 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg4bc\" (UniqueName: \"kubernetes.io/projected/7f096f2c-d917-4ddc-92eb-3628f9d1cd73-kube-api-access-qg4bc\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:14.999860 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-v2gvx" event={"ID":"0af0e3fd-140e-472e-8438-03bfd116c17f","Type":"ContainerDied","Data":"5d52d84091cb9d369dddc44aa50f72152c356bb15b896943bff87308ca254dc7"} Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:14.999904 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d52d84091cb9d369dddc44aa50f72152c356bb15b896943bff87308ca254dc7" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:14.999980 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-v2gvx" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.009330 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78895f69c7-4gh4d" event={"ID":"7f096f2c-d917-4ddc-92eb-3628f9d1cd73","Type":"ContainerDied","Data":"93bfaebcf67102f187096aa7032edc244fd34f858a3a308e300a2a615aa3783f"} Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.009391 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78895f69c7-4gh4d" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.010574 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6d49f49cc7-24rvs" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.010569 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d49f49cc7-24rvs" event={"ID":"c5b5195f-6b92-4704-a57d-d308ac7e8b28","Type":"ContainerDied","Data":"3c5ebb8e9c27f0951a0238dac00c939f892fec0a2fe582dbd7e238947a909a76"} Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.011990 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c8484b4ff-dd8mt" event={"ID":"052d4a12-074f-4d37-b5d8-71cbe27550bb","Type":"ContainerDied","Data":"adc36d29c4ae36e5b6950777076e12257d11ab1c92e1165996f6a9e455fddf11"} Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.012041 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6c8484b4ff-dd8mt" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.058464 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78895f69c7-4gh4d"] Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.083534 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78895f69c7-4gh4d"] Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.100754 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6d49f49cc7-24rvs"] Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.118644 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6d49f49cc7-24rvs"] Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.143708 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6c8484b4ff-dd8mt"] Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.157315 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6c8484b4ff-dd8mt"] Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.434362 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-78895f69c7-4gh4d" podUID="7f096f2c-d917-4ddc-92eb-3628f9d1cd73" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.139:5353: i/o timeout" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.567674 4793 scope.go:117] "RemoveContainer" containerID="91864bdf7ccdb24bad538fea0b70a8e388ea056e4f86c87da40c93466b9cc214" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.586864 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="052d4a12-074f-4d37-b5d8-71cbe27550bb" path="/var/lib/kubelet/pods/052d4a12-074f-4d37-b5d8-71cbe27550bb/volumes" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.587321 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f096f2c-d917-4ddc-92eb-3628f9d1cd73" 
path="/var/lib/kubelet/pods/7f096f2c-d917-4ddc-92eb-3628f9d1cd73/volumes" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.594331 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5b5195f-6b92-4704-a57d-d308ac7e8b28" path="/var/lib/kubelet/pods/c5b5195f-6b92-4704-a57d-d308ac7e8b28/volumes" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.594796 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cb4d97cd7-bl8nc"] Feb 17 20:27:15 crc kubenswrapper[4793]: E0217 20:27:15.595064 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0af0e3fd-140e-472e-8438-03bfd116c17f" containerName="neutron-db-sync" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.595074 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="0af0e3fd-140e-472e-8438-03bfd116c17f" containerName="neutron-db-sync" Feb 17 20:27:15 crc kubenswrapper[4793]: E0217 20:27:15.595085 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f096f2c-d917-4ddc-92eb-3628f9d1cd73" containerName="init" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.595092 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f096f2c-d917-4ddc-92eb-3628f9d1cd73" containerName="init" Feb 17 20:27:15 crc kubenswrapper[4793]: E0217 20:27:15.595118 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f096f2c-d917-4ddc-92eb-3628f9d1cd73" containerName="dnsmasq-dns" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.595125 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f096f2c-d917-4ddc-92eb-3628f9d1cd73" containerName="dnsmasq-dns" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.595284 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="0af0e3fd-140e-472e-8438-03bfd116c17f" containerName="neutron-db-sync" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.595297 4793 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="7f096f2c-d917-4ddc-92eb-3628f9d1cd73" containerName="dnsmasq-dns" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.597603 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cb4d97cd7-bl8nc" Feb 17 20:27:15 crc kubenswrapper[4793]: E0217 20:27:15.629887 4793 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/podified-master-centos10/openstack-cinder-api:watcher_latest" Feb 17 20:27:15 crc kubenswrapper[4793]: E0217 20:27:15.629936 4793 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/podified-master-centos10/openstack-cinder-api:watcher_latest" Feb 17 20:27:15 crc kubenswrapper[4793]: E0217 20:27:15.630048 4793 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:38.102.83.80:5001/podified-master-centos10/openstack-cinder-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v98mr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-t9mdr_openstack(95da6bd5-17d8-4402-bb8a-87b0c03feebf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 20:27:15 crc kubenswrapper[4793]: E0217 20:27:15.633115 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-t9mdr" podUID="95da6bd5-17d8-4402-bb8a-87b0c03feebf" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.633378 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cb4d97cd7-bl8nc"] Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.667252 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5eb5318-a3af-4a2a-947f-57219f371d7e-ovsdbserver-sb\") pod \"dnsmasq-dns-cb4d97cd7-bl8nc\" (UID: \"c5eb5318-a3af-4a2a-947f-57219f371d7e\") " pod="openstack/dnsmasq-dns-cb4d97cd7-bl8nc" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.667319 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5eb5318-a3af-4a2a-947f-57219f371d7e-ovsdbserver-nb\") pod \"dnsmasq-dns-cb4d97cd7-bl8nc\" (UID: \"c5eb5318-a3af-4a2a-947f-57219f371d7e\") " pod="openstack/dnsmasq-dns-cb4d97cd7-bl8nc" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.667353 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmfm9\" (UniqueName: 
\"kubernetes.io/projected/c5eb5318-a3af-4a2a-947f-57219f371d7e-kube-api-access-tmfm9\") pod \"dnsmasq-dns-cb4d97cd7-bl8nc\" (UID: \"c5eb5318-a3af-4a2a-947f-57219f371d7e\") " pod="openstack/dnsmasq-dns-cb4d97cd7-bl8nc" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.667385 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c5eb5318-a3af-4a2a-947f-57219f371d7e-dns-swift-storage-0\") pod \"dnsmasq-dns-cb4d97cd7-bl8nc\" (UID: \"c5eb5318-a3af-4a2a-947f-57219f371d7e\") " pod="openstack/dnsmasq-dns-cb4d97cd7-bl8nc" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.667434 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5eb5318-a3af-4a2a-947f-57219f371d7e-dns-svc\") pod \"dnsmasq-dns-cb4d97cd7-bl8nc\" (UID: \"c5eb5318-a3af-4a2a-947f-57219f371d7e\") " pod="openstack/dnsmasq-dns-cb4d97cd7-bl8nc" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.667529 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5eb5318-a3af-4a2a-947f-57219f371d7e-config\") pod \"dnsmasq-dns-cb4d97cd7-bl8nc\" (UID: \"c5eb5318-a3af-4a2a-947f-57219f371d7e\") " pod="openstack/dnsmasq-dns-cb4d97cd7-bl8nc" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.719532 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7fdcdb7d8d-qdv8l"] Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.721158 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7fdcdb7d8d-qdv8l" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.723923 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.723969 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-qf6ld" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.724061 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.724208 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.742011 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7fdcdb7d8d-qdv8l"] Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.769633 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmfm9\" (UniqueName: \"kubernetes.io/projected/c5eb5318-a3af-4a2a-947f-57219f371d7e-kube-api-access-tmfm9\") pod \"dnsmasq-dns-cb4d97cd7-bl8nc\" (UID: \"c5eb5318-a3af-4a2a-947f-57219f371d7e\") " pod="openstack/dnsmasq-dns-cb4d97cd7-bl8nc" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.770423 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c5eb5318-a3af-4a2a-947f-57219f371d7e-dns-swift-storage-0\") pod \"dnsmasq-dns-cb4d97cd7-bl8nc\" (UID: \"c5eb5318-a3af-4a2a-947f-57219f371d7e\") " pod="openstack/dnsmasq-dns-cb4d97cd7-bl8nc" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.770477 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5eb5318-a3af-4a2a-947f-57219f371d7e-dns-svc\") pod \"dnsmasq-dns-cb4d97cd7-bl8nc\" (UID: 
\"c5eb5318-a3af-4a2a-947f-57219f371d7e\") " pod="openstack/dnsmasq-dns-cb4d97cd7-bl8nc" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.770517 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b87a7c00-0efd-4456-bd50-c41a6d909aca-ovndb-tls-certs\") pod \"neutron-7fdcdb7d8d-qdv8l\" (UID: \"b87a7c00-0efd-4456-bd50-c41a6d909aca\") " pod="openstack/neutron-7fdcdb7d8d-qdv8l" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.770537 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66nmd\" (UniqueName: \"kubernetes.io/projected/b87a7c00-0efd-4456-bd50-c41a6d909aca-kube-api-access-66nmd\") pod \"neutron-7fdcdb7d8d-qdv8l\" (UID: \"b87a7c00-0efd-4456-bd50-c41a6d909aca\") " pod="openstack/neutron-7fdcdb7d8d-qdv8l" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.770586 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5eb5318-a3af-4a2a-947f-57219f371d7e-config\") pod \"dnsmasq-dns-cb4d97cd7-bl8nc\" (UID: \"c5eb5318-a3af-4a2a-947f-57219f371d7e\") " pod="openstack/dnsmasq-dns-cb4d97cd7-bl8nc" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.770624 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b87a7c00-0efd-4456-bd50-c41a6d909aca-httpd-config\") pod \"neutron-7fdcdb7d8d-qdv8l\" (UID: \"b87a7c00-0efd-4456-bd50-c41a6d909aca\") " pod="openstack/neutron-7fdcdb7d8d-qdv8l" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.770647 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b87a7c00-0efd-4456-bd50-c41a6d909aca-combined-ca-bundle\") pod \"neutron-7fdcdb7d8d-qdv8l\" (UID: 
\"b87a7c00-0efd-4456-bd50-c41a6d909aca\") " pod="openstack/neutron-7fdcdb7d8d-qdv8l" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.770701 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b87a7c00-0efd-4456-bd50-c41a6d909aca-config\") pod \"neutron-7fdcdb7d8d-qdv8l\" (UID: \"b87a7c00-0efd-4456-bd50-c41a6d909aca\") " pod="openstack/neutron-7fdcdb7d8d-qdv8l" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.770725 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5eb5318-a3af-4a2a-947f-57219f371d7e-ovsdbserver-sb\") pod \"dnsmasq-dns-cb4d97cd7-bl8nc\" (UID: \"c5eb5318-a3af-4a2a-947f-57219f371d7e\") " pod="openstack/dnsmasq-dns-cb4d97cd7-bl8nc" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.770753 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5eb5318-a3af-4a2a-947f-57219f371d7e-ovsdbserver-nb\") pod \"dnsmasq-dns-cb4d97cd7-bl8nc\" (UID: \"c5eb5318-a3af-4a2a-947f-57219f371d7e\") " pod="openstack/dnsmasq-dns-cb4d97cd7-bl8nc" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.771525 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c5eb5318-a3af-4a2a-947f-57219f371d7e-dns-swift-storage-0\") pod \"dnsmasq-dns-cb4d97cd7-bl8nc\" (UID: \"c5eb5318-a3af-4a2a-947f-57219f371d7e\") " pod="openstack/dnsmasq-dns-cb4d97cd7-bl8nc" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.771793 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5eb5318-a3af-4a2a-947f-57219f371d7e-ovsdbserver-nb\") pod \"dnsmasq-dns-cb4d97cd7-bl8nc\" (UID: \"c5eb5318-a3af-4a2a-947f-57219f371d7e\") " 
pod="openstack/dnsmasq-dns-cb4d97cd7-bl8nc" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.772567 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5eb5318-a3af-4a2a-947f-57219f371d7e-config\") pod \"dnsmasq-dns-cb4d97cd7-bl8nc\" (UID: \"c5eb5318-a3af-4a2a-947f-57219f371d7e\") " pod="openstack/dnsmasq-dns-cb4d97cd7-bl8nc" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.773339 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5eb5318-a3af-4a2a-947f-57219f371d7e-ovsdbserver-sb\") pod \"dnsmasq-dns-cb4d97cd7-bl8nc\" (UID: \"c5eb5318-a3af-4a2a-947f-57219f371d7e\") " pod="openstack/dnsmasq-dns-cb4d97cd7-bl8nc" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.775325 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5eb5318-a3af-4a2a-947f-57219f371d7e-dns-svc\") pod \"dnsmasq-dns-cb4d97cd7-bl8nc\" (UID: \"c5eb5318-a3af-4a2a-947f-57219f371d7e\") " pod="openstack/dnsmasq-dns-cb4d97cd7-bl8nc" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.776030 4793 scope.go:117] "RemoveContainer" containerID="ccb9d33bf3a4285fa2d6f7326460bae6ced65f4fdca00d48f3a80dc8096c9e08" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.795590 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmfm9\" (UniqueName: \"kubernetes.io/projected/c5eb5318-a3af-4a2a-947f-57219f371d7e-kube-api-access-tmfm9\") pod \"dnsmasq-dns-cb4d97cd7-bl8nc\" (UID: \"c5eb5318-a3af-4a2a-947f-57219f371d7e\") " pod="openstack/dnsmasq-dns-cb4d97cd7-bl8nc" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.805354 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5cf7c98b49-qm95w" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.872581 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsrzj\" (UniqueName: \"kubernetes.io/projected/d172fd57-b658-484f-af67-0f73e9952560-kube-api-access-gsrzj\") pod \"d172fd57-b658-484f-af67-0f73e9952560\" (UID: \"d172fd57-b658-484f-af67-0f73e9952560\") " Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.872643 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d172fd57-b658-484f-af67-0f73e9952560-horizon-secret-key\") pod \"d172fd57-b658-484f-af67-0f73e9952560\" (UID: \"d172fd57-b658-484f-af67-0f73e9952560\") " Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.872700 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d172fd57-b658-484f-af67-0f73e9952560-scripts\") pod \"d172fd57-b658-484f-af67-0f73e9952560\" (UID: \"d172fd57-b658-484f-af67-0f73e9952560\") " Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.872895 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d172fd57-b658-484f-af67-0f73e9952560-logs\") pod \"d172fd57-b658-484f-af67-0f73e9952560\" (UID: \"d172fd57-b658-484f-af67-0f73e9952560\") " Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.873114 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d172fd57-b658-484f-af67-0f73e9952560-config-data\") pod \"d172fd57-b658-484f-af67-0f73e9952560\" (UID: \"d172fd57-b658-484f-af67-0f73e9952560\") " Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.873671 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b87a7c00-0efd-4456-bd50-c41a6d909aca-ovndb-tls-certs\") pod \"neutron-7fdcdb7d8d-qdv8l\" (UID: \"b87a7c00-0efd-4456-bd50-c41a6d909aca\") " pod="openstack/neutron-7fdcdb7d8d-qdv8l" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.873727 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66nmd\" (UniqueName: \"kubernetes.io/projected/b87a7c00-0efd-4456-bd50-c41a6d909aca-kube-api-access-66nmd\") pod \"neutron-7fdcdb7d8d-qdv8l\" (UID: \"b87a7c00-0efd-4456-bd50-c41a6d909aca\") " pod="openstack/neutron-7fdcdb7d8d-qdv8l" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.873834 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b87a7c00-0efd-4456-bd50-c41a6d909aca-httpd-config\") pod \"neutron-7fdcdb7d8d-qdv8l\" (UID: \"b87a7c00-0efd-4456-bd50-c41a6d909aca\") " pod="openstack/neutron-7fdcdb7d8d-qdv8l" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.873871 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b87a7c00-0efd-4456-bd50-c41a6d909aca-combined-ca-bundle\") pod \"neutron-7fdcdb7d8d-qdv8l\" (UID: \"b87a7c00-0efd-4456-bd50-c41a6d909aca\") " pod="openstack/neutron-7fdcdb7d8d-qdv8l" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.873951 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b87a7c00-0efd-4456-bd50-c41a6d909aca-config\") pod \"neutron-7fdcdb7d8d-qdv8l\" (UID: \"b87a7c00-0efd-4456-bd50-c41a6d909aca\") " pod="openstack/neutron-7fdcdb7d8d-qdv8l" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.875314 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d172fd57-b658-484f-af67-0f73e9952560-scripts" (OuterVolumeSpecName: "scripts") pod 
"d172fd57-b658-484f-af67-0f73e9952560" (UID: "d172fd57-b658-484f-af67-0f73e9952560"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.875645 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d172fd57-b658-484f-af67-0f73e9952560-logs" (OuterVolumeSpecName: "logs") pod "d172fd57-b658-484f-af67-0f73e9952560" (UID: "d172fd57-b658-484f-af67-0f73e9952560"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.879486 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d172fd57-b658-484f-af67-0f73e9952560-config-data" (OuterVolumeSpecName: "config-data") pod "d172fd57-b658-484f-af67-0f73e9952560" (UID: "d172fd57-b658-484f-af67-0f73e9952560"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.887643 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d172fd57-b658-484f-af67-0f73e9952560-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "d172fd57-b658-484f-af67-0f73e9952560" (UID: "d172fd57-b658-484f-af67-0f73e9952560"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.892319 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b87a7c00-0efd-4456-bd50-c41a6d909aca-combined-ca-bundle\") pod \"neutron-7fdcdb7d8d-qdv8l\" (UID: \"b87a7c00-0efd-4456-bd50-c41a6d909aca\") " pod="openstack/neutron-7fdcdb7d8d-qdv8l" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.892520 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d172fd57-b658-484f-af67-0f73e9952560-kube-api-access-gsrzj" (OuterVolumeSpecName: "kube-api-access-gsrzj") pod "d172fd57-b658-484f-af67-0f73e9952560" (UID: "d172fd57-b658-484f-af67-0f73e9952560"). InnerVolumeSpecName "kube-api-access-gsrzj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.893232 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b87a7c00-0efd-4456-bd50-c41a6d909aca-ovndb-tls-certs\") pod \"neutron-7fdcdb7d8d-qdv8l\" (UID: \"b87a7c00-0efd-4456-bd50-c41a6d909aca\") " pod="openstack/neutron-7fdcdb7d8d-qdv8l" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.893930 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b87a7c00-0efd-4456-bd50-c41a6d909aca-config\") pod \"neutron-7fdcdb7d8d-qdv8l\" (UID: \"b87a7c00-0efd-4456-bd50-c41a6d909aca\") " pod="openstack/neutron-7fdcdb7d8d-qdv8l" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.900022 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b87a7c00-0efd-4456-bd50-c41a6d909aca-httpd-config\") pod \"neutron-7fdcdb7d8d-qdv8l\" (UID: \"b87a7c00-0efd-4456-bd50-c41a6d909aca\") " pod="openstack/neutron-7fdcdb7d8d-qdv8l" Feb 17 
20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.903945 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66nmd\" (UniqueName: \"kubernetes.io/projected/b87a7c00-0efd-4456-bd50-c41a6d909aca-kube-api-access-66nmd\") pod \"neutron-7fdcdb7d8d-qdv8l\" (UID: \"b87a7c00-0efd-4456-bd50-c41a6d909aca\") " pod="openstack/neutron-7fdcdb7d8d-qdv8l" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.928289 4793 scope.go:117] "RemoveContainer" containerID="2609bcb55ef88bb319a5e37ef336a482455bbbf9c6cd8ff5295ce82901e7ec35" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.975623 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsrzj\" (UniqueName: \"kubernetes.io/projected/d172fd57-b658-484f-af67-0f73e9952560-kube-api-access-gsrzj\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.975656 4793 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d172fd57-b658-484f-af67-0f73e9952560-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.975903 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d172fd57-b658-484f-af67-0f73e9952560-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.975951 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d172fd57-b658-484f-af67-0f73e9952560-logs\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:15 crc kubenswrapper[4793]: I0217 20:27:15.975960 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d172fd57-b658-484f-af67-0f73e9952560-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:16 crc kubenswrapper[4793]: I0217 20:27:16.066422 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/horizon-5cf7c98b49-qm95w" event={"ID":"d172fd57-b658-484f-af67-0f73e9952560","Type":"ContainerDied","Data":"c616c4b7f80a06297cbaf304fef44525b132cfb688b4d8c9133be39e8c527a5c"} Feb 17 20:27:16 crc kubenswrapper[4793]: I0217 20:27:16.066438 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5cf7c98b49-qm95w" Feb 17 20:27:16 crc kubenswrapper[4793]: I0217 20:27:16.069886 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cb4d97cd7-bl8nc" Feb 17 20:27:16 crc kubenswrapper[4793]: E0217 20:27:16.076374 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2609bcb55ef88bb319a5e37ef336a482455bbbf9c6cd8ff5295ce82901e7ec35\": container with ID starting with 2609bcb55ef88bb319a5e37ef336a482455bbbf9c6cd8ff5295ce82901e7ec35 not found: ID does not exist" containerID="2609bcb55ef88bb319a5e37ef336a482455bbbf9c6cd8ff5295ce82901e7ec35" Feb 17 20:27:16 crc kubenswrapper[4793]: E0217 20:27:16.110927 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.80:5001/podified-master-centos10/openstack-cinder-api:watcher_latest\\\"\"" pod="openstack/cinder-db-sync-t9mdr" podUID="95da6bd5-17d8-4402-bb8a-87b0c03feebf" Feb 17 20:27:16 crc kubenswrapper[4793]: I0217 20:27:16.135056 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7fdcdb7d8d-qdv8l" Feb 17 20:27:16 crc kubenswrapper[4793]: I0217 20:27:16.223835 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-75c8b5cf48-t8jmz"] Feb 17 20:27:16 crc kubenswrapper[4793]: I0217 20:27:16.316365 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5cf7c98b49-qm95w"] Feb 17 20:27:16 crc kubenswrapper[4793]: I0217 20:27:16.330213 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5cf7c98b49-qm95w"] Feb 17 20:27:16 crc kubenswrapper[4793]: I0217 20:27:16.358440 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-75d6fd885d-fw6ln"] Feb 17 20:27:16 crc kubenswrapper[4793]: I0217 20:27:16.370559 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 17 20:27:16 crc kubenswrapper[4793]: I0217 20:27:16.491126 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-pwpbc"] Feb 17 20:27:16 crc kubenswrapper[4793]: W0217 20:27:16.494975 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c302c0d_5edb_4f33_b2ff_7f31bd9c13bf.slice/crio-fbb2842ea9bd9aeb93cce6bdb3e96c392445be3eaee8deb4710c1b852e937f3d WatchSource:0}: Error finding container fbb2842ea9bd9aeb93cce6bdb3e96c392445be3eaee8deb4710c1b852e937f3d: Status 404 returned error can't find the container with id fbb2842ea9bd9aeb93cce6bdb3e96c392445be3eaee8deb4710c1b852e937f3d Feb 17 20:27:16 crc kubenswrapper[4793]: W0217 20:27:16.495369 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5fdf7fdb_aef6_4c17_a051_ba5320d07ea7.slice/crio-db226d038f0465c0680ff4ec9e60fac74395dc6068789d20125b1a5524d66bb0 WatchSource:0}: Error finding container db226d038f0465c0680ff4ec9e60fac74395dc6068789d20125b1a5524d66bb0: Status 404 returned error 
can't find the container with id db226d038f0465c0680ff4ec9e60fac74395dc6068789d20125b1a5524d66bb0 Feb 17 20:27:16 crc kubenswrapper[4793]: W0217 20:27:16.504070 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39911c8f_ceae_41a0_a891_fc8677d87ec3.slice/crio-4277ef7f1d4ae19ab452b57a1289ada31db43ac2f0210935c4f6454e818a5d45 WatchSource:0}: Error finding container 4277ef7f1d4ae19ab452b57a1289ada31db43ac2f0210935c4f6454e818a5d45: Status 404 returned error can't find the container with id 4277ef7f1d4ae19ab452b57a1289ada31db43ac2f0210935c4f6454e818a5d45 Feb 17 20:27:16 crc kubenswrapper[4793]: I0217 20:27:16.774279 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cb4d97cd7-bl8nc"] Feb 17 20:27:16 crc kubenswrapper[4793]: W0217 20:27:16.783892 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5eb5318_a3af_4a2a_947f_57219f371d7e.slice/crio-ad48dc658b1c591f4d7f31386c24fc120fd3b399e7b2a35fb4df0186eec6ee8a WatchSource:0}: Error finding container ad48dc658b1c591f4d7f31386c24fc120fd3b399e7b2a35fb4df0186eec6ee8a: Status 404 returned error can't find the container with id ad48dc658b1c591f4d7f31386c24fc120fd3b399e7b2a35fb4df0186eec6ee8a Feb 17 20:27:17 crc kubenswrapper[4793]: I0217 20:27:17.110288 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-t8llz" event={"ID":"0003e927-ea9c-49fb-83e9-5bfc8cd90f46","Type":"ContainerStarted","Data":"dd965f0e963a528411b5ef5e6c7f2d77924a9bcb0dead9167736a1c9314e01df"} Feb 17 20:27:17 crc kubenswrapper[4793]: I0217 20:27:17.115875 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"06ecbc8e-aa9f-4025-883d-65e4c000d986","Type":"ContainerStarted","Data":"2a5aa3180d15d7a5954b0adfcb55ef2bc476ceb2a03324d11ed3931fb408ef25"} Feb 17 20:27:17 crc kubenswrapper[4793]: 
I0217 20:27:17.121044 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"b958eb77-11cb-4049-a3db-11e838dfa0f5","Type":"ContainerStarted","Data":"7dd5688e00325c9b3e72f3327d1f33c54c74673f711f46166b463ceda329eb3b"} Feb 17 20:27:17 crc kubenswrapper[4793]: I0217 20:27:17.124387 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7fdcdb7d8d-qdv8l"] Feb 17 20:27:17 crc kubenswrapper[4793]: I0217 20:27:17.127368 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb4d97cd7-bl8nc" event={"ID":"c5eb5318-a3af-4a2a-947f-57219f371d7e","Type":"ContainerStarted","Data":"ad48dc658b1c591f4d7f31386c24fc120fd3b399e7b2a35fb4df0186eec6ee8a"} Feb 17 20:27:17 crc kubenswrapper[4793]: I0217 20:27:17.136019 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78ebba06-604c-4fb6-91b1-3727324ca4a8","Type":"ContainerStarted","Data":"9c17f4a250e3a88160a2598e66ca6aa415b0fb32eb6a82e88616383f38ee179e"} Feb 17 20:27:17 crc kubenswrapper[4793]: I0217 20:27:17.137363 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-t8llz" podStartSLOduration=4.308133916 podStartE2EDuration="36.137348095s" podCreationTimestamp="2026-02-17 20:26:41 +0000 UTC" firstStartedPulling="2026-02-17 20:26:43.921345816 +0000 UTC m=+1079.213044117" lastFinishedPulling="2026-02-17 20:27:15.750559985 +0000 UTC m=+1111.042258296" observedRunningTime="2026-02-17 20:27:17.131062729 +0000 UTC m=+1112.422761060" watchObservedRunningTime="2026-02-17 20:27:17.137348095 +0000 UTC m=+1112.429046406" Feb 17 20:27:17 crc kubenswrapper[4793]: I0217 20:27:17.150389 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"5fdf7fdb-aef6-4c17-a051-ba5320d07ea7","Type":"ContainerStarted","Data":"db226d038f0465c0680ff4ec9e60fac74395dc6068789d20125b1a5524d66bb0"} Feb 17 20:27:17 crc 
kubenswrapper[4793]: I0217 20:27:17.155123 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75d6fd885d-fw6ln" event={"ID":"6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf","Type":"ContainerStarted","Data":"fbb2842ea9bd9aeb93cce6bdb3e96c392445be3eaee8deb4710c1b852e937f3d"} Feb 17 20:27:17 crc kubenswrapper[4793]: I0217 20:27:17.159950 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=23.792098957 podStartE2EDuration="36.159925266s" podCreationTimestamp="2026-02-17 20:26:41 +0000 UTC" firstStartedPulling="2026-02-17 20:26:43.076287597 +0000 UTC m=+1078.367985908" lastFinishedPulling="2026-02-17 20:26:55.444113906 +0000 UTC m=+1090.735812217" observedRunningTime="2026-02-17 20:27:17.153112087 +0000 UTC m=+1112.444810418" watchObservedRunningTime="2026-02-17 20:27:17.159925266 +0000 UTC m=+1112.451623577" Feb 17 20:27:17 crc kubenswrapper[4793]: I0217 20:27:17.170252 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pwpbc" event={"ID":"39911c8f-ceae-41a0-a891-fc8677d87ec3","Type":"ContainerStarted","Data":"b704a648e4275076bd6ca3cb60d2914d8d744c17b710e7a0cdb2d01191a2fcb7"} Feb 17 20:27:17 crc kubenswrapper[4793]: I0217 20:27:17.170286 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pwpbc" event={"ID":"39911c8f-ceae-41a0-a891-fc8677d87ec3","Type":"ContainerStarted","Data":"4277ef7f1d4ae19ab452b57a1289ada31db43ac2f0210935c4f6454e818a5d45"} Feb 17 20:27:17 crc kubenswrapper[4793]: I0217 20:27:17.175575 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75c8b5cf48-t8jmz" event={"ID":"066a6b1f-85a8-4015-9c17-a9eb27320040","Type":"ContainerStarted","Data":"568555c24f4ed3dac126a4c6ee71cabeb936d9ae1911789bd192c4bb3f91ae8f"} Feb 17 20:27:17 crc kubenswrapper[4793]: I0217 20:27:17.181795 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/watcher-applier-0" podStartSLOduration=23.814499674 podStartE2EDuration="36.181772479s" podCreationTimestamp="2026-02-17 20:26:41 +0000 UTC" firstStartedPulling="2026-02-17 20:26:43.076917153 +0000 UTC m=+1078.368615464" lastFinishedPulling="2026-02-17 20:26:55.444189938 +0000 UTC m=+1090.735888269" observedRunningTime="2026-02-17 20:27:17.168468608 +0000 UTC m=+1112.460166919" watchObservedRunningTime="2026-02-17 20:27:17.181772479 +0000 UTC m=+1112.473470790" Feb 17 20:27:17 crc kubenswrapper[4793]: I0217 20:27:17.551405 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d172fd57-b658-484f-af67-0f73e9952560" path="/var/lib/kubelet/pods/d172fd57-b658-484f-af67-0f73e9952560/volumes" Feb 17 20:27:17 crc kubenswrapper[4793]: I0217 20:27:17.556051 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-pwpbc" podStartSLOduration=12.556037659 podStartE2EDuration="12.556037659s" podCreationTimestamp="2026-02-17 20:27:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:27:17.205603001 +0000 UTC m=+1112.497301322" watchObservedRunningTime="2026-02-17 20:27:17.556037659 +0000 UTC m=+1112.847735970" Feb 17 20:27:18 crc kubenswrapper[4793]: I0217 20:27:18.178405 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-8445cc88bf-cdsst"] Feb 17 20:27:18 crc kubenswrapper[4793]: I0217 20:27:18.179974 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8445cc88bf-cdsst" Feb 17 20:27:18 crc kubenswrapper[4793]: I0217 20:27:18.183551 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 17 20:27:18 crc kubenswrapper[4793]: I0217 20:27:18.183776 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 17 20:27:18 crc kubenswrapper[4793]: I0217 20:27:18.210316 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8445cc88bf-cdsst"] Feb 17 20:27:18 crc kubenswrapper[4793]: I0217 20:27:18.244760 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24893fdf-f7bb-4be7-b5f9-edde49088bbe-combined-ca-bundle\") pod \"neutron-8445cc88bf-cdsst\" (UID: \"24893fdf-f7bb-4be7-b5f9-edde49088bbe\") " pod="openstack/neutron-8445cc88bf-cdsst" Feb 17 20:27:18 crc kubenswrapper[4793]: I0217 20:27:18.244809 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/24893fdf-f7bb-4be7-b5f9-edde49088bbe-public-tls-certs\") pod \"neutron-8445cc88bf-cdsst\" (UID: \"24893fdf-f7bb-4be7-b5f9-edde49088bbe\") " pod="openstack/neutron-8445cc88bf-cdsst" Feb 17 20:27:18 crc kubenswrapper[4793]: I0217 20:27:18.244840 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/24893fdf-f7bb-4be7-b5f9-edde49088bbe-httpd-config\") pod \"neutron-8445cc88bf-cdsst\" (UID: \"24893fdf-f7bb-4be7-b5f9-edde49088bbe\") " pod="openstack/neutron-8445cc88bf-cdsst" Feb 17 20:27:18 crc kubenswrapper[4793]: I0217 20:27:18.244866 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psnbz\" (UniqueName: 
\"kubernetes.io/projected/24893fdf-f7bb-4be7-b5f9-edde49088bbe-kube-api-access-psnbz\") pod \"neutron-8445cc88bf-cdsst\" (UID: \"24893fdf-f7bb-4be7-b5f9-edde49088bbe\") " pod="openstack/neutron-8445cc88bf-cdsst" Feb 17 20:27:18 crc kubenswrapper[4793]: I0217 20:27:18.244894 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/24893fdf-f7bb-4be7-b5f9-edde49088bbe-internal-tls-certs\") pod \"neutron-8445cc88bf-cdsst\" (UID: \"24893fdf-f7bb-4be7-b5f9-edde49088bbe\") " pod="openstack/neutron-8445cc88bf-cdsst" Feb 17 20:27:18 crc kubenswrapper[4793]: I0217 20:27:18.244923 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/24893fdf-f7bb-4be7-b5f9-edde49088bbe-config\") pod \"neutron-8445cc88bf-cdsst\" (UID: \"24893fdf-f7bb-4be7-b5f9-edde49088bbe\") " pod="openstack/neutron-8445cc88bf-cdsst" Feb 17 20:27:18 crc kubenswrapper[4793]: I0217 20:27:18.245011 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/24893fdf-f7bb-4be7-b5f9-edde49088bbe-ovndb-tls-certs\") pod \"neutron-8445cc88bf-cdsst\" (UID: \"24893fdf-f7bb-4be7-b5f9-edde49088bbe\") " pod="openstack/neutron-8445cc88bf-cdsst" Feb 17 20:27:18 crc kubenswrapper[4793]: I0217 20:27:18.264088 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b","Type":"ContainerStarted","Data":"ed6b28d9b1e5b2356437341d68d927ddb9d73b4befe056962748909c24e27aa3"} Feb 17 20:27:18 crc kubenswrapper[4793]: I0217 20:27:18.264997 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b" containerName="glance-log" 
containerID="cri-o://e796ef1d0953326391a4b31a5b972750c4d7a3708a83d401aecc19347b17e540" gracePeriod=30 Feb 17 20:27:18 crc kubenswrapper[4793]: I0217 20:27:18.265909 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b" containerName="glance-httpd" containerID="cri-o://ed6b28d9b1e5b2356437341d68d927ddb9d73b4befe056962748909c24e27aa3" gracePeriod=30 Feb 17 20:27:18 crc kubenswrapper[4793]: I0217 20:27:18.276982 4793 generic.go:334] "Generic (PLEG): container finished" podID="c5eb5318-a3af-4a2a-947f-57219f371d7e" containerID="80ecfeae7dd4fbef21c325d596792dfefa07a2a49e4a82d8ed50a84fc9ff5544" exitCode=0 Feb 17 20:27:18 crc kubenswrapper[4793]: I0217 20:27:18.277092 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb4d97cd7-bl8nc" event={"ID":"c5eb5318-a3af-4a2a-947f-57219f371d7e","Type":"ContainerDied","Data":"80ecfeae7dd4fbef21c325d596792dfefa07a2a49e4a82d8ed50a84fc9ff5544"} Feb 17 20:27:18 crc kubenswrapper[4793]: I0217 20:27:18.292906 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=36.292892779 podStartE2EDuration="36.292892779s" podCreationTimestamp="2026-02-17 20:26:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:27:18.290929341 +0000 UTC m=+1113.582627652" watchObservedRunningTime="2026-02-17 20:27:18.292892779 +0000 UTC m=+1113.584591090" Feb 17 20:27:18 crc kubenswrapper[4793]: I0217 20:27:18.303705 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-v8sgz" event={"ID":"6a587d11-ed16-44ac-a36e-dfd7a7ed3f6a","Type":"ContainerStarted","Data":"53bfef70fd083bfacea3340a381d133e63ce4bf88d9fbf8744476a1e69af45ea"} Feb 17 20:27:18 crc kubenswrapper[4793]: I0217 20:27:18.308876 4793 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"5fdf7fdb-aef6-4c17-a051-ba5320d07ea7","Type":"ContainerStarted","Data":"b07676e7a816ba57b9e540e707a88f1a76b1770e5b8cdb635c78b527d700c311"} Feb 17 20:27:18 crc kubenswrapper[4793]: I0217 20:27:18.308919 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"5fdf7fdb-aef6-4c17-a051-ba5320d07ea7","Type":"ContainerStarted","Data":"a16e7df12feca4f3c68f5ffb3e97b07e1e608fbb3fd96a30e473548a682b7f54"} Feb 17 20:27:18 crc kubenswrapper[4793]: I0217 20:27:18.310022 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 17 20:27:18 crc kubenswrapper[4793]: I0217 20:27:18.345921 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-v8sgz" podStartSLOduration=3.680736727 podStartE2EDuration="37.345905477s" podCreationTimestamp="2026-02-17 20:26:41 +0000 UTC" firstStartedPulling="2026-02-17 20:26:43.952375127 +0000 UTC m=+1079.244073438" lastFinishedPulling="2026-02-17 20:27:17.617543877 +0000 UTC m=+1112.909242188" observedRunningTime="2026-02-17 20:27:18.332153075 +0000 UTC m=+1113.623851386" watchObservedRunningTime="2026-02-17 20:27:18.345905477 +0000 UTC m=+1113.637603788" Feb 17 20:27:18 crc kubenswrapper[4793]: I0217 20:27:18.350054 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psnbz\" (UniqueName: \"kubernetes.io/projected/24893fdf-f7bb-4be7-b5f9-edde49088bbe-kube-api-access-psnbz\") pod \"neutron-8445cc88bf-cdsst\" (UID: \"24893fdf-f7bb-4be7-b5f9-edde49088bbe\") " pod="openstack/neutron-8445cc88bf-cdsst" Feb 17 20:27:18 crc kubenswrapper[4793]: I0217 20:27:18.350142 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/24893fdf-f7bb-4be7-b5f9-edde49088bbe-internal-tls-certs\") pod \"neutron-8445cc88bf-cdsst\" (UID: 
\"24893fdf-f7bb-4be7-b5f9-edde49088bbe\") " pod="openstack/neutron-8445cc88bf-cdsst" Feb 17 20:27:18 crc kubenswrapper[4793]: I0217 20:27:18.350195 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/24893fdf-f7bb-4be7-b5f9-edde49088bbe-config\") pod \"neutron-8445cc88bf-cdsst\" (UID: \"24893fdf-f7bb-4be7-b5f9-edde49088bbe\") " pod="openstack/neutron-8445cc88bf-cdsst" Feb 17 20:27:18 crc kubenswrapper[4793]: I0217 20:27:18.350369 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/24893fdf-f7bb-4be7-b5f9-edde49088bbe-ovndb-tls-certs\") pod \"neutron-8445cc88bf-cdsst\" (UID: \"24893fdf-f7bb-4be7-b5f9-edde49088bbe\") " pod="openstack/neutron-8445cc88bf-cdsst" Feb 17 20:27:18 crc kubenswrapper[4793]: I0217 20:27:18.350412 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24893fdf-f7bb-4be7-b5f9-edde49088bbe-combined-ca-bundle\") pod \"neutron-8445cc88bf-cdsst\" (UID: \"24893fdf-f7bb-4be7-b5f9-edde49088bbe\") " pod="openstack/neutron-8445cc88bf-cdsst" Feb 17 20:27:18 crc kubenswrapper[4793]: I0217 20:27:18.350438 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/24893fdf-f7bb-4be7-b5f9-edde49088bbe-public-tls-certs\") pod \"neutron-8445cc88bf-cdsst\" (UID: \"24893fdf-f7bb-4be7-b5f9-edde49088bbe\") " pod="openstack/neutron-8445cc88bf-cdsst" Feb 17 20:27:18 crc kubenswrapper[4793]: I0217 20:27:18.350479 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/24893fdf-f7bb-4be7-b5f9-edde49088bbe-httpd-config\") pod \"neutron-8445cc88bf-cdsst\" (UID: \"24893fdf-f7bb-4be7-b5f9-edde49088bbe\") " pod="openstack/neutron-8445cc88bf-cdsst" Feb 17 20:27:18 crc 
kubenswrapper[4793]: I0217 20:27:18.353149 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"becfe481-885b-4302-9265-d9de06b4a2c1","Type":"ContainerStarted","Data":"017a62ef96d7d625c6c8ad60adccaaf5fe9e5e66126d724c9179a70f4ffb2860"}
Feb 17 20:27:18 crc kubenswrapper[4793]: I0217 20:27:18.353318 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="becfe481-885b-4302-9265-d9de06b4a2c1" containerName="glance-log" containerID="cri-o://4e5191d09f2416eb8e72133fda27377cb5545471e190170c54e3f71ff18442ec" gracePeriod=30
Feb 17 20:27:18 crc kubenswrapper[4793]: I0217 20:27:18.353634 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="becfe481-885b-4302-9265-d9de06b4a2c1" containerName="glance-httpd" containerID="cri-o://017a62ef96d7d625c6c8ad60adccaaf5fe9e5e66126d724c9179a70f4ffb2860" gracePeriod=30
Feb 17 20:27:18 crc kubenswrapper[4793]: I0217 20:27:18.367735 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75c8b5cf48-t8jmz" event={"ID":"066a6b1f-85a8-4015-9c17-a9eb27320040","Type":"ContainerStarted","Data":"957b527ca96e79d3de7f6301e9dbf3b29157a81f1a0c543e518285a9e6cf16f7"}
Feb 17 20:27:18 crc kubenswrapper[4793]: I0217 20:27:18.367773 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75c8b5cf48-t8jmz" event={"ID":"066a6b1f-85a8-4015-9c17-a9eb27320040","Type":"ContainerStarted","Data":"c72119f41b1a9b4d1358a93e4811d6f2bd7d9f3f470dd5a9f968da1bacf53437"}
Feb 17 20:27:18 crc kubenswrapper[4793]: I0217 20:27:18.371511 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75d6fd885d-fw6ln" event={"ID":"6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf","Type":"ContainerStarted","Data":"3e010c26342664c3558b2e0e211867c9c895e91944d0dd2d5c0257fc128d37e1"}
Feb 17 20:27:18 crc kubenswrapper[4793]: I0217 20:27:18.371553 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75d6fd885d-fw6ln" event={"ID":"6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf","Type":"ContainerStarted","Data":"6bee6c7fb7d7d5d31d8eb33b06a65d5d8f2ee8e6a9f85944f7f5f5ad2a282a4a"}
Feb 17 20:27:18 crc kubenswrapper[4793]: I0217 20:27:18.374426 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=22.374405525 podStartE2EDuration="22.374405525s" podCreationTimestamp="2026-02-17 20:26:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:27:18.35329332 +0000 UTC m=+1113.644991651" watchObservedRunningTime="2026-02-17 20:27:18.374405525 +0000 UTC m=+1113.666103836"
Feb 17 20:27:18 crc kubenswrapper[4793]: I0217 20:27:18.375047 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7fdcdb7d8d-qdv8l" event={"ID":"b87a7c00-0efd-4456-bd50-c41a6d909aca","Type":"ContainerStarted","Data":"6cec9376237b5c03ebb426d0660b75d39712d41eec6bb5f1320e78a5048f2ad2"}
Feb 17 20:27:18 crc kubenswrapper[4793]: I0217 20:27:18.375258 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7fdcdb7d8d-qdv8l" event={"ID":"b87a7c00-0efd-4456-bd50-c41a6d909aca","Type":"ContainerStarted","Data":"0afeb54f9642c4dcc9db837e9576832f0d6ec4c6da7fb65b67818b8115fba470"}
Feb 17 20:27:18 crc kubenswrapper[4793]: I0217 20:27:18.375422 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7fdcdb7d8d-qdv8l" event={"ID":"b87a7c00-0efd-4456-bd50-c41a6d909aca","Type":"ContainerStarted","Data":"d2905e783c7681f5e914abc4da3a0f2a77db3c0041fd38335e2af4773d5072ca"}
Feb 17 20:27:18 crc kubenswrapper[4793]: I0217 20:27:18.384428 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/24893fdf-f7bb-4be7-b5f9-edde49088bbe-internal-tls-certs\") pod \"neutron-8445cc88bf-cdsst\" (UID: \"24893fdf-f7bb-4be7-b5f9-edde49088bbe\") " pod="openstack/neutron-8445cc88bf-cdsst"
Feb 17 20:27:18 crc kubenswrapper[4793]: I0217 20:27:18.385171 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/24893fdf-f7bb-4be7-b5f9-edde49088bbe-ovndb-tls-certs\") pod \"neutron-8445cc88bf-cdsst\" (UID: \"24893fdf-f7bb-4be7-b5f9-edde49088bbe\") " pod="openstack/neutron-8445cc88bf-cdsst"
Feb 17 20:27:18 crc kubenswrapper[4793]: I0217 20:27:18.388236 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/24893fdf-f7bb-4be7-b5f9-edde49088bbe-httpd-config\") pod \"neutron-8445cc88bf-cdsst\" (UID: \"24893fdf-f7bb-4be7-b5f9-edde49088bbe\") " pod="openstack/neutron-8445cc88bf-cdsst"
Feb 17 20:27:18 crc kubenswrapper[4793]: I0217 20:27:18.388460 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psnbz\" (UniqueName: \"kubernetes.io/projected/24893fdf-f7bb-4be7-b5f9-edde49088bbe-kube-api-access-psnbz\") pod \"neutron-8445cc88bf-cdsst\" (UID: \"24893fdf-f7bb-4be7-b5f9-edde49088bbe\") " pod="openstack/neutron-8445cc88bf-cdsst"
Feb 17 20:27:18 crc kubenswrapper[4793]: I0217 20:27:18.388899 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24893fdf-f7bb-4be7-b5f9-edde49088bbe-combined-ca-bundle\") pod \"neutron-8445cc88bf-cdsst\" (UID: \"24893fdf-f7bb-4be7-b5f9-edde49088bbe\") " pod="openstack/neutron-8445cc88bf-cdsst"
Feb 17 20:27:18 crc kubenswrapper[4793]: I0217 20:27:18.397437 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/24893fdf-f7bb-4be7-b5f9-edde49088bbe-config\") pod \"neutron-8445cc88bf-cdsst\" (UID: \"24893fdf-f7bb-4be7-b5f9-edde49088bbe\") " pod="openstack/neutron-8445cc88bf-cdsst"
Feb 17 20:27:18 crc kubenswrapper[4793]: I0217 20:27:18.399024 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=36.399002226 podStartE2EDuration="36.399002226s" podCreationTimestamp="2026-02-17 20:26:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:27:18.382425014 +0000 UTC m=+1113.674123325" watchObservedRunningTime="2026-02-17 20:27:18.399002226 +0000 UTC m=+1113.690700537"
Feb 17 20:27:18 crc kubenswrapper[4793]: I0217 20:27:18.399068 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/24893fdf-f7bb-4be7-b5f9-edde49088bbe-public-tls-certs\") pod \"neutron-8445cc88bf-cdsst\" (UID: \"24893fdf-f7bb-4be7-b5f9-edde49088bbe\") " pod="openstack/neutron-8445cc88bf-cdsst"
Feb 17 20:27:18 crc kubenswrapper[4793]: I0217 20:27:18.433554 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-75d6fd885d-fw6ln" podStartSLOduration=28.276904472 podStartE2EDuration="28.433532994s" podCreationTimestamp="2026-02-17 20:26:50 +0000 UTC" firstStartedPulling="2026-02-17 20:27:16.495844224 +0000 UTC m=+1111.787542525" lastFinishedPulling="2026-02-17 20:27:16.652472736 +0000 UTC m=+1111.944171047" observedRunningTime="2026-02-17 20:27:18.406860692 +0000 UTC m=+1113.698559013" watchObservedRunningTime="2026-02-17 20:27:18.433532994 +0000 UTC m=+1113.725231305"
Feb 17 20:27:18 crc kubenswrapper[4793]: I0217 20:27:18.508103 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7fdcdb7d8d-qdv8l" podStartSLOduration=3.508081037 podStartE2EDuration="3.508081037s" podCreationTimestamp="2026-02-17 20:27:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:27:18.436452257 +0000 UTC m=+1113.728150588" watchObservedRunningTime="2026-02-17 20:27:18.508081037 +0000 UTC m=+1113.799779348"
Feb 17 20:27:18 crc kubenswrapper[4793]: I0217 20:27:18.514592 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-75c8b5cf48-t8jmz" podStartSLOduration=28.347153927 podStartE2EDuration="28.514573148s" podCreationTimestamp="2026-02-17 20:26:50 +0000 UTC" firstStartedPulling="2026-02-17 20:27:16.343061597 +0000 UTC m=+1111.634759898" lastFinishedPulling="2026-02-17 20:27:16.510480808 +0000 UTC m=+1111.802179119" observedRunningTime="2026-02-17 20:27:18.462960356 +0000 UTC m=+1113.754658687" watchObservedRunningTime="2026-02-17 20:27:18.514573148 +0000 UTC m=+1113.806271459"
Feb 17 20:27:18 crc kubenswrapper[4793]: I0217 20:27:18.572200 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8445cc88bf-cdsst"
Feb 17 20:27:19 crc kubenswrapper[4793]: I0217 20:27:19.390460 4793 generic.go:334] "Generic (PLEG): container finished" podID="becfe481-885b-4302-9265-d9de06b4a2c1" containerID="017a62ef96d7d625c6c8ad60adccaaf5fe9e5e66126d724c9179a70f4ffb2860" exitCode=0
Feb 17 20:27:19 crc kubenswrapper[4793]: I0217 20:27:19.390756 4793 generic.go:334] "Generic (PLEG): container finished" podID="becfe481-885b-4302-9265-d9de06b4a2c1" containerID="4e5191d09f2416eb8e72133fda27377cb5545471e190170c54e3f71ff18442ec" exitCode=143
Feb 17 20:27:19 crc kubenswrapper[4793]: I0217 20:27:19.390548 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"becfe481-885b-4302-9265-d9de06b4a2c1","Type":"ContainerDied","Data":"017a62ef96d7d625c6c8ad60adccaaf5fe9e5e66126d724c9179a70f4ffb2860"}
Feb 17 20:27:19 crc kubenswrapper[4793]: I0217 20:27:19.390804 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"becfe481-885b-4302-9265-d9de06b4a2c1","Type":"ContainerDied","Data":"4e5191d09f2416eb8e72133fda27377cb5545471e190170c54e3f71ff18442ec"}
Feb 17 20:27:19 crc kubenswrapper[4793]: I0217 20:27:19.392994 4793 generic.go:334] "Generic (PLEG): container finished" podID="c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b" containerID="ed6b28d9b1e5b2356437341d68d927ddb9d73b4befe056962748909c24e27aa3" exitCode=0
Feb 17 20:27:19 crc kubenswrapper[4793]: I0217 20:27:19.393024 4793 generic.go:334] "Generic (PLEG): container finished" podID="c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b" containerID="e796ef1d0953326391a4b31a5b972750c4d7a3708a83d401aecc19347b17e540" exitCode=143
Feb 17 20:27:19 crc kubenswrapper[4793]: I0217 20:27:19.393064 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b","Type":"ContainerDied","Data":"ed6b28d9b1e5b2356437341d68d927ddb9d73b4befe056962748909c24e27aa3"}
Feb 17 20:27:19 crc kubenswrapper[4793]: I0217 20:27:19.393097 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b","Type":"ContainerDied","Data":"e796ef1d0953326391a4b31a5b972750c4d7a3708a83d401aecc19347b17e540"}
Feb 17 20:27:19 crc kubenswrapper[4793]: I0217 20:27:19.396387 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb4d97cd7-bl8nc" event={"ID":"c5eb5318-a3af-4a2a-947f-57219f371d7e","Type":"ContainerStarted","Data":"eef0183dc08a298bf1badff48fe70f8e917127fa41e393ba131cd0723ff56a5a"}
Feb 17 20:27:19 crc kubenswrapper[4793]: I0217 20:27:19.396425 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7fdcdb7d8d-qdv8l"
Feb 17 20:27:19 crc kubenswrapper[4793]: I0217 20:27:19.397758 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cb4d97cd7-bl8nc"
Feb 17 20:27:19 crc kubenswrapper[4793]: I0217 20:27:19.416394 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cb4d97cd7-bl8nc" podStartSLOduration=4.416378797 podStartE2EDuration="4.416378797s" podCreationTimestamp="2026-02-17 20:27:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:27:19.412644105 +0000 UTC m=+1114.704342416" watchObservedRunningTime="2026-02-17 20:27:19.416378797 +0000 UTC m=+1114.708077108"
Feb 17 20:27:19 crc kubenswrapper[4793]: I0217 20:27:19.977511 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.097797 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b-scripts\") pod \"c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b\" (UID: \"c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b\") "
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.097869 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b\" (UID: \"c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b\") "
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.097911 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b-public-tls-certs\") pod \"c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b\" (UID: \"c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b\") "
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.097983 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpqql\" (UniqueName: \"kubernetes.io/projected/c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b-kube-api-access-hpqql\") pod \"c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b\" (UID: \"c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b\") "
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.098017 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b-combined-ca-bundle\") pod \"c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b\" (UID: \"c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b\") "
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.098129 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b-logs\") pod \"c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b\" (UID: \"c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b\") "
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.098204 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b-httpd-run\") pod \"c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b\" (UID: \"c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b\") "
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.098219 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b-config-data\") pod \"c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b\" (UID: \"c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b\") "
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.100037 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b-logs" (OuterVolumeSpecName: "logs") pod "c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b" (UID: "c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.100259 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b" (UID: "c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.103054 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.103100 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.103138 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf"
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.103782 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cc36425746166cdc0b95e87a928d74d18e09bb391b29761173d2714eb0234de5"} pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.103829 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" containerID="cri-o://cc36425746166cdc0b95e87a928d74d18e09bb391b29761173d2714eb0234de5" gracePeriod=600
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.114115 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b" (UID: "c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.114948 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b-scripts" (OuterVolumeSpecName: "scripts") pod "c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b" (UID: "c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.136904 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b-kube-api-access-hpqql" (OuterVolumeSpecName: "kube-api-access-hpqql") pod "c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b" (UID: "c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b"). InnerVolumeSpecName "kube-api-access-hpqql". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.209448 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b-logs\") on node \"crc\" DevicePath \"\""
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.209480 4793 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.209489 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.209512 4793 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" "
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.209523 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpqql\" (UniqueName: \"kubernetes.io/projected/c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b-kube-api-access-hpqql\") on node \"crc\" DevicePath \"\""
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.213838 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b" (UID: "c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.243015 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b" (UID: "c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.270778 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b-config-data" (OuterVolumeSpecName: "config-data") pod "c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b" (UID: "c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.300953 4793 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc"
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.310976 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.311008 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.311017 4793 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\""
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.311026 4793 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.381010 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8445cc88bf-cdsst"]
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.425991 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.446527 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78ebba06-604c-4fb6-91b1-3727324ca4a8","Type":"ContainerStarted","Data":"7495d119d706da7a2adfc7cf6c4c76c9902e428d87f8b29733d405c14bc05da5"}
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.454287 4793 generic.go:334] "Generic (PLEG): container finished" podID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerID="cc36425746166cdc0b95e87a928d74d18e09bb391b29761173d2714eb0234de5" exitCode=0
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.454394 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerDied","Data":"cc36425746166cdc0b95e87a928d74d18e09bb391b29761173d2714eb0234de5"}
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.454455 4793 scope.go:117] "RemoveContainer" containerID="8233173cc6f0085dcde5889ba083171fcabff8c918ff96d6ffa28816106b888f"
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.461234 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"becfe481-885b-4302-9265-d9de06b4a2c1","Type":"ContainerDied","Data":"236208c489cf1b790baa5406347f5edf4a13e08cb1db5739211a87748c89b2fe"}
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.461375 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.479795 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8445cc88bf-cdsst" event={"ID":"24893fdf-f7bb-4be7-b5f9-edde49088bbe","Type":"ContainerStarted","Data":"7b69a363227f952d90e9864c71933bca593bc1f83e05b405f30a71ceffae29a1"}
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.488463 4793 scope.go:117] "RemoveContainer" containerID="017a62ef96d7d625c6c8ad60adccaaf5fe9e5e66126d724c9179a70f4ffb2860"
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.504799 4793 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.504804 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b","Type":"ContainerDied","Data":"40dacfce2bd3dba0024f4b733f808bb00a85a078909ba74149354173430afbab"}
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.504894 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.539102 4793 scope.go:117] "RemoveContainer" containerID="4e5191d09f2416eb8e72133fda27377cb5545471e190170c54e3f71ff18442ec"
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.565737 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.596502 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.614718 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 17 20:27:20 crc kubenswrapper[4793]: E0217 20:27:20.615312 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b" containerName="glance-log"
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.615331 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b" containerName="glance-log"
Feb 17 20:27:20 crc kubenswrapper[4793]: E0217 20:27:20.615356 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="becfe481-885b-4302-9265-d9de06b4a2c1" containerName="glance-httpd"
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.615367 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="becfe481-885b-4302-9265-d9de06b4a2c1" containerName="glance-httpd"
Feb 17 20:27:20 crc kubenswrapper[4793]: E0217 20:27:20.615404 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b" containerName="glance-httpd"
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.615414 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b" containerName="glance-httpd"
Feb 17 20:27:20 crc kubenswrapper[4793]: E0217 20:27:20.615429 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="becfe481-885b-4302-9265-d9de06b4a2c1" containerName="glance-log"
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.615438 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="becfe481-885b-4302-9265-d9de06b4a2c1" containerName="glance-log"
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.615744 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="becfe481-885b-4302-9265-d9de06b4a2c1" containerName="glance-httpd"
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.616267 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/becfe481-885b-4302-9265-d9de06b4a2c1-scripts\") pod \"becfe481-885b-4302-9265-d9de06b4a2c1\" (UID: \"becfe481-885b-4302-9265-d9de06b4a2c1\") "
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.616321 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/becfe481-885b-4302-9265-d9de06b4a2c1-config-data\") pod \"becfe481-885b-4302-9265-d9de06b4a2c1\" (UID: \"becfe481-885b-4302-9265-d9de06b4a2c1\") "
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.616396 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"becfe481-885b-4302-9265-d9de06b4a2c1\" (UID: \"becfe481-885b-4302-9265-d9de06b4a2c1\") "
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.616466 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/becfe481-885b-4302-9265-d9de06b4a2c1-combined-ca-bundle\") pod \"becfe481-885b-4302-9265-d9de06b4a2c1\" (UID: \"becfe481-885b-4302-9265-d9de06b4a2c1\") "
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.616590 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/becfe481-885b-4302-9265-d9de06b4a2c1-httpd-run\") pod \"becfe481-885b-4302-9265-d9de06b4a2c1\" (UID: \"becfe481-885b-4302-9265-d9de06b4a2c1\") "
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.616612 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/becfe481-885b-4302-9265-d9de06b4a2c1-internal-tls-certs\") pod \"becfe481-885b-4302-9265-d9de06b4a2c1\" (UID: \"becfe481-885b-4302-9265-d9de06b4a2c1\") "
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.616640 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/becfe481-885b-4302-9265-d9de06b4a2c1-logs\") pod \"becfe481-885b-4302-9265-d9de06b4a2c1\" (UID: \"becfe481-885b-4302-9265-d9de06b4a2c1\") "
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.616668 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pc4fg\" (UniqueName: \"kubernetes.io/projected/becfe481-885b-4302-9265-d9de06b4a2c1-kube-api-access-pc4fg\") pod \"becfe481-885b-4302-9265-d9de06b4a2c1\" (UID: \"becfe481-885b-4302-9265-d9de06b4a2c1\") "
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.618017 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/becfe481-885b-4302-9265-d9de06b4a2c1-logs" (OuterVolumeSpecName: "logs") pod "becfe481-885b-4302-9265-d9de06b4a2c1" (UID: "becfe481-885b-4302-9265-d9de06b4a2c1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.621503 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/becfe481-885b-4302-9265-d9de06b4a2c1-scripts" (OuterVolumeSpecName: "scripts") pod "becfe481-885b-4302-9265-d9de06b4a2c1" (UID: "becfe481-885b-4302-9265-d9de06b4a2c1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.623802 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/becfe481-885b-4302-9265-d9de06b4a2c1-kube-api-access-pc4fg" (OuterVolumeSpecName: "kube-api-access-pc4fg") pod "becfe481-885b-4302-9265-d9de06b4a2c1" (UID: "becfe481-885b-4302-9265-d9de06b4a2c1"). InnerVolumeSpecName "kube-api-access-pc4fg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.631473 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "becfe481-885b-4302-9265-d9de06b4a2c1" (UID: "becfe481-885b-4302-9265-d9de06b4a2c1"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.631818 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/becfe481-885b-4302-9265-d9de06b4a2c1-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "becfe481-885b-4302-9265-d9de06b4a2c1" (UID: "becfe481-885b-4302-9265-d9de06b4a2c1"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.636665 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="becfe481-885b-4302-9265-d9de06b4a2c1" containerName="glance-log"
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.636778 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b" containerName="glance-log"
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.636824 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b" containerName="glance-httpd"
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.637839 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.637937 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.644160 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.644376 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.654782 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/becfe481-885b-4302-9265-d9de06b4a2c1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "becfe481-885b-4302-9265-d9de06b4a2c1" (UID: "becfe481-885b-4302-9265-d9de06b4a2c1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.662874 4793 scope.go:117] "RemoveContainer" containerID="ed6b28d9b1e5b2356437341d68d927ddb9d73b4befe056962748909c24e27aa3"
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.712795 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/becfe481-885b-4302-9265-d9de06b4a2c1-config-data" (OuterVolumeSpecName: "config-data") pod "becfe481-885b-4302-9265-d9de06b4a2c1" (UID: "becfe481-885b-4302-9265-d9de06b4a2c1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.714562 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/becfe481-885b-4302-9265-d9de06b4a2c1-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "becfe481-885b-4302-9265-d9de06b4a2c1" (UID: "becfe481-885b-4302-9265-d9de06b4a2c1"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.719562 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvbw5\" (UniqueName: \"kubernetes.io/projected/6b181bfd-3989-4ba9-80b9-9b574ef0de20-kube-api-access-jvbw5\") pod \"glance-default-external-api-0\" (UID: \"6b181bfd-3989-4ba9-80b9-9b574ef0de20\") " pod="openstack/glance-default-external-api-0"
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.719622 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6b181bfd-3989-4ba9-80b9-9b574ef0de20-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6b181bfd-3989-4ba9-80b9-9b574ef0de20\") " pod="openstack/glance-default-external-api-0"
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.719642 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b181bfd-3989-4ba9-80b9-9b574ef0de20-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6b181bfd-3989-4ba9-80b9-9b574ef0de20\") " pod="openstack/glance-default-external-api-0"
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.719697 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"6b181bfd-3989-4ba9-80b9-9b574ef0de20\") " pod="openstack/glance-default-external-api-0"
Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.719724 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b181bfd-3989-4ba9-80b9-9b574ef0de20-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: 
\"6b181bfd-3989-4ba9-80b9-9b574ef0de20\") " pod="openstack/glance-default-external-api-0" Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.719740 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b181bfd-3989-4ba9-80b9-9b574ef0de20-scripts\") pod \"glance-default-external-api-0\" (UID: \"6b181bfd-3989-4ba9-80b9-9b574ef0de20\") " pod="openstack/glance-default-external-api-0" Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.719804 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b181bfd-3989-4ba9-80b9-9b574ef0de20-config-data\") pod \"glance-default-external-api-0\" (UID: \"6b181bfd-3989-4ba9-80b9-9b574ef0de20\") " pod="openstack/glance-default-external-api-0" Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.719901 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b181bfd-3989-4ba9-80b9-9b574ef0de20-logs\") pod \"glance-default-external-api-0\" (UID: \"6b181bfd-3989-4ba9-80b9-9b574ef0de20\") " pod="openstack/glance-default-external-api-0" Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.719983 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/becfe481-885b-4302-9265-d9de06b4a2c1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.719995 4793 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/becfe481-885b-4302-9265-d9de06b4a2c1-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.720004 4793 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/becfe481-885b-4302-9265-d9de06b4a2c1-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.720013 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/becfe481-885b-4302-9265-d9de06b4a2c1-logs\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.720022 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pc4fg\" (UniqueName: \"kubernetes.io/projected/becfe481-885b-4302-9265-d9de06b4a2c1-kube-api-access-pc4fg\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.720031 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/becfe481-885b-4302-9265-d9de06b4a2c1-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.720041 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/becfe481-885b-4302-9265-d9de06b4a2c1-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.720059 4793 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.731195 4793 scope.go:117] "RemoveContainer" containerID="e796ef1d0953326391a4b31a5b972750c4d7a3708a83d401aecc19347b17e540" Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.770823 4793 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.821839 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 20:27:20 
crc kubenswrapper[4793]: I0217 20:27:20.822049 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6b181bfd-3989-4ba9-80b9-9b574ef0de20-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6b181bfd-3989-4ba9-80b9-9b574ef0de20\") " pod="openstack/glance-default-external-api-0" Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.822083 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b181bfd-3989-4ba9-80b9-9b574ef0de20-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6b181bfd-3989-4ba9-80b9-9b574ef0de20\") " pod="openstack/glance-default-external-api-0" Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.822121 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"6b181bfd-3989-4ba9-80b9-9b574ef0de20\") " pod="openstack/glance-default-external-api-0" Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.822150 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b181bfd-3989-4ba9-80b9-9b574ef0de20-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6b181bfd-3989-4ba9-80b9-9b574ef0de20\") " pod="openstack/glance-default-external-api-0" Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.822175 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b181bfd-3989-4ba9-80b9-9b574ef0de20-scripts\") pod \"glance-default-external-api-0\" (UID: \"6b181bfd-3989-4ba9-80b9-9b574ef0de20\") " pod="openstack/glance-default-external-api-0" Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.822232 4793 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b181bfd-3989-4ba9-80b9-9b574ef0de20-config-data\") pod \"glance-default-external-api-0\" (UID: \"6b181bfd-3989-4ba9-80b9-9b574ef0de20\") " pod="openstack/glance-default-external-api-0" Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.822283 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b181bfd-3989-4ba9-80b9-9b574ef0de20-logs\") pod \"glance-default-external-api-0\" (UID: \"6b181bfd-3989-4ba9-80b9-9b574ef0de20\") " pod="openstack/glance-default-external-api-0" Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.822322 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvbw5\" (UniqueName: \"kubernetes.io/projected/6b181bfd-3989-4ba9-80b9-9b574ef0de20-kube-api-access-jvbw5\") pod \"glance-default-external-api-0\" (UID: \"6b181bfd-3989-4ba9-80b9-9b574ef0de20\") " pod="openstack/glance-default-external-api-0" Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.822393 4793 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.823093 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6b181bfd-3989-4ba9-80b9-9b574ef0de20-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6b181bfd-3989-4ba9-80b9-9b574ef0de20\") " pod="openstack/glance-default-external-api-0" Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.825568 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"6b181bfd-3989-4ba9-80b9-9b574ef0de20\") 
device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.826227 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b181bfd-3989-4ba9-80b9-9b574ef0de20-logs\") pod \"glance-default-external-api-0\" (UID: \"6b181bfd-3989-4ba9-80b9-9b574ef0de20\") " pod="openstack/glance-default-external-api-0" Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.833016 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b181bfd-3989-4ba9-80b9-9b574ef0de20-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6b181bfd-3989-4ba9-80b9-9b574ef0de20\") " pod="openstack/glance-default-external-api-0" Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.836585 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b181bfd-3989-4ba9-80b9-9b574ef0de20-config-data\") pod \"glance-default-external-api-0\" (UID: \"6b181bfd-3989-4ba9-80b9-9b574ef0de20\") " pod="openstack/glance-default-external-api-0" Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.849100 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b181bfd-3989-4ba9-80b9-9b574ef0de20-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6b181bfd-3989-4ba9-80b9-9b574ef0de20\") " pod="openstack/glance-default-external-api-0" Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.850178 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvbw5\" (UniqueName: \"kubernetes.io/projected/6b181bfd-3989-4ba9-80b9-9b574ef0de20-kube-api-access-jvbw5\") pod \"glance-default-external-api-0\" (UID: \"6b181bfd-3989-4ba9-80b9-9b574ef0de20\") " pod="openstack/glance-default-external-api-0" Feb 
17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.860456 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b181bfd-3989-4ba9-80b9-9b574ef0de20-scripts\") pod \"glance-default-external-api-0\" (UID: \"6b181bfd-3989-4ba9-80b9-9b574ef0de20\") " pod="openstack/glance-default-external-api-0" Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.880759 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.898364 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.900429 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.903233 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.903807 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.917508 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.925868 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"6b181bfd-3989-4ba9-80b9-9b574ef0de20\") " pod="openstack/glance-default-external-api-0" Feb 17 20:27:20 crc kubenswrapper[4793]: I0217 20:27:20.990436 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 20:27:21 crc kubenswrapper[4793]: I0217 20:27:21.027141 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74d89443-6b8f-4757-951b-9b532755c158-logs\") pod \"glance-default-internal-api-0\" (UID: \"74d89443-6b8f-4757-951b-9b532755c158\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:27:21 crc kubenswrapper[4793]: I0217 20:27:21.027233 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74d89443-6b8f-4757-951b-9b532755c158-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"74d89443-6b8f-4757-951b-9b532755c158\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:27:21 crc kubenswrapper[4793]: I0217 20:27:21.027277 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74d89443-6b8f-4757-951b-9b532755c158-config-data\") pod \"glance-default-internal-api-0\" (UID: \"74d89443-6b8f-4757-951b-9b532755c158\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:27:21 crc kubenswrapper[4793]: I0217 20:27:21.027297 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"74d89443-6b8f-4757-951b-9b532755c158\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:27:21 crc kubenswrapper[4793]: I0217 20:27:21.027337 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/74d89443-6b8f-4757-951b-9b532755c158-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: 
\"74d89443-6b8f-4757-951b-9b532755c158\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:27:21 crc kubenswrapper[4793]: I0217 20:27:21.027398 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/74d89443-6b8f-4757-951b-9b532755c158-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"74d89443-6b8f-4757-951b-9b532755c158\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:27:21 crc kubenswrapper[4793]: I0217 20:27:21.027461 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74d89443-6b8f-4757-951b-9b532755c158-scripts\") pod \"glance-default-internal-api-0\" (UID: \"74d89443-6b8f-4757-951b-9b532755c158\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:27:21 crc kubenswrapper[4793]: I0217 20:27:21.027509 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsj2h\" (UniqueName: \"kubernetes.io/projected/74d89443-6b8f-4757-951b-9b532755c158-kube-api-access-qsj2h\") pod \"glance-default-internal-api-0\" (UID: \"74d89443-6b8f-4757-951b-9b532755c158\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:27:21 crc kubenswrapper[4793]: I0217 20:27:21.129060 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74d89443-6b8f-4757-951b-9b532755c158-config-data\") pod \"glance-default-internal-api-0\" (UID: \"74d89443-6b8f-4757-951b-9b532755c158\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:27:21 crc kubenswrapper[4793]: I0217 20:27:21.129115 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"74d89443-6b8f-4757-951b-9b532755c158\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:27:21 crc kubenswrapper[4793]: I0217 20:27:21.129142 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/74d89443-6b8f-4757-951b-9b532755c158-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"74d89443-6b8f-4757-951b-9b532755c158\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:27:21 crc kubenswrapper[4793]: I0217 20:27:21.129218 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/74d89443-6b8f-4757-951b-9b532755c158-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"74d89443-6b8f-4757-951b-9b532755c158\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:27:21 crc kubenswrapper[4793]: I0217 20:27:21.129276 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74d89443-6b8f-4757-951b-9b532755c158-scripts\") pod \"glance-default-internal-api-0\" (UID: \"74d89443-6b8f-4757-951b-9b532755c158\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:27:21 crc kubenswrapper[4793]: I0217 20:27:21.129320 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsj2h\" (UniqueName: \"kubernetes.io/projected/74d89443-6b8f-4757-951b-9b532755c158-kube-api-access-qsj2h\") pod \"glance-default-internal-api-0\" (UID: \"74d89443-6b8f-4757-951b-9b532755c158\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:27:21 crc kubenswrapper[4793]: I0217 20:27:21.129371 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74d89443-6b8f-4757-951b-9b532755c158-logs\") pod \"glance-default-internal-api-0\" (UID: \"74d89443-6b8f-4757-951b-9b532755c158\") " 
pod="openstack/glance-default-internal-api-0" Feb 17 20:27:21 crc kubenswrapper[4793]: I0217 20:27:21.129417 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74d89443-6b8f-4757-951b-9b532755c158-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"74d89443-6b8f-4757-951b-9b532755c158\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:27:21 crc kubenswrapper[4793]: I0217 20:27:21.130452 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/74d89443-6b8f-4757-951b-9b532755c158-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"74d89443-6b8f-4757-951b-9b532755c158\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:27:21 crc kubenswrapper[4793]: I0217 20:27:21.130466 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"74d89443-6b8f-4757-951b-9b532755c158\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Feb 17 20:27:21 crc kubenswrapper[4793]: I0217 20:27:21.135424 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74d89443-6b8f-4757-951b-9b532755c158-logs\") pod \"glance-default-internal-api-0\" (UID: \"74d89443-6b8f-4757-951b-9b532755c158\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:27:21 crc kubenswrapper[4793]: I0217 20:27:21.135571 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74d89443-6b8f-4757-951b-9b532755c158-scripts\") pod \"glance-default-internal-api-0\" (UID: \"74d89443-6b8f-4757-951b-9b532755c158\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:27:21 crc kubenswrapper[4793]: I0217 
20:27:21.136218 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/74d89443-6b8f-4757-951b-9b532755c158-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"74d89443-6b8f-4757-951b-9b532755c158\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:27:21 crc kubenswrapper[4793]: I0217 20:27:21.137563 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-75d6fd885d-fw6ln" Feb 17 20:27:21 crc kubenswrapper[4793]: I0217 20:27:21.138442 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74d89443-6b8f-4757-951b-9b532755c158-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"74d89443-6b8f-4757-951b-9b532755c158\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:27:21 crc kubenswrapper[4793]: I0217 20:27:21.139154 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74d89443-6b8f-4757-951b-9b532755c158-config-data\") pod \"glance-default-internal-api-0\" (UID: \"74d89443-6b8f-4757-951b-9b532755c158\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:27:21 crc kubenswrapper[4793]: I0217 20:27:21.139332 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-75d6fd885d-fw6ln" Feb 17 20:27:21 crc kubenswrapper[4793]: I0217 20:27:21.165554 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsj2h\" (UniqueName: \"kubernetes.io/projected/74d89443-6b8f-4757-951b-9b532755c158-kube-api-access-qsj2h\") pod \"glance-default-internal-api-0\" (UID: \"74d89443-6b8f-4757-951b-9b532755c158\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:27:21 crc kubenswrapper[4793]: I0217 20:27:21.208173 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/horizon-75c8b5cf48-t8jmz" Feb 17 20:27:21 crc kubenswrapper[4793]: I0217 20:27:21.209190 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-75c8b5cf48-t8jmz" Feb 17 20:27:21 crc kubenswrapper[4793]: I0217 20:27:21.215406 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"74d89443-6b8f-4757-951b-9b532755c158\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:27:21 crc kubenswrapper[4793]: I0217 20:27:21.225086 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 20:27:21 crc kubenswrapper[4793]: I0217 20:27:21.501540 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 17 20:27:21 crc kubenswrapper[4793]: I0217 20:27:21.525910 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerStarted","Data":"0b27f6badb54e605b7fa30990b1315a8e69fe8085dab295339fcc3730365d0d4"} Feb 17 20:27:21 crc kubenswrapper[4793]: I0217 20:27:21.529548 4793 generic.go:334] "Generic (PLEG): container finished" podID="06ecbc8e-aa9f-4025-883d-65e4c000d986" containerID="2a5aa3180d15d7a5954b0adfcb55ef2bc476ceb2a03324d11ed3931fb408ef25" exitCode=1 Feb 17 20:27:21 crc kubenswrapper[4793]: I0217 20:27:21.529599 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"06ecbc8e-aa9f-4025-883d-65e4c000d986","Type":"ContainerDied","Data":"2a5aa3180d15d7a5954b0adfcb55ef2bc476ceb2a03324d11ed3931fb408ef25"} Feb 17 20:27:21 crc kubenswrapper[4793]: I0217 20:27:21.529965 4793 scope.go:117] "RemoveContainer" 
containerID="2a5aa3180d15d7a5954b0adfcb55ef2bc476ceb2a03324d11ed3931fb408ef25" Feb 17 20:27:21 crc kubenswrapper[4793]: I0217 20:27:21.534551 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8445cc88bf-cdsst" event={"ID":"24893fdf-f7bb-4be7-b5f9-edde49088bbe","Type":"ContainerStarted","Data":"76b4aaa5ed1a0e6bb4740f9b7e2beadf425438ab57e7cf9c09065806132b1957"} Feb 17 20:27:21 crc kubenswrapper[4793]: I0217 20:27:21.538857 4793 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 20:27:21 crc kubenswrapper[4793]: I0217 20:27:21.594536 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="becfe481-885b-4302-9265-d9de06b4a2c1" path="/var/lib/kubelet/pods/becfe481-885b-4302-9265-d9de06b4a2c1/volumes" Feb 17 20:27:21 crc kubenswrapper[4793]: I0217 20:27:21.595400 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b" path="/var/lib/kubelet/pods/c978d1c1-b49f-4d1d-9a7a-38ce96edbb0b/volumes" Feb 17 20:27:21 crc kubenswrapper[4793]: I0217 20:27:21.736348 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 20:27:21 crc kubenswrapper[4793]: W0217 20:27:21.788839 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b181bfd_3989_4ba9_80b9_9b574ef0de20.slice/crio-f3211b4d13154e47a34665f25c5599cc379e9c8fbb952b3d65b072de67a4d1bf WatchSource:0}: Error finding container f3211b4d13154e47a34665f25c5599cc379e9c8fbb952b3d65b072de67a4d1bf: Status 404 returned error can't find the container with id f3211b4d13154e47a34665f25c5599cc379e9c8fbb952b3d65b072de67a4d1bf Feb 17 20:27:21 crc kubenswrapper[4793]: I0217 20:27:21.952631 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 17 20:27:21 crc kubenswrapper[4793]: I0217 20:27:21.963239 4793 
kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 20:27:21 crc kubenswrapper[4793]: I0217 20:27:21.963290 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 20:27:21 crc kubenswrapper[4793]: I0217 20:27:21.963302 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 20:27:21 crc kubenswrapper[4793]: I0217 20:27:21.963321 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 17 20:27:21 crc kubenswrapper[4793]: I0217 20:27:21.975031 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 20:27:22 crc kubenswrapper[4793]: I0217 20:27:22.004742 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Feb 17 20:27:22 crc kubenswrapper[4793]: E0217 20:27:22.088955 4793 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode573ed33_28a6_4a91_a8ca_e5d1c87b2f60.slice\": RecentStats: unable to find data in memory cache]" Feb 17 20:27:22 crc kubenswrapper[4793]: I0217 20:27:22.568802 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8445cc88bf-cdsst" event={"ID":"24893fdf-f7bb-4be7-b5f9-edde49088bbe","Type":"ContainerStarted","Data":"df52dd790aa5a28c2b26fd4fc121472751ef6e36b499fdc38d69221bb3b2b15a"} Feb 17 20:27:22 crc kubenswrapper[4793]: I0217 20:27:22.569211 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-8445cc88bf-cdsst" Feb 17 20:27:22 crc kubenswrapper[4793]: I0217 20:27:22.571307 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"74d89443-6b8f-4757-951b-9b532755c158","Type":"ContainerStarted","Data":"0fcff6565647b966caef15546aae54e9dd26f2796efbd52296731ba238e6ce3a"} Feb 17 20:27:22 crc kubenswrapper[4793]: I0217 20:27:22.573645 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6b181bfd-3989-4ba9-80b9-9b574ef0de20","Type":"ContainerStarted","Data":"f3211b4d13154e47a34665f25c5599cc379e9c8fbb952b3d65b072de67a4d1bf"} Feb 17 20:27:22 crc kubenswrapper[4793]: I0217 20:27:22.573851 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 17 20:27:22 crc kubenswrapper[4793]: I0217 20:27:22.590251 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-8445cc88bf-cdsst" podStartSLOduration=4.590231295 podStartE2EDuration="4.590231295s" podCreationTimestamp="2026-02-17 20:27:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:27:22.585186849 +0000 UTC m=+1117.876885160" watchObservedRunningTime="2026-02-17 20:27:22.590231295 +0000 UTC m=+1117.881929606" Feb 17 20:27:22 crc kubenswrapper[4793]: I0217 20:27:22.630588 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Feb 17 20:27:22 crc kubenswrapper[4793]: I0217 20:27:22.766709 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 17 20:27:22 crc kubenswrapper[4793]: I0217 20:27:22.771302 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 17 20:27:23 crc kubenswrapper[4793]: I0217 20:27:23.588795 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" 
event={"ID":"06ecbc8e-aa9f-4025-883d-65e4c000d986","Type":"ContainerStarted","Data":"a9952a3fe14e8b345f3d22c7384c83944f30d42e627058ed70589fdf0cc7470e"} Feb 17 20:27:24 crc kubenswrapper[4793]: I0217 20:27:24.596616 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="b958eb77-11cb-4049-a3db-11e838dfa0f5" containerName="watcher-decision-engine" containerID="cri-o://7dd5688e00325c9b3e72f3327d1f33c54c74673f711f46166b463ceda329eb3b" gracePeriod=30 Feb 17 20:27:25 crc kubenswrapper[4793]: I0217 20:27:25.613068 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"74d89443-6b8f-4757-951b-9b532755c158","Type":"ContainerStarted","Data":"e16209bb0f333729733a08fb8e69bb0af0da54c6b75bd74aec90c55f896f4bb7"} Feb 17 20:27:25 crc kubenswrapper[4793]: I0217 20:27:25.615922 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6b181bfd-3989-4ba9-80b9-9b574ef0de20","Type":"ContainerStarted","Data":"ddf5a568baf3d455724f31b21d678360d213ce22a3605234ffa2c1973876fca8"} Feb 17 20:27:26 crc kubenswrapper[4793]: I0217 20:27:26.072043 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cb4d97cd7-bl8nc" Feb 17 20:27:26 crc kubenswrapper[4793]: I0217 20:27:26.127846 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59df4fdd5-c42m5"] Feb 17 20:27:26 crc kubenswrapper[4793]: I0217 20:27:26.128068 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59df4fdd5-c42m5" podUID="0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69" containerName="dnsmasq-dns" containerID="cri-o://23618709e301f46657a386728873c800ad227f618b85cd69741013e8f4837dce" gracePeriod=10 Feb 17 20:27:26 crc kubenswrapper[4793]: I0217 20:27:26.499088 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/watcher-api-0" Feb 17 20:27:26 crc kubenswrapper[4793]: I0217 20:27:26.515537 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Feb 17 20:27:26 crc kubenswrapper[4793]: I0217 20:27:26.651772 4793 generic.go:334] "Generic (PLEG): container finished" podID="06ecbc8e-aa9f-4025-883d-65e4c000d986" containerID="a9952a3fe14e8b345f3d22c7384c83944f30d42e627058ed70589fdf0cc7470e" exitCode=1 Feb 17 20:27:26 crc kubenswrapper[4793]: I0217 20:27:26.651827 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"06ecbc8e-aa9f-4025-883d-65e4c000d986","Type":"ContainerDied","Data":"a9952a3fe14e8b345f3d22c7384c83944f30d42e627058ed70589fdf0cc7470e"} Feb 17 20:27:26 crc kubenswrapper[4793]: I0217 20:27:26.651892 4793 scope.go:117] "RemoveContainer" containerID="2a5aa3180d15d7a5954b0adfcb55ef2bc476ceb2a03324d11ed3931fb408ef25" Feb 17 20:27:26 crc kubenswrapper[4793]: I0217 20:27:26.652446 4793 scope.go:117] "RemoveContainer" containerID="a9952a3fe14e8b345f3d22c7384c83944f30d42e627058ed70589fdf0cc7470e" Feb 17 20:27:26 crc kubenswrapper[4793]: E0217 20:27:26.652741 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:27:26 crc kubenswrapper[4793]: I0217 20:27:26.678188 4793 generic.go:334] "Generic (PLEG): container finished" podID="0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69" containerID="23618709e301f46657a386728873c800ad227f618b85cd69741013e8f4837dce" exitCode=0 Feb 17 20:27:26 crc kubenswrapper[4793]: I0217 20:27:26.678262 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59df4fdd5-c42m5" 
event={"ID":"0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69","Type":"ContainerDied","Data":"23618709e301f46657a386728873c800ad227f618b85cd69741013e8f4837dce"} Feb 17 20:27:26 crc kubenswrapper[4793]: I0217 20:27:26.681386 4793 generic.go:334] "Generic (PLEG): container finished" podID="39911c8f-ceae-41a0-a891-fc8677d87ec3" containerID="b704a648e4275076bd6ca3cb60d2914d8d744c17b710e7a0cdb2d01191a2fcb7" exitCode=0 Feb 17 20:27:26 crc kubenswrapper[4793]: I0217 20:27:26.681459 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pwpbc" event={"ID":"39911c8f-ceae-41a0-a891-fc8677d87ec3","Type":"ContainerDied","Data":"b704a648e4275076bd6ca3cb60d2914d8d744c17b710e7a0cdb2d01191a2fcb7"} Feb 17 20:27:26 crc kubenswrapper[4793]: I0217 20:27:26.686100 4793 generic.go:334] "Generic (PLEG): container finished" podID="0003e927-ea9c-49fb-83e9-5bfc8cd90f46" containerID="dd965f0e963a528411b5ef5e6c7f2d77924a9bcb0dead9167736a1c9314e01df" exitCode=0 Feb 17 20:27:26 crc kubenswrapper[4793]: I0217 20:27:26.686180 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-t8llz" event={"ID":"0003e927-ea9c-49fb-83e9-5bfc8cd90f46","Type":"ContainerDied","Data":"dd965f0e963a528411b5ef5e6c7f2d77924a9bcb0dead9167736a1c9314e01df"} Feb 17 20:27:26 crc kubenswrapper[4793]: I0217 20:27:26.690187 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 17 20:27:26 crc kubenswrapper[4793]: I0217 20:27:26.965777 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 17 20:27:27 crc kubenswrapper[4793]: I0217 20:27:27.715613 4793 generic.go:334] "Generic (PLEG): container finished" podID="b958eb77-11cb-4049-a3db-11e838dfa0f5" containerID="7dd5688e00325c9b3e72f3327d1f33c54c74673f711f46166b463ceda329eb3b" exitCode=1 Feb 17 20:27:27 crc kubenswrapper[4793]: I0217 20:27:27.715751 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/watcher-decision-engine-0" event={"ID":"b958eb77-11cb-4049-a3db-11e838dfa0f5","Type":"ContainerDied","Data":"7dd5688e00325c9b3e72f3327d1f33c54c74673f711f46166b463ceda329eb3b"} Feb 17 20:27:27 crc kubenswrapper[4793]: I0217 20:27:27.716481 4793 scope.go:117] "RemoveContainer" containerID="a9952a3fe14e8b345f3d22c7384c83944f30d42e627058ed70589fdf0cc7470e" Feb 17 20:27:27 crc kubenswrapper[4793]: E0217 20:27:27.716677 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:27:28 crc kubenswrapper[4793]: I0217 20:27:28.730426 4793 generic.go:334] "Generic (PLEG): container finished" podID="6a587d11-ed16-44ac-a36e-dfd7a7ed3f6a" containerID="53bfef70fd083bfacea3340a381d133e63ce4bf88d9fbf8744476a1e69af45ea" exitCode=0 Feb 17 20:27:28 crc kubenswrapper[4793]: I0217 20:27:28.730499 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-v8sgz" event={"ID":"6a587d11-ed16-44ac-a36e-dfd7a7ed3f6a","Type":"ContainerDied","Data":"53bfef70fd083bfacea3340a381d133e63ce4bf88d9fbf8744476a1e69af45ea"} Feb 17 20:27:29 crc kubenswrapper[4793]: I0217 20:27:29.197844 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 17 20:27:29 crc kubenswrapper[4793]: I0217 20:27:29.198075 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="5fdf7fdb-aef6-4c17-a051-ba5320d07ea7" containerName="watcher-api-log" containerID="cri-o://a16e7df12feca4f3c68f5ffb3e97b07e1e608fbb3fd96a30e473548a682b7f54" gracePeriod=30 Feb 17 20:27:29 crc kubenswrapper[4793]: I0217 20:27:29.198108 4793 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/watcher-api-0" podUID="5fdf7fdb-aef6-4c17-a051-ba5320d07ea7" containerName="watcher-api" containerID="cri-o://b07676e7a816ba57b9e540e707a88f1a76b1770e5b8cdb635c78b527d700c311" gracePeriod=30 Feb 17 20:27:29 crc kubenswrapper[4793]: I0217 20:27:29.746220 4793 generic.go:334] "Generic (PLEG): container finished" podID="5fdf7fdb-aef6-4c17-a051-ba5320d07ea7" containerID="a16e7df12feca4f3c68f5ffb3e97b07e1e608fbb3fd96a30e473548a682b7f54" exitCode=143 Feb 17 20:27:29 crc kubenswrapper[4793]: I0217 20:27:29.746306 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"5fdf7fdb-aef6-4c17-a051-ba5320d07ea7","Type":"ContainerDied","Data":"a16e7df12feca4f3c68f5ffb3e97b07e1e608fbb3fd96a30e473548a682b7f54"} Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.075443 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-t8llz" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.110003 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59df4fdd5-c42m5" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.111130 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-pwpbc" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.198928 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69-dns-svc\") pod \"0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69\" (UID: \"0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69\") " Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.198995 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0003e927-ea9c-49fb-83e9-5bfc8cd90f46-combined-ca-bundle\") pod \"0003e927-ea9c-49fb-83e9-5bfc8cd90f46\" (UID: \"0003e927-ea9c-49fb-83e9-5bfc8cd90f46\") " Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.199026 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0003e927-ea9c-49fb-83e9-5bfc8cd90f46-config-data\") pod \"0003e927-ea9c-49fb-83e9-5bfc8cd90f46\" (UID: \"0003e927-ea9c-49fb-83e9-5bfc8cd90f46\") " Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.199088 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69-dns-swift-storage-0\") pod \"0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69\" (UID: \"0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69\") " Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.199119 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/39911c8f-ceae-41a0-a891-fc8677d87ec3-credential-keys\") pod \"39911c8f-ceae-41a0-a891-fc8677d87ec3\" (UID: \"39911c8f-ceae-41a0-a891-fc8677d87ec3\") " Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.199143 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/39911c8f-ceae-41a0-a891-fc8677d87ec3-scripts\") pod \"39911c8f-ceae-41a0-a891-fc8677d87ec3\" (UID: \"39911c8f-ceae-41a0-a891-fc8677d87ec3\") " Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.199167 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7hpm\" (UniqueName: \"kubernetes.io/projected/0003e927-ea9c-49fb-83e9-5bfc8cd90f46-kube-api-access-q7hpm\") pod \"0003e927-ea9c-49fb-83e9-5bfc8cd90f46\" (UID: \"0003e927-ea9c-49fb-83e9-5bfc8cd90f46\") " Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.199232 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0003e927-ea9c-49fb-83e9-5bfc8cd90f46-logs\") pod \"0003e927-ea9c-49fb-83e9-5bfc8cd90f46\" (UID: \"0003e927-ea9c-49fb-83e9-5bfc8cd90f46\") " Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.199305 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0003e927-ea9c-49fb-83e9-5bfc8cd90f46-scripts\") pod \"0003e927-ea9c-49fb-83e9-5bfc8cd90f46\" (UID: \"0003e927-ea9c-49fb-83e9-5bfc8cd90f46\") " Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.199375 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69-config\") pod \"0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69\" (UID: \"0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69\") " Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.199400 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69-ovsdbserver-nb\") pod \"0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69\" (UID: \"0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69\") " Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.199433 4793 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-rspn8\" (UniqueName: \"kubernetes.io/projected/39911c8f-ceae-41a0-a891-fc8677d87ec3-kube-api-access-rspn8\") pod \"39911c8f-ceae-41a0-a891-fc8677d87ec3\" (UID: \"39911c8f-ceae-41a0-a891-fc8677d87ec3\") " Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.199459 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/39911c8f-ceae-41a0-a891-fc8677d87ec3-fernet-keys\") pod \"39911c8f-ceae-41a0-a891-fc8677d87ec3\" (UID: \"39911c8f-ceae-41a0-a891-fc8677d87ec3\") " Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.199504 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lg8t5\" (UniqueName: \"kubernetes.io/projected/0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69-kube-api-access-lg8t5\") pod \"0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69\" (UID: \"0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69\") " Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.199520 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39911c8f-ceae-41a0-a891-fc8677d87ec3-combined-ca-bundle\") pod \"39911c8f-ceae-41a0-a891-fc8677d87ec3\" (UID: \"39911c8f-ceae-41a0-a891-fc8677d87ec3\") " Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.199573 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39911c8f-ceae-41a0-a891-fc8677d87ec3-config-data\") pod \"39911c8f-ceae-41a0-a891-fc8677d87ec3\" (UID: \"39911c8f-ceae-41a0-a891-fc8677d87ec3\") " Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.199604 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69-ovsdbserver-sb\") pod \"0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69\" 
(UID: \"0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69\") " Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.234617 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0003e927-ea9c-49fb-83e9-5bfc8cd90f46-logs" (OuterVolumeSpecName: "logs") pod "0003e927-ea9c-49fb-83e9-5bfc8cd90f46" (UID: "0003e927-ea9c-49fb-83e9-5bfc8cd90f46"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.265872 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39911c8f-ceae-41a0-a891-fc8677d87ec3-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "39911c8f-ceae-41a0-a891-fc8677d87ec3" (UID: "39911c8f-ceae-41a0-a891-fc8677d87ec3"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.278982 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39911c8f-ceae-41a0-a891-fc8677d87ec3-kube-api-access-rspn8" (OuterVolumeSpecName: "kube-api-access-rspn8") pod "39911c8f-ceae-41a0-a891-fc8677d87ec3" (UID: "39911c8f-ceae-41a0-a891-fc8677d87ec3"). InnerVolumeSpecName "kube-api-access-rspn8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.293037 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39911c8f-ceae-41a0-a891-fc8677d87ec3-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "39911c8f-ceae-41a0-a891-fc8677d87ec3" (UID: "39911c8f-ceae-41a0-a891-fc8677d87ec3"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.293942 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0003e927-ea9c-49fb-83e9-5bfc8cd90f46-kube-api-access-q7hpm" (OuterVolumeSpecName: "kube-api-access-q7hpm") pod "0003e927-ea9c-49fb-83e9-5bfc8cd90f46" (UID: "0003e927-ea9c-49fb-83e9-5bfc8cd90f46"). InnerVolumeSpecName "kube-api-access-q7hpm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.294020 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39911c8f-ceae-41a0-a891-fc8677d87ec3-scripts" (OuterVolumeSpecName: "scripts") pod "39911c8f-ceae-41a0-a891-fc8677d87ec3" (UID: "39911c8f-ceae-41a0-a891-fc8677d87ec3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.306079 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0003e927-ea9c-49fb-83e9-5bfc8cd90f46-logs\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.306104 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rspn8\" (UniqueName: \"kubernetes.io/projected/39911c8f-ceae-41a0-a891-fc8677d87ec3-kube-api-access-rspn8\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.306114 4793 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/39911c8f-ceae-41a0-a891-fc8677d87ec3-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.306130 4793 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/39911c8f-ceae-41a0-a891-fc8677d87ec3-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 17 
20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.306138 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39911c8f-ceae-41a0-a891-fc8677d87ec3-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.306146 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7hpm\" (UniqueName: \"kubernetes.io/projected/0003e927-ea9c-49fb-83e9-5bfc8cd90f46-kube-api-access-q7hpm\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.342792 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0003e927-ea9c-49fb-83e9-5bfc8cd90f46-scripts" (OuterVolumeSpecName: "scripts") pod "0003e927-ea9c-49fb-83e9-5bfc8cd90f46" (UID: "0003e927-ea9c-49fb-83e9-5bfc8cd90f46"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.346866 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69-kube-api-access-lg8t5" (OuterVolumeSpecName: "kube-api-access-lg8t5") pod "0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69" (UID: "0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69"). InnerVolumeSpecName "kube-api-access-lg8t5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.347516 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.348645 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0003e927-ea9c-49fb-83e9-5bfc8cd90f46-config-data" (OuterVolumeSpecName: "config-data") pod "0003e927-ea9c-49fb-83e9-5bfc8cd90f46" (UID: "0003e927-ea9c-49fb-83e9-5bfc8cd90f46"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.379900 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0003e927-ea9c-49fb-83e9-5bfc8cd90f46-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0003e927-ea9c-49fb-83e9-5bfc8cd90f46" (UID: "0003e927-ea9c-49fb-83e9-5bfc8cd90f46"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.407642 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b958eb77-11cb-4049-a3db-11e838dfa0f5-combined-ca-bundle\") pod \"b958eb77-11cb-4049-a3db-11e838dfa0f5\" (UID: \"b958eb77-11cb-4049-a3db-11e838dfa0f5\") " Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.408327 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnzth\" (UniqueName: \"kubernetes.io/projected/b958eb77-11cb-4049-a3db-11e838dfa0f5-kube-api-access-gnzth\") pod \"b958eb77-11cb-4049-a3db-11e838dfa0f5\" (UID: \"b958eb77-11cb-4049-a3db-11e838dfa0f5\") " Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.408365 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b958eb77-11cb-4049-a3db-11e838dfa0f5-config-data\") pod \"b958eb77-11cb-4049-a3db-11e838dfa0f5\" (UID: \"b958eb77-11cb-4049-a3db-11e838dfa0f5\") " Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.408423 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b958eb77-11cb-4049-a3db-11e838dfa0f5-custom-prometheus-ca\") pod \"b958eb77-11cb-4049-a3db-11e838dfa0f5\" (UID: \"b958eb77-11cb-4049-a3db-11e838dfa0f5\") " Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.408565 
4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b958eb77-11cb-4049-a3db-11e838dfa0f5-logs\") pod \"b958eb77-11cb-4049-a3db-11e838dfa0f5\" (UID: \"b958eb77-11cb-4049-a3db-11e838dfa0f5\") " Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.409343 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0003e927-ea9c-49fb-83e9-5bfc8cd90f46-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.409362 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lg8t5\" (UniqueName: \"kubernetes.io/projected/0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69-kube-api-access-lg8t5\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.409373 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0003e927-ea9c-49fb-83e9-5bfc8cd90f46-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.409381 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0003e927-ea9c-49fb-83e9-5bfc8cd90f46-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.409681 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39911c8f-ceae-41a0-a891-fc8677d87ec3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "39911c8f-ceae-41a0-a891-fc8677d87ec3" (UID: "39911c8f-ceae-41a0-a891-fc8677d87ec3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.410045 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b958eb77-11cb-4049-a3db-11e838dfa0f5-logs" (OuterVolumeSpecName: "logs") pod "b958eb77-11cb-4049-a3db-11e838dfa0f5" (UID: "b958eb77-11cb-4049-a3db-11e838dfa0f5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.410467 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69" (UID: "0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.425821 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39911c8f-ceae-41a0-a891-fc8677d87ec3-config-data" (OuterVolumeSpecName: "config-data") pod "39911c8f-ceae-41a0-a891-fc8677d87ec3" (UID: "39911c8f-ceae-41a0-a891-fc8677d87ec3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.429761 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69-config" (OuterVolumeSpecName: "config") pod "0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69" (UID: "0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.433114 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69" (UID: "0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.440045 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b958eb77-11cb-4049-a3db-11e838dfa0f5-kube-api-access-gnzth" (OuterVolumeSpecName: "kube-api-access-gnzth") pod "b958eb77-11cb-4049-a3db-11e838dfa0f5" (UID: "b958eb77-11cb-4049-a3db-11e838dfa0f5"). InnerVolumeSpecName "kube-api-access-gnzth". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.455948 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69" (UID: "0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.455801 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69" (UID: "0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.466329 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b958eb77-11cb-4049-a3db-11e838dfa0f5-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "b958eb77-11cb-4049-a3db-11e838dfa0f5" (UID: "b958eb77-11cb-4049-a3db-11e838dfa0f5"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.479563 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-v8sgz" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.485174 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b958eb77-11cb-4049-a3db-11e838dfa0f5-config-data" (OuterVolumeSpecName: "config-data") pod "b958eb77-11cb-4049-a3db-11e838dfa0f5" (UID: "b958eb77-11cb-4049-a3db-11e838dfa0f5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.492915 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b958eb77-11cb-4049-a3db-11e838dfa0f5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b958eb77-11cb-4049-a3db-11e838dfa0f5" (UID: "b958eb77-11cb-4049-a3db-11e838dfa0f5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.512367 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnzth\" (UniqueName: \"kubernetes.io/projected/b958eb77-11cb-4049-a3db-11e838dfa0f5-kube-api-access-gnzth\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.512395 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69-config\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.512404 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b958eb77-11cb-4049-a3db-11e838dfa0f5-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.512413 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.512421 4793 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b958eb77-11cb-4049-a3db-11e838dfa0f5-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.512431 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39911c8f-ceae-41a0-a891-fc8677d87ec3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.512439 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39911c8f-ceae-41a0-a891-fc8677d87ec3-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.512447 4793 
reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.512455 4793 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.512463 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b958eb77-11cb-4049-a3db-11e838dfa0f5-logs\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.512470 4793 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.512477 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b958eb77-11cb-4049-a3db-11e838dfa0f5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.613617 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a587d11-ed16-44ac-a36e-dfd7a7ed3f6a-combined-ca-bundle\") pod \"6a587d11-ed16-44ac-a36e-dfd7a7ed3f6a\" (UID: \"6a587d11-ed16-44ac-a36e-dfd7a7ed3f6a\") " Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.613673 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v274v\" (UniqueName: \"kubernetes.io/projected/6a587d11-ed16-44ac-a36e-dfd7a7ed3f6a-kube-api-access-v274v\") pod \"6a587d11-ed16-44ac-a36e-dfd7a7ed3f6a\" (UID: \"6a587d11-ed16-44ac-a36e-dfd7a7ed3f6a\") " Feb 17 20:27:30 crc 
kubenswrapper[4793]: I0217 20:27:30.614391 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6a587d11-ed16-44ac-a36e-dfd7a7ed3f6a-db-sync-config-data\") pod \"6a587d11-ed16-44ac-a36e-dfd7a7ed3f6a\" (UID: \"6a587d11-ed16-44ac-a36e-dfd7a7ed3f6a\") " Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.618444 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a587d11-ed16-44ac-a36e-dfd7a7ed3f6a-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "6a587d11-ed16-44ac-a36e-dfd7a7ed3f6a" (UID: "6a587d11-ed16-44ac-a36e-dfd7a7ed3f6a"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.623068 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a587d11-ed16-44ac-a36e-dfd7a7ed3f6a-kube-api-access-v274v" (OuterVolumeSpecName: "kube-api-access-v274v") pod "6a587d11-ed16-44ac-a36e-dfd7a7ed3f6a" (UID: "6a587d11-ed16-44ac-a36e-dfd7a7ed3f6a"). InnerVolumeSpecName "kube-api-access-v274v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.660807 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a587d11-ed16-44ac-a36e-dfd7a7ed3f6a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6a587d11-ed16-44ac-a36e-dfd7a7ed3f6a" (UID: "6a587d11-ed16-44ac-a36e-dfd7a7ed3f6a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.721189 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a587d11-ed16-44ac-a36e-dfd7a7ed3f6a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.721217 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v274v\" (UniqueName: \"kubernetes.io/projected/6a587d11-ed16-44ac-a36e-dfd7a7ed3f6a-kube-api-access-v274v\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.721230 4793 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6a587d11-ed16-44ac-a36e-dfd7a7ed3f6a-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.769427 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"b958eb77-11cb-4049-a3db-11e838dfa0f5","Type":"ContainerDied","Data":"3a0115b3f1f26b540658e4bb4020e3f8ee0458c6ac42b737c4c09c571f418567"} Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.769677 4793 scope.go:117] "RemoveContainer" containerID="7dd5688e00325c9b3e72f3327d1f33c54c74673f711f46166b463ceda329eb3b" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.769811 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.796475 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78ebba06-604c-4fb6-91b1-3727324ca4a8","Type":"ContainerStarted","Data":"b6f01aab37c47f6d916d53fd22cf995d7a13582bffa3b3be3884a0cae6703a5b"} Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.797800 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pwpbc" event={"ID":"39911c8f-ceae-41a0-a891-fc8677d87ec3","Type":"ContainerDied","Data":"4277ef7f1d4ae19ab452b57a1289ada31db43ac2f0210935c4f6454e818a5d45"} Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.797824 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4277ef7f1d4ae19ab452b57a1289ada31db43ac2f0210935c4f6454e818a5d45" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.797868 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-pwpbc" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.828834 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-v8sgz" event={"ID":"6a587d11-ed16-44ac-a36e-dfd7a7ed3f6a","Type":"ContainerDied","Data":"dfa13ae911e704267f4bfddaa05c1053da680f1b60d3a7b32dce3c889602d3ea"} Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.828868 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfa13ae911e704267f4bfddaa05c1053da680f1b60d3a7b32dce3c889602d3ea" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.828928 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-v8sgz" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.839446 4793 generic.go:334] "Generic (PLEG): container finished" podID="5fdf7fdb-aef6-4c17-a051-ba5320d07ea7" containerID="b07676e7a816ba57b9e540e707a88f1a76b1770e5b8cdb635c78b527d700c311" exitCode=0 Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.839502 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"5fdf7fdb-aef6-4c17-a051-ba5320d07ea7","Type":"ContainerDied","Data":"b07676e7a816ba57b9e540e707a88f1a76b1770e5b8cdb635c78b527d700c311"} Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.842021 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-t8llz" event={"ID":"0003e927-ea9c-49fb-83e9-5bfc8cd90f46","Type":"ContainerDied","Data":"716d7284291fe21de90a0557e2d175bc5abb5810a4380df4d561abec6ed03c2a"} Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.842051 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="716d7284291fe21de90a0557e2d175bc5abb5810a4380df4d561abec6ed03c2a" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.842105 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-t8llz" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.855030 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59df4fdd5-c42m5" event={"ID":"0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69","Type":"ContainerDied","Data":"33b83f8922d832d1fdcfc234047135d0d20f1be6696cae20f41abd13aff7959a"} Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.855118 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59df4fdd5-c42m5" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.961010 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-d6f85865c-9pvnx"] Feb 17 20:27:30 crc kubenswrapper[4793]: E0217 20:27:30.961408 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0003e927-ea9c-49fb-83e9-5bfc8cd90f46" containerName="placement-db-sync" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.961419 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="0003e927-ea9c-49fb-83e9-5bfc8cd90f46" containerName="placement-db-sync" Feb 17 20:27:30 crc kubenswrapper[4793]: E0217 20:27:30.961430 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39911c8f-ceae-41a0-a891-fc8677d87ec3" containerName="keystone-bootstrap" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.961437 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="39911c8f-ceae-41a0-a891-fc8677d87ec3" containerName="keystone-bootstrap" Feb 17 20:27:30 crc kubenswrapper[4793]: E0217 20:27:30.961449 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69" containerName="dnsmasq-dns" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.961456 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69" containerName="dnsmasq-dns" Feb 17 20:27:30 crc kubenswrapper[4793]: E0217 20:27:30.961473 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a587d11-ed16-44ac-a36e-dfd7a7ed3f6a" containerName="barbican-db-sync" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.961478 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a587d11-ed16-44ac-a36e-dfd7a7ed3f6a" containerName="barbican-db-sync" Feb 17 20:27:30 crc kubenswrapper[4793]: E0217 20:27:30.961490 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69" 
containerName="init" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.961496 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69" containerName="init" Feb 17 20:27:30 crc kubenswrapper[4793]: E0217 20:27:30.961503 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b958eb77-11cb-4049-a3db-11e838dfa0f5" containerName="watcher-decision-engine" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.961508 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="b958eb77-11cb-4049-a3db-11e838dfa0f5" containerName="watcher-decision-engine" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.961654 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="b958eb77-11cb-4049-a3db-11e838dfa0f5" containerName="watcher-decision-engine" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.961671 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="0003e927-ea9c-49fb-83e9-5bfc8cd90f46" containerName="placement-db-sync" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.961682 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="39911c8f-ceae-41a0-a891-fc8677d87ec3" containerName="keystone-bootstrap" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.961705 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69" containerName="dnsmasq-dns" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.961717 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a587d11-ed16-44ac-a36e-dfd7a7ed3f6a" containerName="barbican-db-sync" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.964864 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-d6f85865c-9pvnx" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.967533 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.967674 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 17 20:27:30 crc kubenswrapper[4793]: I0217 20:27:30.969084 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-64fgn" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.006856 4793 scope.go:117] "RemoveContainer" containerID="23618709e301f46657a386728873c800ad227f618b85cd69741013e8f4837dce" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.025644 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-d6f85865c-9pvnx"] Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.040639 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cfbd5eaf-abcf-440a-a1d9-c17a8a89028b-logs\") pod \"barbican-worker-d6f85865c-9pvnx\" (UID: \"cfbd5eaf-abcf-440a-a1d9-c17a8a89028b\") " pod="openstack/barbican-worker-d6f85865c-9pvnx" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.040888 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cfbd5eaf-abcf-440a-a1d9-c17a8a89028b-config-data-custom\") pod \"barbican-worker-d6f85865c-9pvnx\" (UID: \"cfbd5eaf-abcf-440a-a1d9-c17a8a89028b\") " pod="openstack/barbican-worker-d6f85865c-9pvnx" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.041206 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7xct\" (UniqueName: 
\"kubernetes.io/projected/cfbd5eaf-abcf-440a-a1d9-c17a8a89028b-kube-api-access-m7xct\") pod \"barbican-worker-d6f85865c-9pvnx\" (UID: \"cfbd5eaf-abcf-440a-a1d9-c17a8a89028b\") " pod="openstack/barbican-worker-d6f85865c-9pvnx" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.041290 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfbd5eaf-abcf-440a-a1d9-c17a8a89028b-combined-ca-bundle\") pod \"barbican-worker-d6f85865c-9pvnx\" (UID: \"cfbd5eaf-abcf-440a-a1d9-c17a8a89028b\") " pod="openstack/barbican-worker-d6f85865c-9pvnx" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.041366 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfbd5eaf-abcf-440a-a1d9-c17a8a89028b-config-data\") pod \"barbican-worker-d6f85865c-9pvnx\" (UID: \"cfbd5eaf-abcf-440a-a1d9-c17a8a89028b\") " pod="openstack/barbican-worker-d6f85865c-9pvnx" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.051091 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.080105 4793 scope.go:117] "RemoveContainer" containerID="31f8c5f8c7cd63b162cbbb935e917e592c0b5b552c23c9ea5192fea43bd09151" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.084987 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-8559cccd8-9zlg9"] Feb 17 20:27:31 crc kubenswrapper[4793]: E0217 20:27:31.085592 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fdf7fdb-aef6-4c17-a051-ba5320d07ea7" containerName="watcher-api" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.085605 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fdf7fdb-aef6-4c17-a051-ba5320d07ea7" containerName="watcher-api" Feb 17 20:27:31 crc kubenswrapper[4793]: E0217 20:27:31.085619 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fdf7fdb-aef6-4c17-a051-ba5320d07ea7" containerName="watcher-api-log" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.085625 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fdf7fdb-aef6-4c17-a051-ba5320d07ea7" containerName="watcher-api-log" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.085826 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fdf7fdb-aef6-4c17-a051-ba5320d07ea7" containerName="watcher-api-log" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.085843 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fdf7fdb-aef6-4c17-a051-ba5320d07ea7" containerName="watcher-api" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.086740 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-8559cccd8-9zlg9" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.088916 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.129225 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-8559cccd8-9zlg9"] Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.139286 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-75d6fd885d-fw6ln" podUID="6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.164:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.164:8443: connect: connection refused" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.145090 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fdf7fdb-aef6-4c17-a051-ba5320d07ea7-combined-ca-bundle\") pod \"5fdf7fdb-aef6-4c17-a051-ba5320d07ea7\" (UID: \"5fdf7fdb-aef6-4c17-a051-ba5320d07ea7\") " Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.145150 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6z7w\" (UniqueName: \"kubernetes.io/projected/5fdf7fdb-aef6-4c17-a051-ba5320d07ea7-kube-api-access-z6z7w\") pod \"5fdf7fdb-aef6-4c17-a051-ba5320d07ea7\" (UID: \"5fdf7fdb-aef6-4c17-a051-ba5320d07ea7\") " Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.145223 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fdf7fdb-aef6-4c17-a051-ba5320d07ea7-config-data\") pod \"5fdf7fdb-aef6-4c17-a051-ba5320d07ea7\" (UID: \"5fdf7fdb-aef6-4c17-a051-ba5320d07ea7\") " Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.145354 4793 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5fdf7fdb-aef6-4c17-a051-ba5320d07ea7-custom-prometheus-ca\") pod \"5fdf7fdb-aef6-4c17-a051-ba5320d07ea7\" (UID: \"5fdf7fdb-aef6-4c17-a051-ba5320d07ea7\") " Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.145388 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5fdf7fdb-aef6-4c17-a051-ba5320d07ea7-logs\") pod \"5fdf7fdb-aef6-4c17-a051-ba5320d07ea7\" (UID: \"5fdf7fdb-aef6-4c17-a051-ba5320d07ea7\") " Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.145750 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e6703f9-1619-4eb6-9a19-0e82153f6979-combined-ca-bundle\") pod \"barbican-keystone-listener-8559cccd8-9zlg9\" (UID: \"5e6703f9-1619-4eb6-9a19-0e82153f6979\") " pod="openstack/barbican-keystone-listener-8559cccd8-9zlg9" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.145795 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xct\" (UniqueName: \"kubernetes.io/projected/cfbd5eaf-abcf-440a-a1d9-c17a8a89028b-kube-api-access-m7xct\") pod \"barbican-worker-d6f85865c-9pvnx\" (UID: \"cfbd5eaf-abcf-440a-a1d9-c17a8a89028b\") " pod="openstack/barbican-worker-d6f85865c-9pvnx" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.145816 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfbd5eaf-abcf-440a-a1d9-c17a8a89028b-combined-ca-bundle\") pod \"barbican-worker-d6f85865c-9pvnx\" (UID: \"cfbd5eaf-abcf-440a-a1d9-c17a8a89028b\") " pod="openstack/barbican-worker-d6f85865c-9pvnx" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.145838 4793 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfbd5eaf-abcf-440a-a1d9-c17a8a89028b-config-data\") pod \"barbican-worker-d6f85865c-9pvnx\" (UID: \"cfbd5eaf-abcf-440a-a1d9-c17a8a89028b\") " pod="openstack/barbican-worker-d6f85865c-9pvnx" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.145871 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e6703f9-1619-4eb6-9a19-0e82153f6979-config-data\") pod \"barbican-keystone-listener-8559cccd8-9zlg9\" (UID: \"5e6703f9-1619-4eb6-9a19-0e82153f6979\") " pod="openstack/barbican-keystone-listener-8559cccd8-9zlg9" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.145905 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crr87\" (UniqueName: \"kubernetes.io/projected/5e6703f9-1619-4eb6-9a19-0e82153f6979-kube-api-access-crr87\") pod \"barbican-keystone-listener-8559cccd8-9zlg9\" (UID: \"5e6703f9-1619-4eb6-9a19-0e82153f6979\") " pod="openstack/barbican-keystone-listener-8559cccd8-9zlg9" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.145975 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cfbd5eaf-abcf-440a-a1d9-c17a8a89028b-logs\") pod \"barbican-worker-d6f85865c-9pvnx\" (UID: \"cfbd5eaf-abcf-440a-a1d9-c17a8a89028b\") " pod="openstack/barbican-worker-d6f85865c-9pvnx" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.146001 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e6703f9-1619-4eb6-9a19-0e82153f6979-logs\") pod \"barbican-keystone-listener-8559cccd8-9zlg9\" (UID: \"5e6703f9-1619-4eb6-9a19-0e82153f6979\") " pod="openstack/barbican-keystone-listener-8559cccd8-9zlg9" Feb 17 20:27:31 crc 
kubenswrapper[4793]: I0217 20:27:31.146029 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5e6703f9-1619-4eb6-9a19-0e82153f6979-config-data-custom\") pod \"barbican-keystone-listener-8559cccd8-9zlg9\" (UID: \"5e6703f9-1619-4eb6-9a19-0e82153f6979\") " pod="openstack/barbican-keystone-listener-8559cccd8-9zlg9" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.146057 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cfbd5eaf-abcf-440a-a1d9-c17a8a89028b-config-data-custom\") pod \"barbican-worker-d6f85865c-9pvnx\" (UID: \"cfbd5eaf-abcf-440a-a1d9-c17a8a89028b\") " pod="openstack/barbican-worker-d6f85865c-9pvnx" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.152404 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fdf7fdb-aef6-4c17-a051-ba5320d07ea7-kube-api-access-z6z7w" (OuterVolumeSpecName: "kube-api-access-z6z7w") pod "5fdf7fdb-aef6-4c17-a051-ba5320d07ea7" (UID: "5fdf7fdb-aef6-4c17-a051-ba5320d07ea7"). InnerVolumeSpecName "kube-api-access-z6z7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.159529 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cfbd5eaf-abcf-440a-a1d9-c17a8a89028b-logs\") pod \"barbican-worker-d6f85865c-9pvnx\" (UID: \"cfbd5eaf-abcf-440a-a1d9-c17a8a89028b\") " pod="openstack/barbican-worker-d6f85865c-9pvnx" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.163126 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fdf7fdb-aef6-4c17-a051-ba5320d07ea7-logs" (OuterVolumeSpecName: "logs") pod "5fdf7fdb-aef6-4c17-a051-ba5320d07ea7" (UID: "5fdf7fdb-aef6-4c17-a051-ba5320d07ea7"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.169520 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfbd5eaf-abcf-440a-a1d9-c17a8a89028b-config-data\") pod \"barbican-worker-d6f85865c-9pvnx\" (UID: \"cfbd5eaf-abcf-440a-a1d9-c17a8a89028b\") " pod="openstack/barbican-worker-d6f85865c-9pvnx" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.172324 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cfbd5eaf-abcf-440a-a1d9-c17a8a89028b-config-data-custom\") pod \"barbican-worker-d6f85865c-9pvnx\" (UID: \"cfbd5eaf-abcf-440a-a1d9-c17a8a89028b\") " pod="openstack/barbican-worker-d6f85865c-9pvnx" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.177244 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xct\" (UniqueName: \"kubernetes.io/projected/cfbd5eaf-abcf-440a-a1d9-c17a8a89028b-kube-api-access-m7xct\") pod \"barbican-worker-d6f85865c-9pvnx\" (UID: \"cfbd5eaf-abcf-440a-a1d9-c17a8a89028b\") " pod="openstack/barbican-worker-d6f85865c-9pvnx" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.179204 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfbd5eaf-abcf-440a-a1d9-c17a8a89028b-combined-ca-bundle\") pod \"barbican-worker-d6f85865c-9pvnx\" (UID: \"cfbd5eaf-abcf-440a-a1d9-c17a8a89028b\") " pod="openstack/barbican-worker-d6f85865c-9pvnx" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.213138 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fddbf755c-j4r2b"] Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.214926 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fddbf755c-j4r2b" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.216052 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-75c8b5cf48-t8jmz" podUID="066a6b1f-85a8-4015-9c17-a9eb27320040" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.165:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.165:8443: connect: connection refused" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.248480 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fddbf755c-j4r2b"] Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.250460 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e6703f9-1619-4eb6-9a19-0e82153f6979-combined-ca-bundle\") pod \"barbican-keystone-listener-8559cccd8-9zlg9\" (UID: \"5e6703f9-1619-4eb6-9a19-0e82153f6979\") " pod="openstack/barbican-keystone-listener-8559cccd8-9zlg9" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.250543 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e6703f9-1619-4eb6-9a19-0e82153f6979-config-data\") pod \"barbican-keystone-listener-8559cccd8-9zlg9\" (UID: \"5e6703f9-1619-4eb6-9a19-0e82153f6979\") " pod="openstack/barbican-keystone-listener-8559cccd8-9zlg9" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.250584 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crr87\" (UniqueName: \"kubernetes.io/projected/5e6703f9-1619-4eb6-9a19-0e82153f6979-kube-api-access-crr87\") pod \"barbican-keystone-listener-8559cccd8-9zlg9\" (UID: \"5e6703f9-1619-4eb6-9a19-0e82153f6979\") " pod="openstack/barbican-keystone-listener-8559cccd8-9zlg9" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.250666 4793 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e6703f9-1619-4eb6-9a19-0e82153f6979-logs\") pod \"barbican-keystone-listener-8559cccd8-9zlg9\" (UID: \"5e6703f9-1619-4eb6-9a19-0e82153f6979\") " pod="openstack/barbican-keystone-listener-8559cccd8-9zlg9" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.250715 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5e6703f9-1619-4eb6-9a19-0e82153f6979-config-data-custom\") pod \"barbican-keystone-listener-8559cccd8-9zlg9\" (UID: \"5e6703f9-1619-4eb6-9a19-0e82153f6979\") " pod="openstack/barbican-keystone-listener-8559cccd8-9zlg9" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.250786 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6z7w\" (UniqueName: \"kubernetes.io/projected/5fdf7fdb-aef6-4c17-a051-ba5320d07ea7-kube-api-access-z6z7w\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.250797 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5fdf7fdb-aef6-4c17-a051-ba5320d07ea7-logs\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.254096 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5e6703f9-1619-4eb6-9a19-0e82153f6979-config-data-custom\") pod \"barbican-keystone-listener-8559cccd8-9zlg9\" (UID: \"5e6703f9-1619-4eb6-9a19-0e82153f6979\") " pod="openstack/barbican-keystone-listener-8559cccd8-9zlg9" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.256743 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e6703f9-1619-4eb6-9a19-0e82153f6979-logs\") pod \"barbican-keystone-listener-8559cccd8-9zlg9\" (UID: \"5e6703f9-1619-4eb6-9a19-0e82153f6979\") " 
pod="openstack/barbican-keystone-listener-8559cccd8-9zlg9" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.285463 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e6703f9-1619-4eb6-9a19-0e82153f6979-config-data\") pod \"barbican-keystone-listener-8559cccd8-9zlg9\" (UID: \"5e6703f9-1619-4eb6-9a19-0e82153f6979\") " pod="openstack/barbican-keystone-listener-8559cccd8-9zlg9" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.286344 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59df4fdd5-c42m5"] Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.305488 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crr87\" (UniqueName: \"kubernetes.io/projected/5e6703f9-1619-4eb6-9a19-0e82153f6979-kube-api-access-crr87\") pod \"barbican-keystone-listener-8559cccd8-9zlg9\" (UID: \"5e6703f9-1619-4eb6-9a19-0e82153f6979\") " pod="openstack/barbican-keystone-listener-8559cccd8-9zlg9" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.316490 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e6703f9-1619-4eb6-9a19-0e82153f6979-combined-ca-bundle\") pod \"barbican-keystone-listener-8559cccd8-9zlg9\" (UID: \"5e6703f9-1619-4eb6-9a19-0e82153f6979\") " pod="openstack/barbican-keystone-listener-8559cccd8-9zlg9" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.338500 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59df4fdd5-c42m5"] Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.352499 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c06f6156-bc74-4ece-88f4-09626b053748-ovsdbserver-nb\") pod \"dnsmasq-dns-7fddbf755c-j4r2b\" (UID: \"c06f6156-bc74-4ece-88f4-09626b053748\") " 
pod="openstack/dnsmasq-dns-7fddbf755c-j4r2b" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.361319 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c06f6156-bc74-4ece-88f4-09626b053748-dns-svc\") pod \"dnsmasq-dns-7fddbf755c-j4r2b\" (UID: \"c06f6156-bc74-4ece-88f4-09626b053748\") " pod="openstack/dnsmasq-dns-7fddbf755c-j4r2b" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.361674 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c06f6156-bc74-4ece-88f4-09626b053748-config\") pod \"dnsmasq-dns-7fddbf755c-j4r2b\" (UID: \"c06f6156-bc74-4ece-88f4-09626b053748\") " pod="openstack/dnsmasq-dns-7fddbf755c-j4r2b" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.361823 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c06f6156-bc74-4ece-88f4-09626b053748-ovsdbserver-sb\") pod \"dnsmasq-dns-7fddbf755c-j4r2b\" (UID: \"c06f6156-bc74-4ece-88f4-09626b053748\") " pod="openstack/dnsmasq-dns-7fddbf755c-j4r2b" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.361975 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c06f6156-bc74-4ece-88f4-09626b053748-dns-swift-storage-0\") pod \"dnsmasq-dns-7fddbf755c-j4r2b\" (UID: \"c06f6156-bc74-4ece-88f4-09626b053748\") " pod="openstack/dnsmasq-dns-7fddbf755c-j4r2b" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.362100 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7zv8\" (UniqueName: \"kubernetes.io/projected/c06f6156-bc74-4ece-88f4-09626b053748-kube-api-access-r7zv8\") pod \"dnsmasq-dns-7fddbf755c-j4r2b\" (UID: 
\"c06f6156-bc74-4ece-88f4-09626b053748\") " pod="openstack/dnsmasq-dns-7fddbf755c-j4r2b" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.379184 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fdf7fdb-aef6-4c17-a051-ba5320d07ea7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5fdf7fdb-aef6-4c17-a051-ba5320d07ea7" (UID: "5fdf7fdb-aef6-4c17-a051-ba5320d07ea7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.385275 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.395467 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6764d4c5d4-r2vt4"] Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.397630 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6764d4c5d4-r2vt4" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.401073 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.406050 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.425451 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6764d4c5d4-r2vt4"] Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.436955 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.438367 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.439133 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-d6f85865c-9pvnx" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.444347 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.445240 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fdf7fdb-aef6-4c17-a051-ba5320d07ea7-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "5fdf7fdb-aef6-4c17-a051-ba5320d07ea7" (UID: "5fdf7fdb-aef6-4c17-a051-ba5320d07ea7"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.459257 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-8559cccd8-9zlg9" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.465503 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.467483 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c06f6156-bc74-4ece-88f4-09626b053748-config\") pod \"dnsmasq-dns-7fddbf755c-j4r2b\" (UID: \"c06f6156-bc74-4ece-88f4-09626b053748\") " pod="openstack/dnsmasq-dns-7fddbf755c-j4r2b" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.466680 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c06f6156-bc74-4ece-88f4-09626b053748-config\") pod \"dnsmasq-dns-7fddbf755c-j4r2b\" (UID: \"c06f6156-bc74-4ece-88f4-09626b053748\") " pod="openstack/dnsmasq-dns-7fddbf755c-j4r2b" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.467579 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/c06f6156-bc74-4ece-88f4-09626b053748-ovsdbserver-sb\") pod \"dnsmasq-dns-7fddbf755c-j4r2b\" (UID: \"c06f6156-bc74-4ece-88f4-09626b053748\") " pod="openstack/dnsmasq-dns-7fddbf755c-j4r2b" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.467607 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9pdz\" (UniqueName: \"kubernetes.io/projected/c6290a12-a807-46af-adaa-bafaf7bbb26f-kube-api-access-t9pdz\") pod \"barbican-api-6764d4c5d4-r2vt4\" (UID: \"c6290a12-a807-46af-adaa-bafaf7bbb26f\") " pod="openstack/barbican-api-6764d4c5d4-r2vt4" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.468163 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c06f6156-bc74-4ece-88f4-09626b053748-ovsdbserver-sb\") pod \"dnsmasq-dns-7fddbf755c-j4r2b\" (UID: \"c06f6156-bc74-4ece-88f4-09626b053748\") " pod="openstack/dnsmasq-dns-7fddbf755c-j4r2b" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.468211 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c06f6156-bc74-4ece-88f4-09626b053748-dns-swift-storage-0\") pod \"dnsmasq-dns-7fddbf755c-j4r2b\" (UID: \"c06f6156-bc74-4ece-88f4-09626b053748\") " pod="openstack/dnsmasq-dns-7fddbf755c-j4r2b" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.468272 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7zv8\" (UniqueName: \"kubernetes.io/projected/c06f6156-bc74-4ece-88f4-09626b053748-kube-api-access-r7zv8\") pod \"dnsmasq-dns-7fddbf755c-j4r2b\" (UID: \"c06f6156-bc74-4ece-88f4-09626b053748\") " pod="openstack/dnsmasq-dns-7fddbf755c-j4r2b" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.468814 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/c06f6156-bc74-4ece-88f4-09626b053748-dns-swift-storage-0\") pod \"dnsmasq-dns-7fddbf755c-j4r2b\" (UID: \"c06f6156-bc74-4ece-88f4-09626b053748\") " pod="openstack/dnsmasq-dns-7fddbf755c-j4r2b" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.468862 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c6290a12-a807-46af-adaa-bafaf7bbb26f-config-data-custom\") pod \"barbican-api-6764d4c5d4-r2vt4\" (UID: \"c6290a12-a807-46af-adaa-bafaf7bbb26f\") " pod="openstack/barbican-api-6764d4c5d4-r2vt4" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.469184 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6290a12-a807-46af-adaa-bafaf7bbb26f-combined-ca-bundle\") pod \"barbican-api-6764d4c5d4-r2vt4\" (UID: \"c6290a12-a807-46af-adaa-bafaf7bbb26f\") " pod="openstack/barbican-api-6764d4c5d4-r2vt4" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.469247 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c06f6156-bc74-4ece-88f4-09626b053748-ovsdbserver-nb\") pod \"dnsmasq-dns-7fddbf755c-j4r2b\" (UID: \"c06f6156-bc74-4ece-88f4-09626b053748\") " pod="openstack/dnsmasq-dns-7fddbf755c-j4r2b" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.469268 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c06f6156-bc74-4ece-88f4-09626b053748-dns-svc\") pod \"dnsmasq-dns-7fddbf755c-j4r2b\" (UID: \"c06f6156-bc74-4ece-88f4-09626b053748\") " pod="openstack/dnsmasq-dns-7fddbf755c-j4r2b" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.470364 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/c6290a12-a807-46af-adaa-bafaf7bbb26f-config-data\") pod \"barbican-api-6764d4c5d4-r2vt4\" (UID: \"c6290a12-a807-46af-adaa-bafaf7bbb26f\") " pod="openstack/barbican-api-6764d4c5d4-r2vt4" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.470382 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6290a12-a807-46af-adaa-bafaf7bbb26f-logs\") pod \"barbican-api-6764d4c5d4-r2vt4\" (UID: \"c6290a12-a807-46af-adaa-bafaf7bbb26f\") " pod="openstack/barbican-api-6764d4c5d4-r2vt4" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.469793 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c06f6156-bc74-4ece-88f4-09626b053748-ovsdbserver-nb\") pod \"dnsmasq-dns-7fddbf755c-j4r2b\" (UID: \"c06f6156-bc74-4ece-88f4-09626b053748\") " pod="openstack/dnsmasq-dns-7fddbf755c-j4r2b" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.470315 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c06f6156-bc74-4ece-88f4-09626b053748-dns-svc\") pod \"dnsmasq-dns-7fddbf755c-j4r2b\" (UID: \"c06f6156-bc74-4ece-88f4-09626b053748\") " pod="openstack/dnsmasq-dns-7fddbf755c-j4r2b" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.470519 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fdf7fdb-aef6-4c17-a051-ba5320d07ea7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.470529 4793 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5fdf7fdb-aef6-4c17-a051-ba5320d07ea7-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.491213 4793 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/keystone-79779c64b9-54jgr"] Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.493373 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-79779c64b9-54jgr" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.502269 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-79779c64b9-54jgr"] Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.507017 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.507544 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-92f9d" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.507719 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.507942 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.508158 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.508351 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.512064 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6754ff86f4-gttvc"] Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.513922 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7zv8\" (UniqueName: \"kubernetes.io/projected/c06f6156-bc74-4ece-88f4-09626b053748-kube-api-access-r7zv8\") pod \"dnsmasq-dns-7fddbf755c-j4r2b\" (UID: \"c06f6156-bc74-4ece-88f4-09626b053748\") " pod="openstack/dnsmasq-dns-7fddbf755c-j4r2b" Feb 17 20:27:31 crc 
kubenswrapper[4793]: I0217 20:27:31.522933 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6754ff86f4-gttvc" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.536715 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.536853 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.536946 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.537194 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.539707 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-hbw8s" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.554833 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fdf7fdb-aef6-4c17-a051-ba5320d07ea7-config-data" (OuterVolumeSpecName: "config-data") pod "5fdf7fdb-aef6-4c17-a051-ba5320d07ea7" (UID: "5fdf7fdb-aef6-4c17-a051-ba5320d07ea7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.556956 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fddbf755c-j4r2b" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.557125 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69" path="/var/lib/kubelet/pods/0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69/volumes" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.557773 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b958eb77-11cb-4049-a3db-11e838dfa0f5" path="/var/lib/kubelet/pods/b958eb77-11cb-4049-a3db-11e838dfa0f5/volumes" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.558403 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6754ff86f4-gttvc"] Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.573178 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6290a12-a807-46af-adaa-bafaf7bbb26f-config-data\") pod \"barbican-api-6764d4c5d4-r2vt4\" (UID: \"c6290a12-a807-46af-adaa-bafaf7bbb26f\") " pod="openstack/barbican-api-6764d4c5d4-r2vt4" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.573228 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6290a12-a807-46af-adaa-bafaf7bbb26f-logs\") pod \"barbican-api-6764d4c5d4-r2vt4\" (UID: \"c6290a12-a807-46af-adaa-bafaf7bbb26f\") " pod="openstack/barbican-api-6764d4c5d4-r2vt4" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.573280 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a630279e-31d3-4ab4-88f2-a06edcb58dee-config-data\") pod \"keystone-79779c64b9-54jgr\" (UID: \"a630279e-31d3-4ab4-88f2-a06edcb58dee\") " pod="openstack/keystone-79779c64b9-54jgr" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.573303 4793 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/aedc67e8-05ec-44a4-b1f2-a18d2fde80b2-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"aedc67e8-05ec-44a4-b1f2-a18d2fde80b2\") " pod="openstack/watcher-decision-engine-0" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.573324 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a630279e-31d3-4ab4-88f2-a06edcb58dee-fernet-keys\") pod \"keystone-79779c64b9-54jgr\" (UID: \"a630279e-31d3-4ab4-88f2-a06edcb58dee\") " pod="openstack/keystone-79779c64b9-54jgr" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.573348 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a630279e-31d3-4ab4-88f2-a06edcb58dee-scripts\") pod \"keystone-79779c64b9-54jgr\" (UID: \"a630279e-31d3-4ab4-88f2-a06edcb58dee\") " pod="openstack/keystone-79779c64b9-54jgr" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.573406 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aedc67e8-05ec-44a4-b1f2-a18d2fde80b2-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"aedc67e8-05ec-44a4-b1f2-a18d2fde80b2\") " pod="openstack/watcher-decision-engine-0" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.573439 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9pdz\" (UniqueName: \"kubernetes.io/projected/c6290a12-a807-46af-adaa-bafaf7bbb26f-kube-api-access-t9pdz\") pod \"barbican-api-6764d4c5d4-r2vt4\" (UID: \"c6290a12-a807-46af-adaa-bafaf7bbb26f\") " pod="openstack/barbican-api-6764d4c5d4-r2vt4" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.573462 4793 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aedc67e8-05ec-44a4-b1f2-a18d2fde80b2-logs\") pod \"watcher-decision-engine-0\" (UID: \"aedc67e8-05ec-44a4-b1f2-a18d2fde80b2\") " pod="openstack/watcher-decision-engine-0" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.573489 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a630279e-31d3-4ab4-88f2-a06edcb58dee-credential-keys\") pod \"keystone-79779c64b9-54jgr\" (UID: \"a630279e-31d3-4ab4-88f2-a06edcb58dee\") " pod="openstack/keystone-79779c64b9-54jgr" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.573516 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpvgk\" (UniqueName: \"kubernetes.io/projected/a630279e-31d3-4ab4-88f2-a06edcb58dee-kube-api-access-jpvgk\") pod \"keystone-79779c64b9-54jgr\" (UID: \"a630279e-31d3-4ab4-88f2-a06edcb58dee\") " pod="openstack/keystone-79779c64b9-54jgr" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.573536 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a630279e-31d3-4ab4-88f2-a06edcb58dee-public-tls-certs\") pod \"keystone-79779c64b9-54jgr\" (UID: \"a630279e-31d3-4ab4-88f2-a06edcb58dee\") " pod="openstack/keystone-79779c64b9-54jgr" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.573560 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c6290a12-a807-46af-adaa-bafaf7bbb26f-config-data-custom\") pod \"barbican-api-6764d4c5d4-r2vt4\" (UID: \"c6290a12-a807-46af-adaa-bafaf7bbb26f\") " pod="openstack/barbican-api-6764d4c5d4-r2vt4" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.573584 4793 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27984\" (UniqueName: \"kubernetes.io/projected/aedc67e8-05ec-44a4-b1f2-a18d2fde80b2-kube-api-access-27984\") pod \"watcher-decision-engine-0\" (UID: \"aedc67e8-05ec-44a4-b1f2-a18d2fde80b2\") " pod="openstack/watcher-decision-engine-0" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.573637 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a630279e-31d3-4ab4-88f2-a06edcb58dee-combined-ca-bundle\") pod \"keystone-79779c64b9-54jgr\" (UID: \"a630279e-31d3-4ab4-88f2-a06edcb58dee\") " pod="openstack/keystone-79779c64b9-54jgr" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.573661 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6290a12-a807-46af-adaa-bafaf7bbb26f-combined-ca-bundle\") pod \"barbican-api-6764d4c5d4-r2vt4\" (UID: \"c6290a12-a807-46af-adaa-bafaf7bbb26f\") " pod="openstack/barbican-api-6764d4c5d4-r2vt4" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.573682 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aedc67e8-05ec-44a4-b1f2-a18d2fde80b2-config-data\") pod \"watcher-decision-engine-0\" (UID: \"aedc67e8-05ec-44a4-b1f2-a18d2fde80b2\") " pod="openstack/watcher-decision-engine-0" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.573727 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a630279e-31d3-4ab4-88f2-a06edcb58dee-internal-tls-certs\") pod \"keystone-79779c64b9-54jgr\" (UID: \"a630279e-31d3-4ab4-88f2-a06edcb58dee\") " pod="openstack/keystone-79779c64b9-54jgr" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 
20:27:31.573793 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fdf7fdb-aef6-4c17-a051-ba5320d07ea7-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.576319 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6290a12-a807-46af-adaa-bafaf7bbb26f-logs\") pod \"barbican-api-6764d4c5d4-r2vt4\" (UID: \"c6290a12-a807-46af-adaa-bafaf7bbb26f\") " pod="openstack/barbican-api-6764d4c5d4-r2vt4" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.581160 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6290a12-a807-46af-adaa-bafaf7bbb26f-combined-ca-bundle\") pod \"barbican-api-6764d4c5d4-r2vt4\" (UID: \"c6290a12-a807-46af-adaa-bafaf7bbb26f\") " pod="openstack/barbican-api-6764d4c5d4-r2vt4" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.581438 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c6290a12-a807-46af-adaa-bafaf7bbb26f-config-data-custom\") pod \"barbican-api-6764d4c5d4-r2vt4\" (UID: \"c6290a12-a807-46af-adaa-bafaf7bbb26f\") " pod="openstack/barbican-api-6764d4c5d4-r2vt4" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.590994 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6290a12-a807-46af-adaa-bafaf7bbb26f-config-data\") pod \"barbican-api-6764d4c5d4-r2vt4\" (UID: \"c6290a12-a807-46af-adaa-bafaf7bbb26f\") " pod="openstack/barbican-api-6764d4c5d4-r2vt4" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.592914 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9pdz\" (UniqueName: \"kubernetes.io/projected/c6290a12-a807-46af-adaa-bafaf7bbb26f-kube-api-access-t9pdz\") pod 
\"barbican-api-6764d4c5d4-r2vt4\" (UID: \"c6290a12-a807-46af-adaa-bafaf7bbb26f\") " pod="openstack/barbican-api-6764d4c5d4-r2vt4" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.686143 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66324f49-be26-4cca-a237-cf6a31ab771f-logs\") pod \"placement-6754ff86f4-gttvc\" (UID: \"66324f49-be26-4cca-a237-cf6a31ab771f\") " pod="openstack/placement-6754ff86f4-gttvc" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.686484 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27984\" (UniqueName: \"kubernetes.io/projected/aedc67e8-05ec-44a4-b1f2-a18d2fde80b2-kube-api-access-27984\") pod \"watcher-decision-engine-0\" (UID: \"aedc67e8-05ec-44a4-b1f2-a18d2fde80b2\") " pod="openstack/watcher-decision-engine-0" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.686598 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a630279e-31d3-4ab4-88f2-a06edcb58dee-combined-ca-bundle\") pod \"keystone-79779c64b9-54jgr\" (UID: \"a630279e-31d3-4ab4-88f2-a06edcb58dee\") " pod="openstack/keystone-79779c64b9-54jgr" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.686625 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aedc67e8-05ec-44a4-b1f2-a18d2fde80b2-config-data\") pod \"watcher-decision-engine-0\" (UID: \"aedc67e8-05ec-44a4-b1f2-a18d2fde80b2\") " pod="openstack/watcher-decision-engine-0" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.686649 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znrw2\" (UniqueName: \"kubernetes.io/projected/66324f49-be26-4cca-a237-cf6a31ab771f-kube-api-access-znrw2\") pod \"placement-6754ff86f4-gttvc\" 
(UID: \"66324f49-be26-4cca-a237-cf6a31ab771f\") " pod="openstack/placement-6754ff86f4-gttvc" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.686667 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a630279e-31d3-4ab4-88f2-a06edcb58dee-internal-tls-certs\") pod \"keystone-79779c64b9-54jgr\" (UID: \"a630279e-31d3-4ab4-88f2-a06edcb58dee\") " pod="openstack/keystone-79779c64b9-54jgr" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.686912 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/66324f49-be26-4cca-a237-cf6a31ab771f-public-tls-certs\") pod \"placement-6754ff86f4-gttvc\" (UID: \"66324f49-be26-4cca-a237-cf6a31ab771f\") " pod="openstack/placement-6754ff86f4-gttvc" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.686961 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/aedc67e8-05ec-44a4-b1f2-a18d2fde80b2-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"aedc67e8-05ec-44a4-b1f2-a18d2fde80b2\") " pod="openstack/watcher-decision-engine-0" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.686977 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a630279e-31d3-4ab4-88f2-a06edcb58dee-config-data\") pod \"keystone-79779c64b9-54jgr\" (UID: \"a630279e-31d3-4ab4-88f2-a06edcb58dee\") " pod="openstack/keystone-79779c64b9-54jgr" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.687002 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a630279e-31d3-4ab4-88f2-a06edcb58dee-fernet-keys\") pod \"keystone-79779c64b9-54jgr\" (UID: \"a630279e-31d3-4ab4-88f2-a06edcb58dee\") " 
pod="openstack/keystone-79779c64b9-54jgr" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.687029 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66324f49-be26-4cca-a237-cf6a31ab771f-combined-ca-bundle\") pod \"placement-6754ff86f4-gttvc\" (UID: \"66324f49-be26-4cca-a237-cf6a31ab771f\") " pod="openstack/placement-6754ff86f4-gttvc" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.687054 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/66324f49-be26-4cca-a237-cf6a31ab771f-internal-tls-certs\") pod \"placement-6754ff86f4-gttvc\" (UID: \"66324f49-be26-4cca-a237-cf6a31ab771f\") " pod="openstack/placement-6754ff86f4-gttvc" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.687071 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a630279e-31d3-4ab4-88f2-a06edcb58dee-scripts\") pod \"keystone-79779c64b9-54jgr\" (UID: \"a630279e-31d3-4ab4-88f2-a06edcb58dee\") " pod="openstack/keystone-79779c64b9-54jgr" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.687122 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aedc67e8-05ec-44a4-b1f2-a18d2fde80b2-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"aedc67e8-05ec-44a4-b1f2-a18d2fde80b2\") " pod="openstack/watcher-decision-engine-0" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.687139 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66324f49-be26-4cca-a237-cf6a31ab771f-scripts\") pod \"placement-6754ff86f4-gttvc\" (UID: \"66324f49-be26-4cca-a237-cf6a31ab771f\") " 
pod="openstack/placement-6754ff86f4-gttvc" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.687166 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aedc67e8-05ec-44a4-b1f2-a18d2fde80b2-logs\") pod \"watcher-decision-engine-0\" (UID: \"aedc67e8-05ec-44a4-b1f2-a18d2fde80b2\") " pod="openstack/watcher-decision-engine-0" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.687209 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a630279e-31d3-4ab4-88f2-a06edcb58dee-credential-keys\") pod \"keystone-79779c64b9-54jgr\" (UID: \"a630279e-31d3-4ab4-88f2-a06edcb58dee\") " pod="openstack/keystone-79779c64b9-54jgr" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.687226 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66324f49-be26-4cca-a237-cf6a31ab771f-config-data\") pod \"placement-6754ff86f4-gttvc\" (UID: \"66324f49-be26-4cca-a237-cf6a31ab771f\") " pod="openstack/placement-6754ff86f4-gttvc" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.687295 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpvgk\" (UniqueName: \"kubernetes.io/projected/a630279e-31d3-4ab4-88f2-a06edcb58dee-kube-api-access-jpvgk\") pod \"keystone-79779c64b9-54jgr\" (UID: \"a630279e-31d3-4ab4-88f2-a06edcb58dee\") " pod="openstack/keystone-79779c64b9-54jgr" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.687317 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a630279e-31d3-4ab4-88f2-a06edcb58dee-public-tls-certs\") pod \"keystone-79779c64b9-54jgr\" (UID: \"a630279e-31d3-4ab4-88f2-a06edcb58dee\") " pod="openstack/keystone-79779c64b9-54jgr" Feb 17 20:27:31 crc 
kubenswrapper[4793]: I0217 20:27:31.690999 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a630279e-31d3-4ab4-88f2-a06edcb58dee-public-tls-certs\") pod \"keystone-79779c64b9-54jgr\" (UID: \"a630279e-31d3-4ab4-88f2-a06edcb58dee\") " pod="openstack/keystone-79779c64b9-54jgr" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.693673 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a630279e-31d3-4ab4-88f2-a06edcb58dee-config-data\") pod \"keystone-79779c64b9-54jgr\" (UID: \"a630279e-31d3-4ab4-88f2-a06edcb58dee\") " pod="openstack/keystone-79779c64b9-54jgr" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.698745 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a630279e-31d3-4ab4-88f2-a06edcb58dee-fernet-keys\") pod \"keystone-79779c64b9-54jgr\" (UID: \"a630279e-31d3-4ab4-88f2-a06edcb58dee\") " pod="openstack/keystone-79779c64b9-54jgr" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.703305 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a630279e-31d3-4ab4-88f2-a06edcb58dee-combined-ca-bundle\") pod \"keystone-79779c64b9-54jgr\" (UID: \"a630279e-31d3-4ab4-88f2-a06edcb58dee\") " pod="openstack/keystone-79779c64b9-54jgr" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.704842 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aedc67e8-05ec-44a4-b1f2-a18d2fde80b2-logs\") pod \"watcher-decision-engine-0\" (UID: \"aedc67e8-05ec-44a4-b1f2-a18d2fde80b2\") " pod="openstack/watcher-decision-engine-0" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.706772 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/a630279e-31d3-4ab4-88f2-a06edcb58dee-scripts\") pod \"keystone-79779c64b9-54jgr\" (UID: \"a630279e-31d3-4ab4-88f2-a06edcb58dee\") " pod="openstack/keystone-79779c64b9-54jgr" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.707950 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aedc67e8-05ec-44a4-b1f2-a18d2fde80b2-config-data\") pod \"watcher-decision-engine-0\" (UID: \"aedc67e8-05ec-44a4-b1f2-a18d2fde80b2\") " pod="openstack/watcher-decision-engine-0" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.712119 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27984\" (UniqueName: \"kubernetes.io/projected/aedc67e8-05ec-44a4-b1f2-a18d2fde80b2-kube-api-access-27984\") pod \"watcher-decision-engine-0\" (UID: \"aedc67e8-05ec-44a4-b1f2-a18d2fde80b2\") " pod="openstack/watcher-decision-engine-0" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.713019 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aedc67e8-05ec-44a4-b1f2-a18d2fde80b2-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"aedc67e8-05ec-44a4-b1f2-a18d2fde80b2\") " pod="openstack/watcher-decision-engine-0" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.713364 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/aedc67e8-05ec-44a4-b1f2-a18d2fde80b2-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"aedc67e8-05ec-44a4-b1f2-a18d2fde80b2\") " pod="openstack/watcher-decision-engine-0" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.716080 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a630279e-31d3-4ab4-88f2-a06edcb58dee-internal-tls-certs\") pod 
\"keystone-79779c64b9-54jgr\" (UID: \"a630279e-31d3-4ab4-88f2-a06edcb58dee\") " pod="openstack/keystone-79779c64b9-54jgr" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.720372 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a630279e-31d3-4ab4-88f2-a06edcb58dee-credential-keys\") pod \"keystone-79779c64b9-54jgr\" (UID: \"a630279e-31d3-4ab4-88f2-a06edcb58dee\") " pod="openstack/keystone-79779c64b9-54jgr" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.723651 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpvgk\" (UniqueName: \"kubernetes.io/projected/a630279e-31d3-4ab4-88f2-a06edcb58dee-kube-api-access-jpvgk\") pod \"keystone-79779c64b9-54jgr\" (UID: \"a630279e-31d3-4ab4-88f2-a06edcb58dee\") " pod="openstack/keystone-79779c64b9-54jgr" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.796650 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66324f49-be26-4cca-a237-cf6a31ab771f-config-data\") pod \"placement-6754ff86f4-gttvc\" (UID: \"66324f49-be26-4cca-a237-cf6a31ab771f\") " pod="openstack/placement-6754ff86f4-gttvc" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.796736 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66324f49-be26-4cca-a237-cf6a31ab771f-logs\") pod \"placement-6754ff86f4-gttvc\" (UID: \"66324f49-be26-4cca-a237-cf6a31ab771f\") " pod="openstack/placement-6754ff86f4-gttvc" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.796810 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znrw2\" (UniqueName: \"kubernetes.io/projected/66324f49-be26-4cca-a237-cf6a31ab771f-kube-api-access-znrw2\") pod \"placement-6754ff86f4-gttvc\" (UID: \"66324f49-be26-4cca-a237-cf6a31ab771f\") " 
pod="openstack/placement-6754ff86f4-gttvc" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.796835 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/66324f49-be26-4cca-a237-cf6a31ab771f-public-tls-certs\") pod \"placement-6754ff86f4-gttvc\" (UID: \"66324f49-be26-4cca-a237-cf6a31ab771f\") " pod="openstack/placement-6754ff86f4-gttvc" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.796872 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66324f49-be26-4cca-a237-cf6a31ab771f-combined-ca-bundle\") pod \"placement-6754ff86f4-gttvc\" (UID: \"66324f49-be26-4cca-a237-cf6a31ab771f\") " pod="openstack/placement-6754ff86f4-gttvc" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.796887 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/66324f49-be26-4cca-a237-cf6a31ab771f-internal-tls-certs\") pod \"placement-6754ff86f4-gttvc\" (UID: \"66324f49-be26-4cca-a237-cf6a31ab771f\") " pod="openstack/placement-6754ff86f4-gttvc" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.796914 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66324f49-be26-4cca-a237-cf6a31ab771f-scripts\") pod \"placement-6754ff86f4-gttvc\" (UID: \"66324f49-be26-4cca-a237-cf6a31ab771f\") " pod="openstack/placement-6754ff86f4-gttvc" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.809186 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66324f49-be26-4cca-a237-cf6a31ab771f-logs\") pod \"placement-6754ff86f4-gttvc\" (UID: \"66324f49-be26-4cca-a237-cf6a31ab771f\") " pod="openstack/placement-6754ff86f4-gttvc" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.820072 
4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66324f49-be26-4cca-a237-cf6a31ab771f-config-data\") pod \"placement-6754ff86f4-gttvc\" (UID: \"66324f49-be26-4cca-a237-cf6a31ab771f\") " pod="openstack/placement-6754ff86f4-gttvc" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.820298 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66324f49-be26-4cca-a237-cf6a31ab771f-scripts\") pod \"placement-6754ff86f4-gttvc\" (UID: \"66324f49-be26-4cca-a237-cf6a31ab771f\") " pod="openstack/placement-6754ff86f4-gttvc" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.822158 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/66324f49-be26-4cca-a237-cf6a31ab771f-public-tls-certs\") pod \"placement-6754ff86f4-gttvc\" (UID: \"66324f49-be26-4cca-a237-cf6a31ab771f\") " pod="openstack/placement-6754ff86f4-gttvc" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.828404 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/66324f49-be26-4cca-a237-cf6a31ab771f-internal-tls-certs\") pod \"placement-6754ff86f4-gttvc\" (UID: \"66324f49-be26-4cca-a237-cf6a31ab771f\") " pod="openstack/placement-6754ff86f4-gttvc" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.836126 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66324f49-be26-4cca-a237-cf6a31ab771f-combined-ca-bundle\") pod \"placement-6754ff86f4-gttvc\" (UID: \"66324f49-be26-4cca-a237-cf6a31ab771f\") " pod="openstack/placement-6754ff86f4-gttvc" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.885660 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znrw2\" (UniqueName: 
\"kubernetes.io/projected/66324f49-be26-4cca-a237-cf6a31ab771f-kube-api-access-znrw2\") pod \"placement-6754ff86f4-gttvc\" (UID: \"66324f49-be26-4cca-a237-cf6a31ab771f\") " pod="openstack/placement-6754ff86f4-gttvc" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.886089 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6764d4c5d4-r2vt4" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.897492 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"74d89443-6b8f-4757-951b-9b532755c158","Type":"ContainerStarted","Data":"f8b6d9a85408906cc38d7c40fd39e029752f54465e1b976639f5aa4f8261f34e"} Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.899967 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.900901 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6b181bfd-3989-4ba9-80b9-9b574ef0de20","Type":"ContainerStarted","Data":"b19191b5afa7eb3509dcd325da32aa21a18dfff9f48761377df1a265b9465a01"} Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.914247 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.917231 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"5fdf7fdb-aef6-4c17-a051-ba5320d07ea7","Type":"ContainerDied","Data":"db226d038f0465c0680ff4ec9e60fac74395dc6068789d20125b1a5524d66bb0"} Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.917283 4793 scope.go:117] "RemoveContainer" containerID="b07676e7a816ba57b9e540e707a88f1a76b1770e5b8cdb635c78b527d700c311" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.917404 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-79779c64b9-54jgr" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.926108 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6754ff86f4-gttvc" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.962468 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.962790 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.962802 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.963465 4793 scope.go:117] "RemoveContainer" containerID="a9952a3fe14e8b345f3d22c7384c83944f30d42e627058ed70589fdf0cc7470e" Feb 17 20:27:31 crc kubenswrapper[4793]: E0217 20:27:31.963675 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:27:31 crc kubenswrapper[4793]: I0217 20:27:31.998957 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=11.998935402 podStartE2EDuration="11.998935402s" podCreationTimestamp="2026-02-17 20:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:27:31.964151608 +0000 UTC m=+1127.255849919" watchObservedRunningTime="2026-02-17 20:27:31.998935402 +0000 UTC m=+1127.290633703" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.054367 
4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=12.054349739 podStartE2EDuration="12.054349739s" podCreationTimestamp="2026-02-17 20:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:27:32.025380429 +0000 UTC m=+1127.317078740" watchObservedRunningTime="2026-02-17 20:27:32.054349739 +0000 UTC m=+1127.346048050" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.083840 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.122758 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.151763 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.153783 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.158925 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.159221 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.159335 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.172910 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.220491 4793 scope.go:117] "RemoveContainer" containerID="a16e7df12feca4f3c68f5ffb3e97b07e1e608fbb3fd96a30e473548a682b7f54" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.230171 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71c55328-b65e-4471-b5a4-228ae3dbeb8d-logs\") pod \"watcher-api-0\" (UID: \"71c55328-b65e-4471-b5a4-228ae3dbeb8d\") " pod="openstack/watcher-api-0" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.230315 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71c55328-b65e-4471-b5a4-228ae3dbeb8d-config-data\") pod \"watcher-api-0\" (UID: \"71c55328-b65e-4471-b5a4-228ae3dbeb8d\") " pod="openstack/watcher-api-0" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.230408 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/71c55328-b65e-4471-b5a4-228ae3dbeb8d-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"71c55328-b65e-4471-b5a4-228ae3dbeb8d\") " pod="openstack/watcher-api-0" 
Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.230437 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sgds\" (UniqueName: \"kubernetes.io/projected/71c55328-b65e-4471-b5a4-228ae3dbeb8d-kube-api-access-5sgds\") pod \"watcher-api-0\" (UID: \"71c55328-b65e-4471-b5a4-228ae3dbeb8d\") " pod="openstack/watcher-api-0" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.230499 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71c55328-b65e-4471-b5a4-228ae3dbeb8d-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"71c55328-b65e-4471-b5a4-228ae3dbeb8d\") " pod="openstack/watcher-api-0" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.230522 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/71c55328-b65e-4471-b5a4-228ae3dbeb8d-public-tls-certs\") pod \"watcher-api-0\" (UID: \"71c55328-b65e-4471-b5a4-228ae3dbeb8d\") " pod="openstack/watcher-api-0" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.230587 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/71c55328-b65e-4471-b5a4-228ae3dbeb8d-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"71c55328-b65e-4471-b5a4-228ae3dbeb8d\") " pod="openstack/watcher-api-0" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.332482 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/71c55328-b65e-4471-b5a4-228ae3dbeb8d-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"71c55328-b65e-4471-b5a4-228ae3dbeb8d\") " pod="openstack/watcher-api-0" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.332858 4793 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71c55328-b65e-4471-b5a4-228ae3dbeb8d-logs\") pod \"watcher-api-0\" (UID: \"71c55328-b65e-4471-b5a4-228ae3dbeb8d\") " pod="openstack/watcher-api-0" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.332904 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71c55328-b65e-4471-b5a4-228ae3dbeb8d-config-data\") pod \"watcher-api-0\" (UID: \"71c55328-b65e-4471-b5a4-228ae3dbeb8d\") " pod="openstack/watcher-api-0" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.332939 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/71c55328-b65e-4471-b5a4-228ae3dbeb8d-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"71c55328-b65e-4471-b5a4-228ae3dbeb8d\") " pod="openstack/watcher-api-0" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.332959 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sgds\" (UniqueName: \"kubernetes.io/projected/71c55328-b65e-4471-b5a4-228ae3dbeb8d-kube-api-access-5sgds\") pod \"watcher-api-0\" (UID: \"71c55328-b65e-4471-b5a4-228ae3dbeb8d\") " pod="openstack/watcher-api-0" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.332992 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71c55328-b65e-4471-b5a4-228ae3dbeb8d-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"71c55328-b65e-4471-b5a4-228ae3dbeb8d\") " pod="openstack/watcher-api-0" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.333011 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/71c55328-b65e-4471-b5a4-228ae3dbeb8d-public-tls-certs\") pod 
\"watcher-api-0\" (UID: \"71c55328-b65e-4471-b5a4-228ae3dbeb8d\") " pod="openstack/watcher-api-0" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.334080 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-d6f85865c-9pvnx"] Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.335415 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71c55328-b65e-4471-b5a4-228ae3dbeb8d-logs\") pod \"watcher-api-0\" (UID: \"71c55328-b65e-4471-b5a4-228ae3dbeb8d\") " pod="openstack/watcher-api-0" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.343424 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/71c55328-b65e-4471-b5a4-228ae3dbeb8d-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"71c55328-b65e-4471-b5a4-228ae3dbeb8d\") " pod="openstack/watcher-api-0" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.361352 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71c55328-b65e-4471-b5a4-228ae3dbeb8d-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"71c55328-b65e-4471-b5a4-228ae3dbeb8d\") " pod="openstack/watcher-api-0" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.362032 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71c55328-b65e-4471-b5a4-228ae3dbeb8d-config-data\") pod \"watcher-api-0\" (UID: \"71c55328-b65e-4471-b5a4-228ae3dbeb8d\") " pod="openstack/watcher-api-0" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.363806 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/71c55328-b65e-4471-b5a4-228ae3dbeb8d-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"71c55328-b65e-4471-b5a4-228ae3dbeb8d\") " 
pod="openstack/watcher-api-0" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.367152 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sgds\" (UniqueName: \"kubernetes.io/projected/71c55328-b65e-4471-b5a4-228ae3dbeb8d-kube-api-access-5sgds\") pod \"watcher-api-0\" (UID: \"71c55328-b65e-4471-b5a4-228ae3dbeb8d\") " pod="openstack/watcher-api-0" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.373252 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/71c55328-b65e-4471-b5a4-228ae3dbeb8d-public-tls-certs\") pod \"watcher-api-0\" (UID: \"71c55328-b65e-4471-b5a4-228ae3dbeb8d\") " pod="openstack/watcher-api-0" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.487650 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-876d8fd55-vqq67"] Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.490109 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-876d8fd55-vqq67" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.487779 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.536745 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5994c4b6d-gdq4r"] Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.539103 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5994c4b6d-gdq4r" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.540771 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8b6515a5-0b19-48c6-8dce-c3765bbe9087-config-data-custom\") pod \"barbican-worker-876d8fd55-vqq67\" (UID: \"8b6515a5-0b19-48c6-8dce-c3765bbe9087\") " pod="openstack/barbican-worker-876d8fd55-vqq67" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.540818 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4pvs\" (UniqueName: \"kubernetes.io/projected/8b6515a5-0b19-48c6-8dce-c3765bbe9087-kube-api-access-q4pvs\") pod \"barbican-worker-876d8fd55-vqq67\" (UID: \"8b6515a5-0b19-48c6-8dce-c3765bbe9087\") " pod="openstack/barbican-worker-876d8fd55-vqq67" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.541031 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b6515a5-0b19-48c6-8dce-c3765bbe9087-logs\") pod \"barbican-worker-876d8fd55-vqq67\" (UID: \"8b6515a5-0b19-48c6-8dce-c3765bbe9087\") " pod="openstack/barbican-worker-876d8fd55-vqq67" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.541270 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b6515a5-0b19-48c6-8dce-c3765bbe9087-config-data\") pod \"barbican-worker-876d8fd55-vqq67\" (UID: \"8b6515a5-0b19-48c6-8dce-c3765bbe9087\") " pod="openstack/barbican-worker-876d8fd55-vqq67" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.541293 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8b6515a5-0b19-48c6-8dce-c3765bbe9087-combined-ca-bundle\") pod \"barbican-worker-876d8fd55-vqq67\" (UID: \"8b6515a5-0b19-48c6-8dce-c3765bbe9087\") " pod="openstack/barbican-worker-876d8fd55-vqq67" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.563128 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-876d8fd55-vqq67"] Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.593239 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5994c4b6d-gdq4r"] Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.621238 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-8559cccd8-9zlg9"] Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.627289 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-59df4fdd5-c42m5" podUID="0ec34df0-ddeb-42c9-a8fd-cfd6de4faf69" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.160:5353: i/o timeout" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.643151 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7dbff57d7b-c7kmz"] Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.645435 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b6515a5-0b19-48c6-8dce-c3765bbe9087-logs\") pod \"barbican-worker-876d8fd55-vqq67\" (UID: \"8b6515a5-0b19-48c6-8dce-c3765bbe9087\") " pod="openstack/barbican-worker-876d8fd55-vqq67" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.645656 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3f63aca-a302-4b5a-9aab-df2030cb30a0-config-data\") pod \"barbican-keystone-listener-5994c4b6d-gdq4r\" (UID: \"c3f63aca-a302-4b5a-9aab-df2030cb30a0\") " 
pod="openstack/barbican-keystone-listener-5994c4b6d-gdq4r" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.645683 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nwfs\" (UniqueName: \"kubernetes.io/projected/c3f63aca-a302-4b5a-9aab-df2030cb30a0-kube-api-access-9nwfs\") pod \"barbican-keystone-listener-5994c4b6d-gdq4r\" (UID: \"c3f63aca-a302-4b5a-9aab-df2030cb30a0\") " pod="openstack/barbican-keystone-listener-5994c4b6d-gdq4r" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.645724 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b6515a5-0b19-48c6-8dce-c3765bbe9087-config-data\") pod \"barbican-worker-876d8fd55-vqq67\" (UID: \"8b6515a5-0b19-48c6-8dce-c3765bbe9087\") " pod="openstack/barbican-worker-876d8fd55-vqq67" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.645744 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b6515a5-0b19-48c6-8dce-c3765bbe9087-combined-ca-bundle\") pod \"barbican-worker-876d8fd55-vqq67\" (UID: \"8b6515a5-0b19-48c6-8dce-c3765bbe9087\") " pod="openstack/barbican-worker-876d8fd55-vqq67" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.645878 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c3f63aca-a302-4b5a-9aab-df2030cb30a0-config-data-custom\") pod \"barbican-keystone-listener-5994c4b6d-gdq4r\" (UID: \"c3f63aca-a302-4b5a-9aab-df2030cb30a0\") " pod="openstack/barbican-keystone-listener-5994c4b6d-gdq4r" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.645900 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8b6515a5-0b19-48c6-8dce-c3765bbe9087-config-data-custom\") 
pod \"barbican-worker-876d8fd55-vqq67\" (UID: \"8b6515a5-0b19-48c6-8dce-c3765bbe9087\") " pod="openstack/barbican-worker-876d8fd55-vqq67" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.645918 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4pvs\" (UniqueName: \"kubernetes.io/projected/8b6515a5-0b19-48c6-8dce-c3765bbe9087-kube-api-access-q4pvs\") pod \"barbican-worker-876d8fd55-vqq67\" (UID: \"8b6515a5-0b19-48c6-8dce-c3765bbe9087\") " pod="openstack/barbican-worker-876d8fd55-vqq67" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.645965 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3f63aca-a302-4b5a-9aab-df2030cb30a0-logs\") pod \"barbican-keystone-listener-5994c4b6d-gdq4r\" (UID: \"c3f63aca-a302-4b5a-9aab-df2030cb30a0\") " pod="openstack/barbican-keystone-listener-5994c4b6d-gdq4r" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.645988 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3f63aca-a302-4b5a-9aab-df2030cb30a0-combined-ca-bundle\") pod \"barbican-keystone-listener-5994c4b6d-gdq4r\" (UID: \"c3f63aca-a302-4b5a-9aab-df2030cb30a0\") " pod="openstack/barbican-keystone-listener-5994c4b6d-gdq4r" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.646019 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7dbff57d7b-c7kmz" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.647734 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b6515a5-0b19-48c6-8dce-c3765bbe9087-logs\") pod \"barbican-worker-876d8fd55-vqq67\" (UID: \"8b6515a5-0b19-48c6-8dce-c3765bbe9087\") " pod="openstack/barbican-worker-876d8fd55-vqq67" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.658440 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8b6515a5-0b19-48c6-8dce-c3765bbe9087-config-data-custom\") pod \"barbican-worker-876d8fd55-vqq67\" (UID: \"8b6515a5-0b19-48c6-8dce-c3765bbe9087\") " pod="openstack/barbican-worker-876d8fd55-vqq67" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.659820 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b6515a5-0b19-48c6-8dce-c3765bbe9087-config-data\") pod \"barbican-worker-876d8fd55-vqq67\" (UID: \"8b6515a5-0b19-48c6-8dce-c3765bbe9087\") " pod="openstack/barbican-worker-876d8fd55-vqq67" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.665003 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b6515a5-0b19-48c6-8dce-c3765bbe9087-combined-ca-bundle\") pod \"barbican-worker-876d8fd55-vqq67\" (UID: \"8b6515a5-0b19-48c6-8dce-c3765bbe9087\") " pod="openstack/barbican-worker-876d8fd55-vqq67" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.667646 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7dbff57d7b-c7kmz"] Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.674847 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4pvs\" (UniqueName: 
\"kubernetes.io/projected/8b6515a5-0b19-48c6-8dce-c3765bbe9087-kube-api-access-q4pvs\") pod \"barbican-worker-876d8fd55-vqq67\" (UID: \"8b6515a5-0b19-48c6-8dce-c3765bbe9087\") " pod="openstack/barbican-worker-876d8fd55-vqq67" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.748401 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/347b7ec4-6cfe-431e-b6b4-70c0933118c6-config-data-custom\") pod \"barbican-api-7dbff57d7b-c7kmz\" (UID: \"347b7ec4-6cfe-431e-b6b4-70c0933118c6\") " pod="openstack/barbican-api-7dbff57d7b-c7kmz" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.748727 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3f63aca-a302-4b5a-9aab-df2030cb30a0-config-data\") pod \"barbican-keystone-listener-5994c4b6d-gdq4r\" (UID: \"c3f63aca-a302-4b5a-9aab-df2030cb30a0\") " pod="openstack/barbican-keystone-listener-5994c4b6d-gdq4r" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.748770 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nwfs\" (UniqueName: \"kubernetes.io/projected/c3f63aca-a302-4b5a-9aab-df2030cb30a0-kube-api-access-9nwfs\") pod \"barbican-keystone-listener-5994c4b6d-gdq4r\" (UID: \"c3f63aca-a302-4b5a-9aab-df2030cb30a0\") " pod="openstack/barbican-keystone-listener-5994c4b6d-gdq4r" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.748844 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c3f63aca-a302-4b5a-9aab-df2030cb30a0-config-data-custom\") pod \"barbican-keystone-listener-5994c4b6d-gdq4r\" (UID: \"c3f63aca-a302-4b5a-9aab-df2030cb30a0\") " pod="openstack/barbican-keystone-listener-5994c4b6d-gdq4r" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.748874 4793 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4glm\" (UniqueName: \"kubernetes.io/projected/347b7ec4-6cfe-431e-b6b4-70c0933118c6-kube-api-access-v4glm\") pod \"barbican-api-7dbff57d7b-c7kmz\" (UID: \"347b7ec4-6cfe-431e-b6b4-70c0933118c6\") " pod="openstack/barbican-api-7dbff57d7b-c7kmz" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.748912 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/347b7ec4-6cfe-431e-b6b4-70c0933118c6-combined-ca-bundle\") pod \"barbican-api-7dbff57d7b-c7kmz\" (UID: \"347b7ec4-6cfe-431e-b6b4-70c0933118c6\") " pod="openstack/barbican-api-7dbff57d7b-c7kmz" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.748931 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3f63aca-a302-4b5a-9aab-df2030cb30a0-logs\") pod \"barbican-keystone-listener-5994c4b6d-gdq4r\" (UID: \"c3f63aca-a302-4b5a-9aab-df2030cb30a0\") " pod="openstack/barbican-keystone-listener-5994c4b6d-gdq4r" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.748955 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3f63aca-a302-4b5a-9aab-df2030cb30a0-combined-ca-bundle\") pod \"barbican-keystone-listener-5994c4b6d-gdq4r\" (UID: \"c3f63aca-a302-4b5a-9aab-df2030cb30a0\") " pod="openstack/barbican-keystone-listener-5994c4b6d-gdq4r" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.749025 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/347b7ec4-6cfe-431e-b6b4-70c0933118c6-config-data\") pod \"barbican-api-7dbff57d7b-c7kmz\" (UID: \"347b7ec4-6cfe-431e-b6b4-70c0933118c6\") " pod="openstack/barbican-api-7dbff57d7b-c7kmz" Feb 17 20:27:32 crc 
kubenswrapper[4793]: I0217 20:27:32.749089 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/347b7ec4-6cfe-431e-b6b4-70c0933118c6-logs\") pod \"barbican-api-7dbff57d7b-c7kmz\" (UID: \"347b7ec4-6cfe-431e-b6b4-70c0933118c6\") " pod="openstack/barbican-api-7dbff57d7b-c7kmz" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.758371 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3f63aca-a302-4b5a-9aab-df2030cb30a0-logs\") pod \"barbican-keystone-listener-5994c4b6d-gdq4r\" (UID: \"c3f63aca-a302-4b5a-9aab-df2030cb30a0\") " pod="openstack/barbican-keystone-listener-5994c4b6d-gdq4r" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.759879 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3f63aca-a302-4b5a-9aab-df2030cb30a0-config-data\") pod \"barbican-keystone-listener-5994c4b6d-gdq4r\" (UID: \"c3f63aca-a302-4b5a-9aab-df2030cb30a0\") " pod="openstack/barbican-keystone-listener-5994c4b6d-gdq4r" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.763779 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3f63aca-a302-4b5a-9aab-df2030cb30a0-combined-ca-bundle\") pod \"barbican-keystone-listener-5994c4b6d-gdq4r\" (UID: \"c3f63aca-a302-4b5a-9aab-df2030cb30a0\") " pod="openstack/barbican-keystone-listener-5994c4b6d-gdq4r" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.782281 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nwfs\" (UniqueName: \"kubernetes.io/projected/c3f63aca-a302-4b5a-9aab-df2030cb30a0-kube-api-access-9nwfs\") pod \"barbican-keystone-listener-5994c4b6d-gdq4r\" (UID: \"c3f63aca-a302-4b5a-9aab-df2030cb30a0\") " pod="openstack/barbican-keystone-listener-5994c4b6d-gdq4r" Feb 
17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.794154 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c3f63aca-a302-4b5a-9aab-df2030cb30a0-config-data-custom\") pod \"barbican-keystone-listener-5994c4b6d-gdq4r\" (UID: \"c3f63aca-a302-4b5a-9aab-df2030cb30a0\") " pod="openstack/barbican-keystone-listener-5994c4b6d-gdq4r" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.851012 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/347b7ec4-6cfe-431e-b6b4-70c0933118c6-config-data\") pod \"barbican-api-7dbff57d7b-c7kmz\" (UID: \"347b7ec4-6cfe-431e-b6b4-70c0933118c6\") " pod="openstack/barbican-api-7dbff57d7b-c7kmz" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.851095 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/347b7ec4-6cfe-431e-b6b4-70c0933118c6-logs\") pod \"barbican-api-7dbff57d7b-c7kmz\" (UID: \"347b7ec4-6cfe-431e-b6b4-70c0933118c6\") " pod="openstack/barbican-api-7dbff57d7b-c7kmz" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.851116 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-876d8fd55-vqq67" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.851145 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/347b7ec4-6cfe-431e-b6b4-70c0933118c6-config-data-custom\") pod \"barbican-api-7dbff57d7b-c7kmz\" (UID: \"347b7ec4-6cfe-431e-b6b4-70c0933118c6\") " pod="openstack/barbican-api-7dbff57d7b-c7kmz" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.851233 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4glm\" (UniqueName: \"kubernetes.io/projected/347b7ec4-6cfe-431e-b6b4-70c0933118c6-kube-api-access-v4glm\") pod \"barbican-api-7dbff57d7b-c7kmz\" (UID: \"347b7ec4-6cfe-431e-b6b4-70c0933118c6\") " pod="openstack/barbican-api-7dbff57d7b-c7kmz" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.851257 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/347b7ec4-6cfe-431e-b6b4-70c0933118c6-combined-ca-bundle\") pod \"barbican-api-7dbff57d7b-c7kmz\" (UID: \"347b7ec4-6cfe-431e-b6b4-70c0933118c6\") " pod="openstack/barbican-api-7dbff57d7b-c7kmz" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.852706 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/347b7ec4-6cfe-431e-b6b4-70c0933118c6-logs\") pod \"barbican-api-7dbff57d7b-c7kmz\" (UID: \"347b7ec4-6cfe-431e-b6b4-70c0933118c6\") " pod="openstack/barbican-api-7dbff57d7b-c7kmz" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.858901 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/347b7ec4-6cfe-431e-b6b4-70c0933118c6-config-data-custom\") pod \"barbican-api-7dbff57d7b-c7kmz\" (UID: \"347b7ec4-6cfe-431e-b6b4-70c0933118c6\") " 
pod="openstack/barbican-api-7dbff57d7b-c7kmz" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.870565 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/347b7ec4-6cfe-431e-b6b4-70c0933118c6-combined-ca-bundle\") pod \"barbican-api-7dbff57d7b-c7kmz\" (UID: \"347b7ec4-6cfe-431e-b6b4-70c0933118c6\") " pod="openstack/barbican-api-7dbff57d7b-c7kmz" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.871997 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/347b7ec4-6cfe-431e-b6b4-70c0933118c6-config-data\") pod \"barbican-api-7dbff57d7b-c7kmz\" (UID: \"347b7ec4-6cfe-431e-b6b4-70c0933118c6\") " pod="openstack/barbican-api-7dbff57d7b-c7kmz" Feb 17 20:27:32 crc kubenswrapper[4793]: I0217 20:27:32.875655 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4glm\" (UniqueName: \"kubernetes.io/projected/347b7ec4-6cfe-431e-b6b4-70c0933118c6-kube-api-access-v4glm\") pod \"barbican-api-7dbff57d7b-c7kmz\" (UID: \"347b7ec4-6cfe-431e-b6b4-70c0933118c6\") " pod="openstack/barbican-api-7dbff57d7b-c7kmz" Feb 17 20:27:33 crc kubenswrapper[4793]: I0217 20:27:33.015129 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5994c4b6d-gdq4r" Feb 17 20:27:33 crc kubenswrapper[4793]: I0217 20:27:33.016486 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-d6f85865c-9pvnx" event={"ID":"cfbd5eaf-abcf-440a-a1d9-c17a8a89028b","Type":"ContainerStarted","Data":"10b8f0259805c290d81baf03c64f9831963f8990b7133deca753c365f964a5e4"} Feb 17 20:27:33 crc kubenswrapper[4793]: I0217 20:27:33.028131 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7dbff57d7b-c7kmz" Feb 17 20:27:33 crc kubenswrapper[4793]: I0217 20:27:33.040563 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-t9mdr" event={"ID":"95da6bd5-17d8-4402-bb8a-87b0c03feebf","Type":"ContainerStarted","Data":"58b0cd3fe7288ff4fdf482c0fc5ff538635fd3d320a9e30c90486d2fdd8cd090"} Feb 17 20:27:33 crc kubenswrapper[4793]: I0217 20:27:33.071298 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-8559cccd8-9zlg9" event={"ID":"5e6703f9-1619-4eb6-9a19-0e82153f6979","Type":"ContainerStarted","Data":"7c219ac09522113c74a0782944a92f55bde036bd9e6b003ecd587795ee0e5246"} Feb 17 20:27:33 crc kubenswrapper[4793]: W0217 20:27:33.092016 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc06f6156_bc74_4ece_88f4_09626b053748.slice/crio-cfdcccdea881e2209ea2b0eb3a4e965e2a75d71168892d9f0b828eb571627cb7 WatchSource:0}: Error finding container cfdcccdea881e2209ea2b0eb3a4e965e2a75d71168892d9f0b828eb571627cb7: Status 404 returned error can't find the container with id cfdcccdea881e2209ea2b0eb3a4e965e2a75d71168892d9f0b828eb571627cb7 Feb 17 20:27:33 crc kubenswrapper[4793]: I0217 20:27:33.094154 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fddbf755c-j4r2b"] Feb 17 20:27:33 crc kubenswrapper[4793]: I0217 20:27:33.112300 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-t9mdr" podStartSLOduration=5.22860936 podStartE2EDuration="52.112280708s" podCreationTimestamp="2026-02-17 20:26:41 +0000 UTC" firstStartedPulling="2026-02-17 20:26:43.747200029 +0000 UTC m=+1079.038898340" lastFinishedPulling="2026-02-17 20:27:30.630871377 +0000 UTC m=+1125.922569688" observedRunningTime="2026-02-17 20:27:33.060609964 +0000 UTC m=+1128.352308295" watchObservedRunningTime="2026-02-17 
20:27:33.112280708 +0000 UTC m=+1128.403979019" Feb 17 20:27:33 crc kubenswrapper[4793]: E0217 20:27:33.188520 4793 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode573ed33_28a6_4a91_a8ca_e5d1c87b2f60.slice\": RecentStats: unable to find data in memory cache]" Feb 17 20:27:33 crc kubenswrapper[4793]: I0217 20:27:33.290790 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 17 20:27:33 crc kubenswrapper[4793]: I0217 20:27:33.607077 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fdf7fdb-aef6-4c17-a051-ba5320d07ea7" path="/var/lib/kubelet/pods/5fdf7fdb-aef6-4c17-a051-ba5320d07ea7/volumes" Feb 17 20:27:33 crc kubenswrapper[4793]: W0217 20:27:33.609608 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6290a12_a807_46af_adaa_bafaf7bbb26f.slice/crio-a42587fd4c4f39e7c65fa6eb8575e2b516b0d424776d7588a1d52321e37deda0 WatchSource:0}: Error finding container a42587fd4c4f39e7c65fa6eb8575e2b516b0d424776d7588a1d52321e37deda0: Status 404 returned error can't find the container with id a42587fd4c4f39e7c65fa6eb8575e2b516b0d424776d7588a1d52321e37deda0 Feb 17 20:27:33 crc kubenswrapper[4793]: I0217 20:27:33.609811 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-79779c64b9-54jgr"] Feb 17 20:27:33 crc kubenswrapper[4793]: I0217 20:27:33.626336 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6764d4c5d4-r2vt4"] Feb 17 20:27:33 crc kubenswrapper[4793]: I0217 20:27:33.635798 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6754ff86f4-gttvc"] Feb 17 20:27:33 crc kubenswrapper[4793]: W0217 20:27:33.683174 4793 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod66324f49_be26_4cca_a237_cf6a31ab771f.slice/crio-72d15753e29ea196fb3c22db76265d3340787c95c300958c85bb9860a2a9a766 WatchSource:0}: Error finding container 72d15753e29ea196fb3c22db76265d3340787c95c300958c85bb9860a2a9a766: Status 404 returned error can't find the container with id 72d15753e29ea196fb3c22db76265d3340787c95c300958c85bb9860a2a9a766 Feb 17 20:27:33 crc kubenswrapper[4793]: I0217 20:27:33.782569 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5994c4b6d-gdq4r"] Feb 17 20:27:33 crc kubenswrapper[4793]: I0217 20:27:33.804332 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-876d8fd55-vqq67"] Feb 17 20:27:33 crc kubenswrapper[4793]: I0217 20:27:33.843667 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 17 20:27:33 crc kubenswrapper[4793]: W0217 20:27:33.860933 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b6515a5_0b19_48c6_8dce_c3765bbe9087.slice/crio-4d68b233a5059fa832659a691b155493a7be785afd8628d4387ccb0b0432d34c WatchSource:0}: Error finding container 4d68b233a5059fa832659a691b155493a7be785afd8628d4387ccb0b0432d34c: Status 404 returned error can't find the container with id 4d68b233a5059fa832659a691b155493a7be785afd8628d4387ccb0b0432d34c Feb 17 20:27:33 crc kubenswrapper[4793]: W0217 20:27:33.874775 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3f63aca_a302_4b5a_9aab_df2030cb30a0.slice/crio-d1ac2e5a62afc6a4b1fd4c3b92461b24594725386632112813a12daf47dcdb22 WatchSource:0}: Error finding container d1ac2e5a62afc6a4b1fd4c3b92461b24594725386632112813a12daf47dcdb22: Status 404 returned error can't find the container with id d1ac2e5a62afc6a4b1fd4c3b92461b24594725386632112813a12daf47dcdb22 Feb 17 
20:27:33 crc kubenswrapper[4793]: I0217 20:27:33.955293 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7dbff57d7b-c7kmz"] Feb 17 20:27:34 crc kubenswrapper[4793]: I0217 20:27:34.100083 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dbff57d7b-c7kmz" event={"ID":"347b7ec4-6cfe-431e-b6b4-70c0933118c6","Type":"ContainerStarted","Data":"8627d650d994bd22f858ee66947e21732ce07ed9d942bf203f3ec9b8a0a7eb77"} Feb 17 20:27:34 crc kubenswrapper[4793]: I0217 20:27:34.105197 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6754ff86f4-gttvc" event={"ID":"66324f49-be26-4cca-a237-cf6a31ab771f","Type":"ContainerStarted","Data":"72d15753e29ea196fb3c22db76265d3340787c95c300958c85bb9860a2a9a766"} Feb 17 20:27:34 crc kubenswrapper[4793]: I0217 20:27:34.109143 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"aedc67e8-05ec-44a4-b1f2-a18d2fde80b2","Type":"ContainerStarted","Data":"0181b217e76523c8a70ed84dfc08fc0f5d04270a537307e4a06fd2e050ca4abd"} Feb 17 20:27:34 crc kubenswrapper[4793]: I0217 20:27:34.109185 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"aedc67e8-05ec-44a4-b1f2-a18d2fde80b2","Type":"ContainerStarted","Data":"08be7c22df6b5d5dbf619279440963b317cfbb1a74e78b93ea0859f96f5ddcf3"} Feb 17 20:27:34 crc kubenswrapper[4793]: I0217 20:27:34.110915 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-876d8fd55-vqq67" event={"ID":"8b6515a5-0b19-48c6-8dce-c3765bbe9087","Type":"ContainerStarted","Data":"4d68b233a5059fa832659a691b155493a7be785afd8628d4387ccb0b0432d34c"} Feb 17 20:27:34 crc kubenswrapper[4793]: I0217 20:27:34.114026 4793 generic.go:334] "Generic (PLEG): container finished" podID="c06f6156-bc74-4ece-88f4-09626b053748" containerID="9776d3ecfbe5b97f0bb68685974d2b54452a70789e84f583c320b49e1c141337" exitCode=0 Feb 17 
20:27:34 crc kubenswrapper[4793]: I0217 20:27:34.114531 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fddbf755c-j4r2b" event={"ID":"c06f6156-bc74-4ece-88f4-09626b053748","Type":"ContainerDied","Data":"9776d3ecfbe5b97f0bb68685974d2b54452a70789e84f583c320b49e1c141337"} Feb 17 20:27:34 crc kubenswrapper[4793]: I0217 20:27:34.114564 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fddbf755c-j4r2b" event={"ID":"c06f6156-bc74-4ece-88f4-09626b053748","Type":"ContainerStarted","Data":"cfdcccdea881e2209ea2b0eb3a4e965e2a75d71168892d9f0b828eb571627cb7"} Feb 17 20:27:34 crc kubenswrapper[4793]: I0217 20:27:34.117321 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"71c55328-b65e-4471-b5a4-228ae3dbeb8d","Type":"ContainerStarted","Data":"3e03f4590858deb35a6eaa362110f2d6752b4ec4f8d31a80d9545532c541f61e"} Feb 17 20:27:34 crc kubenswrapper[4793]: I0217 20:27:34.118941 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6764d4c5d4-r2vt4" event={"ID":"c6290a12-a807-46af-adaa-bafaf7bbb26f","Type":"ContainerStarted","Data":"a42587fd4c4f39e7c65fa6eb8575e2b516b0d424776d7588a1d52321e37deda0"} Feb 17 20:27:34 crc kubenswrapper[4793]: I0217 20:27:34.120188 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5994c4b6d-gdq4r" event={"ID":"c3f63aca-a302-4b5a-9aab-df2030cb30a0","Type":"ContainerStarted","Data":"d1ac2e5a62afc6a4b1fd4c3b92461b24594725386632112813a12daf47dcdb22"} Feb 17 20:27:34 crc kubenswrapper[4793]: I0217 20:27:34.121804 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-79779c64b9-54jgr" event={"ID":"a630279e-31d3-4ab4-88f2-a06edcb58dee","Type":"ContainerStarted","Data":"349a192ee7a608980f27ce50af3b8a7a29500cec7010db49ca162be537702d30"} Feb 17 20:27:34 crc kubenswrapper[4793]: I0217 20:27:34.138904 4793 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=3.138867378 podStartE2EDuration="3.138867378s" podCreationTimestamp="2026-02-17 20:27:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:27:34.134445008 +0000 UTC m=+1129.426143319" watchObservedRunningTime="2026-02-17 20:27:34.138867378 +0000 UTC m=+1129.430565689" Feb 17 20:27:35 crc kubenswrapper[4793]: I0217 20:27:35.188600 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dbff57d7b-c7kmz" event={"ID":"347b7ec4-6cfe-431e-b6b4-70c0933118c6","Type":"ContainerStarted","Data":"a2ceda8fd39a91bac69fddf8087d5237ff44b350f39853603c6ecb66865b9008"} Feb 17 20:27:35 crc kubenswrapper[4793]: I0217 20:27:35.196181 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6764d4c5d4-r2vt4" event={"ID":"c6290a12-a807-46af-adaa-bafaf7bbb26f","Type":"ContainerStarted","Data":"aaacb5c388538b7e5d9d89008300722c1b87c8c1d0f8914b510ab38d8845c0fe"} Feb 17 20:27:35 crc kubenswrapper[4793]: I0217 20:27:35.198048 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-79779c64b9-54jgr" event={"ID":"a630279e-31d3-4ab4-88f2-a06edcb58dee","Type":"ContainerStarted","Data":"ff58469d8b285582ea96e516654432a346613c63e7d98ae39fd0735eef08902c"} Feb 17 20:27:35 crc kubenswrapper[4793]: I0217 20:27:35.232916 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-79779c64b9-54jgr" podStartSLOduration=4.232895703 podStartE2EDuration="4.232895703s" podCreationTimestamp="2026-02-17 20:27:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:27:35.223005087 +0000 UTC m=+1130.514703398" watchObservedRunningTime="2026-02-17 20:27:35.232895703 +0000 UTC m=+1130.524594014" Feb 17 20:27:35 crc 
kubenswrapper[4793]: I0217 20:27:35.332820 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6764d4c5d4-r2vt4"] Feb 17 20:27:35 crc kubenswrapper[4793]: I0217 20:27:35.364813 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5bc6776d6b-sktrk"] Feb 17 20:27:35 crc kubenswrapper[4793]: I0217 20:27:35.370611 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5bc6776d6b-sktrk" Feb 17 20:27:35 crc kubenswrapper[4793]: I0217 20:27:35.377304 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 17 20:27:35 crc kubenswrapper[4793]: I0217 20:27:35.385887 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 17 20:27:35 crc kubenswrapper[4793]: I0217 20:27:35.387774 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5bc6776d6b-sktrk"] Feb 17 20:27:35 crc kubenswrapper[4793]: I0217 20:27:35.538255 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cc271e6-91be-4e95-b9b8-16fc6a5dbeb6-config-data\") pod \"barbican-api-5bc6776d6b-sktrk\" (UID: \"5cc271e6-91be-4e95-b9b8-16fc6a5dbeb6\") " pod="openstack/barbican-api-5bc6776d6b-sktrk" Feb 17 20:27:35 crc kubenswrapper[4793]: I0217 20:27:35.538295 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cc271e6-91be-4e95-b9b8-16fc6a5dbeb6-public-tls-certs\") pod \"barbican-api-5bc6776d6b-sktrk\" (UID: \"5cc271e6-91be-4e95-b9b8-16fc6a5dbeb6\") " pod="openstack/barbican-api-5bc6776d6b-sktrk" Feb 17 20:27:35 crc kubenswrapper[4793]: I0217 20:27:35.538338 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/5cc271e6-91be-4e95-b9b8-16fc6a5dbeb6-combined-ca-bundle\") pod \"barbican-api-5bc6776d6b-sktrk\" (UID: \"5cc271e6-91be-4e95-b9b8-16fc6a5dbeb6\") " pod="openstack/barbican-api-5bc6776d6b-sktrk" Feb 17 20:27:35 crc kubenswrapper[4793]: I0217 20:27:35.538373 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cc271e6-91be-4e95-b9b8-16fc6a5dbeb6-internal-tls-certs\") pod \"barbican-api-5bc6776d6b-sktrk\" (UID: \"5cc271e6-91be-4e95-b9b8-16fc6a5dbeb6\") " pod="openstack/barbican-api-5bc6776d6b-sktrk" Feb 17 20:27:35 crc kubenswrapper[4793]: I0217 20:27:35.538388 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5cc271e6-91be-4e95-b9b8-16fc6a5dbeb6-logs\") pod \"barbican-api-5bc6776d6b-sktrk\" (UID: \"5cc271e6-91be-4e95-b9b8-16fc6a5dbeb6\") " pod="openstack/barbican-api-5bc6776d6b-sktrk" Feb 17 20:27:35 crc kubenswrapper[4793]: I0217 20:27:35.538459 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bswl4\" (UniqueName: \"kubernetes.io/projected/5cc271e6-91be-4e95-b9b8-16fc6a5dbeb6-kube-api-access-bswl4\") pod \"barbican-api-5bc6776d6b-sktrk\" (UID: \"5cc271e6-91be-4e95-b9b8-16fc6a5dbeb6\") " pod="openstack/barbican-api-5bc6776d6b-sktrk" Feb 17 20:27:35 crc kubenswrapper[4793]: I0217 20:27:35.538505 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5cc271e6-91be-4e95-b9b8-16fc6a5dbeb6-config-data-custom\") pod \"barbican-api-5bc6776d6b-sktrk\" (UID: \"5cc271e6-91be-4e95-b9b8-16fc6a5dbeb6\") " pod="openstack/barbican-api-5bc6776d6b-sktrk" Feb 17 20:27:35 crc kubenswrapper[4793]: I0217 20:27:35.641489 4793 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-bswl4\" (UniqueName: \"kubernetes.io/projected/5cc271e6-91be-4e95-b9b8-16fc6a5dbeb6-kube-api-access-bswl4\") pod \"barbican-api-5bc6776d6b-sktrk\" (UID: \"5cc271e6-91be-4e95-b9b8-16fc6a5dbeb6\") " pod="openstack/barbican-api-5bc6776d6b-sktrk" Feb 17 20:27:35 crc kubenswrapper[4793]: I0217 20:27:35.642281 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5cc271e6-91be-4e95-b9b8-16fc6a5dbeb6-config-data-custom\") pod \"barbican-api-5bc6776d6b-sktrk\" (UID: \"5cc271e6-91be-4e95-b9b8-16fc6a5dbeb6\") " pod="openstack/barbican-api-5bc6776d6b-sktrk" Feb 17 20:27:35 crc kubenswrapper[4793]: I0217 20:27:35.642424 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cc271e6-91be-4e95-b9b8-16fc6a5dbeb6-config-data\") pod \"barbican-api-5bc6776d6b-sktrk\" (UID: \"5cc271e6-91be-4e95-b9b8-16fc6a5dbeb6\") " pod="openstack/barbican-api-5bc6776d6b-sktrk" Feb 17 20:27:35 crc kubenswrapper[4793]: I0217 20:27:35.642503 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cc271e6-91be-4e95-b9b8-16fc6a5dbeb6-public-tls-certs\") pod \"barbican-api-5bc6776d6b-sktrk\" (UID: \"5cc271e6-91be-4e95-b9b8-16fc6a5dbeb6\") " pod="openstack/barbican-api-5bc6776d6b-sktrk" Feb 17 20:27:35 crc kubenswrapper[4793]: I0217 20:27:35.642602 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cc271e6-91be-4e95-b9b8-16fc6a5dbeb6-combined-ca-bundle\") pod \"barbican-api-5bc6776d6b-sktrk\" (UID: \"5cc271e6-91be-4e95-b9b8-16fc6a5dbeb6\") " pod="openstack/barbican-api-5bc6776d6b-sktrk" Feb 17 20:27:35 crc kubenswrapper[4793]: I0217 20:27:35.642715 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cc271e6-91be-4e95-b9b8-16fc6a5dbeb6-internal-tls-certs\") pod \"barbican-api-5bc6776d6b-sktrk\" (UID: \"5cc271e6-91be-4e95-b9b8-16fc6a5dbeb6\") " pod="openstack/barbican-api-5bc6776d6b-sktrk" Feb 17 20:27:35 crc kubenswrapper[4793]: I0217 20:27:35.642786 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5cc271e6-91be-4e95-b9b8-16fc6a5dbeb6-logs\") pod \"barbican-api-5bc6776d6b-sktrk\" (UID: \"5cc271e6-91be-4e95-b9b8-16fc6a5dbeb6\") " pod="openstack/barbican-api-5bc6776d6b-sktrk" Feb 17 20:27:35 crc kubenswrapper[4793]: I0217 20:27:35.644317 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5cc271e6-91be-4e95-b9b8-16fc6a5dbeb6-logs\") pod \"barbican-api-5bc6776d6b-sktrk\" (UID: \"5cc271e6-91be-4e95-b9b8-16fc6a5dbeb6\") " pod="openstack/barbican-api-5bc6776d6b-sktrk" Feb 17 20:27:35 crc kubenswrapper[4793]: I0217 20:27:35.650117 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cc271e6-91be-4e95-b9b8-16fc6a5dbeb6-config-data\") pod \"barbican-api-5bc6776d6b-sktrk\" (UID: \"5cc271e6-91be-4e95-b9b8-16fc6a5dbeb6\") " pod="openstack/barbican-api-5bc6776d6b-sktrk" Feb 17 20:27:35 crc kubenswrapper[4793]: I0217 20:27:35.651257 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5cc271e6-91be-4e95-b9b8-16fc6a5dbeb6-config-data-custom\") pod \"barbican-api-5bc6776d6b-sktrk\" (UID: \"5cc271e6-91be-4e95-b9b8-16fc6a5dbeb6\") " pod="openstack/barbican-api-5bc6776d6b-sktrk" Feb 17 20:27:35 crc kubenswrapper[4793]: I0217 20:27:35.653735 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cc271e6-91be-4e95-b9b8-16fc6a5dbeb6-public-tls-certs\") 
pod \"barbican-api-5bc6776d6b-sktrk\" (UID: \"5cc271e6-91be-4e95-b9b8-16fc6a5dbeb6\") " pod="openstack/barbican-api-5bc6776d6b-sktrk" Feb 17 20:27:35 crc kubenswrapper[4793]: I0217 20:27:35.657863 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cc271e6-91be-4e95-b9b8-16fc6a5dbeb6-internal-tls-certs\") pod \"barbican-api-5bc6776d6b-sktrk\" (UID: \"5cc271e6-91be-4e95-b9b8-16fc6a5dbeb6\") " pod="openstack/barbican-api-5bc6776d6b-sktrk" Feb 17 20:27:35 crc kubenswrapper[4793]: I0217 20:27:35.662560 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bswl4\" (UniqueName: \"kubernetes.io/projected/5cc271e6-91be-4e95-b9b8-16fc6a5dbeb6-kube-api-access-bswl4\") pod \"barbican-api-5bc6776d6b-sktrk\" (UID: \"5cc271e6-91be-4e95-b9b8-16fc6a5dbeb6\") " pod="openstack/barbican-api-5bc6776d6b-sktrk" Feb 17 20:27:35 crc kubenswrapper[4793]: I0217 20:27:35.662591 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cc271e6-91be-4e95-b9b8-16fc6a5dbeb6-combined-ca-bundle\") pod \"barbican-api-5bc6776d6b-sktrk\" (UID: \"5cc271e6-91be-4e95-b9b8-16fc6a5dbeb6\") " pod="openstack/barbican-api-5bc6776d6b-sktrk" Feb 17 20:27:35 crc kubenswrapper[4793]: I0217 20:27:35.693922 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5bc6776d6b-sktrk" Feb 17 20:27:36 crc kubenswrapper[4793]: I0217 20:27:36.209171 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"71c55328-b65e-4471-b5a4-228ae3dbeb8d","Type":"ContainerStarted","Data":"a8259f47fcd5774bfff395f96a05f53de52d521b51f5a2aad6ff343d17a1d14c"} Feb 17 20:27:36 crc kubenswrapper[4793]: I0217 20:27:36.211396 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6754ff86f4-gttvc" event={"ID":"66324f49-be26-4cca-a237-cf6a31ab771f","Type":"ContainerStarted","Data":"a159f40005f0fb7016e43bb8172d4d61432980ff231d5c4fdbf64d884b3e4d53"} Feb 17 20:27:36 crc kubenswrapper[4793]: I0217 20:27:36.211578 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-79779c64b9-54jgr" Feb 17 20:27:37 crc kubenswrapper[4793]: I0217 20:27:37.027864 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5bc6776d6b-sktrk"] Feb 17 20:27:37 crc kubenswrapper[4793]: I0217 20:27:37.233542 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-876d8fd55-vqq67" event={"ID":"8b6515a5-0b19-48c6-8dce-c3765bbe9087","Type":"ContainerStarted","Data":"40d20478e399d554a2f6e6b18ff69fb9393b96b7cefca6f99b22d3068c24b314"} Feb 17 20:27:37 crc kubenswrapper[4793]: I0217 20:27:37.239301 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fddbf755c-j4r2b" event={"ID":"c06f6156-bc74-4ece-88f4-09626b053748","Type":"ContainerStarted","Data":"c42699e08de2696cd98565a913b4a74a472e5c4aeb05d2edb7d19879ba4aadd1"} Feb 17 20:27:37 crc kubenswrapper[4793]: I0217 20:27:37.239770 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fddbf755c-j4r2b" Feb 17 20:27:37 crc kubenswrapper[4793]: I0217 20:27:37.244154 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-keystone-listener-8559cccd8-9zlg9" event={"ID":"5e6703f9-1619-4eb6-9a19-0e82153f6979","Type":"ContainerStarted","Data":"d3e45c1321d8b076b3d0ce5aee864c3b3b173655c2823133893eddf3393b6049"} Feb 17 20:27:37 crc kubenswrapper[4793]: I0217 20:27:37.248231 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"71c55328-b65e-4471-b5a4-228ae3dbeb8d","Type":"ContainerStarted","Data":"7e7d5e17e0da953faa3ca1aa9411f531fefc1c8f70083520731473410b582887"} Feb 17 20:27:37 crc kubenswrapper[4793]: I0217 20:27:37.248606 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 17 20:27:37 crc kubenswrapper[4793]: I0217 20:27:37.253186 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="71c55328-b65e-4471-b5a4-228ae3dbeb8d" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.180:9322/\": dial tcp 10.217.0.180:9322: connect: connection refused" Feb 17 20:27:37 crc kubenswrapper[4793]: I0217 20:27:37.256853 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6764d4c5d4-r2vt4" event={"ID":"c6290a12-a807-46af-adaa-bafaf7bbb26f","Type":"ContainerStarted","Data":"e20c606bae11a7626db494ea0f2e2bcd1804adcc3243d4297aa46524a89f2cc7"} Feb 17 20:27:37 crc kubenswrapper[4793]: I0217 20:27:37.257068 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6764d4c5d4-r2vt4" podUID="c6290a12-a807-46af-adaa-bafaf7bbb26f" containerName="barbican-api-log" containerID="cri-o://aaacb5c388538b7e5d9d89008300722c1b87c8c1d0f8914b510ab38d8845c0fe" gracePeriod=30 Feb 17 20:27:37 crc kubenswrapper[4793]: I0217 20:27:37.257151 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6764d4c5d4-r2vt4" Feb 17 20:27:37 crc kubenswrapper[4793]: I0217 20:27:37.257167 4793 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/barbican-api-6764d4c5d4-r2vt4" Feb 17 20:27:37 crc kubenswrapper[4793]: I0217 20:27:37.257195 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6764d4c5d4-r2vt4" podUID="c6290a12-a807-46af-adaa-bafaf7bbb26f" containerName="barbican-api" containerID="cri-o://e20c606bae11a7626db494ea0f2e2bcd1804adcc3243d4297aa46524a89f2cc7" gracePeriod=30 Feb 17 20:27:37 crc kubenswrapper[4793]: I0217 20:27:37.267792 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fddbf755c-j4r2b" podStartSLOduration=6.267774259 podStartE2EDuration="6.267774259s" podCreationTimestamp="2026-02-17 20:27:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:27:37.257621656 +0000 UTC m=+1132.549319977" watchObservedRunningTime="2026-02-17 20:27:37.267774259 +0000 UTC m=+1132.559472570" Feb 17 20:27:37 crc kubenswrapper[4793]: I0217 20:27:37.271356 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6754ff86f4-gttvc" event={"ID":"66324f49-be26-4cca-a237-cf6a31ab771f","Type":"ContainerStarted","Data":"82c7dd80439b9e2588c2e0e26682e953de661e158ea27498460087c68f291070"} Feb 17 20:27:37 crc kubenswrapper[4793]: I0217 20:27:37.272262 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6754ff86f4-gttvc" Feb 17 20:27:37 crc kubenswrapper[4793]: I0217 20:27:37.272473 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6754ff86f4-gttvc" Feb 17 20:27:37 crc kubenswrapper[4793]: I0217 20:27:37.291662 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6764d4c5d4-r2vt4" podStartSLOduration=6.291643122 podStartE2EDuration="6.291643122s" podCreationTimestamp="2026-02-17 20:27:31 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:27:37.284570236 +0000 UTC m=+1132.576268547" watchObservedRunningTime="2026-02-17 20:27:37.291643122 +0000 UTC m=+1132.583341433" Feb 17 20:27:37 crc kubenswrapper[4793]: I0217 20:27:37.291901 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5bc6776d6b-sktrk" event={"ID":"5cc271e6-91be-4e95-b9b8-16fc6a5dbeb6","Type":"ContainerStarted","Data":"beaa2b4632c85392cfb4cd576312e315edebfd44855e836cfd97a975d8d71400"} Feb 17 20:27:37 crc kubenswrapper[4793]: I0217 20:27:37.357376 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=5.357358145 podStartE2EDuration="5.357358145s" podCreationTimestamp="2026-02-17 20:27:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:27:37.302609154 +0000 UTC m=+1132.594307465" watchObservedRunningTime="2026-02-17 20:27:37.357358145 +0000 UTC m=+1132.649056456" Feb 17 20:27:37 crc kubenswrapper[4793]: I0217 20:27:37.363287 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6754ff86f4-gttvc" podStartSLOduration=6.363271552 podStartE2EDuration="6.363271552s" podCreationTimestamp="2026-02-17 20:27:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:27:37.335181704 +0000 UTC m=+1132.626880015" watchObservedRunningTime="2026-02-17 20:27:37.363271552 +0000 UTC m=+1132.654969873" Feb 17 20:27:37 crc kubenswrapper[4793]: I0217 20:27:37.488561 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 17 20:27:38 crc kubenswrapper[4793]: I0217 20:27:38.306011 4793 generic.go:334] "Generic (PLEG): container finished" 
podID="aedc67e8-05ec-44a4-b1f2-a18d2fde80b2" containerID="0181b217e76523c8a70ed84dfc08fc0f5d04270a537307e4a06fd2e050ca4abd" exitCode=1 Feb 17 20:27:38 crc kubenswrapper[4793]: I0217 20:27:38.306251 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"aedc67e8-05ec-44a4-b1f2-a18d2fde80b2","Type":"ContainerDied","Data":"0181b217e76523c8a70ed84dfc08fc0f5d04270a537307e4a06fd2e050ca4abd"} Feb 17 20:27:38 crc kubenswrapper[4793]: I0217 20:27:38.307394 4793 scope.go:117] "RemoveContainer" containerID="0181b217e76523c8a70ed84dfc08fc0f5d04270a537307e4a06fd2e050ca4abd" Feb 17 20:27:38 crc kubenswrapper[4793]: I0217 20:27:38.308996 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-876d8fd55-vqq67" event={"ID":"8b6515a5-0b19-48c6-8dce-c3765bbe9087","Type":"ContainerStarted","Data":"da69943acf316bca2d068d471877872391b2fe20f4fd00b579c611336a55932f"} Feb 17 20:27:38 crc kubenswrapper[4793]: I0217 20:27:38.313033 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-8559cccd8-9zlg9" event={"ID":"5e6703f9-1619-4eb6-9a19-0e82153f6979","Type":"ContainerStarted","Data":"b6e0c076ba15a254ad90581164116f0b41f9185ec18059e65115b59b0799e07d"} Feb 17 20:27:38 crc kubenswrapper[4793]: I0217 20:27:38.315462 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dbff57d7b-c7kmz" event={"ID":"347b7ec4-6cfe-431e-b6b4-70c0933118c6","Type":"ContainerStarted","Data":"58c04913be025017b0e008ceb1a089cd3097d9b92638bb6fecb973a23a8c18af"} Feb 17 20:27:38 crc kubenswrapper[4793]: I0217 20:27:38.316057 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7dbff57d7b-c7kmz" Feb 17 20:27:38 crc kubenswrapper[4793]: I0217 20:27:38.316097 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7dbff57d7b-c7kmz" Feb 17 20:27:38 crc kubenswrapper[4793]: I0217 
20:27:38.330799 4793 generic.go:334] "Generic (PLEG): container finished" podID="c6290a12-a807-46af-adaa-bafaf7bbb26f" containerID="aaacb5c388538b7e5d9d89008300722c1b87c8c1d0f8914b510ab38d8845c0fe" exitCode=143 Feb 17 20:27:38 crc kubenswrapper[4793]: I0217 20:27:38.330866 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6764d4c5d4-r2vt4" event={"ID":"c6290a12-a807-46af-adaa-bafaf7bbb26f","Type":"ContainerDied","Data":"aaacb5c388538b7e5d9d89008300722c1b87c8c1d0f8914b510ab38d8845c0fe"} Feb 17 20:27:38 crc kubenswrapper[4793]: I0217 20:27:38.338898 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5994c4b6d-gdq4r" event={"ID":"c3f63aca-a302-4b5a-9aab-df2030cb30a0","Type":"ContainerStarted","Data":"fab2a271136eccab49319c662fb392c7f6477265850ca12b564b74483e994f21"} Feb 17 20:27:38 crc kubenswrapper[4793]: I0217 20:27:38.338948 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5994c4b6d-gdq4r" event={"ID":"c3f63aca-a302-4b5a-9aab-df2030cb30a0","Type":"ContainerStarted","Data":"0e9866fb1ad02db739a08e810ecb2d34d51cf18730eb2ef42a2593ce2258792e"} Feb 17 20:27:38 crc kubenswrapper[4793]: I0217 20:27:38.345734 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-d6f85865c-9pvnx" event={"ID":"cfbd5eaf-abcf-440a-a1d9-c17a8a89028b","Type":"ContainerStarted","Data":"3c70542f1e1b1df11a801f2057afe51ae3b89ee7f561476b01cb4d1eb5b83015"} Feb 17 20:27:38 crc kubenswrapper[4793]: I0217 20:27:38.345775 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-d6f85865c-9pvnx" event={"ID":"cfbd5eaf-abcf-440a-a1d9-c17a8a89028b","Type":"ContainerStarted","Data":"a24e9f49ac78e53ad807ec69e4478d6b1ad618723f753200d9163156ff7b8c98"} Feb 17 20:27:38 crc kubenswrapper[4793]: I0217 20:27:38.350214 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5bc6776d6b-sktrk" 
event={"ID":"5cc271e6-91be-4e95-b9b8-16fc6a5dbeb6","Type":"ContainerStarted","Data":"6132680f88e43920e7c7a661b45cc068e46923cb005f04a107127c408578816e"} Feb 17 20:27:38 crc kubenswrapper[4793]: I0217 20:27:38.350269 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5bc6776d6b-sktrk" event={"ID":"5cc271e6-91be-4e95-b9b8-16fc6a5dbeb6","Type":"ContainerStarted","Data":"36ae714c40e3d4d011633f6b638eb1257e0cc198ba73350b8aca2c594b44b2df"} Feb 17 20:27:38 crc kubenswrapper[4793]: I0217 20:27:38.354149 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5bc6776d6b-sktrk" Feb 17 20:27:38 crc kubenswrapper[4793]: I0217 20:27:38.354384 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5bc6776d6b-sktrk" Feb 17 20:27:38 crc kubenswrapper[4793]: I0217 20:27:38.383609 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-876d8fd55-vqq67" podStartSLOduration=3.737311078 podStartE2EDuration="6.383589785s" podCreationTimestamp="2026-02-17 20:27:32 +0000 UTC" firstStartedPulling="2026-02-17 20:27:33.905080359 +0000 UTC m=+1129.196778670" lastFinishedPulling="2026-02-17 20:27:36.551359066 +0000 UTC m=+1131.843057377" observedRunningTime="2026-02-17 20:27:38.342416712 +0000 UTC m=+1133.634115023" watchObservedRunningTime="2026-02-17 20:27:38.383589785 +0000 UTC m=+1133.675288096" Feb 17 20:27:38 crc kubenswrapper[4793]: I0217 20:27:38.391511 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-8559cccd8-9zlg9" podStartSLOduration=4.478595349 podStartE2EDuration="8.391489611s" podCreationTimestamp="2026-02-17 20:27:30 +0000 UTC" firstStartedPulling="2026-02-17 20:27:32.744253573 +0000 UTC m=+1128.035951884" lastFinishedPulling="2026-02-17 20:27:36.657147835 +0000 UTC m=+1131.948846146" observedRunningTime="2026-02-17 20:27:38.385109513 +0000 UTC 
m=+1133.676807824" watchObservedRunningTime="2026-02-17 20:27:38.391489611 +0000 UTC m=+1133.683187922" Feb 17 20:27:38 crc kubenswrapper[4793]: I0217 20:27:38.406816 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-d6f85865c-9pvnx"] Feb 17 20:27:38 crc kubenswrapper[4793]: I0217 20:27:38.437720 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7dbff57d7b-c7kmz" podStartSLOduration=6.437679149 podStartE2EDuration="6.437679149s" podCreationTimestamp="2026-02-17 20:27:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:27:38.414105993 +0000 UTC m=+1133.705804304" watchObservedRunningTime="2026-02-17 20:27:38.437679149 +0000 UTC m=+1133.729377460" Feb 17 20:27:38 crc kubenswrapper[4793]: I0217 20:27:38.441494 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-d6f85865c-9pvnx" podStartSLOduration=4.328471548 podStartE2EDuration="8.441482753s" podCreationTimestamp="2026-02-17 20:27:30 +0000 UTC" firstStartedPulling="2026-02-17 20:27:32.43909173 +0000 UTC m=+1127.730790041" lastFinishedPulling="2026-02-17 20:27:36.552102935 +0000 UTC m=+1131.843801246" observedRunningTime="2026-02-17 20:27:38.439681679 +0000 UTC m=+1133.731379990" watchObservedRunningTime="2026-02-17 20:27:38.441482753 +0000 UTC m=+1133.733181064" Feb 17 20:27:38 crc kubenswrapper[4793]: I0217 20:27:38.475796 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5994c4b6d-gdq4r" podStartSLOduration=3.776456291 podStartE2EDuration="6.475779886s" podCreationTimestamp="2026-02-17 20:27:32 +0000 UTC" firstStartedPulling="2026-02-17 20:27:33.904800392 +0000 UTC m=+1129.196498693" lastFinishedPulling="2026-02-17 20:27:36.604123977 +0000 UTC m=+1131.895822288" observedRunningTime="2026-02-17 20:27:38.458610369 
+0000 UTC m=+1133.750308670" watchObservedRunningTime="2026-02-17 20:27:38.475779886 +0000 UTC m=+1133.767478187" Feb 17 20:27:38 crc kubenswrapper[4793]: I0217 20:27:38.485534 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5bc6776d6b-sktrk" podStartSLOduration=3.485521828 podStartE2EDuration="3.485521828s" podCreationTimestamp="2026-02-17 20:27:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:27:38.480744319 +0000 UTC m=+1133.772442630" watchObservedRunningTime="2026-02-17 20:27:38.485521828 +0000 UTC m=+1133.777220139" Feb 17 20:27:38 crc kubenswrapper[4793]: I0217 20:27:38.511602 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-8559cccd8-9zlg9"] Feb 17 20:27:39 crc kubenswrapper[4793]: I0217 20:27:39.374448 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"aedc67e8-05ec-44a4-b1f2-a18d2fde80b2","Type":"ContainerStarted","Data":"dbb337f685980e2c64197a2c717a3b2edd7c5078beac0a5e37c6e61d8dc72ef6"} Feb 17 20:27:39 crc kubenswrapper[4793]: I0217 20:27:39.376214 4793 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 20:27:40 crc kubenswrapper[4793]: I0217 20:27:40.381073 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-d6f85865c-9pvnx" podUID="cfbd5eaf-abcf-440a-a1d9-c17a8a89028b" containerName="barbican-worker-log" containerID="cri-o://a24e9f49ac78e53ad807ec69e4478d6b1ad618723f753200d9163156ff7b8c98" gracePeriod=30 Feb 17 20:27:40 crc kubenswrapper[4793]: I0217 20:27:40.382060 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-d6f85865c-9pvnx" podUID="cfbd5eaf-abcf-440a-a1d9-c17a8a89028b" containerName="barbican-worker" 
containerID="cri-o://3c70542f1e1b1df11a801f2057afe51ae3b89ee7f561476b01cb4d1eb5b83015" gracePeriod=30 Feb 17 20:27:40 crc kubenswrapper[4793]: I0217 20:27:40.382367 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-8559cccd8-9zlg9" podUID="5e6703f9-1619-4eb6-9a19-0e82153f6979" containerName="barbican-keystone-listener-log" containerID="cri-o://d3e45c1321d8b076b3d0ce5aee864c3b3b173655c2823133893eddf3393b6049" gracePeriod=30 Feb 17 20:27:40 crc kubenswrapper[4793]: I0217 20:27:40.382446 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-8559cccd8-9zlg9" podUID="5e6703f9-1619-4eb6-9a19-0e82153f6979" containerName="barbican-keystone-listener" containerID="cri-o://b6e0c076ba15a254ad90581164116f0b41f9185ec18059e65115b59b0799e07d" gracePeriod=30 Feb 17 20:27:40 crc kubenswrapper[4793]: I0217 20:27:40.746307 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 17 20:27:40 crc kubenswrapper[4793]: I0217 20:27:40.991347 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 17 20:27:40 crc kubenswrapper[4793]: I0217 20:27:40.991401 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 17 20:27:41 crc kubenswrapper[4793]: I0217 20:27:41.036523 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 17 20:27:41 crc kubenswrapper[4793]: I0217 20:27:41.049975 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 17 20:27:41 crc kubenswrapper[4793]: I0217 20:27:41.226908 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 17 20:27:41 crc 
kubenswrapper[4793]: I0217 20:27:41.226973 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 17 20:27:41 crc kubenswrapper[4793]: I0217 20:27:41.286908 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 17 20:27:41 crc kubenswrapper[4793]: I0217 20:27:41.297990 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7dbff57d7b-c7kmz" podUID="347b7ec4-6cfe-431e-b6b4-70c0933118c6" containerName="barbican-api-log" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 20:27:41 crc kubenswrapper[4793]: I0217 20:27:41.302488 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 17 20:27:41 crc kubenswrapper[4793]: I0217 20:27:41.391318 4793 generic.go:334] "Generic (PLEG): container finished" podID="cfbd5eaf-abcf-440a-a1d9-c17a8a89028b" containerID="3c70542f1e1b1df11a801f2057afe51ae3b89ee7f561476b01cb4d1eb5b83015" exitCode=0 Feb 17 20:27:41 crc kubenswrapper[4793]: I0217 20:27:41.391357 4793 generic.go:334] "Generic (PLEG): container finished" podID="cfbd5eaf-abcf-440a-a1d9-c17a8a89028b" containerID="a24e9f49ac78e53ad807ec69e4478d6b1ad618723f753200d9163156ff7b8c98" exitCode=143 Feb 17 20:27:41 crc kubenswrapper[4793]: I0217 20:27:41.391384 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-d6f85865c-9pvnx" event={"ID":"cfbd5eaf-abcf-440a-a1d9-c17a8a89028b","Type":"ContainerDied","Data":"3c70542f1e1b1df11a801f2057afe51ae3b89ee7f561476b01cb4d1eb5b83015"} Feb 17 20:27:41 crc kubenswrapper[4793]: I0217 20:27:41.391427 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-d6f85865c-9pvnx" event={"ID":"cfbd5eaf-abcf-440a-a1d9-c17a8a89028b","Type":"ContainerDied","Data":"a24e9f49ac78e53ad807ec69e4478d6b1ad618723f753200d9163156ff7b8c98"} Feb 17 
20:27:41 crc kubenswrapper[4793]: I0217 20:27:41.393499 4793 generic.go:334] "Generic (PLEG): container finished" podID="95da6bd5-17d8-4402-bb8a-87b0c03feebf" containerID="58b0cd3fe7288ff4fdf482c0fc5ff538635fd3d320a9e30c90486d2fdd8cd090" exitCode=0 Feb 17 20:27:41 crc kubenswrapper[4793]: I0217 20:27:41.393564 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-t9mdr" event={"ID":"95da6bd5-17d8-4402-bb8a-87b0c03feebf","Type":"ContainerDied","Data":"58b0cd3fe7288ff4fdf482c0fc5ff538635fd3d320a9e30c90486d2fdd8cd090"} Feb 17 20:27:41 crc kubenswrapper[4793]: I0217 20:27:41.395933 4793 generic.go:334] "Generic (PLEG): container finished" podID="5e6703f9-1619-4eb6-9a19-0e82153f6979" containerID="b6e0c076ba15a254ad90581164116f0b41f9185ec18059e65115b59b0799e07d" exitCode=0 Feb 17 20:27:41 crc kubenswrapper[4793]: I0217 20:27:41.395958 4793 generic.go:334] "Generic (PLEG): container finished" podID="5e6703f9-1619-4eb6-9a19-0e82153f6979" containerID="d3e45c1321d8b076b3d0ce5aee864c3b3b173655c2823133893eddf3393b6049" exitCode=143 Feb 17 20:27:41 crc kubenswrapper[4793]: I0217 20:27:41.395988 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-8559cccd8-9zlg9" event={"ID":"5e6703f9-1619-4eb6-9a19-0e82153f6979","Type":"ContainerDied","Data":"b6e0c076ba15a254ad90581164116f0b41f9185ec18059e65115b59b0799e07d"} Feb 17 20:27:41 crc kubenswrapper[4793]: I0217 20:27:41.396049 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-8559cccd8-9zlg9" event={"ID":"5e6703f9-1619-4eb6-9a19-0e82153f6979","Type":"ContainerDied","Data":"d3e45c1321d8b076b3d0ce5aee864c3b3b173655c2823133893eddf3393b6049"} Feb 17 20:27:41 crc kubenswrapper[4793]: I0217 20:27:41.396707 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 17 20:27:41 crc kubenswrapper[4793]: I0217 20:27:41.397950 4793 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 17 20:27:41 crc kubenswrapper[4793]: I0217 20:27:41.397967 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 17 20:27:41 crc kubenswrapper[4793]: I0217 20:27:41.397978 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 17 20:27:41 crc kubenswrapper[4793]: I0217 20:27:41.900529 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 17 20:27:41 crc kubenswrapper[4793]: E0217 20:27:41.901065 4793 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of dbb337f685980e2c64197a2c717a3b2edd7c5078beac0a5e37c6e61d8dc72ef6 is running failed: container process not found" containerID="dbb337f685980e2c64197a2c717a3b2edd7c5078beac0a5e37c6e61d8dc72ef6" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Feb 17 20:27:41 crc kubenswrapper[4793]: E0217 20:27:41.901392 4793 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of dbb337f685980e2c64197a2c717a3b2edd7c5078beac0a5e37c6e61d8dc72ef6 is running failed: container process not found" containerID="dbb337f685980e2c64197a2c717a3b2edd7c5078beac0a5e37c6e61d8dc72ef6" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Feb 17 20:27:41 crc kubenswrapper[4793]: E0217 20:27:41.901904 4793 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of dbb337f685980e2c64197a2c717a3b2edd7c5078beac0a5e37c6e61d8dc72ef6 is running failed: container process not found" containerID="dbb337f685980e2c64197a2c717a3b2edd7c5078beac0a5e37c6e61d8dc72ef6" 
cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Feb 17 20:27:41 crc kubenswrapper[4793]: E0217 20:27:41.901932 4793 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of dbb337f685980e2c64197a2c717a3b2edd7c5078beac0a5e37c6e61d8dc72ef6 is running failed: container process not found" probeType="Startup" pod="openstack/watcher-decision-engine-0" podUID="aedc67e8-05ec-44a4-b1f2-a18d2fde80b2" containerName="watcher-decision-engine" Feb 17 20:27:42 crc kubenswrapper[4793]: I0217 20:27:42.428272 4793 generic.go:334] "Generic (PLEG): container finished" podID="aedc67e8-05ec-44a4-b1f2-a18d2fde80b2" containerID="dbb337f685980e2c64197a2c717a3b2edd7c5078beac0a5e37c6e61d8dc72ef6" exitCode=1 Feb 17 20:27:42 crc kubenswrapper[4793]: I0217 20:27:42.428336 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"aedc67e8-05ec-44a4-b1f2-a18d2fde80b2","Type":"ContainerDied","Data":"dbb337f685980e2c64197a2c717a3b2edd7c5078beac0a5e37c6e61d8dc72ef6"} Feb 17 20:27:42 crc kubenswrapper[4793]: I0217 20:27:42.428680 4793 scope.go:117] "RemoveContainer" containerID="0181b217e76523c8a70ed84dfc08fc0f5d04270a537307e4a06fd2e050ca4abd" Feb 17 20:27:42 crc kubenswrapper[4793]: I0217 20:27:42.429482 4793 scope.go:117] "RemoveContainer" containerID="dbb337f685980e2c64197a2c717a3b2edd7c5078beac0a5e37c6e61d8dc72ef6" Feb 17 20:27:42 crc kubenswrapper[4793]: E0217 20:27:42.429778 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(aedc67e8-05ec-44a4-b1f2-a18d2fde80b2)\"" pod="openstack/watcher-decision-engine-0" podUID="aedc67e8-05ec-44a4-b1f2-a18d2fde80b2" Feb 17 20:27:42 crc kubenswrapper[4793]: I0217 20:27:42.489181 4793 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Feb 17 20:27:42 crc kubenswrapper[4793]: I0217 20:27:42.520076 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Feb 17 20:27:42 crc kubenswrapper[4793]: I0217 20:27:42.801321 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7dbff57d7b-c7kmz" Feb 17 20:27:43 crc kubenswrapper[4793]: I0217 20:27:43.439193 4793 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 20:27:43 crc kubenswrapper[4793]: I0217 20:27:43.439468 4793 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 20:27:43 crc kubenswrapper[4793]: I0217 20:27:43.439714 4793 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 20:27:43 crc kubenswrapper[4793]: I0217 20:27:43.439739 4793 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 20:27:43 crc kubenswrapper[4793]: E0217 20:27:43.443114 4793 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode573ed33_28a6_4a91_a8ca_e5d1c87b2f60.slice\": RecentStats: unable to find data in memory cache]" Feb 17 20:27:43 crc kubenswrapper[4793]: I0217 20:27:43.493464 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 17 20:27:43 crc kubenswrapper[4793]: I0217 20:27:43.539100 4793 scope.go:117] "RemoveContainer" containerID="a9952a3fe14e8b345f3d22c7384c83944f30d42e627058ed70589fdf0cc7470e" Feb 17 20:27:43 crc kubenswrapper[4793]: I0217 20:27:43.564196 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6764d4c5d4-r2vt4" Feb 17 20:27:43 crc kubenswrapper[4793]: I0217 20:27:43.763388 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-t9mdr" Feb 17 20:27:43 crc kubenswrapper[4793]: I0217 20:27:43.830348 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/95da6bd5-17d8-4402-bb8a-87b0c03feebf-etc-machine-id\") pod \"95da6bd5-17d8-4402-bb8a-87b0c03feebf\" (UID: \"95da6bd5-17d8-4402-bb8a-87b0c03feebf\") " Feb 17 20:27:43 crc kubenswrapper[4793]: I0217 20:27:43.830433 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v98mr\" (UniqueName: \"kubernetes.io/projected/95da6bd5-17d8-4402-bb8a-87b0c03feebf-kube-api-access-v98mr\") pod \"95da6bd5-17d8-4402-bb8a-87b0c03feebf\" (UID: \"95da6bd5-17d8-4402-bb8a-87b0c03feebf\") " Feb 17 20:27:43 crc kubenswrapper[4793]: I0217 20:27:43.830466 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95da6bd5-17d8-4402-bb8a-87b0c03feebf-combined-ca-bundle\") pod \"95da6bd5-17d8-4402-bb8a-87b0c03feebf\" (UID: \"95da6bd5-17d8-4402-bb8a-87b0c03feebf\") " Feb 17 20:27:43 crc kubenswrapper[4793]: I0217 20:27:43.830499 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95da6bd5-17d8-4402-bb8a-87b0c03feebf-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "95da6bd5-17d8-4402-bb8a-87b0c03feebf" (UID: "95da6bd5-17d8-4402-bb8a-87b0c03feebf"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 20:27:43 crc kubenswrapper[4793]: I0217 20:27:43.830566 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95da6bd5-17d8-4402-bb8a-87b0c03feebf-scripts\") pod \"95da6bd5-17d8-4402-bb8a-87b0c03feebf\" (UID: \"95da6bd5-17d8-4402-bb8a-87b0c03feebf\") " Feb 17 20:27:43 crc kubenswrapper[4793]: I0217 20:27:43.830619 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95da6bd5-17d8-4402-bb8a-87b0c03feebf-config-data\") pod \"95da6bd5-17d8-4402-bb8a-87b0c03feebf\" (UID: \"95da6bd5-17d8-4402-bb8a-87b0c03feebf\") " Feb 17 20:27:43 crc kubenswrapper[4793]: I0217 20:27:43.830758 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/95da6bd5-17d8-4402-bb8a-87b0c03feebf-db-sync-config-data\") pod \"95da6bd5-17d8-4402-bb8a-87b0c03feebf\" (UID: \"95da6bd5-17d8-4402-bb8a-87b0c03feebf\") " Feb 17 20:27:43 crc kubenswrapper[4793]: I0217 20:27:43.833662 4793 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/95da6bd5-17d8-4402-bb8a-87b0c03feebf-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:43 crc kubenswrapper[4793]: I0217 20:27:43.840815 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95da6bd5-17d8-4402-bb8a-87b0c03feebf-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "95da6bd5-17d8-4402-bb8a-87b0c03feebf" (UID: "95da6bd5-17d8-4402-bb8a-87b0c03feebf"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:27:43 crc kubenswrapper[4793]: I0217 20:27:43.852000 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95da6bd5-17d8-4402-bb8a-87b0c03feebf-scripts" (OuterVolumeSpecName: "scripts") pod "95da6bd5-17d8-4402-bb8a-87b0c03feebf" (UID: "95da6bd5-17d8-4402-bb8a-87b0c03feebf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:27:43 crc kubenswrapper[4793]: I0217 20:27:43.855220 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95da6bd5-17d8-4402-bb8a-87b0c03feebf-kube-api-access-v98mr" (OuterVolumeSpecName: "kube-api-access-v98mr") pod "95da6bd5-17d8-4402-bb8a-87b0c03feebf" (UID: "95da6bd5-17d8-4402-bb8a-87b0c03feebf"). InnerVolumeSpecName "kube-api-access-v98mr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:27:43 crc kubenswrapper[4793]: I0217 20:27:43.868100 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95da6bd5-17d8-4402-bb8a-87b0c03feebf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95da6bd5-17d8-4402-bb8a-87b0c03feebf" (UID: "95da6bd5-17d8-4402-bb8a-87b0c03feebf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:27:43 crc kubenswrapper[4793]: I0217 20:27:43.884526 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95da6bd5-17d8-4402-bb8a-87b0c03feebf-config-data" (OuterVolumeSpecName: "config-data") pod "95da6bd5-17d8-4402-bb8a-87b0c03feebf" (UID: "95da6bd5-17d8-4402-bb8a-87b0c03feebf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:27:43 crc kubenswrapper[4793]: I0217 20:27:43.934875 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95da6bd5-17d8-4402-bb8a-87b0c03feebf-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:43 crc kubenswrapper[4793]: I0217 20:27:43.934901 4793 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/95da6bd5-17d8-4402-bb8a-87b0c03feebf-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:43 crc kubenswrapper[4793]: I0217 20:27:43.934914 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v98mr\" (UniqueName: \"kubernetes.io/projected/95da6bd5-17d8-4402-bb8a-87b0c03feebf-kube-api-access-v98mr\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:43 crc kubenswrapper[4793]: I0217 20:27:43.934922 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95da6bd5-17d8-4402-bb8a-87b0c03feebf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:43 crc kubenswrapper[4793]: I0217 20:27:43.934930 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95da6bd5-17d8-4402-bb8a-87b0c03feebf-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:44 crc kubenswrapper[4793]: I0217 20:27:44.243506 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7dbff57d7b-c7kmz" Feb 17 20:27:44 crc kubenswrapper[4793]: I0217 20:27:44.460159 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-t9mdr" event={"ID":"95da6bd5-17d8-4402-bb8a-87b0c03feebf","Type":"ContainerDied","Data":"88a2844af8ee5f653809a28e6f3870f34c40fb524f51c4d892095c2a5c11b5fc"} Feb 17 20:27:44 crc kubenswrapper[4793]: I0217 20:27:44.460202 4793 pod_container_deletor.go:80] "Container not found 
in pod's containers" containerID="88a2844af8ee5f653809a28e6f3870f34c40fb524f51c4d892095c2a5c11b5fc" Feb 17 20:27:44 crc kubenswrapper[4793]: I0217 20:27:44.460257 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-t9mdr" Feb 17 20:27:44 crc kubenswrapper[4793]: I0217 20:27:44.500179 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-75c8b5cf48-t8jmz" Feb 17 20:27:44 crc kubenswrapper[4793]: I0217 20:27:44.500728 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-75d6fd885d-fw6ln" Feb 17 20:27:44 crc kubenswrapper[4793]: I0217 20:27:44.915050 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 17 20:27:44 crc kubenswrapper[4793]: I0217 20:27:44.915424 4793 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 20:27:44 crc kubenswrapper[4793]: I0217 20:27:44.919517 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 17 20:27:44 crc kubenswrapper[4793]: I0217 20:27:44.931029 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 17 20:27:44 crc kubenswrapper[4793]: I0217 20:27:44.931116 4793 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.030119 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.095785 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 20:27:45 crc kubenswrapper[4793]: E0217 20:27:45.096274 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95da6bd5-17d8-4402-bb8a-87b0c03feebf" 
containerName="cinder-db-sync" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.096287 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="95da6bd5-17d8-4402-bb8a-87b0c03feebf" containerName="cinder-db-sync" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.096459 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="95da6bd5-17d8-4402-bb8a-87b0c03feebf" containerName="cinder-db-sync" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.097441 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.106561 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.107010 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.107224 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.107329 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-l2xdk" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.121656 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.164476 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e37eff3-1014-4cfd-9015-f51d8d7c4026-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6e37eff3-1014-4cfd-9015-f51d8d7c4026\") " pod="openstack/cinder-scheduler-0" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.164538 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8dkq\" 
(UniqueName: \"kubernetes.io/projected/6e37eff3-1014-4cfd-9015-f51d8d7c4026-kube-api-access-q8dkq\") pod \"cinder-scheduler-0\" (UID: \"6e37eff3-1014-4cfd-9015-f51d8d7c4026\") " pod="openstack/cinder-scheduler-0" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.164562 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e37eff3-1014-4cfd-9015-f51d8d7c4026-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6e37eff3-1014-4cfd-9015-f51d8d7c4026\") " pod="openstack/cinder-scheduler-0" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.164656 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e37eff3-1014-4cfd-9015-f51d8d7c4026-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6e37eff3-1014-4cfd-9015-f51d8d7c4026\") " pod="openstack/cinder-scheduler-0" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.164673 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e37eff3-1014-4cfd-9015-f51d8d7c4026-config-data\") pod \"cinder-scheduler-0\" (UID: \"6e37eff3-1014-4cfd-9015-f51d8d7c4026\") " pod="openstack/cinder-scheduler-0" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.164722 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e37eff3-1014-4cfd-9015-f51d8d7c4026-scripts\") pod \"cinder-scheduler-0\" (UID: \"6e37eff3-1014-4cfd-9015-f51d8d7c4026\") " pod="openstack/cinder-scheduler-0" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.191247 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-8559cccd8-9zlg9" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.227517 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fddbf755c-j4r2b"] Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.228077 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fddbf755c-j4r2b" podUID="c06f6156-bc74-4ece-88f4-09626b053748" containerName="dnsmasq-dns" containerID="cri-o://c42699e08de2696cd98565a913b4a74a472e5c4aeb05d2edb7d19879ba4aadd1" gracePeriod=10 Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.235909 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fddbf755c-j4r2b" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.270506 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e6703f9-1619-4eb6-9a19-0e82153f6979-logs\") pod \"5e6703f9-1619-4eb6-9a19-0e82153f6979\" (UID: \"5e6703f9-1619-4eb6-9a19-0e82153f6979\") " Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.270937 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crr87\" (UniqueName: \"kubernetes.io/projected/5e6703f9-1619-4eb6-9a19-0e82153f6979-kube-api-access-crr87\") pod \"5e6703f9-1619-4eb6-9a19-0e82153f6979\" (UID: \"5e6703f9-1619-4eb6-9a19-0e82153f6979\") " Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.271059 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e6703f9-1619-4eb6-9a19-0e82153f6979-config-data\") pod \"5e6703f9-1619-4eb6-9a19-0e82153f6979\" (UID: \"5e6703f9-1619-4eb6-9a19-0e82153f6979\") " Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.271209 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5e6703f9-1619-4eb6-9a19-0e82153f6979-config-data-custom\") pod \"5e6703f9-1619-4eb6-9a19-0e82153f6979\" (UID: \"5e6703f9-1619-4eb6-9a19-0e82153f6979\") " Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.271320 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e6703f9-1619-4eb6-9a19-0e82153f6979-combined-ca-bundle\") pod \"5e6703f9-1619-4eb6-9a19-0e82153f6979\" (UID: \"5e6703f9-1619-4eb6-9a19-0e82153f6979\") " Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.271871 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e37eff3-1014-4cfd-9015-f51d8d7c4026-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6e37eff3-1014-4cfd-9015-f51d8d7c4026\") " pod="openstack/cinder-scheduler-0" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.275131 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e6703f9-1619-4eb6-9a19-0e82153f6979-logs" (OuterVolumeSpecName: "logs") pod "5e6703f9-1619-4eb6-9a19-0e82153f6979" (UID: "5e6703f9-1619-4eb6-9a19-0e82153f6979"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.283783 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e37eff3-1014-4cfd-9015-f51d8d7c4026-config-data\") pod \"cinder-scheduler-0\" (UID: \"6e37eff3-1014-4cfd-9015-f51d8d7c4026\") " pod="openstack/cinder-scheduler-0" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.285674 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e37eff3-1014-4cfd-9015-f51d8d7c4026-scripts\") pod \"cinder-scheduler-0\" (UID: \"6e37eff3-1014-4cfd-9015-f51d8d7c4026\") " pod="openstack/cinder-scheduler-0" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.285940 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e37eff3-1014-4cfd-9015-f51d8d7c4026-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6e37eff3-1014-4cfd-9015-f51d8d7c4026\") " pod="openstack/cinder-scheduler-0" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.286119 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8dkq\" (UniqueName: \"kubernetes.io/projected/6e37eff3-1014-4cfd-9015-f51d8d7c4026-kube-api-access-q8dkq\") pod \"cinder-scheduler-0\" (UID: \"6e37eff3-1014-4cfd-9015-f51d8d7c4026\") " pod="openstack/cinder-scheduler-0" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.286252 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e37eff3-1014-4cfd-9015-f51d8d7c4026-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6e37eff3-1014-4cfd-9015-f51d8d7c4026\") " pod="openstack/cinder-scheduler-0" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.286539 4793 reconciler_common.go:293] "Volume 
detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e6703f9-1619-4eb6-9a19-0e82153f6979-logs\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.287009 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e37eff3-1014-4cfd-9015-f51d8d7c4026-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6e37eff3-1014-4cfd-9015-f51d8d7c4026\") " pod="openstack/cinder-scheduler-0" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.302457 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e37eff3-1014-4cfd-9015-f51d8d7c4026-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6e37eff3-1014-4cfd-9015-f51d8d7c4026\") " pod="openstack/cinder-scheduler-0" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.312417 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-574778f449-sqg2x"] Feb 17 20:27:45 crc kubenswrapper[4793]: E0217 20:27:45.312841 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e6703f9-1619-4eb6-9a19-0e82153f6979" containerName="barbican-keystone-listener" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.312853 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e6703f9-1619-4eb6-9a19-0e82153f6979" containerName="barbican-keystone-listener" Feb 17 20:27:45 crc kubenswrapper[4793]: E0217 20:27:45.312868 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e6703f9-1619-4eb6-9a19-0e82153f6979" containerName="barbican-keystone-listener-log" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.312874 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e6703f9-1619-4eb6-9a19-0e82153f6979" containerName="barbican-keystone-listener-log" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.313067 4793 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="5e6703f9-1619-4eb6-9a19-0e82153f6979" containerName="barbican-keystone-listener-log" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.313077 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e6703f9-1619-4eb6-9a19-0e82153f6979" containerName="barbican-keystone-listener" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.314051 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-574778f449-sqg2x" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.315251 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e37eff3-1014-4cfd-9015-f51d8d7c4026-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6e37eff3-1014-4cfd-9015-f51d8d7c4026\") " pod="openstack/cinder-scheduler-0" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.315584 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e37eff3-1014-4cfd-9015-f51d8d7c4026-scripts\") pod \"cinder-scheduler-0\" (UID: \"6e37eff3-1014-4cfd-9015-f51d8d7c4026\") " pod="openstack/cinder-scheduler-0" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.329898 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e37eff3-1014-4cfd-9015-f51d8d7c4026-config-data\") pod \"cinder-scheduler-0\" (UID: \"6e37eff3-1014-4cfd-9015-f51d8d7c4026\") " pod="openstack/cinder-scheduler-0" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.330902 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e6703f9-1619-4eb6-9a19-0e82153f6979-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5e6703f9-1619-4eb6-9a19-0e82153f6979" (UID: "5e6703f9-1619-4eb6-9a19-0e82153f6979"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.344004 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8dkq\" (UniqueName: \"kubernetes.io/projected/6e37eff3-1014-4cfd-9015-f51d8d7c4026-kube-api-access-q8dkq\") pod \"cinder-scheduler-0\" (UID: \"6e37eff3-1014-4cfd-9015-f51d8d7c4026\") " pod="openstack/cinder-scheduler-0" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.347733 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-574778f449-sqg2x"] Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.374749 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.376366 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.381854 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.388564 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/604ca62b-98a7-4023-b6fc-de75724d84a9-ovsdbserver-nb\") pod \"dnsmasq-dns-574778f449-sqg2x\" (UID: \"604ca62b-98a7-4023-b6fc-de75724d84a9\") " pod="openstack/dnsmasq-dns-574778f449-sqg2x" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.388664 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/604ca62b-98a7-4023-b6fc-de75724d84a9-dns-swift-storage-0\") pod \"dnsmasq-dns-574778f449-sqg2x\" (UID: \"604ca62b-98a7-4023-b6fc-de75724d84a9\") " pod="openstack/dnsmasq-dns-574778f449-sqg2x" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.388749 4793 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/604ca62b-98a7-4023-b6fc-de75724d84a9-ovsdbserver-sb\") pod \"dnsmasq-dns-574778f449-sqg2x\" (UID: \"604ca62b-98a7-4023-b6fc-de75724d84a9\") " pod="openstack/dnsmasq-dns-574778f449-sqg2x" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.388787 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwmlk\" (UniqueName: \"kubernetes.io/projected/604ca62b-98a7-4023-b6fc-de75724d84a9-kube-api-access-pwmlk\") pod \"dnsmasq-dns-574778f449-sqg2x\" (UID: \"604ca62b-98a7-4023-b6fc-de75724d84a9\") " pod="openstack/dnsmasq-dns-574778f449-sqg2x" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.388806 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/604ca62b-98a7-4023-b6fc-de75724d84a9-dns-svc\") pod \"dnsmasq-dns-574778f449-sqg2x\" (UID: \"604ca62b-98a7-4023-b6fc-de75724d84a9\") " pod="openstack/dnsmasq-dns-574778f449-sqg2x" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.388828 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/604ca62b-98a7-4023-b6fc-de75724d84a9-config\") pod \"dnsmasq-dns-574778f449-sqg2x\" (UID: \"604ca62b-98a7-4023-b6fc-de75724d84a9\") " pod="openstack/dnsmasq-dns-574778f449-sqg2x" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.388873 4793 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5e6703f9-1619-4eb6-9a19-0e82153f6979-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.391198 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 
20:27:45.398804 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e6703f9-1619-4eb6-9a19-0e82153f6979-kube-api-access-crr87" (OuterVolumeSpecName: "kube-api-access-crr87") pod "5e6703f9-1619-4eb6-9a19-0e82153f6979" (UID: "5e6703f9-1619-4eb6-9a19-0e82153f6979"). InnerVolumeSpecName "kube-api-access-crr87". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.478095 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-d6f85865c-9pvnx" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.479763 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-d6f85865c-9pvnx" event={"ID":"cfbd5eaf-abcf-440a-a1d9-c17a8a89028b","Type":"ContainerDied","Data":"10b8f0259805c290d81baf03c64f9831963f8990b7133deca753c365f964a5e4"} Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.479806 4793 scope.go:117] "RemoveContainer" containerID="3c70542f1e1b1df11a801f2057afe51ae3b89ee7f561476b01cb4d1eb5b83015" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.489973 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2a90865-6b9b-4232-b918-9830565d998a-scripts\") pod \"cinder-api-0\" (UID: \"e2a90865-6b9b-4232-b918-9830565d998a\") " pod="openstack/cinder-api-0" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.490027 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a90865-6b9b-4232-b918-9830565d998a-config-data\") pod \"cinder-api-0\" (UID: \"e2a90865-6b9b-4232-b918-9830565d998a\") " pod="openstack/cinder-api-0" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.490087 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/604ca62b-98a7-4023-b6fc-de75724d84a9-dns-swift-storage-0\") pod \"dnsmasq-dns-574778f449-sqg2x\" (UID: \"604ca62b-98a7-4023-b6fc-de75724d84a9\") " pod="openstack/dnsmasq-dns-574778f449-sqg2x" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.490144 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2a90865-6b9b-4232-b918-9830565d998a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e2a90865-6b9b-4232-b918-9830565d998a\") " pod="openstack/cinder-api-0" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.490159 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2a90865-6b9b-4232-b918-9830565d998a-config-data-custom\") pod \"cinder-api-0\" (UID: \"e2a90865-6b9b-4232-b918-9830565d998a\") " pod="openstack/cinder-api-0" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.490181 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2a90865-6b9b-4232-b918-9830565d998a-logs\") pod \"cinder-api-0\" (UID: \"e2a90865-6b9b-4232-b918-9830565d998a\") " pod="openstack/cinder-api-0" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.490203 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/604ca62b-98a7-4023-b6fc-de75724d84a9-ovsdbserver-sb\") pod \"dnsmasq-dns-574778f449-sqg2x\" (UID: \"604ca62b-98a7-4023-b6fc-de75724d84a9\") " pod="openstack/dnsmasq-dns-574778f449-sqg2x" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.490245 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwmlk\" (UniqueName: 
\"kubernetes.io/projected/604ca62b-98a7-4023-b6fc-de75724d84a9-kube-api-access-pwmlk\") pod \"dnsmasq-dns-574778f449-sqg2x\" (UID: \"604ca62b-98a7-4023-b6fc-de75724d84a9\") " pod="openstack/dnsmasq-dns-574778f449-sqg2x" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.490265 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqm64\" (UniqueName: \"kubernetes.io/projected/e2a90865-6b9b-4232-b918-9830565d998a-kube-api-access-wqm64\") pod \"cinder-api-0\" (UID: \"e2a90865-6b9b-4232-b918-9830565d998a\") " pod="openstack/cinder-api-0" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.490288 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/604ca62b-98a7-4023-b6fc-de75724d84a9-dns-svc\") pod \"dnsmasq-dns-574778f449-sqg2x\" (UID: \"604ca62b-98a7-4023-b6fc-de75724d84a9\") " pod="openstack/dnsmasq-dns-574778f449-sqg2x" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.490315 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/604ca62b-98a7-4023-b6fc-de75724d84a9-config\") pod \"dnsmasq-dns-574778f449-sqg2x\" (UID: \"604ca62b-98a7-4023-b6fc-de75724d84a9\") " pod="openstack/dnsmasq-dns-574778f449-sqg2x" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.490342 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e2a90865-6b9b-4232-b918-9830565d998a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e2a90865-6b9b-4232-b918-9830565d998a\") " pod="openstack/cinder-api-0" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.490381 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/604ca62b-98a7-4023-b6fc-de75724d84a9-ovsdbserver-nb\") pod 
\"dnsmasq-dns-574778f449-sqg2x\" (UID: \"604ca62b-98a7-4023-b6fc-de75724d84a9\") " pod="openstack/dnsmasq-dns-574778f449-sqg2x" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.490437 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crr87\" (UniqueName: \"kubernetes.io/projected/5e6703f9-1619-4eb6-9a19-0e82153f6979-kube-api-access-crr87\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.490990 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/604ca62b-98a7-4023-b6fc-de75724d84a9-dns-swift-storage-0\") pod \"dnsmasq-dns-574778f449-sqg2x\" (UID: \"604ca62b-98a7-4023-b6fc-de75724d84a9\") " pod="openstack/dnsmasq-dns-574778f449-sqg2x" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.492786 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/604ca62b-98a7-4023-b6fc-de75724d84a9-ovsdbserver-nb\") pod \"dnsmasq-dns-574778f449-sqg2x\" (UID: \"604ca62b-98a7-4023-b6fc-de75724d84a9\") " pod="openstack/dnsmasq-dns-574778f449-sqg2x" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.492878 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/604ca62b-98a7-4023-b6fc-de75724d84a9-dns-svc\") pod \"dnsmasq-dns-574778f449-sqg2x\" (UID: \"604ca62b-98a7-4023-b6fc-de75724d84a9\") " pod="openstack/dnsmasq-dns-574778f449-sqg2x" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.492959 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/604ca62b-98a7-4023-b6fc-de75724d84a9-config\") pod \"dnsmasq-dns-574778f449-sqg2x\" (UID: \"604ca62b-98a7-4023-b6fc-de75724d84a9\") " pod="openstack/dnsmasq-dns-574778f449-sqg2x" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.497412 4793 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/604ca62b-98a7-4023-b6fc-de75724d84a9-ovsdbserver-sb\") pod \"dnsmasq-dns-574778f449-sqg2x\" (UID: \"604ca62b-98a7-4023-b6fc-de75724d84a9\") " pod="openstack/dnsmasq-dns-574778f449-sqg2x" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.501414 4793 generic.go:334] "Generic (PLEG): container finished" podID="c06f6156-bc74-4ece-88f4-09626b053748" containerID="c42699e08de2696cd98565a913b4a74a472e5c4aeb05d2edb7d19879ba4aadd1" exitCode=0 Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.501559 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fddbf755c-j4r2b" event={"ID":"c06f6156-bc74-4ece-88f4-09626b053748","Type":"ContainerDied","Data":"c42699e08de2696cd98565a913b4a74a472e5c4aeb05d2edb7d19879ba4aadd1"} Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.504837 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-8559cccd8-9zlg9" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.506823 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-8559cccd8-9zlg9" event={"ID":"5e6703f9-1619-4eb6-9a19-0e82153f6979","Type":"ContainerDied","Data":"7c219ac09522113c74a0782944a92f55bde036bd9e6b003ecd587795ee0e5246"} Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.529261 4793 scope.go:117] "RemoveContainer" containerID="a24e9f49ac78e53ad807ec69e4478d6b1ad618723f753200d9163156ff7b8c98" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.537723 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-l2xdk" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.539386 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwmlk\" (UniqueName: 
\"kubernetes.io/projected/604ca62b-98a7-4023-b6fc-de75724d84a9-kube-api-access-pwmlk\") pod \"dnsmasq-dns-574778f449-sqg2x\" (UID: \"604ca62b-98a7-4023-b6fc-de75724d84a9\") " pod="openstack/dnsmasq-dns-574778f449-sqg2x" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.556303 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.574999 4793 scope.go:117] "RemoveContainer" containerID="b6e0c076ba15a254ad90581164116f0b41f9185ec18059e65115b59b0799e07d" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.597055 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cfbd5eaf-abcf-440a-a1d9-c17a8a89028b-config-data-custom\") pod \"cfbd5eaf-abcf-440a-a1d9-c17a8a89028b\" (UID: \"cfbd5eaf-abcf-440a-a1d9-c17a8a89028b\") " Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.597156 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cfbd5eaf-abcf-440a-a1d9-c17a8a89028b-logs\") pod \"cfbd5eaf-abcf-440a-a1d9-c17a8a89028b\" (UID: \"cfbd5eaf-abcf-440a-a1d9-c17a8a89028b\") " Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.597223 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfbd5eaf-abcf-440a-a1d9-c17a8a89028b-combined-ca-bundle\") pod \"cfbd5eaf-abcf-440a-a1d9-c17a8a89028b\" (UID: \"cfbd5eaf-abcf-440a-a1d9-c17a8a89028b\") " Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.597312 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfbd5eaf-abcf-440a-a1d9-c17a8a89028b-config-data\") pod \"cfbd5eaf-abcf-440a-a1d9-c17a8a89028b\" (UID: \"cfbd5eaf-abcf-440a-a1d9-c17a8a89028b\") " Feb 17 20:27:45 crc 
kubenswrapper[4793]: I0217 20:27:45.597361 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7xct\" (UniqueName: \"kubernetes.io/projected/cfbd5eaf-abcf-440a-a1d9-c17a8a89028b-kube-api-access-m7xct\") pod \"cfbd5eaf-abcf-440a-a1d9-c17a8a89028b\" (UID: \"cfbd5eaf-abcf-440a-a1d9-c17a8a89028b\") " Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.598035 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2a90865-6b9b-4232-b918-9830565d998a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e2a90865-6b9b-4232-b918-9830565d998a\") " pod="openstack/cinder-api-0" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.598064 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2a90865-6b9b-4232-b918-9830565d998a-config-data-custom\") pod \"cinder-api-0\" (UID: \"e2a90865-6b9b-4232-b918-9830565d998a\") " pod="openstack/cinder-api-0" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.598096 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2a90865-6b9b-4232-b918-9830565d998a-logs\") pod \"cinder-api-0\" (UID: \"e2a90865-6b9b-4232-b918-9830565d998a\") " pod="openstack/cinder-api-0" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.611414 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqm64\" (UniqueName: \"kubernetes.io/projected/e2a90865-6b9b-4232-b918-9830565d998a-kube-api-access-wqm64\") pod \"cinder-api-0\" (UID: \"e2a90865-6b9b-4232-b918-9830565d998a\") " pod="openstack/cinder-api-0" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.611512 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/e2a90865-6b9b-4232-b918-9830565d998a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e2a90865-6b9b-4232-b918-9830565d998a\") " pod="openstack/cinder-api-0" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.611623 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2a90865-6b9b-4232-b918-9830565d998a-scripts\") pod \"cinder-api-0\" (UID: \"e2a90865-6b9b-4232-b918-9830565d998a\") " pod="openstack/cinder-api-0" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.611734 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a90865-6b9b-4232-b918-9830565d998a-config-data\") pod \"cinder-api-0\" (UID: \"e2a90865-6b9b-4232-b918-9830565d998a\") " pod="openstack/cinder-api-0" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.612074 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e2a90865-6b9b-4232-b918-9830565d998a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e2a90865-6b9b-4232-b918-9830565d998a\") " pod="openstack/cinder-api-0" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.612458 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2a90865-6b9b-4232-b918-9830565d998a-logs\") pod \"cinder-api-0\" (UID: \"e2a90865-6b9b-4232-b918-9830565d998a\") " pod="openstack/cinder-api-0" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.604349 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfbd5eaf-abcf-440a-a1d9-c17a8a89028b-logs" (OuterVolumeSpecName: "logs") pod "cfbd5eaf-abcf-440a-a1d9-c17a8a89028b" (UID: "cfbd5eaf-abcf-440a-a1d9-c17a8a89028b"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.618998 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.637133 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2a90865-6b9b-4232-b918-9830565d998a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e2a90865-6b9b-4232-b918-9830565d998a\") " pod="openstack/cinder-api-0" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.644087 4793 scope.go:117] "RemoveContainer" containerID="d3e45c1321d8b076b3d0ce5aee864c3b3b173655c2823133893eddf3393b6049" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.647768 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfbd5eaf-abcf-440a-a1d9-c17a8a89028b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "cfbd5eaf-abcf-440a-a1d9-c17a8a89028b" (UID: "cfbd5eaf-abcf-440a-a1d9-c17a8a89028b"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.648061 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2a90865-6b9b-4232-b918-9830565d998a-config-data-custom\") pod \"cinder-api-0\" (UID: \"e2a90865-6b9b-4232-b918-9830565d998a\") " pod="openstack/cinder-api-0" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.648237 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2a90865-6b9b-4232-b918-9830565d998a-scripts\") pod \"cinder-api-0\" (UID: \"e2a90865-6b9b-4232-b918-9830565d998a\") " pod="openstack/cinder-api-0" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.654903 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfbd5eaf-abcf-440a-a1d9-c17a8a89028b-kube-api-access-m7xct" (OuterVolumeSpecName: "kube-api-access-m7xct") pod "cfbd5eaf-abcf-440a-a1d9-c17a8a89028b" (UID: "cfbd5eaf-abcf-440a-a1d9-c17a8a89028b"). InnerVolumeSpecName "kube-api-access-m7xct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.655540 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a90865-6b9b-4232-b918-9830565d998a-config-data\") pod \"cinder-api-0\" (UID: \"e2a90865-6b9b-4232-b918-9830565d998a\") " pod="openstack/cinder-api-0" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.663365 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqm64\" (UniqueName: \"kubernetes.io/projected/e2a90865-6b9b-4232-b918-9830565d998a-kube-api-access-wqm64\") pod \"cinder-api-0\" (UID: \"e2a90865-6b9b-4232-b918-9830565d998a\") " pod="openstack/cinder-api-0" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.664150 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-574778f449-sqg2x" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.718880 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7xct\" (UniqueName: \"kubernetes.io/projected/cfbd5eaf-abcf-440a-a1d9-c17a8a89028b-kube-api-access-m7xct\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.719131 4793 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cfbd5eaf-abcf-440a-a1d9-c17a8a89028b-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.719140 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cfbd5eaf-abcf-440a-a1d9-c17a8a89028b-logs\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.753978 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e6703f9-1619-4eb6-9a19-0e82153f6979-config-data" (OuterVolumeSpecName: "config-data") pod 
"5e6703f9-1619-4eb6-9a19-0e82153f6979" (UID: "5e6703f9-1619-4eb6-9a19-0e82153f6979"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.770498 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.770866 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfbd5eaf-abcf-440a-a1d9-c17a8a89028b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cfbd5eaf-abcf-440a-a1d9-c17a8a89028b" (UID: "cfbd5eaf-abcf-440a-a1d9-c17a8a89028b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.831667 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e6703f9-1619-4eb6-9a19-0e82153f6979-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.831828 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfbd5eaf-abcf-440a-a1d9-c17a8a89028b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.837183 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e6703f9-1619-4eb6-9a19-0e82153f6979-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5e6703f9-1619-4eb6-9a19-0e82153f6979" (UID: "5e6703f9-1619-4eb6-9a19-0e82153f6979"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.851749 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfbd5eaf-abcf-440a-a1d9-c17a8a89028b-config-data" (OuterVolumeSpecName: "config-data") pod "cfbd5eaf-abcf-440a-a1d9-c17a8a89028b" (UID: "cfbd5eaf-abcf-440a-a1d9-c17a8a89028b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.942576 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfbd5eaf-abcf-440a-a1d9-c17a8a89028b-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:45 crc kubenswrapper[4793]: I0217 20:27:45.942896 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e6703f9-1619-4eb6-9a19-0e82153f6979-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.166757 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-8559cccd8-9zlg9"] Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.167929 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7fdcdb7d8d-qdv8l" Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.178984 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-8559cccd8-9zlg9"] Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.229394 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fddbf755c-j4r2b" Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.361856 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c06f6156-bc74-4ece-88f4-09626b053748-ovsdbserver-nb\") pod \"c06f6156-bc74-4ece-88f4-09626b053748\" (UID: \"c06f6156-bc74-4ece-88f4-09626b053748\") " Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.361921 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c06f6156-bc74-4ece-88f4-09626b053748-config\") pod \"c06f6156-bc74-4ece-88f4-09626b053748\" (UID: \"c06f6156-bc74-4ece-88f4-09626b053748\") " Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.361951 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c06f6156-bc74-4ece-88f4-09626b053748-dns-swift-storage-0\") pod \"c06f6156-bc74-4ece-88f4-09626b053748\" (UID: \"c06f6156-bc74-4ece-88f4-09626b053748\") " Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.362152 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c06f6156-bc74-4ece-88f4-09626b053748-dns-svc\") pod \"c06f6156-bc74-4ece-88f4-09626b053748\" (UID: \"c06f6156-bc74-4ece-88f4-09626b053748\") " Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.362174 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c06f6156-bc74-4ece-88f4-09626b053748-ovsdbserver-sb\") pod \"c06f6156-bc74-4ece-88f4-09626b053748\" (UID: \"c06f6156-bc74-4ece-88f4-09626b053748\") " Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.362253 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7zv8\" 
(UniqueName: \"kubernetes.io/projected/c06f6156-bc74-4ece-88f4-09626b053748-kube-api-access-r7zv8\") pod \"c06f6156-bc74-4ece-88f4-09626b053748\" (UID: \"c06f6156-bc74-4ece-88f4-09626b053748\") " Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.382900 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c06f6156-bc74-4ece-88f4-09626b053748-kube-api-access-r7zv8" (OuterVolumeSpecName: "kube-api-access-r7zv8") pod "c06f6156-bc74-4ece-88f4-09626b053748" (UID: "c06f6156-bc74-4ece-88f4-09626b053748"). InnerVolumeSpecName "kube-api-access-r7zv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.419084 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c06f6156-bc74-4ece-88f4-09626b053748-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c06f6156-bc74-4ece-88f4-09626b053748" (UID: "c06f6156-bc74-4ece-88f4-09626b053748"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.466443 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7zv8\" (UniqueName: \"kubernetes.io/projected/c06f6156-bc74-4ece-88f4-09626b053748-kube-api-access-r7zv8\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.466467 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c06f6156-bc74-4ece-88f4-09626b053748-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.485703 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c06f6156-bc74-4ece-88f4-09626b053748-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c06f6156-bc74-4ece-88f4-09626b053748" (UID: "c06f6156-bc74-4ece-88f4-09626b053748"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.495508 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-574778f449-sqg2x"] Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.497288 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c06f6156-bc74-4ece-88f4-09626b053748-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c06f6156-bc74-4ece-88f4-09626b053748" (UID: "c06f6156-bc74-4ece-88f4-09626b053748"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.538026 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.566332 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-574778f449-sqg2x" event={"ID":"604ca62b-98a7-4023-b6fc-de75724d84a9","Type":"ContainerStarted","Data":"22be82bc414950b755dc32e2b39df7e05196aae63b498fe7acea0d9d73c6637e"} Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.567875 4793 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c06f6156-bc74-4ece-88f4-09626b053748-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.567895 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c06f6156-bc74-4ece-88f4-09626b053748-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.585780 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8445cc88bf-cdsst"] Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.586061 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-8445cc88bf-cdsst" podUID="24893fdf-f7bb-4be7-b5f9-edde49088bbe" containerName="neutron-api" containerID="cri-o://76b4aaa5ed1a0e6bb4740f9b7e2beadf425438ab57e7cf9c09065806132b1957" gracePeriod=30 Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.586699 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-8445cc88bf-cdsst" podUID="24893fdf-f7bb-4be7-b5f9-edde49088bbe" containerName="neutron-httpd" containerID="cri-o://df52dd790aa5a28c2b26fd4fc121472751ef6e36b499fdc38d69221bb3b2b15a" gracePeriod=30 Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 
20:27:46.589096 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c06f6156-bc74-4ece-88f4-09626b053748-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c06f6156-bc74-4ece-88f4-09626b053748" (UID: "c06f6156-bc74-4ece-88f4-09626b053748"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.617171 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c06f6156-bc74-4ece-88f4-09626b053748-config" (OuterVolumeSpecName: "config") pod "c06f6156-bc74-4ece-88f4-09626b053748" (UID: "c06f6156-bc74-4ece-88f4-09626b053748"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.617245 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78ebba06-604c-4fb6-91b1-3727324ca4a8","Type":"ContainerStarted","Data":"0b108d7a98335cbc634b3b4a55abdaadd0c61ef66ff6641f85ef191c5af4ab2f"} Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.618150 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="78ebba06-604c-4fb6-91b1-3727324ca4a8" containerName="ceilometer-central-agent" containerID="cri-o://9c17f4a250e3a88160a2598e66ca6aa415b0fb32eb6a82e88616383f38ee179e" gracePeriod=30 Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.618268 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.618621 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="78ebba06-604c-4fb6-91b1-3727324ca4a8" containerName="proxy-httpd" containerID="cri-o://0b108d7a98335cbc634b3b4a55abdaadd0c61ef66ff6641f85ef191c5af4ab2f" gracePeriod=30 Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 
20:27:46.618676 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="78ebba06-604c-4fb6-91b1-3727324ca4a8" containerName="sg-core" containerID="cri-o://b6f01aab37c47f6d916d53fd22cf995d7a13582bffa3b3be3884a0cae6703a5b" gracePeriod=30 Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.618735 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="78ebba06-604c-4fb6-91b1-3727324ca4a8" containerName="ceilometer-notification-agent" containerID="cri-o://7495d119d706da7a2adfc7cf6c4c76c9902e428d87f8b29733d405c14bc05da5" gracePeriod=30 Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.635572 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6d8998fd7c-xvl9z"] Feb 17 20:27:46 crc kubenswrapper[4793]: E0217 20:27:46.636355 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c06f6156-bc74-4ece-88f4-09626b053748" containerName="dnsmasq-dns" Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.636463 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c06f6156-bc74-4ece-88f4-09626b053748" containerName="dnsmasq-dns" Feb 17 20:27:46 crc kubenswrapper[4793]: E0217 20:27:46.636550 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c06f6156-bc74-4ece-88f4-09626b053748" containerName="init" Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.636627 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c06f6156-bc74-4ece-88f4-09626b053748" containerName="init" Feb 17 20:27:46 crc kubenswrapper[4793]: E0217 20:27:46.636698 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfbd5eaf-abcf-440a-a1d9-c17a8a89028b" containerName="barbican-worker-log" Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.636761 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfbd5eaf-abcf-440a-a1d9-c17a8a89028b" containerName="barbican-worker-log" Feb 17 20:27:46 crc 
kubenswrapper[4793]: E0217 20:27:46.636845 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfbd5eaf-abcf-440a-a1d9-c17a8a89028b" containerName="barbican-worker" Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.636905 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfbd5eaf-abcf-440a-a1d9-c17a8a89028b" containerName="barbican-worker" Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.637165 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfbd5eaf-abcf-440a-a1d9-c17a8a89028b" containerName="barbican-worker-log" Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.637275 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfbd5eaf-abcf-440a-a1d9-c17a8a89028b" containerName="barbican-worker" Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.637344 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="c06f6156-bc74-4ece-88f4-09626b053748" containerName="dnsmasq-dns" Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.638411 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6d8998fd7c-xvl9z" Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.650004 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"06ecbc8e-aa9f-4025-883d-65e4c000d986","Type":"ContainerStarted","Data":"482d219bc6ca6e52bff83b3958809678c883c2f0ec84a89a77cd763ddb26ce8d"} Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.670268 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c06f6156-bc74-4ece-88f4-09626b053748-config\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.670292 4793 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c06f6156-bc74-4ece-88f4-09626b053748-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.684353 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-d6f85865c-9pvnx" Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.697820 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6d8998fd7c-xvl9z"] Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.709958 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.715001 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-8445cc88bf-cdsst" podUID="24893fdf-f7bb-4be7-b5f9-edde49088bbe" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.170:9696/\": read tcp 10.217.0.2:48260->10.217.0.170:9696: read: connection reset by peer" Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.717383 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.595912397 podStartE2EDuration="1m5.717363912s" 
podCreationTimestamp="2026-02-17 20:26:41 +0000 UTC" firstStartedPulling="2026-02-17 20:26:43.750494581 +0000 UTC m=+1079.042192892" lastFinishedPulling="2026-02-17 20:27:44.871946096 +0000 UTC m=+1140.163644407" observedRunningTime="2026-02-17 20:27:46.652136161 +0000 UTC m=+1141.943834472" watchObservedRunningTime="2026-02-17 20:27:46.717363912 +0000 UTC m=+1142.009062223" Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.736299 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fddbf755c-j4r2b" event={"ID":"c06f6156-bc74-4ece-88f4-09626b053748","Type":"ContainerDied","Data":"cfdcccdea881e2209ea2b0eb3a4e965e2a75d71168892d9f0b828eb571627cb7"} Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.736343 4793 scope.go:117] "RemoveContainer" containerID="c42699e08de2696cd98565a913b4a74a472e5c4aeb05d2edb7d19879ba4aadd1" Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.736459 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fddbf755c-j4r2b" Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.772430 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c09fdef5-1d53-4792-84d2-3bb953383525-httpd-config\") pod \"neutron-6d8998fd7c-xvl9z\" (UID: \"c09fdef5-1d53-4792-84d2-3bb953383525\") " pod="openstack/neutron-6d8998fd7c-xvl9z" Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.772767 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c09fdef5-1d53-4792-84d2-3bb953383525-config\") pod \"neutron-6d8998fd7c-xvl9z\" (UID: \"c09fdef5-1d53-4792-84d2-3bb953383525\") " pod="openstack/neutron-6d8998fd7c-xvl9z" Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.772798 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c09fdef5-1d53-4792-84d2-3bb953383525-ovndb-tls-certs\") pod \"neutron-6d8998fd7c-xvl9z\" (UID: \"c09fdef5-1d53-4792-84d2-3bb953383525\") " pod="openstack/neutron-6d8998fd7c-xvl9z" Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.772849 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c09fdef5-1d53-4792-84d2-3bb953383525-internal-tls-certs\") pod \"neutron-6d8998fd7c-xvl9z\" (UID: \"c09fdef5-1d53-4792-84d2-3bb953383525\") " pod="openstack/neutron-6d8998fd7c-xvl9z" Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.772923 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c09fdef5-1d53-4792-84d2-3bb953383525-combined-ca-bundle\") pod \"neutron-6d8998fd7c-xvl9z\" (UID: \"c09fdef5-1d53-4792-84d2-3bb953383525\") " pod="openstack/neutron-6d8998fd7c-xvl9z" Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.773005 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c09fdef5-1d53-4792-84d2-3bb953383525-public-tls-certs\") pod \"neutron-6d8998fd7c-xvl9z\" (UID: \"c09fdef5-1d53-4792-84d2-3bb953383525\") " pod="openstack/neutron-6d8998fd7c-xvl9z" Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.773027 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp558\" (UniqueName: \"kubernetes.io/projected/c09fdef5-1d53-4792-84d2-3bb953383525-kube-api-access-jp558\") pod \"neutron-6d8998fd7c-xvl9z\" (UID: \"c09fdef5-1d53-4792-84d2-3bb953383525\") " pod="openstack/neutron-6d8998fd7c-xvl9z" Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.845267 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/barbican-worker-d6f85865c-9pvnx"] Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.862560 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-d6f85865c-9pvnx"] Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.873916 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fddbf755c-j4r2b"] Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.875011 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c09fdef5-1d53-4792-84d2-3bb953383525-internal-tls-certs\") pod \"neutron-6d8998fd7c-xvl9z\" (UID: \"c09fdef5-1d53-4792-84d2-3bb953383525\") " pod="openstack/neutron-6d8998fd7c-xvl9z" Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.875082 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c09fdef5-1d53-4792-84d2-3bb953383525-combined-ca-bundle\") pod \"neutron-6d8998fd7c-xvl9z\" (UID: \"c09fdef5-1d53-4792-84d2-3bb953383525\") " pod="openstack/neutron-6d8998fd7c-xvl9z" Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.875146 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jp558\" (UniqueName: \"kubernetes.io/projected/c09fdef5-1d53-4792-84d2-3bb953383525-kube-api-access-jp558\") pod \"neutron-6d8998fd7c-xvl9z\" (UID: \"c09fdef5-1d53-4792-84d2-3bb953383525\") " pod="openstack/neutron-6d8998fd7c-xvl9z" Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.875161 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c09fdef5-1d53-4792-84d2-3bb953383525-public-tls-certs\") pod \"neutron-6d8998fd7c-xvl9z\" (UID: \"c09fdef5-1d53-4792-84d2-3bb953383525\") " pod="openstack/neutron-6d8998fd7c-xvl9z" Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.875207 4793 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c09fdef5-1d53-4792-84d2-3bb953383525-httpd-config\") pod \"neutron-6d8998fd7c-xvl9z\" (UID: \"c09fdef5-1d53-4792-84d2-3bb953383525\") " pod="openstack/neutron-6d8998fd7c-xvl9z" Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.875222 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c09fdef5-1d53-4792-84d2-3bb953383525-config\") pod \"neutron-6d8998fd7c-xvl9z\" (UID: \"c09fdef5-1d53-4792-84d2-3bb953383525\") " pod="openstack/neutron-6d8998fd7c-xvl9z" Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.875243 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c09fdef5-1d53-4792-84d2-3bb953383525-ovndb-tls-certs\") pod \"neutron-6d8998fd7c-xvl9z\" (UID: \"c09fdef5-1d53-4792-84d2-3bb953383525\") " pod="openstack/neutron-6d8998fd7c-xvl9z" Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.888281 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fddbf755c-j4r2b"] Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.895818 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c09fdef5-1d53-4792-84d2-3bb953383525-httpd-config\") pod \"neutron-6d8998fd7c-xvl9z\" (UID: \"c09fdef5-1d53-4792-84d2-3bb953383525\") " pod="openstack/neutron-6d8998fd7c-xvl9z" Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.901054 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c09fdef5-1d53-4792-84d2-3bb953383525-combined-ca-bundle\") pod \"neutron-6d8998fd7c-xvl9z\" (UID: \"c09fdef5-1d53-4792-84d2-3bb953383525\") " pod="openstack/neutron-6d8998fd7c-xvl9z" Feb 17 20:27:46 crc kubenswrapper[4793]: 
I0217 20:27:46.909971 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c09fdef5-1d53-4792-84d2-3bb953383525-config\") pod \"neutron-6d8998fd7c-xvl9z\" (UID: \"c09fdef5-1d53-4792-84d2-3bb953383525\") " pod="openstack/neutron-6d8998fd7c-xvl9z" Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.911440 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jp558\" (UniqueName: \"kubernetes.io/projected/c09fdef5-1d53-4792-84d2-3bb953383525-kube-api-access-jp558\") pod \"neutron-6d8998fd7c-xvl9z\" (UID: \"c09fdef5-1d53-4792-84d2-3bb953383525\") " pod="openstack/neutron-6d8998fd7c-xvl9z" Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.916340 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c09fdef5-1d53-4792-84d2-3bb953383525-ovndb-tls-certs\") pod \"neutron-6d8998fd7c-xvl9z\" (UID: \"c09fdef5-1d53-4792-84d2-3bb953383525\") " pod="openstack/neutron-6d8998fd7c-xvl9z" Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.918869 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c09fdef5-1d53-4792-84d2-3bb953383525-internal-tls-certs\") pod \"neutron-6d8998fd7c-xvl9z\" (UID: \"c09fdef5-1d53-4792-84d2-3bb953383525\") " pod="openstack/neutron-6d8998fd7c-xvl9z" Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.942401 4793 scope.go:117] "RemoveContainer" containerID="9776d3ecfbe5b97f0bb68685974d2b54452a70789e84f583c320b49e1c141337" Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.952435 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c09fdef5-1d53-4792-84d2-3bb953383525-public-tls-certs\") pod \"neutron-6d8998fd7c-xvl9z\" (UID: \"c09fdef5-1d53-4792-84d2-3bb953383525\") " 
pod="openstack/neutron-6d8998fd7c-xvl9z" Feb 17 20:27:46 crc kubenswrapper[4793]: I0217 20:27:46.963868 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 17 20:27:47 crc kubenswrapper[4793]: I0217 20:27:47.140752 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6d8998fd7c-xvl9z" Feb 17 20:27:47 crc kubenswrapper[4793]: I0217 20:27:47.581738 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e6703f9-1619-4eb6-9a19-0e82153f6979" path="/var/lib/kubelet/pods/5e6703f9-1619-4eb6-9a19-0e82153f6979/volumes" Feb 17 20:27:47 crc kubenswrapper[4793]: I0217 20:27:47.583120 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c06f6156-bc74-4ece-88f4-09626b053748" path="/var/lib/kubelet/pods/c06f6156-bc74-4ece-88f4-09626b053748/volumes" Feb 17 20:27:47 crc kubenswrapper[4793]: I0217 20:27:47.584431 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfbd5eaf-abcf-440a-a1d9-c17a8a89028b" path="/var/lib/kubelet/pods/cfbd5eaf-abcf-440a-a1d9-c17a8a89028b/volumes" Feb 17 20:27:47 crc kubenswrapper[4793]: I0217 20:27:47.816336 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6d8998fd7c-xvl9z"] Feb 17 20:27:47 crc kubenswrapper[4793]: I0217 20:27:47.826955 4793 generic.go:334] "Generic (PLEG): container finished" podID="78ebba06-604c-4fb6-91b1-3727324ca4a8" containerID="0b108d7a98335cbc634b3b4a55abdaadd0c61ef66ff6641f85ef191c5af4ab2f" exitCode=0 Feb 17 20:27:47 crc kubenswrapper[4793]: I0217 20:27:47.826990 4793 generic.go:334] "Generic (PLEG): container finished" podID="78ebba06-604c-4fb6-91b1-3727324ca4a8" containerID="b6f01aab37c47f6d916d53fd22cf995d7a13582bffa3b3be3884a0cae6703a5b" exitCode=2 Feb 17 20:27:47 crc kubenswrapper[4793]: I0217 20:27:47.827001 4793 generic.go:334] "Generic (PLEG): container finished" podID="78ebba06-604c-4fb6-91b1-3727324ca4a8" 
containerID="7495d119d706da7a2adfc7cf6c4c76c9902e428d87f8b29733d405c14bc05da5" exitCode=0 Feb 17 20:27:47 crc kubenswrapper[4793]: I0217 20:27:47.827008 4793 generic.go:334] "Generic (PLEG): container finished" podID="78ebba06-604c-4fb6-91b1-3727324ca4a8" containerID="9c17f4a250e3a88160a2598e66ca6aa415b0fb32eb6a82e88616383f38ee179e" exitCode=0 Feb 17 20:27:47 crc kubenswrapper[4793]: I0217 20:27:47.827085 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78ebba06-604c-4fb6-91b1-3727324ca4a8","Type":"ContainerDied","Data":"0b108d7a98335cbc634b3b4a55abdaadd0c61ef66ff6641f85ef191c5af4ab2f"} Feb 17 20:27:47 crc kubenswrapper[4793]: I0217 20:27:47.827117 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78ebba06-604c-4fb6-91b1-3727324ca4a8","Type":"ContainerDied","Data":"b6f01aab37c47f6d916d53fd22cf995d7a13582bffa3b3be3884a0cae6703a5b"} Feb 17 20:27:47 crc kubenswrapper[4793]: I0217 20:27:47.827127 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78ebba06-604c-4fb6-91b1-3727324ca4a8","Type":"ContainerDied","Data":"7495d119d706da7a2adfc7cf6c4c76c9902e428d87f8b29733d405c14bc05da5"} Feb 17 20:27:47 crc kubenswrapper[4793]: I0217 20:27:47.827136 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78ebba06-604c-4fb6-91b1-3727324ca4a8","Type":"ContainerDied","Data":"9c17f4a250e3a88160a2598e66ca6aa415b0fb32eb6a82e88616383f38ee179e"} Feb 17 20:27:47 crc kubenswrapper[4793]: I0217 20:27:47.840936 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e2a90865-6b9b-4232-b918-9830565d998a","Type":"ContainerStarted","Data":"ce0a26d1ef791edbb76927efb33b02d80aab1bacab669476f1532c281d9903a9"} Feb 17 20:27:47 crc kubenswrapper[4793]: I0217 20:27:47.854243 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"6e37eff3-1014-4cfd-9015-f51d8d7c4026","Type":"ContainerStarted","Data":"7c96282d3c3074442255879366aa474b42ad3d31469b8e46de177b1f58947f97"} Feb 17 20:27:47 crc kubenswrapper[4793]: I0217 20:27:47.902489 4793 generic.go:334] "Generic (PLEG): container finished" podID="24893fdf-f7bb-4be7-b5f9-edde49088bbe" containerID="df52dd790aa5a28c2b26fd4fc121472751ef6e36b499fdc38d69221bb3b2b15a" exitCode=0 Feb 17 20:27:47 crc kubenswrapper[4793]: I0217 20:27:47.902766 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8445cc88bf-cdsst" event={"ID":"24893fdf-f7bb-4be7-b5f9-edde49088bbe","Type":"ContainerDied","Data":"df52dd790aa5a28c2b26fd4fc121472751ef6e36b499fdc38d69221bb3b2b15a"} Feb 17 20:27:47 crc kubenswrapper[4793]: I0217 20:27:47.969200 4793 generic.go:334] "Generic (PLEG): container finished" podID="604ca62b-98a7-4023-b6fc-de75724d84a9" containerID="7f09835dd2cf6679726eb3d72b0180e5308fc3a10ebfba5a8337fb2c49213861" exitCode=0 Feb 17 20:27:47 crc kubenswrapper[4793]: I0217 20:27:47.970084 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-574778f449-sqg2x" event={"ID":"604ca62b-98a7-4023-b6fc-de75724d84a9","Type":"ContainerDied","Data":"7f09835dd2cf6679726eb3d72b0180e5308fc3a10ebfba5a8337fb2c49213861"} Feb 17 20:27:48 crc kubenswrapper[4793]: I0217 20:27:48.096291 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 17 20:27:48 crc kubenswrapper[4793]: I0217 20:27:48.287911 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 20:27:48 crc kubenswrapper[4793]: I0217 20:27:48.363491 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78ebba06-604c-4fb6-91b1-3727324ca4a8-log-httpd\") pod \"78ebba06-604c-4fb6-91b1-3727324ca4a8\" (UID: \"78ebba06-604c-4fb6-91b1-3727324ca4a8\") " Feb 17 20:27:48 crc kubenswrapper[4793]: I0217 20:27:48.363540 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78ebba06-604c-4fb6-91b1-3727324ca4a8-config-data\") pod \"78ebba06-604c-4fb6-91b1-3727324ca4a8\" (UID: \"78ebba06-604c-4fb6-91b1-3727324ca4a8\") " Feb 17 20:27:48 crc kubenswrapper[4793]: I0217 20:27:48.363674 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/78ebba06-604c-4fb6-91b1-3727324ca4a8-sg-core-conf-yaml\") pod \"78ebba06-604c-4fb6-91b1-3727324ca4a8\" (UID: \"78ebba06-604c-4fb6-91b1-3727324ca4a8\") " Feb 17 20:27:48 crc kubenswrapper[4793]: I0217 20:27:48.363747 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvq2m\" (UniqueName: \"kubernetes.io/projected/78ebba06-604c-4fb6-91b1-3727324ca4a8-kube-api-access-rvq2m\") pod \"78ebba06-604c-4fb6-91b1-3727324ca4a8\" (UID: \"78ebba06-604c-4fb6-91b1-3727324ca4a8\") " Feb 17 20:27:48 crc kubenswrapper[4793]: I0217 20:27:48.363793 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78ebba06-604c-4fb6-91b1-3727324ca4a8-combined-ca-bundle\") pod \"78ebba06-604c-4fb6-91b1-3727324ca4a8\" (UID: \"78ebba06-604c-4fb6-91b1-3727324ca4a8\") " Feb 17 20:27:48 crc kubenswrapper[4793]: I0217 20:27:48.363811 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/78ebba06-604c-4fb6-91b1-3727324ca4a8-scripts\") pod \"78ebba06-604c-4fb6-91b1-3727324ca4a8\" (UID: \"78ebba06-604c-4fb6-91b1-3727324ca4a8\") " Feb 17 20:27:48 crc kubenswrapper[4793]: I0217 20:27:48.363836 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78ebba06-604c-4fb6-91b1-3727324ca4a8-run-httpd\") pod \"78ebba06-604c-4fb6-91b1-3727324ca4a8\" (UID: \"78ebba06-604c-4fb6-91b1-3727324ca4a8\") " Feb 17 20:27:48 crc kubenswrapper[4793]: I0217 20:27:48.364565 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78ebba06-604c-4fb6-91b1-3727324ca4a8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "78ebba06-604c-4fb6-91b1-3727324ca4a8" (UID: "78ebba06-604c-4fb6-91b1-3727324ca4a8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:27:48 crc kubenswrapper[4793]: I0217 20:27:48.364759 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78ebba06-604c-4fb6-91b1-3727324ca4a8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "78ebba06-604c-4fb6-91b1-3727324ca4a8" (UID: "78ebba06-604c-4fb6-91b1-3727324ca4a8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:27:48 crc kubenswrapper[4793]: I0217 20:27:48.372908 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78ebba06-604c-4fb6-91b1-3727324ca4a8-scripts" (OuterVolumeSpecName: "scripts") pod "78ebba06-604c-4fb6-91b1-3727324ca4a8" (UID: "78ebba06-604c-4fb6-91b1-3727324ca4a8"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:27:48 crc kubenswrapper[4793]: I0217 20:27:48.379974 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78ebba06-604c-4fb6-91b1-3727324ca4a8-kube-api-access-rvq2m" (OuterVolumeSpecName: "kube-api-access-rvq2m") pod "78ebba06-604c-4fb6-91b1-3727324ca4a8" (UID: "78ebba06-604c-4fb6-91b1-3727324ca4a8"). InnerVolumeSpecName "kube-api-access-rvq2m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:27:48 crc kubenswrapper[4793]: I0217 20:27:48.456253 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78ebba06-604c-4fb6-91b1-3727324ca4a8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "78ebba06-604c-4fb6-91b1-3727324ca4a8" (UID: "78ebba06-604c-4fb6-91b1-3727324ca4a8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:27:48 crc kubenswrapper[4793]: I0217 20:27:48.465866 4793 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/78ebba06-604c-4fb6-91b1-3727324ca4a8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:48 crc kubenswrapper[4793]: I0217 20:27:48.465909 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvq2m\" (UniqueName: \"kubernetes.io/projected/78ebba06-604c-4fb6-91b1-3727324ca4a8-kube-api-access-rvq2m\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:48 crc kubenswrapper[4793]: I0217 20:27:48.465919 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78ebba06-604c-4fb6-91b1-3727324ca4a8-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:48 crc kubenswrapper[4793]: I0217 20:27:48.465927 4793 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78ebba06-604c-4fb6-91b1-3727324ca4a8-run-httpd\") on node 
\"crc\" DevicePath \"\"" Feb 17 20:27:48 crc kubenswrapper[4793]: I0217 20:27:48.465935 4793 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78ebba06-604c-4fb6-91b1-3727324ca4a8-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:48 crc kubenswrapper[4793]: I0217 20:27:48.575142 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-8445cc88bf-cdsst" podUID="24893fdf-f7bb-4be7-b5f9-edde49088bbe" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.170:9696/\": dial tcp 10.217.0.170:9696: connect: connection refused" Feb 17 20:27:48 crc kubenswrapper[4793]: I0217 20:27:48.609387 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5bc6776d6b-sktrk" Feb 17 20:27:48 crc kubenswrapper[4793]: I0217 20:27:48.851527 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78ebba06-604c-4fb6-91b1-3727324ca4a8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "78ebba06-604c-4fb6-91b1-3727324ca4a8" (UID: "78ebba06-604c-4fb6-91b1-3727324ca4a8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:27:48 crc kubenswrapper[4793]: I0217 20:27:48.861336 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5bc6776d6b-sktrk" Feb 17 20:27:48 crc kubenswrapper[4793]: I0217 20:27:48.880209 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78ebba06-604c-4fb6-91b1-3727324ca4a8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:48 crc kubenswrapper[4793]: I0217 20:27:48.891711 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78ebba06-604c-4fb6-91b1-3727324ca4a8-config-data" (OuterVolumeSpecName: "config-data") pod "78ebba06-604c-4fb6-91b1-3727324ca4a8" (UID: "78ebba06-604c-4fb6-91b1-3727324ca4a8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:27:48 crc kubenswrapper[4793]: I0217 20:27:48.961851 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7dbff57d7b-c7kmz"] Feb 17 20:27:48 crc kubenswrapper[4793]: I0217 20:27:48.962063 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7dbff57d7b-c7kmz" podUID="347b7ec4-6cfe-431e-b6b4-70c0933118c6" containerName="barbican-api-log" containerID="cri-o://a2ceda8fd39a91bac69fddf8087d5237ff44b350f39853603c6ecb66865b9008" gracePeriod=30 Feb 17 20:27:48 crc kubenswrapper[4793]: I0217 20:27:48.962285 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7dbff57d7b-c7kmz" podUID="347b7ec4-6cfe-431e-b6b4-70c0933118c6" containerName="barbican-api" containerID="cri-o://58c04913be025017b0e008ceb1a089cd3097d9b92638bb6fecb973a23a8c18af" gracePeriod=30 Feb 17 20:27:48 crc kubenswrapper[4793]: I0217 20:27:48.984109 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/78ebba06-604c-4fb6-91b1-3727324ca4a8-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:49 crc kubenswrapper[4793]: I0217 20:27:49.066065 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6e37eff3-1014-4cfd-9015-f51d8d7c4026","Type":"ContainerStarted","Data":"4e50b612f4ba1545ac79c4bf2a90195df4f5d101489a70126279c81c50ba7f38"} Feb 17 20:27:49 crc kubenswrapper[4793]: I0217 20:27:49.069112 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-574778f449-sqg2x" event={"ID":"604ca62b-98a7-4023-b6fc-de75724d84a9","Type":"ContainerStarted","Data":"3c42d3e549ccd83488531b1e37faff192c9d17b62d69db2b3e7961d00b232379"} Feb 17 20:27:49 crc kubenswrapper[4793]: I0217 20:27:49.069303 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-574778f449-sqg2x" Feb 17 20:27:49 crc kubenswrapper[4793]: I0217 20:27:49.099209 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-574778f449-sqg2x" podStartSLOduration=4.099193298 podStartE2EDuration="4.099193298s" podCreationTimestamp="2026-02-17 20:27:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:27:49.094742867 +0000 UTC m=+1144.386441178" watchObservedRunningTime="2026-02-17 20:27:49.099193298 +0000 UTC m=+1144.390891609" Feb 17 20:27:49 crc kubenswrapper[4793]: I0217 20:27:49.106953 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78ebba06-604c-4fb6-91b1-3727324ca4a8","Type":"ContainerDied","Data":"51f22845a542142bbbf7600071ab8aab1e8c4edfce4b1fbf1e91c76863760137"} Feb 17 20:27:49 crc kubenswrapper[4793]: I0217 20:27:49.107118 4793 scope.go:117] "RemoveContainer" containerID="0b108d7a98335cbc634b3b4a55abdaadd0c61ef66ff6641f85ef191c5af4ab2f" Feb 17 20:27:49 crc kubenswrapper[4793]: I0217 
20:27:49.107352 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 20:27:49 crc kubenswrapper[4793]: I0217 20:27:49.122533 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-75c8b5cf48-t8jmz" Feb 17 20:27:49 crc kubenswrapper[4793]: I0217 20:27:49.130037 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e2a90865-6b9b-4232-b918-9830565d998a","Type":"ContainerStarted","Data":"c4191ec2838539e744807bc9ccf2efa2386a47f669bd74fe585ad85dfdc80479"} Feb 17 20:27:49 crc kubenswrapper[4793]: I0217 20:27:49.174071 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d8998fd7c-xvl9z" event={"ID":"c09fdef5-1d53-4792-84d2-3bb953383525","Type":"ContainerStarted","Data":"81e70c68de076a0633d5d0b2668c3d6c2b566f558f01f6e67bf5e353b00947d8"} Feb 17 20:27:49 crc kubenswrapper[4793]: I0217 20:27:49.174107 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d8998fd7c-xvl9z" event={"ID":"c09fdef5-1d53-4792-84d2-3bb953383525","Type":"ContainerStarted","Data":"4ed6459a97e8f031044da991fa06fdae38c54b3652825a8c0c6c9d600925b0a5"} Feb 17 20:27:49 crc kubenswrapper[4793]: I0217 20:27:49.252862 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-75d6fd885d-fw6ln"] Feb 17 20:27:49 crc kubenswrapper[4793]: I0217 20:27:49.253548 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-75d6fd885d-fw6ln" podUID="6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf" containerName="horizon-log" containerID="cri-o://6bee6c7fb7d7d5d31d8eb33b06a65d5d8f2ee8e6a9f85944f7f5f5ad2a282a4a" gracePeriod=30 Feb 17 20:27:49 crc kubenswrapper[4793]: I0217 20:27:49.254061 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-75d6fd885d-fw6ln" podUID="6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf" containerName="horizon" 
containerID="cri-o://3e010c26342664c3558b2e0e211867c9c895e91944d0dd2d5c0257fc128d37e1" gracePeriod=30 Feb 17 20:27:49 crc kubenswrapper[4793]: I0217 20:27:49.268802 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 20:27:49 crc kubenswrapper[4793]: I0217 20:27:49.279275 4793 scope.go:117] "RemoveContainer" containerID="b6f01aab37c47f6d916d53fd22cf995d7a13582bffa3b3be3884a0cae6703a5b" Feb 17 20:27:49 crc kubenswrapper[4793]: I0217 20:27:49.283303 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 20:27:49 crc kubenswrapper[4793]: I0217 20:27:49.297779 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 20:27:49 crc kubenswrapper[4793]: E0217 20:27:49.298509 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78ebba06-604c-4fb6-91b1-3727324ca4a8" containerName="proxy-httpd" Feb 17 20:27:49 crc kubenswrapper[4793]: I0217 20:27:49.298528 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="78ebba06-604c-4fb6-91b1-3727324ca4a8" containerName="proxy-httpd" Feb 17 20:27:49 crc kubenswrapper[4793]: E0217 20:27:49.298560 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78ebba06-604c-4fb6-91b1-3727324ca4a8" containerName="sg-core" Feb 17 20:27:49 crc kubenswrapper[4793]: I0217 20:27:49.298566 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="78ebba06-604c-4fb6-91b1-3727324ca4a8" containerName="sg-core" Feb 17 20:27:49 crc kubenswrapper[4793]: E0217 20:27:49.298576 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78ebba06-604c-4fb6-91b1-3727324ca4a8" containerName="ceilometer-central-agent" Feb 17 20:27:49 crc kubenswrapper[4793]: I0217 20:27:49.298582 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="78ebba06-604c-4fb6-91b1-3727324ca4a8" containerName="ceilometer-central-agent" Feb 17 20:27:49 crc kubenswrapper[4793]: E0217 20:27:49.298596 4793 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="78ebba06-604c-4fb6-91b1-3727324ca4a8" containerName="ceilometer-notification-agent" Feb 17 20:27:49 crc kubenswrapper[4793]: I0217 20:27:49.298608 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="78ebba06-604c-4fb6-91b1-3727324ca4a8" containerName="ceilometer-notification-agent" Feb 17 20:27:49 crc kubenswrapper[4793]: I0217 20:27:49.298793 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="78ebba06-604c-4fb6-91b1-3727324ca4a8" containerName="sg-core" Feb 17 20:27:49 crc kubenswrapper[4793]: I0217 20:27:49.298805 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="78ebba06-604c-4fb6-91b1-3727324ca4a8" containerName="proxy-httpd" Feb 17 20:27:49 crc kubenswrapper[4793]: I0217 20:27:49.298824 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="78ebba06-604c-4fb6-91b1-3727324ca4a8" containerName="ceilometer-central-agent" Feb 17 20:27:49 crc kubenswrapper[4793]: I0217 20:27:49.298835 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="78ebba06-604c-4fb6-91b1-3727324ca4a8" containerName="ceilometer-notification-agent" Feb 17 20:27:49 crc kubenswrapper[4793]: I0217 20:27:49.300878 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 20:27:49 crc kubenswrapper[4793]: I0217 20:27:49.303295 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 20:27:49 crc kubenswrapper[4793]: I0217 20:27:49.304033 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 20:27:49 crc kubenswrapper[4793]: I0217 20:27:49.308911 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-75d6fd885d-fw6ln" podUID="6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.164:8443/dashboard/auth/login/?next=/dashboard/\": EOF" Feb 17 20:27:49 crc kubenswrapper[4793]: I0217 20:27:49.361751 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 20:27:49 crc kubenswrapper[4793]: I0217 20:27:49.398068 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dad05b98-c67b-44c9-bfff-4d0b39ad47b4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dad05b98-c67b-44c9-bfff-4d0b39ad47b4\") " pod="openstack/ceilometer-0" Feb 17 20:27:49 crc kubenswrapper[4793]: I0217 20:27:49.398141 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnj9x\" (UniqueName: \"kubernetes.io/projected/dad05b98-c67b-44c9-bfff-4d0b39ad47b4-kube-api-access-rnj9x\") pod \"ceilometer-0\" (UID: \"dad05b98-c67b-44c9-bfff-4d0b39ad47b4\") " pod="openstack/ceilometer-0" Feb 17 20:27:49 crc kubenswrapper[4793]: I0217 20:27:49.398165 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dad05b98-c67b-44c9-bfff-4d0b39ad47b4-run-httpd\") pod \"ceilometer-0\" (UID: \"dad05b98-c67b-44c9-bfff-4d0b39ad47b4\") " 
pod="openstack/ceilometer-0" Feb 17 20:27:49 crc kubenswrapper[4793]: I0217 20:27:49.398234 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dad05b98-c67b-44c9-bfff-4d0b39ad47b4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dad05b98-c67b-44c9-bfff-4d0b39ad47b4\") " pod="openstack/ceilometer-0" Feb 17 20:27:49 crc kubenswrapper[4793]: I0217 20:27:49.398252 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dad05b98-c67b-44c9-bfff-4d0b39ad47b4-log-httpd\") pod \"ceilometer-0\" (UID: \"dad05b98-c67b-44c9-bfff-4d0b39ad47b4\") " pod="openstack/ceilometer-0" Feb 17 20:27:49 crc kubenswrapper[4793]: I0217 20:27:49.398282 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dad05b98-c67b-44c9-bfff-4d0b39ad47b4-config-data\") pod \"ceilometer-0\" (UID: \"dad05b98-c67b-44c9-bfff-4d0b39ad47b4\") " pod="openstack/ceilometer-0" Feb 17 20:27:49 crc kubenswrapper[4793]: I0217 20:27:49.398313 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dad05b98-c67b-44c9-bfff-4d0b39ad47b4-scripts\") pod \"ceilometer-0\" (UID: \"dad05b98-c67b-44c9-bfff-4d0b39ad47b4\") " pod="openstack/ceilometer-0" Feb 17 20:27:49 crc kubenswrapper[4793]: I0217 20:27:49.450517 4793 scope.go:117] "RemoveContainer" containerID="7495d119d706da7a2adfc7cf6c4c76c9902e428d87f8b29733d405c14bc05da5" Feb 17 20:27:49 crc kubenswrapper[4793]: I0217 20:27:49.500306 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dad05b98-c67b-44c9-bfff-4d0b39ad47b4-config-data\") pod \"ceilometer-0\" (UID: \"dad05b98-c67b-44c9-bfff-4d0b39ad47b4\") " 
pod="openstack/ceilometer-0" Feb 17 20:27:49 crc kubenswrapper[4793]: I0217 20:27:49.500368 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dad05b98-c67b-44c9-bfff-4d0b39ad47b4-scripts\") pod \"ceilometer-0\" (UID: \"dad05b98-c67b-44c9-bfff-4d0b39ad47b4\") " pod="openstack/ceilometer-0" Feb 17 20:27:49 crc kubenswrapper[4793]: I0217 20:27:49.500402 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dad05b98-c67b-44c9-bfff-4d0b39ad47b4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dad05b98-c67b-44c9-bfff-4d0b39ad47b4\") " pod="openstack/ceilometer-0" Feb 17 20:27:49 crc kubenswrapper[4793]: I0217 20:27:49.500450 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnj9x\" (UniqueName: \"kubernetes.io/projected/dad05b98-c67b-44c9-bfff-4d0b39ad47b4-kube-api-access-rnj9x\") pod \"ceilometer-0\" (UID: \"dad05b98-c67b-44c9-bfff-4d0b39ad47b4\") " pod="openstack/ceilometer-0" Feb 17 20:27:49 crc kubenswrapper[4793]: I0217 20:27:49.500471 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dad05b98-c67b-44c9-bfff-4d0b39ad47b4-run-httpd\") pod \"ceilometer-0\" (UID: \"dad05b98-c67b-44c9-bfff-4d0b39ad47b4\") " pod="openstack/ceilometer-0" Feb 17 20:27:49 crc kubenswrapper[4793]: I0217 20:27:49.500544 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dad05b98-c67b-44c9-bfff-4d0b39ad47b4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dad05b98-c67b-44c9-bfff-4d0b39ad47b4\") " pod="openstack/ceilometer-0" Feb 17 20:27:49 crc kubenswrapper[4793]: I0217 20:27:49.500563 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/dad05b98-c67b-44c9-bfff-4d0b39ad47b4-log-httpd\") pod \"ceilometer-0\" (UID: \"dad05b98-c67b-44c9-bfff-4d0b39ad47b4\") " pod="openstack/ceilometer-0" Feb 17 20:27:49 crc kubenswrapper[4793]: I0217 20:27:49.501356 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dad05b98-c67b-44c9-bfff-4d0b39ad47b4-log-httpd\") pod \"ceilometer-0\" (UID: \"dad05b98-c67b-44c9-bfff-4d0b39ad47b4\") " pod="openstack/ceilometer-0" Feb 17 20:27:49 crc kubenswrapper[4793]: I0217 20:27:49.501644 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dad05b98-c67b-44c9-bfff-4d0b39ad47b4-run-httpd\") pod \"ceilometer-0\" (UID: \"dad05b98-c67b-44c9-bfff-4d0b39ad47b4\") " pod="openstack/ceilometer-0" Feb 17 20:27:49 crc kubenswrapper[4793]: I0217 20:27:49.510438 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dad05b98-c67b-44c9-bfff-4d0b39ad47b4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dad05b98-c67b-44c9-bfff-4d0b39ad47b4\") " pod="openstack/ceilometer-0" Feb 17 20:27:49 crc kubenswrapper[4793]: I0217 20:27:49.520327 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dad05b98-c67b-44c9-bfff-4d0b39ad47b4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dad05b98-c67b-44c9-bfff-4d0b39ad47b4\") " pod="openstack/ceilometer-0" Feb 17 20:27:49 crc kubenswrapper[4793]: I0217 20:27:49.522951 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dad05b98-c67b-44c9-bfff-4d0b39ad47b4-config-data\") pod \"ceilometer-0\" (UID: \"dad05b98-c67b-44c9-bfff-4d0b39ad47b4\") " pod="openstack/ceilometer-0" Feb 17 20:27:49 crc kubenswrapper[4793]: I0217 20:27:49.523393 4793 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dad05b98-c67b-44c9-bfff-4d0b39ad47b4-scripts\") pod \"ceilometer-0\" (UID: \"dad05b98-c67b-44c9-bfff-4d0b39ad47b4\") " pod="openstack/ceilometer-0" Feb 17 20:27:49 crc kubenswrapper[4793]: I0217 20:27:49.546843 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnj9x\" (UniqueName: \"kubernetes.io/projected/dad05b98-c67b-44c9-bfff-4d0b39ad47b4-kube-api-access-rnj9x\") pod \"ceilometer-0\" (UID: \"dad05b98-c67b-44c9-bfff-4d0b39ad47b4\") " pod="openstack/ceilometer-0" Feb 17 20:27:49 crc kubenswrapper[4793]: I0217 20:27:49.584409 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78ebba06-604c-4fb6-91b1-3727324ca4a8" path="/var/lib/kubelet/pods/78ebba06-604c-4fb6-91b1-3727324ca4a8/volumes" Feb 17 20:27:49 crc kubenswrapper[4793]: I0217 20:27:49.673914 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 20:27:49 crc kubenswrapper[4793]: I0217 20:27:49.689224 4793 scope.go:117] "RemoveContainer" containerID="9c17f4a250e3a88160a2598e66ca6aa415b0fb32eb6a82e88616383f38ee179e" Feb 17 20:27:50 crc kubenswrapper[4793]: I0217 20:27:50.247976 4793 generic.go:334] "Generic (PLEG): container finished" podID="347b7ec4-6cfe-431e-b6b4-70c0933118c6" containerID="a2ceda8fd39a91bac69fddf8087d5237ff44b350f39853603c6ecb66865b9008" exitCode=143 Feb 17 20:27:50 crc kubenswrapper[4793]: I0217 20:27:50.248345 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dbff57d7b-c7kmz" event={"ID":"347b7ec4-6cfe-431e-b6b4-70c0933118c6","Type":"ContainerDied","Data":"a2ceda8fd39a91bac69fddf8087d5237ff44b350f39853603c6ecb66865b9008"} Feb 17 20:27:50 crc kubenswrapper[4793]: I0217 20:27:50.262878 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d8998fd7c-xvl9z" 
event={"ID":"c09fdef5-1d53-4792-84d2-3bb953383525","Type":"ContainerStarted","Data":"c6f7744184d0b1cd331126ae866011740ca5632b97e848e694c0bc5e612b60de"} Feb 17 20:27:50 crc kubenswrapper[4793]: I0217 20:27:50.262939 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6d8998fd7c-xvl9z" Feb 17 20:27:50 crc kubenswrapper[4793]: I0217 20:27:50.299037 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6d8998fd7c-xvl9z" podStartSLOduration=4.299017403 podStartE2EDuration="4.299017403s" podCreationTimestamp="2026-02-17 20:27:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:27:50.279818236 +0000 UTC m=+1145.571516547" watchObservedRunningTime="2026-02-17 20:27:50.299017403 +0000 UTC m=+1145.590715714" Feb 17 20:27:50 crc kubenswrapper[4793]: I0217 20:27:50.374843 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6764d4c5d4-r2vt4" Feb 17 20:27:50 crc kubenswrapper[4793]: I0217 20:27:50.604729 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 20:27:51 crc kubenswrapper[4793]: I0217 20:27:51.248846 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-75d6fd885d-fw6ln" podUID="6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.164:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:58140->10.217.0.164:8443: read: connection reset by peer" Feb 17 20:27:51 crc kubenswrapper[4793]: I0217 20:27:51.249851 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-75d6fd885d-fw6ln" podUID="6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.164:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.164:8443: 
connect: connection refused"
Feb 17 20:27:51 crc kubenswrapper[4793]: I0217 20:27:51.315145 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dad05b98-c67b-44c9-bfff-4d0b39ad47b4","Type":"ContainerStarted","Data":"ff8fe10db732c8397a86719c045ee5a5f862a92648c34444e80104140d266fb6"}
Feb 17 20:27:51 crc kubenswrapper[4793]: I0217 20:27:51.315186 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dad05b98-c67b-44c9-bfff-4d0b39ad47b4","Type":"ContainerStarted","Data":"5912e1925a3d88771fb7d9bbf18f20dadf9cf5a29df3363f9b8140dcb8f0dde9"}
Feb 17 20:27:51 crc kubenswrapper[4793]: I0217 20:27:51.345043 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8445cc88bf-cdsst"
Feb 17 20:27:51 crc kubenswrapper[4793]: I0217 20:27:51.353547 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/24893fdf-f7bb-4be7-b5f9-edde49088bbe-httpd-config\") pod \"24893fdf-f7bb-4be7-b5f9-edde49088bbe\" (UID: \"24893fdf-f7bb-4be7-b5f9-edde49088bbe\") "
Feb 17 20:27:51 crc kubenswrapper[4793]: I0217 20:27:51.353637 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/24893fdf-f7bb-4be7-b5f9-edde49088bbe-public-tls-certs\") pod \"24893fdf-f7bb-4be7-b5f9-edde49088bbe\" (UID: \"24893fdf-f7bb-4be7-b5f9-edde49088bbe\") "
Feb 17 20:27:51 crc kubenswrapper[4793]: I0217 20:27:51.353721 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/24893fdf-f7bb-4be7-b5f9-edde49088bbe-config\") pod \"24893fdf-f7bb-4be7-b5f9-edde49088bbe\" (UID: \"24893fdf-f7bb-4be7-b5f9-edde49088bbe\") "
Feb 17 20:27:51 crc kubenswrapper[4793]: I0217 20:27:51.353769 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24893fdf-f7bb-4be7-b5f9-edde49088bbe-combined-ca-bundle\") pod \"24893fdf-f7bb-4be7-b5f9-edde49088bbe\" (UID: \"24893fdf-f7bb-4be7-b5f9-edde49088bbe\") "
Feb 17 20:27:51 crc kubenswrapper[4793]: I0217 20:27:51.353794 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/24893fdf-f7bb-4be7-b5f9-edde49088bbe-internal-tls-certs\") pod \"24893fdf-f7bb-4be7-b5f9-edde49088bbe\" (UID: \"24893fdf-f7bb-4be7-b5f9-edde49088bbe\") "
Feb 17 20:27:51 crc kubenswrapper[4793]: I0217 20:27:51.353843 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psnbz\" (UniqueName: \"kubernetes.io/projected/24893fdf-f7bb-4be7-b5f9-edde49088bbe-kube-api-access-psnbz\") pod \"24893fdf-f7bb-4be7-b5f9-edde49088bbe\" (UID: \"24893fdf-f7bb-4be7-b5f9-edde49088bbe\") "
Feb 17 20:27:51 crc kubenswrapper[4793]: I0217 20:27:51.353877 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/24893fdf-f7bb-4be7-b5f9-edde49088bbe-ovndb-tls-certs\") pod \"24893fdf-f7bb-4be7-b5f9-edde49088bbe\" (UID: \"24893fdf-f7bb-4be7-b5f9-edde49088bbe\") "
Feb 17 20:27:51 crc kubenswrapper[4793]: I0217 20:27:51.375015 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6e37eff3-1014-4cfd-9015-f51d8d7c4026","Type":"ContainerStarted","Data":"b6d6a411f11ee3cc1bd4c9fa6ac6e9fa7acd827ab82fac3c29965bacb06e73ff"}
Feb 17 20:27:51 crc kubenswrapper[4793]: I0217 20:27:51.381300 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24893fdf-f7bb-4be7-b5f9-edde49088bbe-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "24893fdf-f7bb-4be7-b5f9-edde49088bbe" (UID: "24893fdf-f7bb-4be7-b5f9-edde49088bbe"). InnerVolumeSpecName "httpd-config".
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 20:27:51 crc kubenswrapper[4793]: I0217 20:27:51.389660 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24893fdf-f7bb-4be7-b5f9-edde49088bbe-kube-api-access-psnbz" (OuterVolumeSpecName: "kube-api-access-psnbz") pod "24893fdf-f7bb-4be7-b5f9-edde49088bbe" (UID: "24893fdf-f7bb-4be7-b5f9-edde49088bbe"). InnerVolumeSpecName "kube-api-access-psnbz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 20:27:51 crc kubenswrapper[4793]: I0217 20:27:51.447075 4793 generic.go:334] "Generic (PLEG): container finished" podID="24893fdf-f7bb-4be7-b5f9-edde49088bbe" containerID="76b4aaa5ed1a0e6bb4740f9b7e2beadf425438ab57e7cf9c09065806132b1957" exitCode=0
Feb 17 20:27:51 crc kubenswrapper[4793]: I0217 20:27:51.447675 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8445cc88bf-cdsst"
Feb 17 20:27:51 crc kubenswrapper[4793]: I0217 20:27:51.448916 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8445cc88bf-cdsst" event={"ID":"24893fdf-f7bb-4be7-b5f9-edde49088bbe","Type":"ContainerDied","Data":"76b4aaa5ed1a0e6bb4740f9b7e2beadf425438ab57e7cf9c09065806132b1957"}
Feb 17 20:27:51 crc kubenswrapper[4793]: I0217 20:27:51.449051 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8445cc88bf-cdsst" event={"ID":"24893fdf-f7bb-4be7-b5f9-edde49088bbe","Type":"ContainerDied","Data":"7b69a363227f952d90e9864c71933bca593bc1f83e05b405f30a71ceffae29a1"}
Feb 17 20:27:51 crc kubenswrapper[4793]: I0217 20:27:51.449138 4793 scope.go:117] "RemoveContainer" containerID="df52dd790aa5a28c2b26fd4fc121472751ef6e36b499fdc38d69221bb3b2b15a"
Feb 17 20:27:51 crc kubenswrapper[4793]: I0217 20:27:51.453004 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="e2a90865-6b9b-4232-b918-9830565d998a" containerName="cinder-api-log"
containerID="cri-o://c4191ec2838539e744807bc9ccf2efa2386a47f669bd74fe585ad85dfdc80479" gracePeriod=30
Feb 17 20:27:51 crc kubenswrapper[4793]: I0217 20:27:51.453355 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="e2a90865-6b9b-4232-b918-9830565d998a" containerName="cinder-api" containerID="cri-o://0d098a96d17127d2628a57d35f8821d4fbaec500dc41954b56f461e7dd479824" gracePeriod=30
Feb 17 20:27:51 crc kubenswrapper[4793]: I0217 20:27:51.453906 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.828875548 podStartE2EDuration="6.45388704s" podCreationTimestamp="2026-02-17 20:27:45 +0000 UTC" firstStartedPulling="2026-02-17 20:27:46.536853456 +0000 UTC m=+1141.828551767" lastFinishedPulling="2026-02-17 20:27:47.161864948 +0000 UTC m=+1142.453563259" observedRunningTime="2026-02-17 20:27:51.448498097 +0000 UTC m=+1146.740196408" watchObservedRunningTime="2026-02-17 20:27:51.45388704 +0000 UTC m=+1146.745585351"
Feb 17 20:27:51 crc kubenswrapper[4793]: I0217 20:27:51.454261 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e2a90865-6b9b-4232-b918-9830565d998a","Type":"ContainerStarted","Data":"0d098a96d17127d2628a57d35f8821d4fbaec500dc41954b56f461e7dd479824"}
Feb 17 20:27:51 crc kubenswrapper[4793]: I0217 20:27:51.454288 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Feb 17 20:27:51 crc kubenswrapper[4793]: I0217 20:27:51.455665 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-psnbz\" (UniqueName: \"kubernetes.io/projected/24893fdf-f7bb-4be7-b5f9-edde49088bbe-kube-api-access-psnbz\") on node \"crc\" DevicePath \"\""
Feb 17 20:27:51 crc kubenswrapper[4793]: I0217 20:27:51.455810 4793 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName:
\"kubernetes.io/secret/24893fdf-f7bb-4be7-b5f9-edde49088bbe-httpd-config\") on node \"crc\" DevicePath \"\""
Feb 17 20:27:51 crc kubenswrapper[4793]: I0217 20:27:51.531162 4793 scope.go:117] "RemoveContainer" containerID="76b4aaa5ed1a0e6bb4740f9b7e2beadf425438ab57e7cf9c09065806132b1957"
Feb 17 20:27:51 crc kubenswrapper[4793]: I0217 20:27:51.648857 4793 scope.go:117] "RemoveContainer" containerID="df52dd790aa5a28c2b26fd4fc121472751ef6e36b499fdc38d69221bb3b2b15a"
Feb 17 20:27:51 crc kubenswrapper[4793]: E0217 20:27:51.668312 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df52dd790aa5a28c2b26fd4fc121472751ef6e36b499fdc38d69221bb3b2b15a\": container with ID starting with df52dd790aa5a28c2b26fd4fc121472751ef6e36b499fdc38d69221bb3b2b15a not found: ID does not exist" containerID="df52dd790aa5a28c2b26fd4fc121472751ef6e36b499fdc38d69221bb3b2b15a"
Feb 17 20:27:51 crc kubenswrapper[4793]: I0217 20:27:51.668353 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df52dd790aa5a28c2b26fd4fc121472751ef6e36b499fdc38d69221bb3b2b15a"} err="failed to get container status \"df52dd790aa5a28c2b26fd4fc121472751ef6e36b499fdc38d69221bb3b2b15a\": rpc error: code = NotFound desc = could not find container \"df52dd790aa5a28c2b26fd4fc121472751ef6e36b499fdc38d69221bb3b2b15a\": container with ID starting with df52dd790aa5a28c2b26fd4fc121472751ef6e36b499fdc38d69221bb3b2b15a not found: ID does not exist"
Feb 17 20:27:51 crc kubenswrapper[4793]: I0217 20:27:51.668379 4793 scope.go:117] "RemoveContainer" containerID="76b4aaa5ed1a0e6bb4740f9b7e2beadf425438ab57e7cf9c09065806132b1957"
Feb 17 20:27:51 crc kubenswrapper[4793]: E0217 20:27:51.668835 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76b4aaa5ed1a0e6bb4740f9b7e2beadf425438ab57e7cf9c09065806132b1957\": container with ID starting with
76b4aaa5ed1a0e6bb4740f9b7e2beadf425438ab57e7cf9c09065806132b1957 not found: ID does not exist" containerID="76b4aaa5ed1a0e6bb4740f9b7e2beadf425438ab57e7cf9c09065806132b1957"
Feb 17 20:27:51 crc kubenswrapper[4793]: I0217 20:27:51.668876 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76b4aaa5ed1a0e6bb4740f9b7e2beadf425438ab57e7cf9c09065806132b1957"} err="failed to get container status \"76b4aaa5ed1a0e6bb4740f9b7e2beadf425438ab57e7cf9c09065806132b1957\": rpc error: code = NotFound desc = could not find container \"76b4aaa5ed1a0e6bb4740f9b7e2beadf425438ab57e7cf9c09065806132b1957\": container with ID starting with 76b4aaa5ed1a0e6bb4740f9b7e2beadf425438ab57e7cf9c09065806132b1957 not found: ID does not exist"
Feb 17 20:27:51 crc kubenswrapper[4793]: I0217 20:27:51.741181 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24893fdf-f7bb-4be7-b5f9-edde49088bbe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "24893fdf-f7bb-4be7-b5f9-edde49088bbe" (UID: "24893fdf-f7bb-4be7-b5f9-edde49088bbe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 20:27:51 crc kubenswrapper[4793]: I0217 20:27:51.802146 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24893fdf-f7bb-4be7-b5f9-edde49088bbe-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 20:27:51 crc kubenswrapper[4793]: I0217 20:27:51.807875 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24893fdf-f7bb-4be7-b5f9-edde49088bbe-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "24893fdf-f7bb-4be7-b5f9-edde49088bbe" (UID: "24893fdf-f7bb-4be7-b5f9-edde49088bbe"). InnerVolumeSpecName "internal-tls-certs".
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 20:27:51 crc kubenswrapper[4793]: I0217 20:27:51.812731 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24893fdf-f7bb-4be7-b5f9-edde49088bbe-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "24893fdf-f7bb-4be7-b5f9-edde49088bbe" (UID: "24893fdf-f7bb-4be7-b5f9-edde49088bbe"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 20:27:51 crc kubenswrapper[4793]: I0217 20:27:51.818845 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24893fdf-f7bb-4be7-b5f9-edde49088bbe-config" (OuterVolumeSpecName: "config") pod "24893fdf-f7bb-4be7-b5f9-edde49088bbe" (UID: "24893fdf-f7bb-4be7-b5f9-edde49088bbe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 20:27:51 crc kubenswrapper[4793]: I0217 20:27:51.842868 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24893fdf-f7bb-4be7-b5f9-edde49088bbe-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "24893fdf-f7bb-4be7-b5f9-edde49088bbe" (UID: "24893fdf-f7bb-4be7-b5f9-edde49088bbe"). InnerVolumeSpecName "ovndb-tls-certs".
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 20:27:51 crc kubenswrapper[4793]: I0217 20:27:51.900340 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0"
Feb 17 20:27:51 crc kubenswrapper[4793]: I0217 20:27:51.901285 4793 scope.go:117] "RemoveContainer" containerID="dbb337f685980e2c64197a2c717a3b2edd7c5078beac0a5e37c6e61d8dc72ef6"
Feb 17 20:27:51 crc kubenswrapper[4793]: I0217 20:27:51.904173 4793 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/24893fdf-f7bb-4be7-b5f9-edde49088bbe-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 17 20:27:51 crc kubenswrapper[4793]: I0217 20:27:51.904194 4793 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/24893fdf-f7bb-4be7-b5f9-edde49088bbe-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 17 20:27:51 crc kubenswrapper[4793]: I0217 20:27:51.904203 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/24893fdf-f7bb-4be7-b5f9-edde49088bbe-config\") on node \"crc\" DevicePath \"\""
Feb 17 20:27:51 crc kubenswrapper[4793]: I0217 20:27:51.904212 4793 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/24893fdf-f7bb-4be7-b5f9-edde49088bbe-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 17 20:27:51 crc kubenswrapper[4793]: I0217 20:27:51.935363 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.935344254 podStartE2EDuration="6.935344254s" podCreationTimestamp="2026-02-17 20:27:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:27:51.516077476 +0000 UTC m=+1146.807775787" watchObservedRunningTime="2026-02-17 20:27:51.935344254
+0000 UTC m=+1147.227042565"
Feb 17 20:27:51 crc kubenswrapper[4793]: I0217 20:27:51.963131 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0"
Feb 17 20:27:51 crc kubenswrapper[4793]: E0217 20:27:51.963770 4793 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 482d219bc6ca6e52bff83b3958809678c883c2f0ec84a89a77cd763ddb26ce8d is running failed: container process not found" containerID="482d219bc6ca6e52bff83b3958809678c883c2f0ec84a89a77cd763ddb26ce8d" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Feb 17 20:27:51 crc kubenswrapper[4793]: E0217 20:27:51.964213 4793 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 482d219bc6ca6e52bff83b3958809678c883c2f0ec84a89a77cd763ddb26ce8d is running failed: container process not found" containerID="482d219bc6ca6e52bff83b3958809678c883c2f0ec84a89a77cd763ddb26ce8d" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Feb 17 20:27:51 crc kubenswrapper[4793]: E0217 20:27:51.964468 4793 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 482d219bc6ca6e52bff83b3958809678c883c2f0ec84a89a77cd763ddb26ce8d is running failed: container process not found" containerID="482d219bc6ca6e52bff83b3958809678c883c2f0ec84a89a77cd763ddb26ce8d" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Feb 17 20:27:51 crc kubenswrapper[4793]: E0217 20:27:51.964504 4793 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 482d219bc6ca6e52bff83b3958809678c883c2f0ec84a89a77cd763ddb26ce8d is running failed: container process not found" probeType="Startup" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
containerName="watcher-applier"
Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.445648 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.448076 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8445cc88bf-cdsst"]
Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.457340 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-8445cc88bf-cdsst"]
Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.480149 4793 generic.go:334] "Generic (PLEG): container finished" podID="06ecbc8e-aa9f-4025-883d-65e4c000d986" containerID="482d219bc6ca6e52bff83b3958809678c883c2f0ec84a89a77cd763ddb26ce8d" exitCode=1
Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.480314 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"06ecbc8e-aa9f-4025-883d-65e4c000d986","Type":"ContainerDied","Data":"482d219bc6ca6e52bff83b3958809678c883c2f0ec84a89a77cd763ddb26ce8d"}
Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.480353 4793 scope.go:117] "RemoveContainer" containerID="a9952a3fe14e8b345f3d22c7384c83944f30d42e627058ed70589fdf0cc7470e"
Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.481367 4793 scope.go:117] "RemoveContainer" containerID="482d219bc6ca6e52bff83b3958809678c883c2f0ec84a89a77cd763ddb26ce8d"
Feb 17 20:27:52 crc kubenswrapper[4793]: E0217 20:27:52.481951 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.489853 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0"
event={"ID":"dad05b98-c67b-44c9-bfff-4d0b39ad47b4","Type":"ContainerStarted","Data":"25d7ecb3ef45a366adaa06b2f009b18b477a7e84dbc6fa2a4add9307c6df357d"}
Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.491435 4793 generic.go:334] "Generic (PLEG): container finished" podID="6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf" containerID="3e010c26342664c3558b2e0e211867c9c895e91944d0dd2d5c0257fc128d37e1" exitCode=0
Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.491475 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75d6fd885d-fw6ln" event={"ID":"6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf","Type":"ContainerDied","Data":"3e010c26342664c3558b2e0e211867c9c895e91944d0dd2d5c0257fc128d37e1"}
Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.502681 4793 generic.go:334] "Generic (PLEG): container finished" podID="347b7ec4-6cfe-431e-b6b4-70c0933118c6" containerID="58c04913be025017b0e008ceb1a089cd3097d9b92638bb6fecb973a23a8c18af" exitCode=0
Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.502756 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dbff57d7b-c7kmz" event={"ID":"347b7ec4-6cfe-431e-b6b4-70c0933118c6","Type":"ContainerDied","Data":"58c04913be025017b0e008ceb1a089cd3097d9b92638bb6fecb973a23a8c18af"}
Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.513127 4793 generic.go:334] "Generic (PLEG): container finished" podID="e2a90865-6b9b-4232-b918-9830565d998a" containerID="0d098a96d17127d2628a57d35f8821d4fbaec500dc41954b56f461e7dd479824" exitCode=0
Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.513538 4793 generic.go:334] "Generic (PLEG): container finished" podID="e2a90865-6b9b-4232-b918-9830565d998a" containerID="c4191ec2838539e744807bc9ccf2efa2386a47f669bd74fe585ad85dfdc80479" exitCode=143
Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.513794 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0"
event={"ID":"e2a90865-6b9b-4232-b918-9830565d998a","Type":"ContainerDied","Data":"0d098a96d17127d2628a57d35f8821d4fbaec500dc41954b56f461e7dd479824"}
Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.513842 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e2a90865-6b9b-4232-b918-9830565d998a","Type":"ContainerDied","Data":"c4191ec2838539e744807bc9ccf2efa2386a47f669bd74fe585ad85dfdc80479"}
Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.513855 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e2a90865-6b9b-4232-b918-9830565d998a","Type":"ContainerDied","Data":"ce0a26d1ef791edbb76927efb33b02d80aab1bacab669476f1532c281d9903a9"}
Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.514734 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.539219 4793 scope.go:117] "RemoveContainer" containerID="0d098a96d17127d2628a57d35f8821d4fbaec500dc41954b56f461e7dd479824"
Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.588643 4793 scope.go:117] "RemoveContainer" containerID="c4191ec2838539e744807bc9ccf2efa2386a47f669bd74fe585ad85dfdc80479"
Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.614940 4793 scope.go:117] "RemoveContainer" containerID="0d098a96d17127d2628a57d35f8821d4fbaec500dc41954b56f461e7dd479824"
Feb 17 20:27:52 crc kubenswrapper[4793]: E0217 20:27:52.616844 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d098a96d17127d2628a57d35f8821d4fbaec500dc41954b56f461e7dd479824\": container with ID starting with 0d098a96d17127d2628a57d35f8821d4fbaec500dc41954b56f461e7dd479824 not found: ID does not exist" containerID="0d098a96d17127d2628a57d35f8821d4fbaec500dc41954b56f461e7dd479824"
Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.616874 4793
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d098a96d17127d2628a57d35f8821d4fbaec500dc41954b56f461e7dd479824"} err="failed to get container status \"0d098a96d17127d2628a57d35f8821d4fbaec500dc41954b56f461e7dd479824\": rpc error: code = NotFound desc = could not find container \"0d098a96d17127d2628a57d35f8821d4fbaec500dc41954b56f461e7dd479824\": container with ID starting with 0d098a96d17127d2628a57d35f8821d4fbaec500dc41954b56f461e7dd479824 not found: ID does not exist"
Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.616897 4793 scope.go:117] "RemoveContainer" containerID="c4191ec2838539e744807bc9ccf2efa2386a47f669bd74fe585ad85dfdc80479"
Feb 17 20:27:52 crc kubenswrapper[4793]: E0217 20:27:52.617396 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4191ec2838539e744807bc9ccf2efa2386a47f669bd74fe585ad85dfdc80479\": container with ID starting with c4191ec2838539e744807bc9ccf2efa2386a47f669bd74fe585ad85dfdc80479 not found: ID does not exist" containerID="c4191ec2838539e744807bc9ccf2efa2386a47f669bd74fe585ad85dfdc80479"
Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.617451 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4191ec2838539e744807bc9ccf2efa2386a47f669bd74fe585ad85dfdc80479"} err="failed to get container status \"c4191ec2838539e744807bc9ccf2efa2386a47f669bd74fe585ad85dfdc80479\": rpc error: code = NotFound desc = could not find container \"c4191ec2838539e744807bc9ccf2efa2386a47f669bd74fe585ad85dfdc80479\": container with ID starting with c4191ec2838539e744807bc9ccf2efa2386a47f669bd74fe585ad85dfdc80479 not found: ID does not exist"
Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.617479 4793 scope.go:117] "RemoveContainer" containerID="0d098a96d17127d2628a57d35f8821d4fbaec500dc41954b56f461e7dd479824"
Feb 17 20:27:52 crc kubenswrapper[4793]: I0217
20:27:52.619038 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d098a96d17127d2628a57d35f8821d4fbaec500dc41954b56f461e7dd479824"} err="failed to get container status \"0d098a96d17127d2628a57d35f8821d4fbaec500dc41954b56f461e7dd479824\": rpc error: code = NotFound desc = could not find container \"0d098a96d17127d2628a57d35f8821d4fbaec500dc41954b56f461e7dd479824\": container with ID starting with 0d098a96d17127d2628a57d35f8821d4fbaec500dc41954b56f461e7dd479824 not found: ID does not exist"
Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.619084 4793 scope.go:117] "RemoveContainer" containerID="c4191ec2838539e744807bc9ccf2efa2386a47f669bd74fe585ad85dfdc80479"
Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.619400 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4191ec2838539e744807bc9ccf2efa2386a47f669bd74fe585ad85dfdc80479"} err="failed to get container status \"c4191ec2838539e744807bc9ccf2efa2386a47f669bd74fe585ad85dfdc80479\": rpc error: code = NotFound desc = could not find container \"c4191ec2838539e744807bc9ccf2efa2386a47f669bd74fe585ad85dfdc80479\": container with ID starting with c4191ec2838539e744807bc9ccf2efa2386a47f669bd74fe585ad85dfdc80479 not found: ID does not exist"
Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.638156 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqm64\" (UniqueName: \"kubernetes.io/projected/e2a90865-6b9b-4232-b918-9830565d998a-kube-api-access-wqm64\") pod \"e2a90865-6b9b-4232-b918-9830565d998a\" (UID: \"e2a90865-6b9b-4232-b918-9830565d998a\") "
Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.638234 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e2a90865-6b9b-4232-b918-9830565d998a-etc-machine-id\") pod \"e2a90865-6b9b-4232-b918-9830565d998a\" (UID:
\"e2a90865-6b9b-4232-b918-9830565d998a\") "
Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.638314 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a90865-6b9b-4232-b918-9830565d998a-config-data\") pod \"e2a90865-6b9b-4232-b918-9830565d998a\" (UID: \"e2a90865-6b9b-4232-b918-9830565d998a\") "
Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.638377 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2a90865-6b9b-4232-b918-9830565d998a-config-data-custom\") pod \"e2a90865-6b9b-4232-b918-9830565d998a\" (UID: \"e2a90865-6b9b-4232-b918-9830565d998a\") "
Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.638398 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2a90865-6b9b-4232-b918-9830565d998a-logs\") pod \"e2a90865-6b9b-4232-b918-9830565d998a\" (UID: \"e2a90865-6b9b-4232-b918-9830565d998a\") "
Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.638429 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2a90865-6b9b-4232-b918-9830565d998a-combined-ca-bundle\") pod \"e2a90865-6b9b-4232-b918-9830565d998a\" (UID: \"e2a90865-6b9b-4232-b918-9830565d998a\") "
Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.638480 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2a90865-6b9b-4232-b918-9830565d998a-scripts\") pod \"e2a90865-6b9b-4232-b918-9830565d998a\" (UID: \"e2a90865-6b9b-4232-b918-9830565d998a\") "
Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.649027 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2a90865-6b9b-4232-b918-9830565d998a-etc-machine-id"
(OuterVolumeSpecName: "etc-machine-id") pod "e2a90865-6b9b-4232-b918-9830565d998a" (UID: "e2a90865-6b9b-4232-b918-9830565d998a"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.651817 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2a90865-6b9b-4232-b918-9830565d998a-scripts" (OuterVolumeSpecName: "scripts") pod "e2a90865-6b9b-4232-b918-9830565d998a" (UID: "e2a90865-6b9b-4232-b918-9830565d998a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.653246 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2a90865-6b9b-4232-b918-9830565d998a-logs" (OuterVolumeSpecName: "logs") pod "e2a90865-6b9b-4232-b918-9830565d998a" (UID: "e2a90865-6b9b-4232-b918-9830565d998a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.653983 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2a90865-6b9b-4232-b918-9830565d998a-kube-api-access-wqm64" (OuterVolumeSpecName: "kube-api-access-wqm64") pod "e2a90865-6b9b-4232-b918-9830565d998a" (UID: "e2a90865-6b9b-4232-b918-9830565d998a"). InnerVolumeSpecName "kube-api-access-wqm64". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.656813 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2a90865-6b9b-4232-b918-9830565d998a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e2a90865-6b9b-4232-b918-9830565d998a" (UID: "e2a90865-6b9b-4232-b918-9830565d998a"). InnerVolumeSpecName "config-data-custom".
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.719260 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2a90865-6b9b-4232-b918-9830565d998a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e2a90865-6b9b-4232-b918-9830565d998a" (UID: "e2a90865-6b9b-4232-b918-9830565d998a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.741191 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2a90865-6b9b-4232-b918-9830565d998a-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.741248 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqm64\" (UniqueName: \"kubernetes.io/projected/e2a90865-6b9b-4232-b918-9830565d998a-kube-api-access-wqm64\") on node \"crc\" DevicePath \"\""
Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.741258 4793 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e2a90865-6b9b-4232-b918-9830565d998a-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.741266 4793 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2a90865-6b9b-4232-b918-9830565d998a-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.741275 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2a90865-6b9b-4232-b918-9830565d998a-logs\") on node \"crc\" DevicePath \"\""
Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.741282 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName:
\"kubernetes.io/secret/e2a90865-6b9b-4232-b918-9830565d998a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.792885 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2a90865-6b9b-4232-b918-9830565d998a-config-data" (OuterVolumeSpecName: "config-data") pod "e2a90865-6b9b-4232-b918-9830565d998a" (UID: "e2a90865-6b9b-4232-b918-9830565d998a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.843574 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a90865-6b9b-4232-b918-9830565d998a-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.930647 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7dbff57d7b-c7kmz" Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.961758 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.971666 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.979817 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 17 20:27:52 crc kubenswrapper[4793]: E0217 20:27:52.980272 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24893fdf-f7bb-4be7-b5f9-edde49088bbe" containerName="neutron-httpd" Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.980285 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="24893fdf-f7bb-4be7-b5f9-edde49088bbe" containerName="neutron-httpd" Feb 17 20:27:52 crc kubenswrapper[4793]: E0217 20:27:52.980312 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24893fdf-f7bb-4be7-b5f9-edde49088bbe" 
containerName="neutron-api" Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.980318 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="24893fdf-f7bb-4be7-b5f9-edde49088bbe" containerName="neutron-api" Feb 17 20:27:52 crc kubenswrapper[4793]: E0217 20:27:52.980331 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="347b7ec4-6cfe-431e-b6b4-70c0933118c6" containerName="barbican-api" Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.980337 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="347b7ec4-6cfe-431e-b6b4-70c0933118c6" containerName="barbican-api" Feb 17 20:27:52 crc kubenswrapper[4793]: E0217 20:27:52.980346 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2a90865-6b9b-4232-b918-9830565d998a" containerName="cinder-api-log" Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.980351 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2a90865-6b9b-4232-b918-9830565d998a" containerName="cinder-api-log" Feb 17 20:27:52 crc kubenswrapper[4793]: E0217 20:27:52.980362 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2a90865-6b9b-4232-b918-9830565d998a" containerName="cinder-api" Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.980368 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2a90865-6b9b-4232-b918-9830565d998a" containerName="cinder-api" Feb 17 20:27:52 crc kubenswrapper[4793]: E0217 20:27:52.980386 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="347b7ec4-6cfe-431e-b6b4-70c0933118c6" containerName="barbican-api-log" Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.980392 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="347b7ec4-6cfe-431e-b6b4-70c0933118c6" containerName="barbican-api-log" Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.980557 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="347b7ec4-6cfe-431e-b6b4-70c0933118c6" containerName="barbican-api-log" Feb 17 
20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.980567 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="24893fdf-f7bb-4be7-b5f9-edde49088bbe" containerName="neutron-httpd" Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.980578 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="347b7ec4-6cfe-431e-b6b4-70c0933118c6" containerName="barbican-api" Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.980588 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="24893fdf-f7bb-4be7-b5f9-edde49088bbe" containerName="neutron-api" Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.980600 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2a90865-6b9b-4232-b918-9830565d998a" containerName="cinder-api" Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.980611 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2a90865-6b9b-4232-b918-9830565d998a" containerName="cinder-api-log" Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.981888 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.990226 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.990407 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 17 20:27:52 crc kubenswrapper[4793]: I0217 20:27:52.990988 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 17 20:27:53 crc kubenswrapper[4793]: I0217 20:27:53.003597 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 17 20:27:53 crc kubenswrapper[4793]: I0217 20:27:53.053392 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/347b7ec4-6cfe-431e-b6b4-70c0933118c6-config-data\") pod \"347b7ec4-6cfe-431e-b6b4-70c0933118c6\" (UID: \"347b7ec4-6cfe-431e-b6b4-70c0933118c6\") " Feb 17 20:27:53 crc kubenswrapper[4793]: I0217 20:27:53.053654 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/347b7ec4-6cfe-431e-b6b4-70c0933118c6-config-data-custom\") pod \"347b7ec4-6cfe-431e-b6b4-70c0933118c6\" (UID: \"347b7ec4-6cfe-431e-b6b4-70c0933118c6\") " Feb 17 20:27:53 crc kubenswrapper[4793]: I0217 20:27:53.053704 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/347b7ec4-6cfe-431e-b6b4-70c0933118c6-combined-ca-bundle\") pod \"347b7ec4-6cfe-431e-b6b4-70c0933118c6\" (UID: \"347b7ec4-6cfe-431e-b6b4-70c0933118c6\") " Feb 17 20:27:53 crc kubenswrapper[4793]: I0217 20:27:53.053818 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/347b7ec4-6cfe-431e-b6b4-70c0933118c6-logs\") pod \"347b7ec4-6cfe-431e-b6b4-70c0933118c6\" (UID: \"347b7ec4-6cfe-431e-b6b4-70c0933118c6\") " Feb 17 20:27:53 crc kubenswrapper[4793]: I0217 20:27:53.053860 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4glm\" (UniqueName: \"kubernetes.io/projected/347b7ec4-6cfe-431e-b6b4-70c0933118c6-kube-api-access-v4glm\") pod \"347b7ec4-6cfe-431e-b6b4-70c0933118c6\" (UID: \"347b7ec4-6cfe-431e-b6b4-70c0933118c6\") " Feb 17 20:27:53 crc kubenswrapper[4793]: I0217 20:27:53.063155 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/347b7ec4-6cfe-431e-b6b4-70c0933118c6-logs" (OuterVolumeSpecName: "logs") pod "347b7ec4-6cfe-431e-b6b4-70c0933118c6" (UID: "347b7ec4-6cfe-431e-b6b4-70c0933118c6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:27:53 crc kubenswrapper[4793]: I0217 20:27:53.074805 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/347b7ec4-6cfe-431e-b6b4-70c0933118c6-kube-api-access-v4glm" (OuterVolumeSpecName: "kube-api-access-v4glm") pod "347b7ec4-6cfe-431e-b6b4-70c0933118c6" (UID: "347b7ec4-6cfe-431e-b6b4-70c0933118c6"). InnerVolumeSpecName "kube-api-access-v4glm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:27:53 crc kubenswrapper[4793]: I0217 20:27:53.094991 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/347b7ec4-6cfe-431e-b6b4-70c0933118c6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "347b7ec4-6cfe-431e-b6b4-70c0933118c6" (UID: "347b7ec4-6cfe-431e-b6b4-70c0933118c6"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:27:53 crc kubenswrapper[4793]: I0217 20:27:53.108854 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/347b7ec4-6cfe-431e-b6b4-70c0933118c6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "347b7ec4-6cfe-431e-b6b4-70c0933118c6" (UID: "347b7ec4-6cfe-431e-b6b4-70c0933118c6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:27:53 crc kubenswrapper[4793]: I0217 20:27:53.134855 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/347b7ec4-6cfe-431e-b6b4-70c0933118c6-config-data" (OuterVolumeSpecName: "config-data") pod "347b7ec4-6cfe-431e-b6b4-70c0933118c6" (UID: "347b7ec4-6cfe-431e-b6b4-70c0933118c6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:27:53 crc kubenswrapper[4793]: I0217 20:27:53.155649 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9ebb7ca-2c47-4f64-81f5-04d26737ce44-config-data\") pod \"cinder-api-0\" (UID: \"e9ebb7ca-2c47-4f64-81f5-04d26737ce44\") " pod="openstack/cinder-api-0" Feb 17 20:27:53 crc kubenswrapper[4793]: I0217 20:27:53.155718 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e9ebb7ca-2c47-4f64-81f5-04d26737ce44-config-data-custom\") pod \"cinder-api-0\" (UID: \"e9ebb7ca-2c47-4f64-81f5-04d26737ce44\") " pod="openstack/cinder-api-0" Feb 17 20:27:53 crc kubenswrapper[4793]: I0217 20:27:53.155734 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9ebb7ca-2c47-4f64-81f5-04d26737ce44-scripts\") pod \"cinder-api-0\" (UID: 
\"e9ebb7ca-2c47-4f64-81f5-04d26737ce44\") " pod="openstack/cinder-api-0" Feb 17 20:27:53 crc kubenswrapper[4793]: I0217 20:27:53.155750 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9ebb7ca-2c47-4f64-81f5-04d26737ce44-logs\") pod \"cinder-api-0\" (UID: \"e9ebb7ca-2c47-4f64-81f5-04d26737ce44\") " pod="openstack/cinder-api-0" Feb 17 20:27:53 crc kubenswrapper[4793]: I0217 20:27:53.155768 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz6kt\" (UniqueName: \"kubernetes.io/projected/e9ebb7ca-2c47-4f64-81f5-04d26737ce44-kube-api-access-jz6kt\") pod \"cinder-api-0\" (UID: \"e9ebb7ca-2c47-4f64-81f5-04d26737ce44\") " pod="openstack/cinder-api-0" Feb 17 20:27:53 crc kubenswrapper[4793]: I0217 20:27:53.155885 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9ebb7ca-2c47-4f64-81f5-04d26737ce44-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"e9ebb7ca-2c47-4f64-81f5-04d26737ce44\") " pod="openstack/cinder-api-0" Feb 17 20:27:53 crc kubenswrapper[4793]: I0217 20:27:53.155906 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9ebb7ca-2c47-4f64-81f5-04d26737ce44-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e9ebb7ca-2c47-4f64-81f5-04d26737ce44\") " pod="openstack/cinder-api-0" Feb 17 20:27:53 crc kubenswrapper[4793]: I0217 20:27:53.155926 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9ebb7ca-2c47-4f64-81f5-04d26737ce44-public-tls-certs\") pod \"cinder-api-0\" (UID: \"e9ebb7ca-2c47-4f64-81f5-04d26737ce44\") " pod="openstack/cinder-api-0" Feb 17 20:27:53 crc kubenswrapper[4793]: 
I0217 20:27:53.155950 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e9ebb7ca-2c47-4f64-81f5-04d26737ce44-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e9ebb7ca-2c47-4f64-81f5-04d26737ce44\") " pod="openstack/cinder-api-0" Feb 17 20:27:53 crc kubenswrapper[4793]: I0217 20:27:53.155996 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/347b7ec4-6cfe-431e-b6b4-70c0933118c6-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:53 crc kubenswrapper[4793]: I0217 20:27:53.156006 4793 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/347b7ec4-6cfe-431e-b6b4-70c0933118c6-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:53 crc kubenswrapper[4793]: I0217 20:27:53.156017 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/347b7ec4-6cfe-431e-b6b4-70c0933118c6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:53 crc kubenswrapper[4793]: I0217 20:27:53.156025 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/347b7ec4-6cfe-431e-b6b4-70c0933118c6-logs\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:53 crc kubenswrapper[4793]: I0217 20:27:53.156034 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4glm\" (UniqueName: \"kubernetes.io/projected/347b7ec4-6cfe-431e-b6b4-70c0933118c6-kube-api-access-v4glm\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:53 crc kubenswrapper[4793]: I0217 20:27:53.257165 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9ebb7ca-2c47-4f64-81f5-04d26737ce44-config-data\") pod \"cinder-api-0\" (UID: \"e9ebb7ca-2c47-4f64-81f5-04d26737ce44\") " 
pod="openstack/cinder-api-0" Feb 17 20:27:53 crc kubenswrapper[4793]: I0217 20:27:53.257232 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e9ebb7ca-2c47-4f64-81f5-04d26737ce44-config-data-custom\") pod \"cinder-api-0\" (UID: \"e9ebb7ca-2c47-4f64-81f5-04d26737ce44\") " pod="openstack/cinder-api-0" Feb 17 20:27:53 crc kubenswrapper[4793]: I0217 20:27:53.257262 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9ebb7ca-2c47-4f64-81f5-04d26737ce44-scripts\") pod \"cinder-api-0\" (UID: \"e9ebb7ca-2c47-4f64-81f5-04d26737ce44\") " pod="openstack/cinder-api-0" Feb 17 20:27:53 crc kubenswrapper[4793]: I0217 20:27:53.257279 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9ebb7ca-2c47-4f64-81f5-04d26737ce44-logs\") pod \"cinder-api-0\" (UID: \"e9ebb7ca-2c47-4f64-81f5-04d26737ce44\") " pod="openstack/cinder-api-0" Feb 17 20:27:53 crc kubenswrapper[4793]: I0217 20:27:53.257298 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jz6kt\" (UniqueName: \"kubernetes.io/projected/e9ebb7ca-2c47-4f64-81f5-04d26737ce44-kube-api-access-jz6kt\") pod \"cinder-api-0\" (UID: \"e9ebb7ca-2c47-4f64-81f5-04d26737ce44\") " pod="openstack/cinder-api-0" Feb 17 20:27:53 crc kubenswrapper[4793]: I0217 20:27:53.257403 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9ebb7ca-2c47-4f64-81f5-04d26737ce44-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"e9ebb7ca-2c47-4f64-81f5-04d26737ce44\") " pod="openstack/cinder-api-0" Feb 17 20:27:53 crc kubenswrapper[4793]: I0217 20:27:53.257430 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e9ebb7ca-2c47-4f64-81f5-04d26737ce44-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e9ebb7ca-2c47-4f64-81f5-04d26737ce44\") " pod="openstack/cinder-api-0" Feb 17 20:27:53 crc kubenswrapper[4793]: I0217 20:27:53.257449 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9ebb7ca-2c47-4f64-81f5-04d26737ce44-public-tls-certs\") pod \"cinder-api-0\" (UID: \"e9ebb7ca-2c47-4f64-81f5-04d26737ce44\") " pod="openstack/cinder-api-0" Feb 17 20:27:53 crc kubenswrapper[4793]: I0217 20:27:53.257468 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e9ebb7ca-2c47-4f64-81f5-04d26737ce44-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e9ebb7ca-2c47-4f64-81f5-04d26737ce44\") " pod="openstack/cinder-api-0" Feb 17 20:27:53 crc kubenswrapper[4793]: I0217 20:27:53.257562 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e9ebb7ca-2c47-4f64-81f5-04d26737ce44-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e9ebb7ca-2c47-4f64-81f5-04d26737ce44\") " pod="openstack/cinder-api-0" Feb 17 20:27:53 crc kubenswrapper[4793]: I0217 20:27:53.257840 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9ebb7ca-2c47-4f64-81f5-04d26737ce44-logs\") pod \"cinder-api-0\" (UID: \"e9ebb7ca-2c47-4f64-81f5-04d26737ce44\") " pod="openstack/cinder-api-0" Feb 17 20:27:53 crc kubenswrapper[4793]: I0217 20:27:53.263051 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9ebb7ca-2c47-4f64-81f5-04d26737ce44-scripts\") pod \"cinder-api-0\" (UID: \"e9ebb7ca-2c47-4f64-81f5-04d26737ce44\") " pod="openstack/cinder-api-0" Feb 17 20:27:53 crc kubenswrapper[4793]: I0217 20:27:53.266273 4793 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e9ebb7ca-2c47-4f64-81f5-04d26737ce44-config-data-custom\") pod \"cinder-api-0\" (UID: \"e9ebb7ca-2c47-4f64-81f5-04d26737ce44\") " pod="openstack/cinder-api-0" Feb 17 20:27:53 crc kubenswrapper[4793]: I0217 20:27:53.266827 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9ebb7ca-2c47-4f64-81f5-04d26737ce44-config-data\") pod \"cinder-api-0\" (UID: \"e9ebb7ca-2c47-4f64-81f5-04d26737ce44\") " pod="openstack/cinder-api-0" Feb 17 20:27:53 crc kubenswrapper[4793]: I0217 20:27:53.266920 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9ebb7ca-2c47-4f64-81f5-04d26737ce44-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"e9ebb7ca-2c47-4f64-81f5-04d26737ce44\") " pod="openstack/cinder-api-0" Feb 17 20:27:53 crc kubenswrapper[4793]: I0217 20:27:53.268073 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9ebb7ca-2c47-4f64-81f5-04d26737ce44-public-tls-certs\") pod \"cinder-api-0\" (UID: \"e9ebb7ca-2c47-4f64-81f5-04d26737ce44\") " pod="openstack/cinder-api-0" Feb 17 20:27:53 crc kubenswrapper[4793]: I0217 20:27:53.268341 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9ebb7ca-2c47-4f64-81f5-04d26737ce44-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e9ebb7ca-2c47-4f64-81f5-04d26737ce44\") " pod="openstack/cinder-api-0" Feb 17 20:27:53 crc kubenswrapper[4793]: I0217 20:27:53.276247 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jz6kt\" (UniqueName: \"kubernetes.io/projected/e9ebb7ca-2c47-4f64-81f5-04d26737ce44-kube-api-access-jz6kt\") pod \"cinder-api-0\" (UID: 
\"e9ebb7ca-2c47-4f64-81f5-04d26737ce44\") " pod="openstack/cinder-api-0" Feb 17 20:27:53 crc kubenswrapper[4793]: I0217 20:27:53.303106 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 17 20:27:53 crc kubenswrapper[4793]: I0217 20:27:53.553795 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24893fdf-f7bb-4be7-b5f9-edde49088bbe" path="/var/lib/kubelet/pods/24893fdf-f7bb-4be7-b5f9-edde49088bbe/volumes" Feb 17 20:27:53 crc kubenswrapper[4793]: I0217 20:27:53.554434 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2a90865-6b9b-4232-b918-9830565d998a" path="/var/lib/kubelet/pods/e2a90865-6b9b-4232-b918-9830565d998a/volumes" Feb 17 20:27:53 crc kubenswrapper[4793]: I0217 20:27:53.566601 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dad05b98-c67b-44c9-bfff-4d0b39ad47b4","Type":"ContainerStarted","Data":"871aa8f1f17090de9e41cda325e6139ae24c7f96acd211bd51fe5596f79a88b7"} Feb 17 20:27:53 crc kubenswrapper[4793]: I0217 20:27:53.582902 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"aedc67e8-05ec-44a4-b1f2-a18d2fde80b2","Type":"ContainerStarted","Data":"0916fc4505862a996052ca563d34794654dd2fa24df3e0ca3404479a8ca85700"} Feb 17 20:27:53 crc kubenswrapper[4793]: I0217 20:27:53.600177 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7dbff57d7b-c7kmz" Feb 17 20:27:53 crc kubenswrapper[4793]: I0217 20:27:53.600194 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dbff57d7b-c7kmz" event={"ID":"347b7ec4-6cfe-431e-b6b4-70c0933118c6","Type":"ContainerDied","Data":"8627d650d994bd22f858ee66947e21732ce07ed9d942bf203f3ec9b8a0a7eb77"} Feb 17 20:27:53 crc kubenswrapper[4793]: I0217 20:27:53.600248 4793 scope.go:117] "RemoveContainer" containerID="58c04913be025017b0e008ceb1a089cd3097d9b92638bb6fecb973a23a8c18af" Feb 17 20:27:53 crc kubenswrapper[4793]: I0217 20:27:53.657855 4793 scope.go:117] "RemoveContainer" containerID="a2ceda8fd39a91bac69fddf8087d5237ff44b350f39853603c6ecb66865b9008" Feb 17 20:27:53 crc kubenswrapper[4793]: I0217 20:27:53.657987 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7dbff57d7b-c7kmz"] Feb 17 20:27:53 crc kubenswrapper[4793]: I0217 20:27:53.666889 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7dbff57d7b-c7kmz"] Feb 17 20:27:53 crc kubenswrapper[4793]: I0217 20:27:53.863814 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 17 20:27:54 crc kubenswrapper[4793]: I0217 20:27:54.622580 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e9ebb7ca-2c47-4f64-81f5-04d26737ce44","Type":"ContainerStarted","Data":"b7ae178c4f9e3a95d79f08808a0efe6deec6c69fd494c97c52b064d872f5e10e"} Feb 17 20:27:54 crc kubenswrapper[4793]: I0217 20:27:54.629825 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dad05b98-c67b-44c9-bfff-4d0b39ad47b4","Type":"ContainerStarted","Data":"1cc98e8700a103d9d9aa819172db439742bc981497a7d053695603c9b86fcb9f"} Feb 17 20:27:54 crc kubenswrapper[4793]: I0217 20:27:54.630203 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 20:27:54 crc 
kubenswrapper[4793]: I0217 20:27:54.660611 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.3194576700000002 podStartE2EDuration="5.660588164s" podCreationTimestamp="2026-02-17 20:27:49 +0000 UTC" firstStartedPulling="2026-02-17 20:27:50.605473468 +0000 UTC m=+1145.897171779" lastFinishedPulling="2026-02-17 20:27:53.946603972 +0000 UTC m=+1149.238302273" observedRunningTime="2026-02-17 20:27:54.65561004 +0000 UTC m=+1149.947308351" watchObservedRunningTime="2026-02-17 20:27:54.660588164 +0000 UTC m=+1149.952286475" Feb 17 20:27:55 crc kubenswrapper[4793]: I0217 20:27:55.553269 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="347b7ec4-6cfe-431e-b6b4-70c0933118c6" path="/var/lib/kubelet/pods/347b7ec4-6cfe-431e-b6b4-70c0933118c6/volumes" Feb 17 20:27:55 crc kubenswrapper[4793]: I0217 20:27:55.557874 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 17 20:27:55 crc kubenswrapper[4793]: I0217 20:27:55.671868 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-574778f449-sqg2x" Feb 17 20:27:55 crc kubenswrapper[4793]: I0217 20:27:55.726343 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e9ebb7ca-2c47-4f64-81f5-04d26737ce44","Type":"ContainerStarted","Data":"4a2aa386429bb655488ed19c23f2cbb20df8f618cbd3f64814ce05afa3e4f219"} Feb 17 20:27:55 crc kubenswrapper[4793]: I0217 20:27:55.726382 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e9ebb7ca-2c47-4f64-81f5-04d26737ce44","Type":"ContainerStarted","Data":"042ae9bae704d634a0f1612362f755119908c3e6bd1669f1013e4baa6bba2258"} Feb 17 20:27:55 crc kubenswrapper[4793]: I0217 20:27:55.726408 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 17 20:27:55 crc 
kubenswrapper[4793]: I0217 20:27:55.782797 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cb4d97cd7-bl8nc"] Feb 17 20:27:55 crc kubenswrapper[4793]: I0217 20:27:55.783382 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cb4d97cd7-bl8nc" podUID="c5eb5318-a3af-4a2a-947f-57219f371d7e" containerName="dnsmasq-dns" containerID="cri-o://eef0183dc08a298bf1badff48fe70f8e917127fa41e393ba131cd0723ff56a5a" gracePeriod=10 Feb 17 20:27:55 crc kubenswrapper[4793]: I0217 20:27:55.791060 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.791041164 podStartE2EDuration="3.791041164s" podCreationTimestamp="2026-02-17 20:27:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:27:55.779098647 +0000 UTC m=+1151.070796958" watchObservedRunningTime="2026-02-17 20:27:55.791041164 +0000 UTC m=+1151.082739475" Feb 17 20:27:56 crc kubenswrapper[4793]: I0217 20:27:56.132222 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 17 20:27:56 crc kubenswrapper[4793]: I0217 20:27:56.253537 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 20:27:56 crc kubenswrapper[4793]: I0217 20:27:56.323952 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cb4d97cd7-bl8nc" Feb 17 20:27:56 crc kubenswrapper[4793]: I0217 20:27:56.398472 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5eb5318-a3af-4a2a-947f-57219f371d7e-ovsdbserver-sb\") pod \"c5eb5318-a3af-4a2a-947f-57219f371d7e\" (UID: \"c5eb5318-a3af-4a2a-947f-57219f371d7e\") " Feb 17 20:27:56 crc kubenswrapper[4793]: I0217 20:27:56.398552 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c5eb5318-a3af-4a2a-947f-57219f371d7e-dns-swift-storage-0\") pod \"c5eb5318-a3af-4a2a-947f-57219f371d7e\" (UID: \"c5eb5318-a3af-4a2a-947f-57219f371d7e\") " Feb 17 20:27:56 crc kubenswrapper[4793]: I0217 20:27:56.398804 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5eb5318-a3af-4a2a-947f-57219f371d7e-dns-svc\") pod \"c5eb5318-a3af-4a2a-947f-57219f371d7e\" (UID: \"c5eb5318-a3af-4a2a-947f-57219f371d7e\") " Feb 17 20:27:56 crc kubenswrapper[4793]: I0217 20:27:56.398897 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5eb5318-a3af-4a2a-947f-57219f371d7e-config\") pod \"c5eb5318-a3af-4a2a-947f-57219f371d7e\" (UID: \"c5eb5318-a3af-4a2a-947f-57219f371d7e\") " Feb 17 20:27:56 crc kubenswrapper[4793]: I0217 20:27:56.398962 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5eb5318-a3af-4a2a-947f-57219f371d7e-ovsdbserver-nb\") pod \"c5eb5318-a3af-4a2a-947f-57219f371d7e\" (UID: \"c5eb5318-a3af-4a2a-947f-57219f371d7e\") " Feb 17 20:27:56 crc kubenswrapper[4793]: I0217 20:27:56.398995 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmfm9\" 
(UniqueName: \"kubernetes.io/projected/c5eb5318-a3af-4a2a-947f-57219f371d7e-kube-api-access-tmfm9\") pod \"c5eb5318-a3af-4a2a-947f-57219f371d7e\" (UID: \"c5eb5318-a3af-4a2a-947f-57219f371d7e\") " Feb 17 20:27:56 crc kubenswrapper[4793]: I0217 20:27:56.409201 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5eb5318-a3af-4a2a-947f-57219f371d7e-kube-api-access-tmfm9" (OuterVolumeSpecName: "kube-api-access-tmfm9") pod "c5eb5318-a3af-4a2a-947f-57219f371d7e" (UID: "c5eb5318-a3af-4a2a-947f-57219f371d7e"). InnerVolumeSpecName "kube-api-access-tmfm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:27:56 crc kubenswrapper[4793]: I0217 20:27:56.486971 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5eb5318-a3af-4a2a-947f-57219f371d7e-config" (OuterVolumeSpecName: "config") pod "c5eb5318-a3af-4a2a-947f-57219f371d7e" (UID: "c5eb5318-a3af-4a2a-947f-57219f371d7e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:27:56 crc kubenswrapper[4793]: I0217 20:27:56.493556 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5eb5318-a3af-4a2a-947f-57219f371d7e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c5eb5318-a3af-4a2a-947f-57219f371d7e" (UID: "c5eb5318-a3af-4a2a-947f-57219f371d7e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:27:56 crc kubenswrapper[4793]: I0217 20:27:56.500899 4793 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5eb5318-a3af-4a2a-947f-57219f371d7e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:56 crc kubenswrapper[4793]: I0217 20:27:56.500927 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5eb5318-a3af-4a2a-947f-57219f371d7e-config\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:56 crc kubenswrapper[4793]: I0217 20:27:56.500937 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmfm9\" (UniqueName: \"kubernetes.io/projected/c5eb5318-a3af-4a2a-947f-57219f371d7e-kube-api-access-tmfm9\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:56 crc kubenswrapper[4793]: I0217 20:27:56.504545 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5eb5318-a3af-4a2a-947f-57219f371d7e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c5eb5318-a3af-4a2a-947f-57219f371d7e" (UID: "c5eb5318-a3af-4a2a-947f-57219f371d7e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:27:56 crc kubenswrapper[4793]: I0217 20:27:56.512581 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5eb5318-a3af-4a2a-947f-57219f371d7e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c5eb5318-a3af-4a2a-947f-57219f371d7e" (UID: "c5eb5318-a3af-4a2a-947f-57219f371d7e"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:27:56 crc kubenswrapper[4793]: I0217 20:27:56.548919 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5eb5318-a3af-4a2a-947f-57219f371d7e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c5eb5318-a3af-4a2a-947f-57219f371d7e" (UID: "c5eb5318-a3af-4a2a-947f-57219f371d7e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:27:56 crc kubenswrapper[4793]: I0217 20:27:56.603782 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5eb5318-a3af-4a2a-947f-57219f371d7e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:56 crc kubenswrapper[4793]: I0217 20:27:56.603807 4793 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c5eb5318-a3af-4a2a-947f-57219f371d7e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:56 crc kubenswrapper[4793]: I0217 20:27:56.603816 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5eb5318-a3af-4a2a-947f-57219f371d7e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:56 crc kubenswrapper[4793]: I0217 20:27:56.735646 4793 generic.go:334] "Generic (PLEG): container finished" podID="c5eb5318-a3af-4a2a-947f-57219f371d7e" containerID="eef0183dc08a298bf1badff48fe70f8e917127fa41e393ba131cd0723ff56a5a" exitCode=0 Feb 17 20:27:56 crc kubenswrapper[4793]: I0217 20:27:56.736619 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cb4d97cd7-bl8nc" Feb 17 20:27:56 crc kubenswrapper[4793]: I0217 20:27:56.736644 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb4d97cd7-bl8nc" event={"ID":"c5eb5318-a3af-4a2a-947f-57219f371d7e","Type":"ContainerDied","Data":"eef0183dc08a298bf1badff48fe70f8e917127fa41e393ba131cd0723ff56a5a"} Feb 17 20:27:56 crc kubenswrapper[4793]: I0217 20:27:56.737274 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb4d97cd7-bl8nc" event={"ID":"c5eb5318-a3af-4a2a-947f-57219f371d7e","Type":"ContainerDied","Data":"ad48dc658b1c591f4d7f31386c24fc120fd3b399e7b2a35fb4df0186eec6ee8a"} Feb 17 20:27:56 crc kubenswrapper[4793]: I0217 20:27:56.737317 4793 scope.go:117] "RemoveContainer" containerID="eef0183dc08a298bf1badff48fe70f8e917127fa41e393ba131cd0723ff56a5a" Feb 17 20:27:56 crc kubenswrapper[4793]: I0217 20:27:56.739065 4793 generic.go:334] "Generic (PLEG): container finished" podID="aedc67e8-05ec-44a4-b1f2-a18d2fde80b2" containerID="0916fc4505862a996052ca563d34794654dd2fa24df3e0ca3404479a8ca85700" exitCode=1 Feb 17 20:27:56 crc kubenswrapper[4793]: I0217 20:27:56.739475 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"aedc67e8-05ec-44a4-b1f2-a18d2fde80b2","Type":"ContainerDied","Data":"0916fc4505862a996052ca563d34794654dd2fa24df3e0ca3404479a8ca85700"} Feb 17 20:27:56 crc kubenswrapper[4793]: I0217 20:27:56.739943 4793 scope.go:117] "RemoveContainer" containerID="0916fc4505862a996052ca563d34794654dd2fa24df3e0ca3404479a8ca85700" Feb 17 20:27:56 crc kubenswrapper[4793]: I0217 20:27:56.740108 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="6e37eff3-1014-4cfd-9015-f51d8d7c4026" containerName="cinder-scheduler" containerID="cri-o://4e50b612f4ba1545ac79c4bf2a90195df4f5d101489a70126279c81c50ba7f38" gracePeriod=30 Feb 17 20:27:56 crc 
kubenswrapper[4793]: I0217 20:27:56.740192 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="6e37eff3-1014-4cfd-9015-f51d8d7c4026" containerName="probe" containerID="cri-o://b6d6a411f11ee3cc1bd4c9fa6ac6e9fa7acd827ab82fac3c29965bacb06e73ff" gracePeriod=30 Feb 17 20:27:56 crc kubenswrapper[4793]: E0217 20:27:56.740349 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(aedc67e8-05ec-44a4-b1f2-a18d2fde80b2)\"" pod="openstack/watcher-decision-engine-0" podUID="aedc67e8-05ec-44a4-b1f2-a18d2fde80b2" Feb 17 20:27:56 crc kubenswrapper[4793]: I0217 20:27:56.772375 4793 scope.go:117] "RemoveContainer" containerID="80ecfeae7dd4fbef21c325d596792dfefa07a2a49e4a82d8ed50a84fc9ff5544" Feb 17 20:27:56 crc kubenswrapper[4793]: I0217 20:27:56.786630 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cb4d97cd7-bl8nc"] Feb 17 20:27:56 crc kubenswrapper[4793]: I0217 20:27:56.795386 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cb4d97cd7-bl8nc"] Feb 17 20:27:56 crc kubenswrapper[4793]: I0217 20:27:56.798711 4793 scope.go:117] "RemoveContainer" containerID="eef0183dc08a298bf1badff48fe70f8e917127fa41e393ba131cd0723ff56a5a" Feb 17 20:27:56 crc kubenswrapper[4793]: E0217 20:27:56.801042 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eef0183dc08a298bf1badff48fe70f8e917127fa41e393ba131cd0723ff56a5a\": container with ID starting with eef0183dc08a298bf1badff48fe70f8e917127fa41e393ba131cd0723ff56a5a not found: ID does not exist" containerID="eef0183dc08a298bf1badff48fe70f8e917127fa41e393ba131cd0723ff56a5a" Feb 17 20:27:56 crc kubenswrapper[4793]: I0217 20:27:56.801088 4793 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eef0183dc08a298bf1badff48fe70f8e917127fa41e393ba131cd0723ff56a5a"} err="failed to get container status \"eef0183dc08a298bf1badff48fe70f8e917127fa41e393ba131cd0723ff56a5a\": rpc error: code = NotFound desc = could not find container \"eef0183dc08a298bf1badff48fe70f8e917127fa41e393ba131cd0723ff56a5a\": container with ID starting with eef0183dc08a298bf1badff48fe70f8e917127fa41e393ba131cd0723ff56a5a not found: ID does not exist" Feb 17 20:27:56 crc kubenswrapper[4793]: I0217 20:27:56.801107 4793 scope.go:117] "RemoveContainer" containerID="80ecfeae7dd4fbef21c325d596792dfefa07a2a49e4a82d8ed50a84fc9ff5544" Feb 17 20:27:56 crc kubenswrapper[4793]: E0217 20:27:56.801313 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80ecfeae7dd4fbef21c325d596792dfefa07a2a49e4a82d8ed50a84fc9ff5544\": container with ID starting with 80ecfeae7dd4fbef21c325d596792dfefa07a2a49e4a82d8ed50a84fc9ff5544 not found: ID does not exist" containerID="80ecfeae7dd4fbef21c325d596792dfefa07a2a49e4a82d8ed50a84fc9ff5544" Feb 17 20:27:56 crc kubenswrapper[4793]: I0217 20:27:56.801332 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80ecfeae7dd4fbef21c325d596792dfefa07a2a49e4a82d8ed50a84fc9ff5544"} err="failed to get container status \"80ecfeae7dd4fbef21c325d596792dfefa07a2a49e4a82d8ed50a84fc9ff5544\": rpc error: code = NotFound desc = could not find container \"80ecfeae7dd4fbef21c325d596792dfefa07a2a49e4a82d8ed50a84fc9ff5544\": container with ID starting with 80ecfeae7dd4fbef21c325d596792dfefa07a2a49e4a82d8ed50a84fc9ff5544 not found: ID does not exist" Feb 17 20:27:56 crc kubenswrapper[4793]: I0217 20:27:56.801348 4793 scope.go:117] "RemoveContainer" containerID="dbb337f685980e2c64197a2c717a3b2edd7c5078beac0a5e37c6e61d8dc72ef6" Feb 17 20:27:57 crc kubenswrapper[4793]: I0217 
20:27:57.549910 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5eb5318-a3af-4a2a-947f-57219f371d7e" path="/var/lib/kubelet/pods/c5eb5318-a3af-4a2a-947f-57219f371d7e/volumes" Feb 17 20:27:57 crc kubenswrapper[4793]: I0217 20:27:57.802902 4793 generic.go:334] "Generic (PLEG): container finished" podID="6e37eff3-1014-4cfd-9015-f51d8d7c4026" containerID="b6d6a411f11ee3cc1bd4c9fa6ac6e9fa7acd827ab82fac3c29965bacb06e73ff" exitCode=0 Feb 17 20:27:57 crc kubenswrapper[4793]: I0217 20:27:57.803195 4793 generic.go:334] "Generic (PLEG): container finished" podID="6e37eff3-1014-4cfd-9015-f51d8d7c4026" containerID="4e50b612f4ba1545ac79c4bf2a90195df4f5d101489a70126279c81c50ba7f38" exitCode=0 Feb 17 20:27:57 crc kubenswrapper[4793]: I0217 20:27:57.803239 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6e37eff3-1014-4cfd-9015-f51d8d7c4026","Type":"ContainerDied","Data":"b6d6a411f11ee3cc1bd4c9fa6ac6e9fa7acd827ab82fac3c29965bacb06e73ff"} Feb 17 20:27:57 crc kubenswrapper[4793]: I0217 20:27:57.803267 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6e37eff3-1014-4cfd-9015-f51d8d7c4026","Type":"ContainerDied","Data":"4e50b612f4ba1545ac79c4bf2a90195df4f5d101489a70126279c81c50ba7f38"} Feb 17 20:27:58 crc kubenswrapper[4793]: I0217 20:27:58.276762 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 17 20:27:58 crc kubenswrapper[4793]: I0217 20:27:58.445625 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e37eff3-1014-4cfd-9015-f51d8d7c4026-config-data\") pod \"6e37eff3-1014-4cfd-9015-f51d8d7c4026\" (UID: \"6e37eff3-1014-4cfd-9015-f51d8d7c4026\") " Feb 17 20:27:58 crc kubenswrapper[4793]: I0217 20:27:58.446018 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e37eff3-1014-4cfd-9015-f51d8d7c4026-config-data-custom\") pod \"6e37eff3-1014-4cfd-9015-f51d8d7c4026\" (UID: \"6e37eff3-1014-4cfd-9015-f51d8d7c4026\") " Feb 17 20:27:58 crc kubenswrapper[4793]: I0217 20:27:58.446074 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8dkq\" (UniqueName: \"kubernetes.io/projected/6e37eff3-1014-4cfd-9015-f51d8d7c4026-kube-api-access-q8dkq\") pod \"6e37eff3-1014-4cfd-9015-f51d8d7c4026\" (UID: \"6e37eff3-1014-4cfd-9015-f51d8d7c4026\") " Feb 17 20:27:58 crc kubenswrapper[4793]: I0217 20:27:58.446139 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e37eff3-1014-4cfd-9015-f51d8d7c4026-scripts\") pod \"6e37eff3-1014-4cfd-9015-f51d8d7c4026\" (UID: \"6e37eff3-1014-4cfd-9015-f51d8d7c4026\") " Feb 17 20:27:58 crc kubenswrapper[4793]: I0217 20:27:58.446182 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e37eff3-1014-4cfd-9015-f51d8d7c4026-etc-machine-id\") pod \"6e37eff3-1014-4cfd-9015-f51d8d7c4026\" (UID: \"6e37eff3-1014-4cfd-9015-f51d8d7c4026\") " Feb 17 20:27:58 crc kubenswrapper[4793]: I0217 20:27:58.446282 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/6e37eff3-1014-4cfd-9015-f51d8d7c4026-combined-ca-bundle\") pod \"6e37eff3-1014-4cfd-9015-f51d8d7c4026\" (UID: \"6e37eff3-1014-4cfd-9015-f51d8d7c4026\") " Feb 17 20:27:58 crc kubenswrapper[4793]: I0217 20:27:58.447216 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e37eff3-1014-4cfd-9015-f51d8d7c4026-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "6e37eff3-1014-4cfd-9015-f51d8d7c4026" (UID: "6e37eff3-1014-4cfd-9015-f51d8d7c4026"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 20:27:58 crc kubenswrapper[4793]: I0217 20:27:58.455113 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e37eff3-1014-4cfd-9015-f51d8d7c4026-kube-api-access-q8dkq" (OuterVolumeSpecName: "kube-api-access-q8dkq") pod "6e37eff3-1014-4cfd-9015-f51d8d7c4026" (UID: "6e37eff3-1014-4cfd-9015-f51d8d7c4026"). InnerVolumeSpecName "kube-api-access-q8dkq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:27:58 crc kubenswrapper[4793]: I0217 20:27:58.461850 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e37eff3-1014-4cfd-9015-f51d8d7c4026-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6e37eff3-1014-4cfd-9015-f51d8d7c4026" (UID: "6e37eff3-1014-4cfd-9015-f51d8d7c4026"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:27:58 crc kubenswrapper[4793]: I0217 20:27:58.475062 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e37eff3-1014-4cfd-9015-f51d8d7c4026-scripts" (OuterVolumeSpecName: "scripts") pod "6e37eff3-1014-4cfd-9015-f51d8d7c4026" (UID: "6e37eff3-1014-4cfd-9015-f51d8d7c4026"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:27:58 crc kubenswrapper[4793]: I0217 20:27:58.527544 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e37eff3-1014-4cfd-9015-f51d8d7c4026-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6e37eff3-1014-4cfd-9015-f51d8d7c4026" (UID: "6e37eff3-1014-4cfd-9015-f51d8d7c4026"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:27:58 crc kubenswrapper[4793]: I0217 20:27:58.548627 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e37eff3-1014-4cfd-9015-f51d8d7c4026-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:58 crc kubenswrapper[4793]: I0217 20:27:58.548667 4793 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e37eff3-1014-4cfd-9015-f51d8d7c4026-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:58 crc kubenswrapper[4793]: I0217 20:27:58.548680 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8dkq\" (UniqueName: \"kubernetes.io/projected/6e37eff3-1014-4cfd-9015-f51d8d7c4026-kube-api-access-q8dkq\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:58 crc kubenswrapper[4793]: I0217 20:27:58.548719 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e37eff3-1014-4cfd-9015-f51d8d7c4026-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:58 crc kubenswrapper[4793]: I0217 20:27:58.548731 4793 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e37eff3-1014-4cfd-9015-f51d8d7c4026-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:58 crc kubenswrapper[4793]: I0217 20:27:58.586144 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/6e37eff3-1014-4cfd-9015-f51d8d7c4026-config-data" (OuterVolumeSpecName: "config-data") pod "6e37eff3-1014-4cfd-9015-f51d8d7c4026" (UID: "6e37eff3-1014-4cfd-9015-f51d8d7c4026"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:27:58 crc kubenswrapper[4793]: I0217 20:27:58.650315 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e37eff3-1014-4cfd-9015-f51d8d7c4026-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 20:27:58 crc kubenswrapper[4793]: I0217 20:27:58.852080 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6e37eff3-1014-4cfd-9015-f51d8d7c4026","Type":"ContainerDied","Data":"7c96282d3c3074442255879366aa474b42ad3d31469b8e46de177b1f58947f97"} Feb 17 20:27:58 crc kubenswrapper[4793]: I0217 20:27:58.852417 4793 scope.go:117] "RemoveContainer" containerID="b6d6a411f11ee3cc1bd4c9fa6ac6e9fa7acd827ab82fac3c29965bacb06e73ff" Feb 17 20:27:58 crc kubenswrapper[4793]: I0217 20:27:58.852116 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 17 20:27:58 crc kubenswrapper[4793]: I0217 20:27:58.886945 4793 scope.go:117] "RemoveContainer" containerID="4e50b612f4ba1545ac79c4bf2a90195df4f5d101489a70126279c81c50ba7f38" Feb 17 20:27:58 crc kubenswrapper[4793]: I0217 20:27:58.906236 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 20:27:58 crc kubenswrapper[4793]: I0217 20:27:58.927199 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 20:27:58 crc kubenswrapper[4793]: I0217 20:27:58.935031 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 20:27:58 crc kubenswrapper[4793]: E0217 20:27:58.935526 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5eb5318-a3af-4a2a-947f-57219f371d7e" containerName="dnsmasq-dns" Feb 17 20:27:58 crc kubenswrapper[4793]: I0217 20:27:58.935547 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5eb5318-a3af-4a2a-947f-57219f371d7e" containerName="dnsmasq-dns" Feb 17 20:27:58 crc kubenswrapper[4793]: E0217 20:27:58.935569 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5eb5318-a3af-4a2a-947f-57219f371d7e" containerName="init" Feb 17 20:27:58 crc kubenswrapper[4793]: I0217 20:27:58.935578 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5eb5318-a3af-4a2a-947f-57219f371d7e" containerName="init" Feb 17 20:27:58 crc kubenswrapper[4793]: E0217 20:27:58.935605 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e37eff3-1014-4cfd-9015-f51d8d7c4026" containerName="cinder-scheduler" Feb 17 20:27:58 crc kubenswrapper[4793]: I0217 20:27:58.935612 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e37eff3-1014-4cfd-9015-f51d8d7c4026" containerName="cinder-scheduler" Feb 17 20:27:58 crc kubenswrapper[4793]: E0217 20:27:58.935633 4793 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6e37eff3-1014-4cfd-9015-f51d8d7c4026" containerName="probe" Feb 17 20:27:58 crc kubenswrapper[4793]: I0217 20:27:58.935640 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e37eff3-1014-4cfd-9015-f51d8d7c4026" containerName="probe" Feb 17 20:27:58 crc kubenswrapper[4793]: I0217 20:27:58.935897 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e37eff3-1014-4cfd-9015-f51d8d7c4026" containerName="probe" Feb 17 20:27:58 crc kubenswrapper[4793]: I0217 20:27:58.935923 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e37eff3-1014-4cfd-9015-f51d8d7c4026" containerName="cinder-scheduler" Feb 17 20:27:58 crc kubenswrapper[4793]: I0217 20:27:58.935937 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5eb5318-a3af-4a2a-947f-57219f371d7e" containerName="dnsmasq-dns" Feb 17 20:27:58 crc kubenswrapper[4793]: I0217 20:27:58.937192 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 17 20:27:58 crc kubenswrapper[4793]: I0217 20:27:58.939528 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 17 20:27:58 crc kubenswrapper[4793]: I0217 20:27:58.947683 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 20:27:59 crc kubenswrapper[4793]: I0217 20:27:59.062433 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db029fb2-d204-4d1b-81c8-227c3b8d4a39-config-data\") pod \"cinder-scheduler-0\" (UID: \"db029fb2-d204-4d1b-81c8-227c3b8d4a39\") " pod="openstack/cinder-scheduler-0" Feb 17 20:27:59 crc kubenswrapper[4793]: I0217 20:27:59.062573 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n92nf\" (UniqueName: 
\"kubernetes.io/projected/db029fb2-d204-4d1b-81c8-227c3b8d4a39-kube-api-access-n92nf\") pod \"cinder-scheduler-0\" (UID: \"db029fb2-d204-4d1b-81c8-227c3b8d4a39\") " pod="openstack/cinder-scheduler-0" Feb 17 20:27:59 crc kubenswrapper[4793]: I0217 20:27:59.062610 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db029fb2-d204-4d1b-81c8-227c3b8d4a39-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"db029fb2-d204-4d1b-81c8-227c3b8d4a39\") " pod="openstack/cinder-scheduler-0" Feb 17 20:27:59 crc kubenswrapper[4793]: I0217 20:27:59.062648 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/db029fb2-d204-4d1b-81c8-227c3b8d4a39-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"db029fb2-d204-4d1b-81c8-227c3b8d4a39\") " pod="openstack/cinder-scheduler-0" Feb 17 20:27:59 crc kubenswrapper[4793]: I0217 20:27:59.062738 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db029fb2-d204-4d1b-81c8-227c3b8d4a39-scripts\") pod \"cinder-scheduler-0\" (UID: \"db029fb2-d204-4d1b-81c8-227c3b8d4a39\") " pod="openstack/cinder-scheduler-0" Feb 17 20:27:59 crc kubenswrapper[4793]: I0217 20:27:59.062767 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db029fb2-d204-4d1b-81c8-227c3b8d4a39-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"db029fb2-d204-4d1b-81c8-227c3b8d4a39\") " pod="openstack/cinder-scheduler-0" Feb 17 20:27:59 crc kubenswrapper[4793]: I0217 20:27:59.164519 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/db029fb2-d204-4d1b-81c8-227c3b8d4a39-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"db029fb2-d204-4d1b-81c8-227c3b8d4a39\") " pod="openstack/cinder-scheduler-0" Feb 17 20:27:59 crc kubenswrapper[4793]: I0217 20:27:59.164633 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db029fb2-d204-4d1b-81c8-227c3b8d4a39-scripts\") pod \"cinder-scheduler-0\" (UID: \"db029fb2-d204-4d1b-81c8-227c3b8d4a39\") " pod="openstack/cinder-scheduler-0" Feb 17 20:27:59 crc kubenswrapper[4793]: I0217 20:27:59.164664 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db029fb2-d204-4d1b-81c8-227c3b8d4a39-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"db029fb2-d204-4d1b-81c8-227c3b8d4a39\") " pod="openstack/cinder-scheduler-0" Feb 17 20:27:59 crc kubenswrapper[4793]: I0217 20:27:59.164722 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db029fb2-d204-4d1b-81c8-227c3b8d4a39-config-data\") pod \"cinder-scheduler-0\" (UID: \"db029fb2-d204-4d1b-81c8-227c3b8d4a39\") " pod="openstack/cinder-scheduler-0" Feb 17 20:27:59 crc kubenswrapper[4793]: I0217 20:27:59.164819 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n92nf\" (UniqueName: \"kubernetes.io/projected/db029fb2-d204-4d1b-81c8-227c3b8d4a39-kube-api-access-n92nf\") pod \"cinder-scheduler-0\" (UID: \"db029fb2-d204-4d1b-81c8-227c3b8d4a39\") " pod="openstack/cinder-scheduler-0" Feb 17 20:27:59 crc kubenswrapper[4793]: I0217 20:27:59.164852 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db029fb2-d204-4d1b-81c8-227c3b8d4a39-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"db029fb2-d204-4d1b-81c8-227c3b8d4a39\") 
" pod="openstack/cinder-scheduler-0" Feb 17 20:27:59 crc kubenswrapper[4793]: I0217 20:27:59.165368 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/db029fb2-d204-4d1b-81c8-227c3b8d4a39-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"db029fb2-d204-4d1b-81c8-227c3b8d4a39\") " pod="openstack/cinder-scheduler-0" Feb 17 20:27:59 crc kubenswrapper[4793]: I0217 20:27:59.168807 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db029fb2-d204-4d1b-81c8-227c3b8d4a39-config-data\") pod \"cinder-scheduler-0\" (UID: \"db029fb2-d204-4d1b-81c8-227c3b8d4a39\") " pod="openstack/cinder-scheduler-0" Feb 17 20:27:59 crc kubenswrapper[4793]: I0217 20:27:59.177232 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db029fb2-d204-4d1b-81c8-227c3b8d4a39-scripts\") pod \"cinder-scheduler-0\" (UID: \"db029fb2-d204-4d1b-81c8-227c3b8d4a39\") " pod="openstack/cinder-scheduler-0" Feb 17 20:27:59 crc kubenswrapper[4793]: I0217 20:27:59.179020 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db029fb2-d204-4d1b-81c8-227c3b8d4a39-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"db029fb2-d204-4d1b-81c8-227c3b8d4a39\") " pod="openstack/cinder-scheduler-0" Feb 17 20:27:59 crc kubenswrapper[4793]: I0217 20:27:59.186343 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db029fb2-d204-4d1b-81c8-227c3b8d4a39-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"db029fb2-d204-4d1b-81c8-227c3b8d4a39\") " pod="openstack/cinder-scheduler-0" Feb 17 20:27:59 crc kubenswrapper[4793]: I0217 20:27:59.188441 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n92nf\" 
(UniqueName: \"kubernetes.io/projected/db029fb2-d204-4d1b-81c8-227c3b8d4a39-kube-api-access-n92nf\") pod \"cinder-scheduler-0\" (UID: \"db029fb2-d204-4d1b-81c8-227c3b8d4a39\") " pod="openstack/cinder-scheduler-0" Feb 17 20:27:59 crc kubenswrapper[4793]: I0217 20:27:59.263513 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 17 20:27:59 crc kubenswrapper[4793]: I0217 20:27:59.550142 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e37eff3-1014-4cfd-9015-f51d8d7c4026" path="/var/lib/kubelet/pods/6e37eff3-1014-4cfd-9015-f51d8d7c4026/volumes" Feb 17 20:27:59 crc kubenswrapper[4793]: I0217 20:27:59.700115 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 20:27:59 crc kubenswrapper[4793]: I0217 20:27:59.872512 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"db029fb2-d204-4d1b-81c8-227c3b8d4a39","Type":"ContainerStarted","Data":"7327d6d9cc61284cefaaea757e6f485e1f584a25e33ee8d4ddfc583c9d702b6c"} Feb 17 20:28:00 crc kubenswrapper[4793]: I0217 20:28:00.888187 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"db029fb2-d204-4d1b-81c8-227c3b8d4a39","Type":"ContainerStarted","Data":"50c2189912075801f8990b1bd24b50c3574990616ade7f5e5507499f8d85f05b"} Feb 17 20:28:00 crc kubenswrapper[4793]: I0217 20:28:00.888600 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"db029fb2-d204-4d1b-81c8-227c3b8d4a39","Type":"ContainerStarted","Data":"5ecf6c340f72f7180990f4b29221fec50cd52d899c24f4a904c4317fc1f9bd4d"} Feb 17 20:28:00 crc kubenswrapper[4793]: I0217 20:28:00.916209 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=2.91619394 podStartE2EDuration="2.91619394s" podCreationTimestamp="2026-02-17 20:27:58 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:28:00.914798655 +0000 UTC m=+1156.206496966" watchObservedRunningTime="2026-02-17 20:28:00.91619394 +0000 UTC m=+1156.207892251" Feb 17 20:28:01 crc kubenswrapper[4793]: I0217 20:28:01.070629 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-cb4d97cd7-bl8nc" podUID="c5eb5318-a3af-4a2a-947f-57219f371d7e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.168:5353: i/o timeout" Feb 17 20:28:01 crc kubenswrapper[4793]: I0217 20:28:01.137371 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-75d6fd885d-fw6ln" podUID="6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.164:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.164:8443: connect: connection refused" Feb 17 20:28:01 crc kubenswrapper[4793]: I0217 20:28:01.900974 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 17 20:28:01 crc kubenswrapper[4793]: I0217 20:28:01.901373 4793 scope.go:117] "RemoveContainer" containerID="0916fc4505862a996052ca563d34794654dd2fa24df3e0ca3404479a8ca85700" Feb 17 20:28:01 crc kubenswrapper[4793]: E0217 20:28:01.901579 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(aedc67e8-05ec-44a4-b1f2-a18d2fde80b2)\"" pod="openstack/watcher-decision-engine-0" podUID="aedc67e8-05ec-44a4-b1f2-a18d2fde80b2" Feb 17 20:28:01 crc kubenswrapper[4793]: I0217 20:28:01.902410 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 17 20:28:01 crc kubenswrapper[4793]: I0217 
20:28:01.902506 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0"
Feb 17 20:28:01 crc kubenswrapper[4793]: I0217 20:28:01.902564 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0"
Feb 17 20:28:01 crc kubenswrapper[4793]: I0217 20:28:01.963105 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0"
Feb 17 20:28:01 crc kubenswrapper[4793]: I0217 20:28:01.963581 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-applier-0"
Feb 17 20:28:01 crc kubenswrapper[4793]: I0217 20:28:01.965631 4793 scope.go:117] "RemoveContainer" containerID="482d219bc6ca6e52bff83b3958809678c883c2f0ec84a89a77cd763ddb26ce8d"
Feb 17 20:28:01 crc kubenswrapper[4793]: E0217 20:28:01.966768 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 20:28:02 crc kubenswrapper[4793]: I0217 20:28:02.907270 4793 scope.go:117] "RemoveContainer" containerID="0916fc4505862a996052ca563d34794654dd2fa24df3e0ca3404479a8ca85700"
Feb 17 20:28:02 crc kubenswrapper[4793]: E0217 20:28:02.907518 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(aedc67e8-05ec-44a4-b1f2-a18d2fde80b2)\"" pod="openstack/watcher-decision-engine-0" podUID="aedc67e8-05ec-44a4-b1f2-a18d2fde80b2"
Feb 17 20:28:03 crc kubenswrapper[4793]: I0217 20:28:03.363711 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6754ff86f4-gttvc"
Feb 17 20:28:03 crc kubenswrapper[4793]: I0217 20:28:03.365805 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6754ff86f4-gttvc"
Feb 17 20:28:03 crc kubenswrapper[4793]: I0217 20:28:03.769088 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7d5f5ff8-cl56f"]
Feb 17 20:28:03 crc kubenswrapper[4793]: I0217 20:28:03.778347 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7d5f5ff8-cl56f"
Feb 17 20:28:03 crc kubenswrapper[4793]: I0217 20:28:03.794628 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7d5f5ff8-cl56f"]
Feb 17 20:28:03 crc kubenswrapper[4793]: I0217 20:28:03.863707 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgjb8\" (UniqueName: \"kubernetes.io/projected/7a179868-08d0-4c4f-8503-ce054a68e5ee-kube-api-access-vgjb8\") pod \"placement-7d5f5ff8-cl56f\" (UID: \"7a179868-08d0-4c4f-8503-ce054a68e5ee\") " pod="openstack/placement-7d5f5ff8-cl56f"
Feb 17 20:28:03 crc kubenswrapper[4793]: I0217 20:28:03.863850 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a179868-08d0-4c4f-8503-ce054a68e5ee-logs\") pod \"placement-7d5f5ff8-cl56f\" (UID: \"7a179868-08d0-4c4f-8503-ce054a68e5ee\") " pod="openstack/placement-7d5f5ff8-cl56f"
Feb 17 20:28:03 crc kubenswrapper[4793]: I0217 20:28:03.864145 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a179868-08d0-4c4f-8503-ce054a68e5ee-combined-ca-bundle\") pod \"placement-7d5f5ff8-cl56f\" (UID: \"7a179868-08d0-4c4f-8503-ce054a68e5ee\") " pod="openstack/placement-7d5f5ff8-cl56f"
Feb 17 20:28:03 crc kubenswrapper[4793]: I0217 20:28:03.864202 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a179868-08d0-4c4f-8503-ce054a68e5ee-config-data\") pod \"placement-7d5f5ff8-cl56f\" (UID: \"7a179868-08d0-4c4f-8503-ce054a68e5ee\") " pod="openstack/placement-7d5f5ff8-cl56f"
Feb 17 20:28:03 crc kubenswrapper[4793]: I0217 20:28:03.864489 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a179868-08d0-4c4f-8503-ce054a68e5ee-internal-tls-certs\") pod \"placement-7d5f5ff8-cl56f\" (UID: \"7a179868-08d0-4c4f-8503-ce054a68e5ee\") " pod="openstack/placement-7d5f5ff8-cl56f"
Feb 17 20:28:03 crc kubenswrapper[4793]: I0217 20:28:03.864537 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a179868-08d0-4c4f-8503-ce054a68e5ee-public-tls-certs\") pod \"placement-7d5f5ff8-cl56f\" (UID: \"7a179868-08d0-4c4f-8503-ce054a68e5ee\") " pod="openstack/placement-7d5f5ff8-cl56f"
Feb 17 20:28:03 crc kubenswrapper[4793]: I0217 20:28:03.864584 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a179868-08d0-4c4f-8503-ce054a68e5ee-scripts\") pod \"placement-7d5f5ff8-cl56f\" (UID: \"7a179868-08d0-4c4f-8503-ce054a68e5ee\") " pod="openstack/placement-7d5f5ff8-cl56f"
Feb 17 20:28:03 crc kubenswrapper[4793]: I0217 20:28:03.915525 4793 scope.go:117] "RemoveContainer" containerID="0916fc4505862a996052ca563d34794654dd2fa24df3e0ca3404479a8ca85700"
Feb 17 20:28:03 crc kubenswrapper[4793]: E0217 20:28:03.915862 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(aedc67e8-05ec-44a4-b1f2-a18d2fde80b2)\"" pod="openstack/watcher-decision-engine-0" podUID="aedc67e8-05ec-44a4-b1f2-a18d2fde80b2"
Feb 17 20:28:03 crc kubenswrapper[4793]: I0217 20:28:03.965944 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a179868-08d0-4c4f-8503-ce054a68e5ee-internal-tls-certs\") pod \"placement-7d5f5ff8-cl56f\" (UID: \"7a179868-08d0-4c4f-8503-ce054a68e5ee\") " pod="openstack/placement-7d5f5ff8-cl56f"
Feb 17 20:28:03 crc kubenswrapper[4793]: I0217 20:28:03.966024 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a179868-08d0-4c4f-8503-ce054a68e5ee-public-tls-certs\") pod \"placement-7d5f5ff8-cl56f\" (UID: \"7a179868-08d0-4c4f-8503-ce054a68e5ee\") " pod="openstack/placement-7d5f5ff8-cl56f"
Feb 17 20:28:03 crc kubenswrapper[4793]: I0217 20:28:03.966055 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a179868-08d0-4c4f-8503-ce054a68e5ee-scripts\") pod \"placement-7d5f5ff8-cl56f\" (UID: \"7a179868-08d0-4c4f-8503-ce054a68e5ee\") " pod="openstack/placement-7d5f5ff8-cl56f"
Feb 17 20:28:03 crc kubenswrapper[4793]: I0217 20:28:03.966127 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgjb8\" (UniqueName: \"kubernetes.io/projected/7a179868-08d0-4c4f-8503-ce054a68e5ee-kube-api-access-vgjb8\") pod \"placement-7d5f5ff8-cl56f\" (UID: \"7a179868-08d0-4c4f-8503-ce054a68e5ee\") " pod="openstack/placement-7d5f5ff8-cl56f"
Feb 17 20:28:03 crc kubenswrapper[4793]: I0217 20:28:03.966179 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a179868-08d0-4c4f-8503-ce054a68e5ee-logs\") pod \"placement-7d5f5ff8-cl56f\" (UID: \"7a179868-08d0-4c4f-8503-ce054a68e5ee\") " pod="openstack/placement-7d5f5ff8-cl56f"
Feb 17 20:28:03 crc kubenswrapper[4793]: I0217 20:28:03.966275 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a179868-08d0-4c4f-8503-ce054a68e5ee-combined-ca-bundle\") pod \"placement-7d5f5ff8-cl56f\" (UID: \"7a179868-08d0-4c4f-8503-ce054a68e5ee\") " pod="openstack/placement-7d5f5ff8-cl56f"
Feb 17 20:28:03 crc kubenswrapper[4793]: I0217 20:28:03.966297 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a179868-08d0-4c4f-8503-ce054a68e5ee-config-data\") pod \"placement-7d5f5ff8-cl56f\" (UID: \"7a179868-08d0-4c4f-8503-ce054a68e5ee\") " pod="openstack/placement-7d5f5ff8-cl56f"
Feb 17 20:28:03 crc kubenswrapper[4793]: I0217 20:28:03.966832 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a179868-08d0-4c4f-8503-ce054a68e5ee-logs\") pod \"placement-7d5f5ff8-cl56f\" (UID: \"7a179868-08d0-4c4f-8503-ce054a68e5ee\") " pod="openstack/placement-7d5f5ff8-cl56f"
Feb 17 20:28:03 crc kubenswrapper[4793]: I0217 20:28:03.972305 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a179868-08d0-4c4f-8503-ce054a68e5ee-config-data\") pod \"placement-7d5f5ff8-cl56f\" (UID: \"7a179868-08d0-4c4f-8503-ce054a68e5ee\") " pod="openstack/placement-7d5f5ff8-cl56f"
Feb 17 20:28:03 crc kubenswrapper[4793]: I0217 20:28:03.972384 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a179868-08d0-4c4f-8503-ce054a68e5ee-scripts\") pod \"placement-7d5f5ff8-cl56f\" (UID: \"7a179868-08d0-4c4f-8503-ce054a68e5ee\") " pod="openstack/placement-7d5f5ff8-cl56f"
Feb 17 20:28:03 crc kubenswrapper[4793]: I0217 20:28:03.972753 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a179868-08d0-4c4f-8503-ce054a68e5ee-public-tls-certs\") pod \"placement-7d5f5ff8-cl56f\" (UID: \"7a179868-08d0-4c4f-8503-ce054a68e5ee\") " pod="openstack/placement-7d5f5ff8-cl56f"
Feb 17 20:28:03 crc kubenswrapper[4793]: I0217 20:28:03.979023 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a179868-08d0-4c4f-8503-ce054a68e5ee-combined-ca-bundle\") pod \"placement-7d5f5ff8-cl56f\" (UID: \"7a179868-08d0-4c4f-8503-ce054a68e5ee\") " pod="openstack/placement-7d5f5ff8-cl56f"
Feb 17 20:28:03 crc kubenswrapper[4793]: I0217 20:28:03.984132 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a179868-08d0-4c4f-8503-ce054a68e5ee-internal-tls-certs\") pod \"placement-7d5f5ff8-cl56f\" (UID: \"7a179868-08d0-4c4f-8503-ce054a68e5ee\") " pod="openstack/placement-7d5f5ff8-cl56f"
Feb 17 20:28:03 crc kubenswrapper[4793]: I0217 20:28:03.984168 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgjb8\" (UniqueName: \"kubernetes.io/projected/7a179868-08d0-4c4f-8503-ce054a68e5ee-kube-api-access-vgjb8\") pod \"placement-7d5f5ff8-cl56f\" (UID: \"7a179868-08d0-4c4f-8503-ce054a68e5ee\") " pod="openstack/placement-7d5f5ff8-cl56f"
Feb 17 20:28:04 crc kubenswrapper[4793]: I0217 20:28:04.093480 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7d5f5ff8-cl56f"
Feb 17 20:28:04 crc kubenswrapper[4793]: I0217 20:28:04.263736 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Feb 17 20:28:04 crc kubenswrapper[4793]: I0217 20:28:04.614561 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7d5f5ff8-cl56f"]
Feb 17 20:28:04 crc kubenswrapper[4793]: I0217 20:28:04.773095 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-79779c64b9-54jgr"
Feb 17 20:28:04 crc kubenswrapper[4793]: I0217 20:28:04.928093 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7d5f5ff8-cl56f" event={"ID":"7a179868-08d0-4c4f-8503-ce054a68e5ee","Type":"ContainerStarted","Data":"a115f8aa8c057c7ca625cdf2f5d614db2c0882133404bc436d792dab1d1cd3ad"}
Feb 17 20:28:04 crc kubenswrapper[4793]: I0217 20:28:04.928135 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7d5f5ff8-cl56f" event={"ID":"7a179868-08d0-4c4f-8503-ce054a68e5ee","Type":"ContainerStarted","Data":"0a513250128004ab39abe43498eb4308291b4b9f4b274857363b672638a89458"}
Feb 17 20:28:05 crc kubenswrapper[4793]: I0217 20:28:05.753083 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Feb 17 20:28:05 crc kubenswrapper[4793]: I0217 20:28:05.766954 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Feb 17 20:28:05 crc kubenswrapper[4793]: I0217 20:28:05.768456 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 17 20:28:05 crc kubenswrapper[4793]: I0217 20:28:05.770741 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Feb 17 20:28:05 crc kubenswrapper[4793]: I0217 20:28:05.770774 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Feb 17 20:28:05 crc kubenswrapper[4793]: I0217 20:28:05.770746 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-w7sn5"
Feb 17 20:28:05 crc kubenswrapper[4793]: I0217 20:28:05.774050 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Feb 17 20:28:05 crc kubenswrapper[4793]: I0217 20:28:05.901755 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b2bcbf9-1e8c-4593-b391-745b3dbd2ca4-combined-ca-bundle\") pod \"openstackclient\" (UID: \"3b2bcbf9-1e8c-4593-b391-745b3dbd2ca4\") " pod="openstack/openstackclient"
Feb 17 20:28:05 crc kubenswrapper[4793]: I0217 20:28:05.901826 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x78kj\" (UniqueName: \"kubernetes.io/projected/3b2bcbf9-1e8c-4593-b391-745b3dbd2ca4-kube-api-access-x78kj\") pod \"openstackclient\" (UID: \"3b2bcbf9-1e8c-4593-b391-745b3dbd2ca4\") " pod="openstack/openstackclient"
Feb 17 20:28:05 crc kubenswrapper[4793]: I0217 20:28:05.901944 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3b2bcbf9-1e8c-4593-b391-745b3dbd2ca4-openstack-config\") pod \"openstackclient\" (UID: \"3b2bcbf9-1e8c-4593-b391-745b3dbd2ca4\") " pod="openstack/openstackclient"
Feb 17 20:28:05 crc kubenswrapper[4793]: I0217 20:28:05.902064 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3b2bcbf9-1e8c-4593-b391-745b3dbd2ca4-openstack-config-secret\") pod \"openstackclient\" (UID: \"3b2bcbf9-1e8c-4593-b391-745b3dbd2ca4\") " pod="openstack/openstackclient"
Feb 17 20:28:05 crc kubenswrapper[4793]: I0217 20:28:05.940894 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7d5f5ff8-cl56f" event={"ID":"7a179868-08d0-4c4f-8503-ce054a68e5ee","Type":"ContainerStarted","Data":"375408f00dfa04a329d83c222a90987839e61b3469d70f2509867de10c6a8373"}
Feb 17 20:28:05 crc kubenswrapper[4793]: I0217 20:28:05.942349 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7d5f5ff8-cl56f"
Feb 17 20:28:05 crc kubenswrapper[4793]: I0217 20:28:05.942388 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7d5f5ff8-cl56f"
Feb 17 20:28:05 crc kubenswrapper[4793]: I0217 20:28:05.963443 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7d5f5ff8-cl56f" podStartSLOduration=2.96342436 podStartE2EDuration="2.96342436s" podCreationTimestamp="2026-02-17 20:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:28:05.962736082 +0000 UTC m=+1161.254434393" watchObservedRunningTime="2026-02-17 20:28:05.96342436 +0000 UTC m=+1161.255122671"
Feb 17 20:28:06 crc kubenswrapper[4793]: I0217 20:28:06.003791 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3b2bcbf9-1e8c-4593-b391-745b3dbd2ca4-openstack-config-secret\") pod \"openstackclient\" (UID: \"3b2bcbf9-1e8c-4593-b391-745b3dbd2ca4\") " pod="openstack/openstackclient"
Feb 17 20:28:06 crc kubenswrapper[4793]: I0217 20:28:06.003948 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b2bcbf9-1e8c-4593-b391-745b3dbd2ca4-combined-ca-bundle\") pod \"openstackclient\" (UID: \"3b2bcbf9-1e8c-4593-b391-745b3dbd2ca4\") " pod="openstack/openstackclient"
Feb 17 20:28:06 crc kubenswrapper[4793]: I0217 20:28:06.003983 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x78kj\" (UniqueName: \"kubernetes.io/projected/3b2bcbf9-1e8c-4593-b391-745b3dbd2ca4-kube-api-access-x78kj\") pod \"openstackclient\" (UID: \"3b2bcbf9-1e8c-4593-b391-745b3dbd2ca4\") " pod="openstack/openstackclient"
Feb 17 20:28:06 crc kubenswrapper[4793]: I0217 20:28:06.004036 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3b2bcbf9-1e8c-4593-b391-745b3dbd2ca4-openstack-config\") pod \"openstackclient\" (UID: \"3b2bcbf9-1e8c-4593-b391-745b3dbd2ca4\") " pod="openstack/openstackclient"
Feb 17 20:28:06 crc kubenswrapper[4793]: I0217 20:28:06.005181 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3b2bcbf9-1e8c-4593-b391-745b3dbd2ca4-openstack-config\") pod \"openstackclient\" (UID: \"3b2bcbf9-1e8c-4593-b391-745b3dbd2ca4\") " pod="openstack/openstackclient"
Feb 17 20:28:06 crc kubenswrapper[4793]: I0217 20:28:06.012193 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3b2bcbf9-1e8c-4593-b391-745b3dbd2ca4-openstack-config-secret\") pod \"openstackclient\" (UID: \"3b2bcbf9-1e8c-4593-b391-745b3dbd2ca4\") " pod="openstack/openstackclient"
Feb 17 20:28:06 crc kubenswrapper[4793]: I0217 20:28:06.014275 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b2bcbf9-1e8c-4593-b391-745b3dbd2ca4-combined-ca-bundle\") pod \"openstackclient\" (UID: \"3b2bcbf9-1e8c-4593-b391-745b3dbd2ca4\") " pod="openstack/openstackclient"
Feb 17 20:28:06 crc kubenswrapper[4793]: I0217 20:28:06.024155 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x78kj\" (UniqueName: \"kubernetes.io/projected/3b2bcbf9-1e8c-4593-b391-745b3dbd2ca4-kube-api-access-x78kj\") pod \"openstackclient\" (UID: \"3b2bcbf9-1e8c-4593-b391-745b3dbd2ca4\") " pod="openstack/openstackclient"
Feb 17 20:28:06 crc kubenswrapper[4793]: I0217 20:28:06.096135 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 17 20:28:06 crc kubenswrapper[4793]: I0217 20:28:06.730705 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Feb 17 20:28:06 crc kubenswrapper[4793]: I0217 20:28:06.955405 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"3b2bcbf9-1e8c-4593-b391-745b3dbd2ca4","Type":"ContainerStarted","Data":"dfece86d70fe6b35e8f0c51a5f1a31a82eb8f19d87778295cd69fe12323884de"}
Feb 17 20:28:07 crc kubenswrapper[4793]: W0217 20:28:07.382773 4793 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e37eff3_1014_4cfd_9015_f51d8d7c4026.slice/crio-conmon-4e50b612f4ba1545ac79c4bf2a90195df4f5d101489a70126279c81c50ba7f38.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e37eff3_1014_4cfd_9015_f51d8d7c4026.slice/crio-conmon-4e50b612f4ba1545ac79c4bf2a90195df4f5d101489a70126279c81c50ba7f38.scope: no such file or directory
Feb 17 20:28:07 crc kubenswrapper[4793]: W0217 20:28:07.382884 4793 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e37eff3_1014_4cfd_9015_f51d8d7c4026.slice/crio-4e50b612f4ba1545ac79c4bf2a90195df4f5d101489a70126279c81c50ba7f38.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e37eff3_1014_4cfd_9015_f51d8d7c4026.slice/crio-4e50b612f4ba1545ac79c4bf2a90195df4f5d101489a70126279c81c50ba7f38.scope: no such file or directory
Feb 17 20:28:07 crc kubenswrapper[4793]: W0217 20:28:07.382906 4793 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2a90865_6b9b_4232_b918_9830565d998a.slice/crio-conmon-c4191ec2838539e744807bc9ccf2efa2386a47f669bd74fe585ad85dfdc80479.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2a90865_6b9b_4232_b918_9830565d998a.slice/crio-conmon-c4191ec2838539e744807bc9ccf2efa2386a47f669bd74fe585ad85dfdc80479.scope: no such file or directory
Feb 17 20:28:07 crc kubenswrapper[4793]: W0217 20:28:07.382921 4793 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2a90865_6b9b_4232_b918_9830565d998a.slice/crio-c4191ec2838539e744807bc9ccf2efa2386a47f669bd74fe585ad85dfdc80479.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2a90865_6b9b_4232_b918_9830565d998a.slice/crio-c4191ec2838539e744807bc9ccf2efa2386a47f669bd74fe585ad85dfdc80479.scope: no such file or directory
Feb 17 20:28:07 crc kubenswrapper[4793]: W0217 20:28:07.387733 4793 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2a90865_6b9b_4232_b918_9830565d998a.slice/crio-conmon-0d098a96d17127d2628a57d35f8821d4fbaec500dc41954b56f461e7dd479824.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2a90865_6b9b_4232_b918_9830565d998a.slice/crio-conmon-0d098a96d17127d2628a57d35f8821d4fbaec500dc41954b56f461e7dd479824.scope: no such file or directory
Feb 17 20:28:07 crc kubenswrapper[4793]: W0217 20:28:07.387783 4793 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2a90865_6b9b_4232_b918_9830565d998a.slice/crio-0d098a96d17127d2628a57d35f8821d4fbaec500dc41954b56f461e7dd479824.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2a90865_6b9b_4232_b918_9830565d998a.slice/crio-0d098a96d17127d2628a57d35f8821d4fbaec500dc41954b56f461e7dd479824.scope: no such file or directory
Feb 17 20:28:07 crc kubenswrapper[4793]: W0217 20:28:07.387804 4793 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e37eff3_1014_4cfd_9015_f51d8d7c4026.slice/crio-conmon-b6d6a411f11ee3cc1bd4c9fa6ac6e9fa7acd827ab82fac3c29965bacb06e73ff.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e37eff3_1014_4cfd_9015_f51d8d7c4026.slice/crio-conmon-b6d6a411f11ee3cc1bd4c9fa6ac6e9fa7acd827ab82fac3c29965bacb06e73ff.scope: no such file or directory
Feb 17 20:28:07 crc kubenswrapper[4793]: W0217 20:28:07.387824 4793 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e37eff3_1014_4cfd_9015_f51d8d7c4026.slice/crio-b6d6a411f11ee3cc1bd4c9fa6ac6e9fa7acd827ab82fac3c29965bacb06e73ff.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e37eff3_1014_4cfd_9015_f51d8d7c4026.slice/crio-b6d6a411f11ee3cc1bd4c9fa6ac6e9fa7acd827ab82fac3c29965bacb06e73ff.scope: no such file or directory
Feb 17 20:28:07 crc kubenswrapper[4793]: W0217 20:28:07.395815 4793 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaedc67e8_05ec_44a4_b1f2_a18d2fde80b2.slice/crio-conmon-0916fc4505862a996052ca563d34794654dd2fa24df3e0ca3404479a8ca85700.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaedc67e8_05ec_44a4_b1f2_a18d2fde80b2.slice/crio-conmon-0916fc4505862a996052ca563d34794654dd2fa24df3e0ca3404479a8ca85700.scope: no such file or directory
Feb 17 20:28:07 crc kubenswrapper[4793]: W0217 20:28:07.397766 4793 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaedc67e8_05ec_44a4_b1f2_a18d2fde80b2.slice/crio-0916fc4505862a996052ca563d34794654dd2fa24df3e0ca3404479a8ca85700.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaedc67e8_05ec_44a4_b1f2_a18d2fde80b2.slice/crio-0916fc4505862a996052ca563d34794654dd2fa24df3e0ca3404479a8ca85700.scope: no such file or directory
Feb 17 20:28:07 crc kubenswrapper[4793]: E0217 20:28:07.624932 4793 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c302c0d_5edb_4f33_b2ff_7f31bd9c13bf.slice/crio-conmon-3e010c26342664c3558b2e0e211867c9c895e91944d0dd2d5c0257fc128d37e1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24893fdf_f7bb_4be7_b5f9_edde49088bbe.slice/crio-conmon-76b4aaa5ed1a0e6bb4740f9b7e2beadf425438ab57e7cf9c09065806132b1957.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2a90865_6b9b_4232_b918_9830565d998a.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06ecbc8e_aa9f_4025_883d_65e4c000d986.slice/crio-482d219bc6ca6e52bff83b3958809678c883c2f0ec84a89a77cd763ddb26ce8d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24893fdf_f7bb_4be7_b5f9_edde49088bbe.slice/crio-7b69a363227f952d90e9864c71933bca593bc1f83e05b405f30a71ceffae29a1\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2a90865_6b9b_4232_b918_9830565d998a.slice/crio-ce0a26d1ef791edbb76927efb33b02d80aab1bacab669476f1532c281d9903a9\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5eb5318_a3af_4a2a_947f_57219f371d7e.slice/crio-conmon-eef0183dc08a298bf1badff48fe70f8e917127fa41e393ba131cd0723ff56a5a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6290a12_a807_46af_adaa_bafaf7bbb26f.slice/crio-conmon-e20c606bae11a7626db494ea0f2e2bcd1804adcc3243d4297aa46524a89f2cc7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod347b7ec4_6cfe_431e_b6b4_70c0933118c6.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5eb5318_a3af_4a2a_947f_57219f371d7e.slice/crio-ad48dc658b1c591f4d7f31386c24fc120fd3b399e7b2a35fb4df0186eec6ee8a\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24893fdf_f7bb_4be7_b5f9_edde49088bbe.slice/crio-76b4aaa5ed1a0e6bb4740f9b7e2beadf425438ab57e7cf9c09065806132b1957.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e37eff3_1014_4cfd_9015_f51d8d7c4026.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod347b7ec4_6cfe_431e_b6b4_70c0933118c6.slice/crio-58c04913be025017b0e008ceb1a089cd3097d9b92638bb6fecb973a23a8c18af.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e37eff3_1014_4cfd_9015_f51d8d7c4026.slice/crio-7c96282d3c3074442255879366aa474b42ad3d31469b8e46de177b1f58947f97\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06ecbc8e_aa9f_4025_883d_65e4c000d986.slice/crio-conmon-482d219bc6ca6e52bff83b3958809678c883c2f0ec84a89a77cd763ddb26ce8d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5eb5318_a3af_4a2a_947f_57219f371d7e.slice/crio-eef0183dc08a298bf1badff48fe70f8e917127fa41e393ba131cd0723ff56a5a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod347b7ec4_6cfe_431e_b6b4_70c0933118c6.slice/crio-conmon-58c04913be025017b0e008ceb1a089cd3097d9b92638bb6fecb973a23a8c18af.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod347b7ec4_6cfe_431e_b6b4_70c0933118c6.slice/crio-8627d650d994bd22f858ee66947e21732ce07ed9d942bf203f3ec9b8a0a7eb77\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6290a12_a807_46af_adaa_bafaf7bbb26f.slice/crio-e20c606bae11a7626db494ea0f2e2bcd1804adcc3243d4297aa46524a89f2cc7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c302c0d_5edb_4f33_b2ff_7f31bd9c13bf.slice/crio-3e010c26342664c3558b2e0e211867c9c895e91944d0dd2d5c0257fc128d37e1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24893fdf_f7bb_4be7_b5f9_edde49088bbe.slice\": RecentStats: unable to find data in memory cache]"
Feb 17 20:28:07 crc kubenswrapper[4793]: I0217 20:28:07.874336 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6764d4c5d4-r2vt4"
Feb 17 20:28:07 crc kubenswrapper[4793]: I0217 20:28:07.952529 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c6290a12-a807-46af-adaa-bafaf7bbb26f-config-data-custom\") pod \"c6290a12-a807-46af-adaa-bafaf7bbb26f\" (UID: \"c6290a12-a807-46af-adaa-bafaf7bbb26f\") "
Feb 17 20:28:07 crc kubenswrapper[4793]: I0217 20:28:07.952702 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6290a12-a807-46af-adaa-bafaf7bbb26f-logs\") pod \"c6290a12-a807-46af-adaa-bafaf7bbb26f\" (UID: \"c6290a12-a807-46af-adaa-bafaf7bbb26f\") "
Feb 17 20:28:07 crc kubenswrapper[4793]: I0217 20:28:07.952870 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6290a12-a807-46af-adaa-bafaf7bbb26f-combined-ca-bundle\") pod \"c6290a12-a807-46af-adaa-bafaf7bbb26f\" (UID: \"c6290a12-a807-46af-adaa-bafaf7bbb26f\") "
Feb 17 20:28:07 crc kubenswrapper[4793]: I0217 20:28:07.952953 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6290a12-a807-46af-adaa-bafaf7bbb26f-config-data\") pod \"c6290a12-a807-46af-adaa-bafaf7bbb26f\" (UID: \"c6290a12-a807-46af-adaa-bafaf7bbb26f\") "
Feb 17 20:28:07 crc kubenswrapper[4793]: I0217 20:28:07.952995 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9pdz\" (UniqueName: \"kubernetes.io/projected/c6290a12-a807-46af-adaa-bafaf7bbb26f-kube-api-access-t9pdz\") pod \"c6290a12-a807-46af-adaa-bafaf7bbb26f\" (UID: \"c6290a12-a807-46af-adaa-bafaf7bbb26f\") "
Feb 17 20:28:07 crc kubenswrapper[4793]: I0217 20:28:07.953839 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6290a12-a807-46af-adaa-bafaf7bbb26f-logs" (OuterVolumeSpecName: "logs") pod "c6290a12-a807-46af-adaa-bafaf7bbb26f" (UID: "c6290a12-a807-46af-adaa-bafaf7bbb26f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 20:28:07 crc kubenswrapper[4793]: I0217 20:28:07.955437 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6290a12-a807-46af-adaa-bafaf7bbb26f-logs\") on node \"crc\" DevicePath \"\""
Feb 17 20:28:07 crc kubenswrapper[4793]: I0217 20:28:07.964963 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6290a12-a807-46af-adaa-bafaf7bbb26f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c6290a12-a807-46af-adaa-bafaf7bbb26f" (UID: "c6290a12-a807-46af-adaa-bafaf7bbb26f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 20:28:07 crc kubenswrapper[4793]: I0217 20:28:07.977957 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6290a12-a807-46af-adaa-bafaf7bbb26f-kube-api-access-t9pdz" (OuterVolumeSpecName: "kube-api-access-t9pdz") pod "c6290a12-a807-46af-adaa-bafaf7bbb26f" (UID: "c6290a12-a807-46af-adaa-bafaf7bbb26f"). InnerVolumeSpecName "kube-api-access-t9pdz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 20:28:07 crc kubenswrapper[4793]: I0217 20:28:07.992830 4793 generic.go:334] "Generic (PLEG): container finished" podID="c6290a12-a807-46af-adaa-bafaf7bbb26f" containerID="e20c606bae11a7626db494ea0f2e2bcd1804adcc3243d4297aa46524a89f2cc7" exitCode=137
Feb 17 20:28:07 crc kubenswrapper[4793]: I0217 20:28:07.993051 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6764d4c5d4-r2vt4" event={"ID":"c6290a12-a807-46af-adaa-bafaf7bbb26f","Type":"ContainerDied","Data":"e20c606bae11a7626db494ea0f2e2bcd1804adcc3243d4297aa46524a89f2cc7"}
Feb 17 20:28:07 crc kubenswrapper[4793]: I0217 20:28:07.993624 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6764d4c5d4-r2vt4" event={"ID":"c6290a12-a807-46af-adaa-bafaf7bbb26f","Type":"ContainerDied","Data":"a42587fd4c4f39e7c65fa6eb8575e2b516b0d424776d7588a1d52321e37deda0"}
Feb 17 20:28:07 crc kubenswrapper[4793]: I0217 20:28:07.993153 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6764d4c5d4-r2vt4"
Feb 17 20:28:07 crc kubenswrapper[4793]: I0217 20:28:07.993672 4793 scope.go:117] "RemoveContainer" containerID="e20c606bae11a7626db494ea0f2e2bcd1804adcc3243d4297aa46524a89f2cc7"
Feb 17 20:28:08 crc kubenswrapper[4793]: I0217 20:28:08.014579 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6290a12-a807-46af-adaa-bafaf7bbb26f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c6290a12-a807-46af-adaa-bafaf7bbb26f" (UID: "c6290a12-a807-46af-adaa-bafaf7bbb26f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 20:28:08 crc kubenswrapper[4793]: I0217 20:28:08.049893 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6290a12-a807-46af-adaa-bafaf7bbb26f-config-data" (OuterVolumeSpecName: "config-data") pod "c6290a12-a807-46af-adaa-bafaf7bbb26f" (UID: "c6290a12-a807-46af-adaa-bafaf7bbb26f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 20:28:08 crc kubenswrapper[4793]: I0217 20:28:08.059216 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6290a12-a807-46af-adaa-bafaf7bbb26f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 20:28:08 crc kubenswrapper[4793]: I0217 20:28:08.059252 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6290a12-a807-46af-adaa-bafaf7bbb26f-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 20:28:08 crc kubenswrapper[4793]: I0217 20:28:08.059261 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9pdz\" (UniqueName: \"kubernetes.io/projected/c6290a12-a807-46af-adaa-bafaf7bbb26f-kube-api-access-t9pdz\") on node \"crc\" DevicePath \"\""
Feb 17 20:28:08 crc kubenswrapper[4793]: I0217 20:28:08.059271 4793 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c6290a12-a807-46af-adaa-bafaf7bbb26f-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 17 20:28:08 crc kubenswrapper[4793]: I0217 20:28:08.080551 4793 scope.go:117] "RemoveContainer" containerID="aaacb5c388538b7e5d9d89008300722c1b87c8c1d0f8914b510ab38d8845c0fe"
Feb 17 20:28:08 crc kubenswrapper[4793]: I0217 20:28:08.107147 4793 scope.go:117] "RemoveContainer" containerID="e20c606bae11a7626db494ea0f2e2bcd1804adcc3243d4297aa46524a89f2cc7"
Feb 17 20:28:08 crc kubenswrapper[4793]: E0217 20:28:08.108112 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e20c606bae11a7626db494ea0f2e2bcd1804adcc3243d4297aa46524a89f2cc7\": container with ID starting with e20c606bae11a7626db494ea0f2e2bcd1804adcc3243d4297aa46524a89f2cc7 not found: ID does not exist" containerID="e20c606bae11a7626db494ea0f2e2bcd1804adcc3243d4297aa46524a89f2cc7"
Feb 17 20:28:08 crc kubenswrapper[4793]: I0217 20:28:08.108149 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e20c606bae11a7626db494ea0f2e2bcd1804adcc3243d4297aa46524a89f2cc7"} err="failed to get container status \"e20c606bae11a7626db494ea0f2e2bcd1804adcc3243d4297aa46524a89f2cc7\": rpc error: code = NotFound desc = could not find container \"e20c606bae11a7626db494ea0f2e2bcd1804adcc3243d4297aa46524a89f2cc7\": container with ID starting with e20c606bae11a7626db494ea0f2e2bcd1804adcc3243d4297aa46524a89f2cc7 not found: ID does not exist"
Feb 17 20:28:08 crc kubenswrapper[4793]: I0217 20:28:08.108176 4793 scope.go:117] "RemoveContainer" containerID="aaacb5c388538b7e5d9d89008300722c1b87c8c1d0f8914b510ab38d8845c0fe"
Feb 17 20:28:08 crc kubenswrapper[4793]: E0217 20:28:08.109631 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aaacb5c388538b7e5d9d89008300722c1b87c8c1d0f8914b510ab38d8845c0fe\": container with ID starting with aaacb5c388538b7e5d9d89008300722c1b87c8c1d0f8914b510ab38d8845c0fe not found: ID does not exist" containerID="aaacb5c388538b7e5d9d89008300722c1b87c8c1d0f8914b510ab38d8845c0fe"
Feb 17 20:28:08 crc kubenswrapper[4793]: I0217 20:28:08.109667 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aaacb5c388538b7e5d9d89008300722c1b87c8c1d0f8914b510ab38d8845c0fe"} err="failed to get container status \"aaacb5c388538b7e5d9d89008300722c1b87c8c1d0f8914b510ab38d8845c0fe\": rpc error: code = NotFound desc = could not find container \"aaacb5c388538b7e5d9d89008300722c1b87c8c1d0f8914b510ab38d8845c0fe\": container with ID starting with aaacb5c388538b7e5d9d89008300722c1b87c8c1d0f8914b510ab38d8845c0fe not found: ID does not exist"
Feb 17 20:28:08 crc kubenswrapper[4793]: I0217 20:28:08.337717 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6764d4c5d4-r2vt4"]
Feb 17 20:28:08 crc
kubenswrapper[4793]: I0217 20:28:08.347362 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6764d4c5d4-r2vt4"] Feb 17 20:28:09 crc kubenswrapper[4793]: I0217 20:28:09.472412 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 17 20:28:09 crc kubenswrapper[4793]: I0217 20:28:09.551212 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6290a12-a807-46af-adaa-bafaf7bbb26f" path="/var/lib/kubelet/pods/c6290a12-a807-46af-adaa-bafaf7bbb26f/volumes" Feb 17 20:28:09 crc kubenswrapper[4793]: I0217 20:28:09.879636 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6f79fc46bf-72hgv"] Feb 17 20:28:09 crc kubenswrapper[4793]: E0217 20:28:09.880120 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6290a12-a807-46af-adaa-bafaf7bbb26f" containerName="barbican-api" Feb 17 20:28:09 crc kubenswrapper[4793]: I0217 20:28:09.880139 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6290a12-a807-46af-adaa-bafaf7bbb26f" containerName="barbican-api" Feb 17 20:28:09 crc kubenswrapper[4793]: E0217 20:28:09.880148 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6290a12-a807-46af-adaa-bafaf7bbb26f" containerName="barbican-api-log" Feb 17 20:28:09 crc kubenswrapper[4793]: I0217 20:28:09.880156 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6290a12-a807-46af-adaa-bafaf7bbb26f" containerName="barbican-api-log" Feb 17 20:28:09 crc kubenswrapper[4793]: I0217 20:28:09.880347 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6290a12-a807-46af-adaa-bafaf7bbb26f" containerName="barbican-api" Feb 17 20:28:09 crc kubenswrapper[4793]: I0217 20:28:09.880378 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6290a12-a807-46af-adaa-bafaf7bbb26f" containerName="barbican-api-log" Feb 17 20:28:09 crc kubenswrapper[4793]: I0217 20:28:09.881646 4793 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6f79fc46bf-72hgv" Feb 17 20:28:09 crc kubenswrapper[4793]: I0217 20:28:09.884061 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 17 20:28:09 crc kubenswrapper[4793]: I0217 20:28:09.885876 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 17 20:28:09 crc kubenswrapper[4793]: I0217 20:28:09.887990 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 17 20:28:09 crc kubenswrapper[4793]: I0217 20:28:09.901414 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6f79fc46bf-72hgv"] Feb 17 20:28:09 crc kubenswrapper[4793]: I0217 20:28:09.997118 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6afa6e3e-f498-4aa6-a646-4b031664608d-public-tls-certs\") pod \"swift-proxy-6f79fc46bf-72hgv\" (UID: \"6afa6e3e-f498-4aa6-a646-4b031664608d\") " pod="openstack/swift-proxy-6f79fc46bf-72hgv" Feb 17 20:28:09 crc kubenswrapper[4793]: I0217 20:28:09.997182 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6afa6e3e-f498-4aa6-a646-4b031664608d-log-httpd\") pod \"swift-proxy-6f79fc46bf-72hgv\" (UID: \"6afa6e3e-f498-4aa6-a646-4b031664608d\") " pod="openstack/swift-proxy-6f79fc46bf-72hgv" Feb 17 20:28:09 crc kubenswrapper[4793]: I0217 20:28:09.997204 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6afa6e3e-f498-4aa6-a646-4b031664608d-etc-swift\") pod \"swift-proxy-6f79fc46bf-72hgv\" (UID: \"6afa6e3e-f498-4aa6-a646-4b031664608d\") " pod="openstack/swift-proxy-6f79fc46bf-72hgv" Feb 17 
20:28:09 crc kubenswrapper[4793]: I0217 20:28:09.997290 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6afa6e3e-f498-4aa6-a646-4b031664608d-combined-ca-bundle\") pod \"swift-proxy-6f79fc46bf-72hgv\" (UID: \"6afa6e3e-f498-4aa6-a646-4b031664608d\") " pod="openstack/swift-proxy-6f79fc46bf-72hgv" Feb 17 20:28:09 crc kubenswrapper[4793]: I0217 20:28:09.997331 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6afa6e3e-f498-4aa6-a646-4b031664608d-config-data\") pod \"swift-proxy-6f79fc46bf-72hgv\" (UID: \"6afa6e3e-f498-4aa6-a646-4b031664608d\") " pod="openstack/swift-proxy-6f79fc46bf-72hgv" Feb 17 20:28:09 crc kubenswrapper[4793]: I0217 20:28:09.997348 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6afa6e3e-f498-4aa6-a646-4b031664608d-internal-tls-certs\") pod \"swift-proxy-6f79fc46bf-72hgv\" (UID: \"6afa6e3e-f498-4aa6-a646-4b031664608d\") " pod="openstack/swift-proxy-6f79fc46bf-72hgv" Feb 17 20:28:09 crc kubenswrapper[4793]: I0217 20:28:09.997391 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6afa6e3e-f498-4aa6-a646-4b031664608d-run-httpd\") pod \"swift-proxy-6f79fc46bf-72hgv\" (UID: \"6afa6e3e-f498-4aa6-a646-4b031664608d\") " pod="openstack/swift-proxy-6f79fc46bf-72hgv" Feb 17 20:28:09 crc kubenswrapper[4793]: I0217 20:28:09.997480 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94gmj\" (UniqueName: \"kubernetes.io/projected/6afa6e3e-f498-4aa6-a646-4b031664608d-kube-api-access-94gmj\") pod \"swift-proxy-6f79fc46bf-72hgv\" (UID: \"6afa6e3e-f498-4aa6-a646-4b031664608d\") " 
pod="openstack/swift-proxy-6f79fc46bf-72hgv" Feb 17 20:28:10 crc kubenswrapper[4793]: I0217 20:28:10.099623 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6afa6e3e-f498-4aa6-a646-4b031664608d-combined-ca-bundle\") pod \"swift-proxy-6f79fc46bf-72hgv\" (UID: \"6afa6e3e-f498-4aa6-a646-4b031664608d\") " pod="openstack/swift-proxy-6f79fc46bf-72hgv" Feb 17 20:28:10 crc kubenswrapper[4793]: I0217 20:28:10.099686 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6afa6e3e-f498-4aa6-a646-4b031664608d-config-data\") pod \"swift-proxy-6f79fc46bf-72hgv\" (UID: \"6afa6e3e-f498-4aa6-a646-4b031664608d\") " pod="openstack/swift-proxy-6f79fc46bf-72hgv" Feb 17 20:28:10 crc kubenswrapper[4793]: I0217 20:28:10.099710 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6afa6e3e-f498-4aa6-a646-4b031664608d-internal-tls-certs\") pod \"swift-proxy-6f79fc46bf-72hgv\" (UID: \"6afa6e3e-f498-4aa6-a646-4b031664608d\") " pod="openstack/swift-proxy-6f79fc46bf-72hgv" Feb 17 20:28:10 crc kubenswrapper[4793]: I0217 20:28:10.099741 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6afa6e3e-f498-4aa6-a646-4b031664608d-run-httpd\") pod \"swift-proxy-6f79fc46bf-72hgv\" (UID: \"6afa6e3e-f498-4aa6-a646-4b031664608d\") " pod="openstack/swift-proxy-6f79fc46bf-72hgv" Feb 17 20:28:10 crc kubenswrapper[4793]: I0217 20:28:10.099769 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94gmj\" (UniqueName: \"kubernetes.io/projected/6afa6e3e-f498-4aa6-a646-4b031664608d-kube-api-access-94gmj\") pod \"swift-proxy-6f79fc46bf-72hgv\" (UID: \"6afa6e3e-f498-4aa6-a646-4b031664608d\") " pod="openstack/swift-proxy-6f79fc46bf-72hgv" Feb 
17 20:28:10 crc kubenswrapper[4793]: I0217 20:28:10.099822 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6afa6e3e-f498-4aa6-a646-4b031664608d-public-tls-certs\") pod \"swift-proxy-6f79fc46bf-72hgv\" (UID: \"6afa6e3e-f498-4aa6-a646-4b031664608d\") " pod="openstack/swift-proxy-6f79fc46bf-72hgv" Feb 17 20:28:10 crc kubenswrapper[4793]: I0217 20:28:10.099862 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6afa6e3e-f498-4aa6-a646-4b031664608d-log-httpd\") pod \"swift-proxy-6f79fc46bf-72hgv\" (UID: \"6afa6e3e-f498-4aa6-a646-4b031664608d\") " pod="openstack/swift-proxy-6f79fc46bf-72hgv" Feb 17 20:28:10 crc kubenswrapper[4793]: I0217 20:28:10.099884 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6afa6e3e-f498-4aa6-a646-4b031664608d-etc-swift\") pod \"swift-proxy-6f79fc46bf-72hgv\" (UID: \"6afa6e3e-f498-4aa6-a646-4b031664608d\") " pod="openstack/swift-proxy-6f79fc46bf-72hgv" Feb 17 20:28:10 crc kubenswrapper[4793]: I0217 20:28:10.100397 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6afa6e3e-f498-4aa6-a646-4b031664608d-log-httpd\") pod \"swift-proxy-6f79fc46bf-72hgv\" (UID: \"6afa6e3e-f498-4aa6-a646-4b031664608d\") " pod="openstack/swift-proxy-6f79fc46bf-72hgv" Feb 17 20:28:10 crc kubenswrapper[4793]: I0217 20:28:10.100407 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6afa6e3e-f498-4aa6-a646-4b031664608d-run-httpd\") pod \"swift-proxy-6f79fc46bf-72hgv\" (UID: \"6afa6e3e-f498-4aa6-a646-4b031664608d\") " pod="openstack/swift-proxy-6f79fc46bf-72hgv" Feb 17 20:28:10 crc kubenswrapper[4793]: I0217 20:28:10.105564 4793 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6afa6e3e-f498-4aa6-a646-4b031664608d-config-data\") pod \"swift-proxy-6f79fc46bf-72hgv\" (UID: \"6afa6e3e-f498-4aa6-a646-4b031664608d\") " pod="openstack/swift-proxy-6f79fc46bf-72hgv" Feb 17 20:28:10 crc kubenswrapper[4793]: I0217 20:28:10.105619 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6afa6e3e-f498-4aa6-a646-4b031664608d-internal-tls-certs\") pod \"swift-proxy-6f79fc46bf-72hgv\" (UID: \"6afa6e3e-f498-4aa6-a646-4b031664608d\") " pod="openstack/swift-proxy-6f79fc46bf-72hgv" Feb 17 20:28:10 crc kubenswrapper[4793]: I0217 20:28:10.106131 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6afa6e3e-f498-4aa6-a646-4b031664608d-combined-ca-bundle\") pod \"swift-proxy-6f79fc46bf-72hgv\" (UID: \"6afa6e3e-f498-4aa6-a646-4b031664608d\") " pod="openstack/swift-proxy-6f79fc46bf-72hgv" Feb 17 20:28:10 crc kubenswrapper[4793]: I0217 20:28:10.111916 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6afa6e3e-f498-4aa6-a646-4b031664608d-etc-swift\") pod \"swift-proxy-6f79fc46bf-72hgv\" (UID: \"6afa6e3e-f498-4aa6-a646-4b031664608d\") " pod="openstack/swift-proxy-6f79fc46bf-72hgv" Feb 17 20:28:10 crc kubenswrapper[4793]: I0217 20:28:10.114175 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6afa6e3e-f498-4aa6-a646-4b031664608d-public-tls-certs\") pod \"swift-proxy-6f79fc46bf-72hgv\" (UID: \"6afa6e3e-f498-4aa6-a646-4b031664608d\") " pod="openstack/swift-proxy-6f79fc46bf-72hgv" Feb 17 20:28:10 crc kubenswrapper[4793]: I0217 20:28:10.134509 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94gmj\" (UniqueName: 
\"kubernetes.io/projected/6afa6e3e-f498-4aa6-a646-4b031664608d-kube-api-access-94gmj\") pod \"swift-proxy-6f79fc46bf-72hgv\" (UID: \"6afa6e3e-f498-4aa6-a646-4b031664608d\") " pod="openstack/swift-proxy-6f79fc46bf-72hgv" Feb 17 20:28:10 crc kubenswrapper[4793]: I0217 20:28:10.207168 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6f79fc46bf-72hgv" Feb 17 20:28:10 crc kubenswrapper[4793]: I0217 20:28:10.824764 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6f79fc46bf-72hgv"] Feb 17 20:28:10 crc kubenswrapper[4793]: W0217 20:28:10.826203 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6afa6e3e_f498_4aa6_a646_4b031664608d.slice/crio-29fa309e225d170d0b0cf28c40928b2d965b85a73df62e1cfcf57a67df9248ec WatchSource:0}: Error finding container 29fa309e225d170d0b0cf28c40928b2d965b85a73df62e1cfcf57a67df9248ec: Status 404 returned error can't find the container with id 29fa309e225d170d0b0cf28c40928b2d965b85a73df62e1cfcf57a67df9248ec Feb 17 20:28:11 crc kubenswrapper[4793]: I0217 20:28:11.060993 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6f79fc46bf-72hgv" event={"ID":"6afa6e3e-f498-4aa6-a646-4b031664608d","Type":"ContainerStarted","Data":"29fa309e225d170d0b0cf28c40928b2d965b85a73df62e1cfcf57a67df9248ec"} Feb 17 20:28:11 crc kubenswrapper[4793]: I0217 20:28:11.111286 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 20:28:11 crc kubenswrapper[4793]: I0217 20:28:11.111928 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dad05b98-c67b-44c9-bfff-4d0b39ad47b4" containerName="ceilometer-central-agent" containerID="cri-o://ff8fe10db732c8397a86719c045ee5a5f862a92648c34444e80104140d266fb6" gracePeriod=30 Feb 17 20:28:11 crc kubenswrapper[4793]: I0217 20:28:11.112068 4793 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dad05b98-c67b-44c9-bfff-4d0b39ad47b4" containerName="proxy-httpd" containerID="cri-o://1cc98e8700a103d9d9aa819172db439742bc981497a7d053695603c9b86fcb9f" gracePeriod=30 Feb 17 20:28:11 crc kubenswrapper[4793]: I0217 20:28:11.112126 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dad05b98-c67b-44c9-bfff-4d0b39ad47b4" containerName="sg-core" containerID="cri-o://871aa8f1f17090de9e41cda325e6139ae24c7f96acd211bd51fe5596f79a88b7" gracePeriod=30 Feb 17 20:28:11 crc kubenswrapper[4793]: I0217 20:28:11.112286 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dad05b98-c67b-44c9-bfff-4d0b39ad47b4" containerName="ceilometer-notification-agent" containerID="cri-o://25d7ecb3ef45a366adaa06b2f009b18b477a7e84dbc6fa2a4add9307c6df357d" gracePeriod=30 Feb 17 20:28:11 crc kubenswrapper[4793]: I0217 20:28:11.140479 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-75d6fd885d-fw6ln" podUID="6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.164:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.164:8443: connect: connection refused" Feb 17 20:28:11 crc kubenswrapper[4793]: I0217 20:28:11.140592 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="dad05b98-c67b-44c9-bfff-4d0b39ad47b4" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Feb 17 20:28:12 crc kubenswrapper[4793]: I0217 20:28:12.073530 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6f79fc46bf-72hgv" event={"ID":"6afa6e3e-f498-4aa6-a646-4b031664608d","Type":"ContainerStarted","Data":"e68b98b7b1c5946552e03ac3ef0836aa68842f615cb4e87e4c5aac83669b6571"} Feb 17 20:28:12 crc 
kubenswrapper[4793]: I0217 20:28:12.073924 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6f79fc46bf-72hgv" Feb 17 20:28:12 crc kubenswrapper[4793]: I0217 20:28:12.073940 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6f79fc46bf-72hgv" event={"ID":"6afa6e3e-f498-4aa6-a646-4b031664608d","Type":"ContainerStarted","Data":"8b71c4ef3a06216c7ed795b1d701491b223d73a1e0be9b3de800f94b786a85ff"} Feb 17 20:28:12 crc kubenswrapper[4793]: I0217 20:28:12.077678 4793 generic.go:334] "Generic (PLEG): container finished" podID="dad05b98-c67b-44c9-bfff-4d0b39ad47b4" containerID="1cc98e8700a103d9d9aa819172db439742bc981497a7d053695603c9b86fcb9f" exitCode=0 Feb 17 20:28:12 crc kubenswrapper[4793]: I0217 20:28:12.077725 4793 generic.go:334] "Generic (PLEG): container finished" podID="dad05b98-c67b-44c9-bfff-4d0b39ad47b4" containerID="871aa8f1f17090de9e41cda325e6139ae24c7f96acd211bd51fe5596f79a88b7" exitCode=2 Feb 17 20:28:12 crc kubenswrapper[4793]: I0217 20:28:12.077734 4793 generic.go:334] "Generic (PLEG): container finished" podID="dad05b98-c67b-44c9-bfff-4d0b39ad47b4" containerID="ff8fe10db732c8397a86719c045ee5a5f862a92648c34444e80104140d266fb6" exitCode=0 Feb 17 20:28:12 crc kubenswrapper[4793]: I0217 20:28:12.077757 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dad05b98-c67b-44c9-bfff-4d0b39ad47b4","Type":"ContainerDied","Data":"1cc98e8700a103d9d9aa819172db439742bc981497a7d053695603c9b86fcb9f"} Feb 17 20:28:12 crc kubenswrapper[4793]: I0217 20:28:12.077784 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dad05b98-c67b-44c9-bfff-4d0b39ad47b4","Type":"ContainerDied","Data":"871aa8f1f17090de9e41cda325e6139ae24c7f96acd211bd51fe5596f79a88b7"} Feb 17 20:28:12 crc kubenswrapper[4793]: I0217 20:28:12.077795 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"dad05b98-c67b-44c9-bfff-4d0b39ad47b4","Type":"ContainerDied","Data":"ff8fe10db732c8397a86719c045ee5a5f862a92648c34444e80104140d266fb6"} Feb 17 20:28:12 crc kubenswrapper[4793]: I0217 20:28:12.094494 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6f79fc46bf-72hgv" podStartSLOduration=3.09447843 podStartE2EDuration="3.09447843s" podCreationTimestamp="2026-02-17 20:28:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:28:12.088540663 +0000 UTC m=+1167.380238964" watchObservedRunningTime="2026-02-17 20:28:12.09447843 +0000 UTC m=+1167.386176741" Feb 17 20:28:12 crc kubenswrapper[4793]: I0217 20:28:12.552218 4793 scope.go:117] "RemoveContainer" containerID="482d219bc6ca6e52bff83b3958809678c883c2f0ec84a89a77cd763ddb26ce8d" Feb 17 20:28:13 crc kubenswrapper[4793]: I0217 20:28:13.086301 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6f79fc46bf-72hgv" Feb 17 20:28:14 crc kubenswrapper[4793]: I0217 20:28:14.538985 4793 scope.go:117] "RemoveContainer" containerID="0916fc4505862a996052ca563d34794654dd2fa24df3e0ca3404479a8ca85700" Feb 17 20:28:14 crc kubenswrapper[4793]: E0217 20:28:14.539638 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(aedc67e8-05ec-44a4-b1f2-a18d2fde80b2)\"" pod="openstack/watcher-decision-engine-0" podUID="aedc67e8-05ec-44a4-b1f2-a18d2fde80b2" Feb 17 20:28:15 crc kubenswrapper[4793]: I0217 20:28:15.154831 4793 generic.go:334] "Generic (PLEG): container finished" podID="dad05b98-c67b-44c9-bfff-4d0b39ad47b4" containerID="25d7ecb3ef45a366adaa06b2f009b18b477a7e84dbc6fa2a4add9307c6df357d" exitCode=0 Feb 17 20:28:15 crc kubenswrapper[4793]: I0217 
20:28:15.154883 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dad05b98-c67b-44c9-bfff-4d0b39ad47b4","Type":"ContainerDied","Data":"25d7ecb3ef45a366adaa06b2f009b18b477a7e84dbc6fa2a4add9307c6df357d"} Feb 17 20:28:15 crc kubenswrapper[4793]: I0217 20:28:15.698953 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-782tr"] Feb 17 20:28:15 crc kubenswrapper[4793]: I0217 20:28:15.700211 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-782tr" Feb 17 20:28:15 crc kubenswrapper[4793]: I0217 20:28:15.722888 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-782tr"] Feb 17 20:28:15 crc kubenswrapper[4793]: I0217 20:28:15.792283 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-m7spg"] Feb 17 20:28:15 crc kubenswrapper[4793]: I0217 20:28:15.793946 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-m7spg" Feb 17 20:28:15 crc kubenswrapper[4793]: I0217 20:28:15.808390 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-m7spg"] Feb 17 20:28:15 crc kubenswrapper[4793]: I0217 20:28:15.844894 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc9lm\" (UniqueName: \"kubernetes.io/projected/4740a37c-5ca4-4230-a803-5622eedf745e-kube-api-access-kc9lm\") pod \"nova-api-db-create-782tr\" (UID: \"4740a37c-5ca4-4230-a803-5622eedf745e\") " pod="openstack/nova-api-db-create-782tr" Feb 17 20:28:15 crc kubenswrapper[4793]: I0217 20:28:15.844939 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4740a37c-5ca4-4230-a803-5622eedf745e-operator-scripts\") pod \"nova-api-db-create-782tr\" (UID: \"4740a37c-5ca4-4230-a803-5622eedf745e\") " pod="openstack/nova-api-db-create-782tr" Feb 17 20:28:15 crc kubenswrapper[4793]: I0217 20:28:15.905587 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-fflrc"] Feb 17 20:28:15 crc kubenswrapper[4793]: I0217 20:28:15.907210 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-fflrc" Feb 17 20:28:15 crc kubenswrapper[4793]: I0217 20:28:15.913349 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-ebb4-account-create-update-qzc2g"] Feb 17 20:28:15 crc kubenswrapper[4793]: I0217 20:28:15.914653 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-ebb4-account-create-update-qzc2g" Feb 17 20:28:15 crc kubenswrapper[4793]: I0217 20:28:15.917115 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 17 20:28:15 crc kubenswrapper[4793]: I0217 20:28:15.924333 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-fflrc"] Feb 17 20:28:15 crc kubenswrapper[4793]: I0217 20:28:15.934318 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-ebb4-account-create-update-qzc2g"] Feb 17 20:28:15 crc kubenswrapper[4793]: I0217 20:28:15.947376 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4cfg\" (UniqueName: \"kubernetes.io/projected/e9514926-d5cf-41ca-aeb2-41444d597e1a-kube-api-access-z4cfg\") pod \"nova-cell0-db-create-m7spg\" (UID: \"e9514926-d5cf-41ca-aeb2-41444d597e1a\") " pod="openstack/nova-cell0-db-create-m7spg" Feb 17 20:28:15 crc kubenswrapper[4793]: I0217 20:28:15.947546 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc9lm\" (UniqueName: \"kubernetes.io/projected/4740a37c-5ca4-4230-a803-5622eedf745e-kube-api-access-kc9lm\") pod \"nova-api-db-create-782tr\" (UID: \"4740a37c-5ca4-4230-a803-5622eedf745e\") " pod="openstack/nova-api-db-create-782tr" Feb 17 20:28:15 crc kubenswrapper[4793]: I0217 20:28:15.947583 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4740a37c-5ca4-4230-a803-5622eedf745e-operator-scripts\") pod \"nova-api-db-create-782tr\" (UID: \"4740a37c-5ca4-4230-a803-5622eedf745e\") " pod="openstack/nova-api-db-create-782tr" Feb 17 20:28:15 crc kubenswrapper[4793]: I0217 20:28:15.947627 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/e9514926-d5cf-41ca-aeb2-41444d597e1a-operator-scripts\") pod \"nova-cell0-db-create-m7spg\" (UID: \"e9514926-d5cf-41ca-aeb2-41444d597e1a\") " pod="openstack/nova-cell0-db-create-m7spg" Feb 17 20:28:15 crc kubenswrapper[4793]: I0217 20:28:15.948506 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4740a37c-5ca4-4230-a803-5622eedf745e-operator-scripts\") pod \"nova-api-db-create-782tr\" (UID: \"4740a37c-5ca4-4230-a803-5622eedf745e\") " pod="openstack/nova-api-db-create-782tr" Feb 17 20:28:15 crc kubenswrapper[4793]: I0217 20:28:15.974304 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc9lm\" (UniqueName: \"kubernetes.io/projected/4740a37c-5ca4-4230-a803-5622eedf745e-kube-api-access-kc9lm\") pod \"nova-api-db-create-782tr\" (UID: \"4740a37c-5ca4-4230-a803-5622eedf745e\") " pod="openstack/nova-api-db-create-782tr" Feb 17 20:28:16 crc kubenswrapper[4793]: I0217 20:28:16.017823 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-782tr"
Feb 17 20:28:16 crc kubenswrapper[4793]: I0217 20:28:16.049887 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m8cb\" (UniqueName: \"kubernetes.io/projected/c908ae8c-adac-443d-9073-64b72a90660a-kube-api-access-5m8cb\") pod \"nova-cell1-db-create-fflrc\" (UID: \"c908ae8c-adac-443d-9073-64b72a90660a\") " pod="openstack/nova-cell1-db-create-fflrc"
Feb 17 20:28:16 crc kubenswrapper[4793]: I0217 20:28:16.049963 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4cfg\" (UniqueName: \"kubernetes.io/projected/e9514926-d5cf-41ca-aeb2-41444d597e1a-kube-api-access-z4cfg\") pod \"nova-cell0-db-create-m7spg\" (UID: \"e9514926-d5cf-41ca-aeb2-41444d597e1a\") " pod="openstack/nova-cell0-db-create-m7spg"
Feb 17 20:28:16 crc kubenswrapper[4793]: I0217 20:28:16.050022 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6mjb\" (UniqueName: \"kubernetes.io/projected/3c0ba79e-3947-4453-8d93-c7b4ea51aa86-kube-api-access-x6mjb\") pod \"nova-api-ebb4-account-create-update-qzc2g\" (UID: \"3c0ba79e-3947-4453-8d93-c7b4ea51aa86\") " pod="openstack/nova-api-ebb4-account-create-update-qzc2g"
Feb 17 20:28:16 crc kubenswrapper[4793]: I0217 20:28:16.050063 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c0ba79e-3947-4453-8d93-c7b4ea51aa86-operator-scripts\") pod \"nova-api-ebb4-account-create-update-qzc2g\" (UID: \"3c0ba79e-3947-4453-8d93-c7b4ea51aa86\") " pod="openstack/nova-api-ebb4-account-create-update-qzc2g"
Feb 17 20:28:16 crc kubenswrapper[4793]: I0217 20:28:16.050091 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9514926-d5cf-41ca-aeb2-41444d597e1a-operator-scripts\") pod \"nova-cell0-db-create-m7spg\" (UID: \"e9514926-d5cf-41ca-aeb2-41444d597e1a\") " pod="openstack/nova-cell0-db-create-m7spg"
Feb 17 20:28:16 crc kubenswrapper[4793]: I0217 20:28:16.050123 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c908ae8c-adac-443d-9073-64b72a90660a-operator-scripts\") pod \"nova-cell1-db-create-fflrc\" (UID: \"c908ae8c-adac-443d-9073-64b72a90660a\") " pod="openstack/nova-cell1-db-create-fflrc"
Feb 17 20:28:16 crc kubenswrapper[4793]: I0217 20:28:16.051508 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9514926-d5cf-41ca-aeb2-41444d597e1a-operator-scripts\") pod \"nova-cell0-db-create-m7spg\" (UID: \"e9514926-d5cf-41ca-aeb2-41444d597e1a\") " pod="openstack/nova-cell0-db-create-m7spg"
Feb 17 20:28:16 crc kubenswrapper[4793]: I0217 20:28:16.087480 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4cfg\" (UniqueName: \"kubernetes.io/projected/e9514926-d5cf-41ca-aeb2-41444d597e1a-kube-api-access-z4cfg\") pod \"nova-cell0-db-create-m7spg\" (UID: \"e9514926-d5cf-41ca-aeb2-41444d597e1a\") " pod="openstack/nova-cell0-db-create-m7spg"
Feb 17 20:28:16 crc kubenswrapper[4793]: I0217 20:28:16.115145 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-m7spg"
Feb 17 20:28:16 crc kubenswrapper[4793]: I0217 20:28:16.117066 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-2742-account-create-update-2qvj2"]
Feb 17 20:28:16 crc kubenswrapper[4793]: I0217 20:28:16.118434 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2742-account-create-update-2qvj2"
Feb 17 20:28:16 crc kubenswrapper[4793]: I0217 20:28:16.122563 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Feb 17 20:28:16 crc kubenswrapper[4793]: I0217 20:28:16.129657 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-2742-account-create-update-2qvj2"]
Feb 17 20:28:16 crc kubenswrapper[4793]: I0217 20:28:16.151162 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m8cb\" (UniqueName: \"kubernetes.io/projected/c908ae8c-adac-443d-9073-64b72a90660a-kube-api-access-5m8cb\") pod \"nova-cell1-db-create-fflrc\" (UID: \"c908ae8c-adac-443d-9073-64b72a90660a\") " pod="openstack/nova-cell1-db-create-fflrc"
Feb 17 20:28:16 crc kubenswrapper[4793]: I0217 20:28:16.151241 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6mjb\" (UniqueName: \"kubernetes.io/projected/3c0ba79e-3947-4453-8d93-c7b4ea51aa86-kube-api-access-x6mjb\") pod \"nova-api-ebb4-account-create-update-qzc2g\" (UID: \"3c0ba79e-3947-4453-8d93-c7b4ea51aa86\") " pod="openstack/nova-api-ebb4-account-create-update-qzc2g"
Feb 17 20:28:16 crc kubenswrapper[4793]: I0217 20:28:16.151276 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c0ba79e-3947-4453-8d93-c7b4ea51aa86-operator-scripts\") pod \"nova-api-ebb4-account-create-update-qzc2g\" (UID: \"3c0ba79e-3947-4453-8d93-c7b4ea51aa86\") " pod="openstack/nova-api-ebb4-account-create-update-qzc2g"
Feb 17 20:28:16 crc kubenswrapper[4793]: I0217 20:28:16.151309 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c908ae8c-adac-443d-9073-64b72a90660a-operator-scripts\") pod \"nova-cell1-db-create-fflrc\" (UID: \"c908ae8c-adac-443d-9073-64b72a90660a\") " pod="openstack/nova-cell1-db-create-fflrc"
Feb 17 20:28:16 crc kubenswrapper[4793]: I0217 20:28:16.151990 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c908ae8c-adac-443d-9073-64b72a90660a-operator-scripts\") pod \"nova-cell1-db-create-fflrc\" (UID: \"c908ae8c-adac-443d-9073-64b72a90660a\") " pod="openstack/nova-cell1-db-create-fflrc"
Feb 17 20:28:16 crc kubenswrapper[4793]: I0217 20:28:16.152573 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c0ba79e-3947-4453-8d93-c7b4ea51aa86-operator-scripts\") pod \"nova-api-ebb4-account-create-update-qzc2g\" (UID: \"3c0ba79e-3947-4453-8d93-c7b4ea51aa86\") " pod="openstack/nova-api-ebb4-account-create-update-qzc2g"
Feb 17 20:28:16 crc kubenswrapper[4793]: I0217 20:28:16.169344 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6mjb\" (UniqueName: \"kubernetes.io/projected/3c0ba79e-3947-4453-8d93-c7b4ea51aa86-kube-api-access-x6mjb\") pod \"nova-api-ebb4-account-create-update-qzc2g\" (UID: \"3c0ba79e-3947-4453-8d93-c7b4ea51aa86\") " pod="openstack/nova-api-ebb4-account-create-update-qzc2g"
Feb 17 20:28:16 crc kubenswrapper[4793]: I0217 20:28:16.173323 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m8cb\" (UniqueName: \"kubernetes.io/projected/c908ae8c-adac-443d-9073-64b72a90660a-kube-api-access-5m8cb\") pod \"nova-cell1-db-create-fflrc\" (UID: \"c908ae8c-adac-443d-9073-64b72a90660a\") " pod="openstack/nova-cell1-db-create-fflrc"
Feb 17 20:28:16 crc kubenswrapper[4793]: I0217 20:28:16.233946 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-fflrc"
Feb 17 20:28:16 crc kubenswrapper[4793]: I0217 20:28:16.244000 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ebb4-account-create-update-qzc2g"
Feb 17 20:28:16 crc kubenswrapper[4793]: I0217 20:28:16.252549 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/868c46d3-c502-4bf7-8553-d7ce9b2a466d-operator-scripts\") pod \"nova-cell0-2742-account-create-update-2qvj2\" (UID: \"868c46d3-c502-4bf7-8553-d7ce9b2a466d\") " pod="openstack/nova-cell0-2742-account-create-update-2qvj2"
Feb 17 20:28:16 crc kubenswrapper[4793]: I0217 20:28:16.252626 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sgdv\" (UniqueName: \"kubernetes.io/projected/868c46d3-c502-4bf7-8553-d7ce9b2a466d-kube-api-access-8sgdv\") pod \"nova-cell0-2742-account-create-update-2qvj2\" (UID: \"868c46d3-c502-4bf7-8553-d7ce9b2a466d\") " pod="openstack/nova-cell0-2742-account-create-update-2qvj2"
Feb 17 20:28:16 crc kubenswrapper[4793]: I0217 20:28:16.306189 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-6a73-account-create-update-n7s7r"]
Feb 17 20:28:16 crc kubenswrapper[4793]: I0217 20:28:16.307742 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-6a73-account-create-update-n7s7r"
Feb 17 20:28:16 crc kubenswrapper[4793]: I0217 20:28:16.314515 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Feb 17 20:28:16 crc kubenswrapper[4793]: I0217 20:28:16.329231 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-6a73-account-create-update-n7s7r"]
Feb 17 20:28:16 crc kubenswrapper[4793]: I0217 20:28:16.355013 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/868c46d3-c502-4bf7-8553-d7ce9b2a466d-operator-scripts\") pod \"nova-cell0-2742-account-create-update-2qvj2\" (UID: \"868c46d3-c502-4bf7-8553-d7ce9b2a466d\") " pod="openstack/nova-cell0-2742-account-create-update-2qvj2"
Feb 17 20:28:16 crc kubenswrapper[4793]: I0217 20:28:16.355137 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sgdv\" (UniqueName: \"kubernetes.io/projected/868c46d3-c502-4bf7-8553-d7ce9b2a466d-kube-api-access-8sgdv\") pod \"nova-cell0-2742-account-create-update-2qvj2\" (UID: \"868c46d3-c502-4bf7-8553-d7ce9b2a466d\") " pod="openstack/nova-cell0-2742-account-create-update-2qvj2"
Feb 17 20:28:16 crc kubenswrapper[4793]: I0217 20:28:16.355862 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/868c46d3-c502-4bf7-8553-d7ce9b2a466d-operator-scripts\") pod \"nova-cell0-2742-account-create-update-2qvj2\" (UID: \"868c46d3-c502-4bf7-8553-d7ce9b2a466d\") " pod="openstack/nova-cell0-2742-account-create-update-2qvj2"
Feb 17 20:28:16 crc kubenswrapper[4793]: I0217 20:28:16.389308 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sgdv\" (UniqueName: \"kubernetes.io/projected/868c46d3-c502-4bf7-8553-d7ce9b2a466d-kube-api-access-8sgdv\") pod \"nova-cell0-2742-account-create-update-2qvj2\" (UID: \"868c46d3-c502-4bf7-8553-d7ce9b2a466d\") " pod="openstack/nova-cell0-2742-account-create-update-2qvj2"
Feb 17 20:28:16 crc kubenswrapper[4793]: I0217 20:28:16.450148 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2742-account-create-update-2qvj2"
Feb 17 20:28:16 crc kubenswrapper[4793]: I0217 20:28:16.457967 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad590bda-099d-49c9-9eb3-a21c732af87c-operator-scripts\") pod \"nova-cell1-6a73-account-create-update-n7s7r\" (UID: \"ad590bda-099d-49c9-9eb3-a21c732af87c\") " pod="openstack/nova-cell1-6a73-account-create-update-n7s7r"
Feb 17 20:28:16 crc kubenswrapper[4793]: I0217 20:28:16.458029 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnlnk\" (UniqueName: \"kubernetes.io/projected/ad590bda-099d-49c9-9eb3-a21c732af87c-kube-api-access-rnlnk\") pod \"nova-cell1-6a73-account-create-update-n7s7r\" (UID: \"ad590bda-099d-49c9-9eb3-a21c732af87c\") " pod="openstack/nova-cell1-6a73-account-create-update-n7s7r"
Feb 17 20:28:16 crc kubenswrapper[4793]: I0217 20:28:16.560452 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad590bda-099d-49c9-9eb3-a21c732af87c-operator-scripts\") pod \"nova-cell1-6a73-account-create-update-n7s7r\" (UID: \"ad590bda-099d-49c9-9eb3-a21c732af87c\") " pod="openstack/nova-cell1-6a73-account-create-update-n7s7r"
Feb 17 20:28:16 crc kubenswrapper[4793]: I0217 20:28:16.560497 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnlnk\" (UniqueName: \"kubernetes.io/projected/ad590bda-099d-49c9-9eb3-a21c732af87c-kube-api-access-rnlnk\") pod \"nova-cell1-6a73-account-create-update-n7s7r\" (UID: \"ad590bda-099d-49c9-9eb3-a21c732af87c\") " pod="openstack/nova-cell1-6a73-account-create-update-n7s7r"
Feb 17 20:28:16 crc kubenswrapper[4793]: I0217 20:28:16.561379 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad590bda-099d-49c9-9eb3-a21c732af87c-operator-scripts\") pod \"nova-cell1-6a73-account-create-update-n7s7r\" (UID: \"ad590bda-099d-49c9-9eb3-a21c732af87c\") " pod="openstack/nova-cell1-6a73-account-create-update-n7s7r"
Feb 17 20:28:16 crc kubenswrapper[4793]: I0217 20:28:16.575210 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnlnk\" (UniqueName: \"kubernetes.io/projected/ad590bda-099d-49c9-9eb3-a21c732af87c-kube-api-access-rnlnk\") pod \"nova-cell1-6a73-account-create-update-n7s7r\" (UID: \"ad590bda-099d-49c9-9eb3-a21c732af87c\") " pod="openstack/nova-cell1-6a73-account-create-update-n7s7r"
Feb 17 20:28:16 crc kubenswrapper[4793]: I0217 20:28:16.637965 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-6a73-account-create-update-n7s7r"
Feb 17 20:28:17 crc kubenswrapper[4793]: I0217 20:28:17.158356 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6d8998fd7c-xvl9z"
Feb 17 20:28:17 crc kubenswrapper[4793]: I0217 20:28:17.245770 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7fdcdb7d8d-qdv8l"]
Feb 17 20:28:17 crc kubenswrapper[4793]: I0217 20:28:17.246155 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7fdcdb7d8d-qdv8l" podUID="b87a7c00-0efd-4456-bd50-c41a6d909aca" containerName="neutron-api" containerID="cri-o://0afeb54f9642c4dcc9db837e9576832f0d6ec4c6da7fb65b67818b8115fba470" gracePeriod=30
Feb 17 20:28:17 crc kubenswrapper[4793]: I0217 20:28:17.246201 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7fdcdb7d8d-qdv8l" podUID="b87a7c00-0efd-4456-bd50-c41a6d909aca" containerName="neutron-httpd" containerID="cri-o://6cec9376237b5c03ebb426d0660b75d39712d41eec6bb5f1320e78a5048f2ad2" gracePeriod=30
Feb 17 20:28:18 crc kubenswrapper[4793]: I0217 20:28:18.234064 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dad05b98-c67b-44c9-bfff-4d0b39ad47b4","Type":"ContainerDied","Data":"5912e1925a3d88771fb7d9bbf18f20dadf9cf5a29df3363f9b8140dcb8f0dde9"}
Feb 17 20:28:18 crc kubenswrapper[4793]: I0217 20:28:18.234581 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5912e1925a3d88771fb7d9bbf18f20dadf9cf5a29df3363f9b8140dcb8f0dde9"
Feb 17 20:28:18 crc kubenswrapper[4793]: I0217 20:28:18.258650 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 20:28:18 crc kubenswrapper[4793]: I0217 20:28:18.263426 4793 generic.go:334] "Generic (PLEG): container finished" podID="b87a7c00-0efd-4456-bd50-c41a6d909aca" containerID="6cec9376237b5c03ebb426d0660b75d39712d41eec6bb5f1320e78a5048f2ad2" exitCode=0
Feb 17 20:28:18 crc kubenswrapper[4793]: I0217 20:28:18.263471 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7fdcdb7d8d-qdv8l" event={"ID":"b87a7c00-0efd-4456-bd50-c41a6d909aca","Type":"ContainerDied","Data":"6cec9376237b5c03ebb426d0660b75d39712d41eec6bb5f1320e78a5048f2ad2"}
Feb 17 20:28:18 crc kubenswrapper[4793]: I0217 20:28:18.317869 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnj9x\" (UniqueName: \"kubernetes.io/projected/dad05b98-c67b-44c9-bfff-4d0b39ad47b4-kube-api-access-rnj9x\") pod \"dad05b98-c67b-44c9-bfff-4d0b39ad47b4\" (UID: \"dad05b98-c67b-44c9-bfff-4d0b39ad47b4\") "
Feb 17 20:28:18 crc kubenswrapper[4793]: I0217 20:28:18.318200 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dad05b98-c67b-44c9-bfff-4d0b39ad47b4-combined-ca-bundle\") pod \"dad05b98-c67b-44c9-bfff-4d0b39ad47b4\" (UID: \"dad05b98-c67b-44c9-bfff-4d0b39ad47b4\") "
Feb 17 20:28:18 crc kubenswrapper[4793]: I0217 20:28:18.318228 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dad05b98-c67b-44c9-bfff-4d0b39ad47b4-config-data\") pod \"dad05b98-c67b-44c9-bfff-4d0b39ad47b4\" (UID: \"dad05b98-c67b-44c9-bfff-4d0b39ad47b4\") "
Feb 17 20:28:18 crc kubenswrapper[4793]: I0217 20:28:18.318307 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dad05b98-c67b-44c9-bfff-4d0b39ad47b4-scripts\") pod \"dad05b98-c67b-44c9-bfff-4d0b39ad47b4\" (UID: \"dad05b98-c67b-44c9-bfff-4d0b39ad47b4\") "
Feb 17 20:28:18 crc kubenswrapper[4793]: I0217 20:28:18.318330 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dad05b98-c67b-44c9-bfff-4d0b39ad47b4-sg-core-conf-yaml\") pod \"dad05b98-c67b-44c9-bfff-4d0b39ad47b4\" (UID: \"dad05b98-c67b-44c9-bfff-4d0b39ad47b4\") "
Feb 17 20:28:18 crc kubenswrapper[4793]: I0217 20:28:18.318515 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dad05b98-c67b-44c9-bfff-4d0b39ad47b4-log-httpd\") pod \"dad05b98-c67b-44c9-bfff-4d0b39ad47b4\" (UID: \"dad05b98-c67b-44c9-bfff-4d0b39ad47b4\") "
Feb 17 20:28:18 crc kubenswrapper[4793]: I0217 20:28:18.318549 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dad05b98-c67b-44c9-bfff-4d0b39ad47b4-run-httpd\") pod \"dad05b98-c67b-44c9-bfff-4d0b39ad47b4\" (UID: \"dad05b98-c67b-44c9-bfff-4d0b39ad47b4\") "
Feb 17 20:28:18 crc kubenswrapper[4793]: I0217 20:28:18.319298 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dad05b98-c67b-44c9-bfff-4d0b39ad47b4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "dad05b98-c67b-44c9-bfff-4d0b39ad47b4" (UID: "dad05b98-c67b-44c9-bfff-4d0b39ad47b4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 20:28:18 crc kubenswrapper[4793]: I0217 20:28:18.327054 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dad05b98-c67b-44c9-bfff-4d0b39ad47b4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "dad05b98-c67b-44c9-bfff-4d0b39ad47b4" (UID: "dad05b98-c67b-44c9-bfff-4d0b39ad47b4"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 20:28:18 crc kubenswrapper[4793]: I0217 20:28:18.330075 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dad05b98-c67b-44c9-bfff-4d0b39ad47b4-scripts" (OuterVolumeSpecName: "scripts") pod "dad05b98-c67b-44c9-bfff-4d0b39ad47b4" (UID: "dad05b98-c67b-44c9-bfff-4d0b39ad47b4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 20:28:18 crc kubenswrapper[4793]: I0217 20:28:18.330178 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dad05b98-c67b-44c9-bfff-4d0b39ad47b4-kube-api-access-rnj9x" (OuterVolumeSpecName: "kube-api-access-rnj9x") pod "dad05b98-c67b-44c9-bfff-4d0b39ad47b4" (UID: "dad05b98-c67b-44c9-bfff-4d0b39ad47b4"). InnerVolumeSpecName "kube-api-access-rnj9x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 20:28:18 crc kubenswrapper[4793]: I0217 20:28:18.362842 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dad05b98-c67b-44c9-bfff-4d0b39ad47b4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "dad05b98-c67b-44c9-bfff-4d0b39ad47b4" (UID: "dad05b98-c67b-44c9-bfff-4d0b39ad47b4"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 20:28:18 crc kubenswrapper[4793]: I0217 20:28:18.420481 4793 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dad05b98-c67b-44c9-bfff-4d0b39ad47b4-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 17 20:28:18 crc kubenswrapper[4793]: I0217 20:28:18.420505 4793 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dad05b98-c67b-44c9-bfff-4d0b39ad47b4-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 17 20:28:18 crc kubenswrapper[4793]: I0217 20:28:18.420515 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnj9x\" (UniqueName: \"kubernetes.io/projected/dad05b98-c67b-44c9-bfff-4d0b39ad47b4-kube-api-access-rnj9x\") on node \"crc\" DevicePath \"\""
Feb 17 20:28:18 crc kubenswrapper[4793]: I0217 20:28:18.420525 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dad05b98-c67b-44c9-bfff-4d0b39ad47b4-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 20:28:18 crc kubenswrapper[4793]: I0217 20:28:18.420534 4793 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dad05b98-c67b-44c9-bfff-4d0b39ad47b4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 17 20:28:18 crc kubenswrapper[4793]: I0217 20:28:18.425614 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dad05b98-c67b-44c9-bfff-4d0b39ad47b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dad05b98-c67b-44c9-bfff-4d0b39ad47b4" (UID: "dad05b98-c67b-44c9-bfff-4d0b39ad47b4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 20:28:18 crc kubenswrapper[4793]: I0217 20:28:18.441989 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-6a73-account-create-update-n7s7r"]
Feb 17 20:28:18 crc kubenswrapper[4793]: I0217 20:28:18.523163 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dad05b98-c67b-44c9-bfff-4d0b39ad47b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 20:28:18 crc kubenswrapper[4793]: I0217 20:28:18.545652 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dad05b98-c67b-44c9-bfff-4d0b39ad47b4-config-data" (OuterVolumeSpecName: "config-data") pod "dad05b98-c67b-44c9-bfff-4d0b39ad47b4" (UID: "dad05b98-c67b-44c9-bfff-4d0b39ad47b4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 20:28:18 crc kubenswrapper[4793]: I0217 20:28:18.625282 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dad05b98-c67b-44c9-bfff-4d0b39ad47b4-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 20:28:18 crc kubenswrapper[4793]: I0217 20:28:18.644466 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-782tr"]
Feb 17 20:28:18 crc kubenswrapper[4793]: I0217 20:28:18.758998 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-fflrc"]
Feb 17 20:28:18 crc kubenswrapper[4793]: I0217 20:28:18.795965 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-2742-account-create-update-2qvj2"]
Feb 17 20:28:18 crc kubenswrapper[4793]: W0217 20:28:18.886679 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc908ae8c_adac_443d_9073_64b72a90660a.slice/crio-5d95351cfdfbdc8b0a3cc9c86464ba4f354bbdd57791fa9d176bf167a3451af0 WatchSource:0}: Error finding container 5d95351cfdfbdc8b0a3cc9c86464ba4f354bbdd57791fa9d176bf167a3451af0: Status 404 returned error can't find the container with id 5d95351cfdfbdc8b0a3cc9c86464ba4f354bbdd57791fa9d176bf167a3451af0
Feb 17 20:28:18 crc kubenswrapper[4793]: I0217 20:28:18.934151 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-m7spg"]
Feb 17 20:28:18 crc kubenswrapper[4793]: I0217 20:28:18.957715 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-ebb4-account-create-update-qzc2g"]
Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.282511 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-m7spg" event={"ID":"e9514926-d5cf-41ca-aeb2-41444d597e1a","Type":"ContainerStarted","Data":"fc09ba88fe9392f8c460244d699d981035859c25ad4c5d4ebd82603c31848551"}
Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.287591 4793 generic.go:334] "Generic (PLEG): container finished" podID="ad590bda-099d-49c9-9eb3-a21c732af87c" containerID="73e5afb6aa24b29336bd47851d87446bfb319209439940e0861f142e2d1d4837" exitCode=0
Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.287726 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6a73-account-create-update-n7s7r" event={"ID":"ad590bda-099d-49c9-9eb3-a21c732af87c","Type":"ContainerDied","Data":"73e5afb6aa24b29336bd47851d87446bfb319209439940e0861f142e2d1d4837"}
Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.287779 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6a73-account-create-update-n7s7r" event={"ID":"ad590bda-099d-49c9-9eb3-a21c732af87c","Type":"ContainerStarted","Data":"e49f0dbc1c18f0e8b0d648b6a620760cb3b26f8383eacdf20772d12e4001057f"}
Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.300214 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"06ecbc8e-aa9f-4025-883d-65e4c000d986","Type":"ContainerStarted","Data":"8642250fd6494650238f39fe809ae38a4813c6fb101ece6c7d92ea9166b43afe"}
Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.312307 4793 generic.go:334] "Generic (PLEG): container finished" podID="b87a7c00-0efd-4456-bd50-c41a6d909aca" containerID="0afeb54f9642c4dcc9db837e9576832f0d6ec4c6da7fb65b67818b8115fba470" exitCode=0
Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.312400 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7fdcdb7d8d-qdv8l" event={"ID":"b87a7c00-0efd-4456-bd50-c41a6d909aca","Type":"ContainerDied","Data":"0afeb54f9642c4dcc9db837e9576832f0d6ec4c6da7fb65b67818b8115fba470"}
Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.314945 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2742-account-create-update-2qvj2" event={"ID":"868c46d3-c502-4bf7-8553-d7ce9b2a466d","Type":"ContainerStarted","Data":"f01bda1fbeb85f372a4528751c19e62c9e91d23bc14e10fd226aba5cec512412"}
Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.316926 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-fflrc" event={"ID":"c908ae8c-adac-443d-9073-64b72a90660a","Type":"ContainerStarted","Data":"5d95351cfdfbdc8b0a3cc9c86464ba4f354bbdd57791fa9d176bf167a3451af0"}
Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.318174 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ebb4-account-create-update-qzc2g" event={"ID":"3c0ba79e-3947-4453-8d93-c7b4ea51aa86","Type":"ContainerStarted","Data":"7c4740a5d306ed635c262e8e772c8f32d0dee87679bcd88d8ac9a34f91f24faf"}
Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.319478 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-782tr" event={"ID":"4740a37c-5ca4-4230-a803-5622eedf745e","Type":"ContainerStarted","Data":"3e64ce5727213b76b7b03f4c76bbb1eeb249f2558a1211d86bfe179f8c2a46ff"}
Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.319502 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-782tr" event={"ID":"4740a37c-5ca4-4230-a803-5622eedf745e","Type":"ContainerStarted","Data":"4684228c09493749de42e147ec65863302ff8676b71e4973b65675f7efe7536e"}
Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.320663 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.321633 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"3b2bcbf9-1e8c-4593-b391-745b3dbd2ca4","Type":"ContainerStarted","Data":"f61a4dc20ce57da3d247f5b06a1ba9bb8025c0c14403e9bad9f59d9b0c4dddbf"}
Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.389272 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.15873644 podStartE2EDuration="14.389217988s" podCreationTimestamp="2026-02-17 20:28:05 +0000 UTC" firstStartedPulling="2026-02-17 20:28:06.747544543 +0000 UTC m=+1162.039242844" lastFinishedPulling="2026-02-17 20:28:17.978026081 +0000 UTC m=+1173.269724392" observedRunningTime="2026-02-17 20:28:19.375642781 +0000 UTC m=+1174.667341102" watchObservedRunningTime="2026-02-17 20:28:19.389217988 +0000 UTC m=+1174.680916309"
Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.536175 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.536308 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7fdcdb7d8d-qdv8l"
Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.586523 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.586924 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 17 20:28:19 crc kubenswrapper[4793]: E0217 20:28:19.587317 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dad05b98-c67b-44c9-bfff-4d0b39ad47b4" containerName="proxy-httpd"
Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.587336 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="dad05b98-c67b-44c9-bfff-4d0b39ad47b4" containerName="proxy-httpd"
Feb 17 20:28:19 crc kubenswrapper[4793]: E0217 20:28:19.587367 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dad05b98-c67b-44c9-bfff-4d0b39ad47b4" containerName="ceilometer-central-agent"
Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.587376 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="dad05b98-c67b-44c9-bfff-4d0b39ad47b4" containerName="ceilometer-central-agent"
Feb 17 20:28:19 crc kubenswrapper[4793]: E0217 20:28:19.587397 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dad05b98-c67b-44c9-bfff-4d0b39ad47b4" containerName="sg-core"
Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.587404 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="dad05b98-c67b-44c9-bfff-4d0b39ad47b4" containerName="sg-core"
Feb 17 20:28:19 crc kubenswrapper[4793]: E0217 20:28:19.587422 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b87a7c00-0efd-4456-bd50-c41a6d909aca" containerName="neutron-httpd"
Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.587429 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="b87a7c00-0efd-4456-bd50-c41a6d909aca" containerName="neutron-httpd"
Feb 17 20:28:19 crc kubenswrapper[4793]: E0217 20:28:19.587442 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b87a7c00-0efd-4456-bd50-c41a6d909aca" containerName="neutron-api"
Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.587449 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="b87a7c00-0efd-4456-bd50-c41a6d909aca" containerName="neutron-api"
Feb 17 20:28:19 crc kubenswrapper[4793]: E0217 20:28:19.587461 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dad05b98-c67b-44c9-bfff-4d0b39ad47b4" containerName="ceilometer-notification-agent"
Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.587469 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="dad05b98-c67b-44c9-bfff-4d0b39ad47b4" containerName="ceilometer-notification-agent"
Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.587724 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="dad05b98-c67b-44c9-bfff-4d0b39ad47b4" containerName="ceilometer-central-agent"
Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.587748 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="dad05b98-c67b-44c9-bfff-4d0b39ad47b4" containerName="sg-core"
Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.587766 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="dad05b98-c67b-44c9-bfff-4d0b39ad47b4" containerName="proxy-httpd"
Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.587777 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="dad05b98-c67b-44c9-bfff-4d0b39ad47b4" containerName="ceilometer-notification-agent"
Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.587794 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="b87a7c00-0efd-4456-bd50-c41a6d909aca" containerName="neutron-api"
Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.587808 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="b87a7c00-0efd-4456-bd50-c41a6d909aca" containerName="neutron-httpd"
Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.589977 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.591266 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.592723 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.592832 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.655884 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b87a7c00-0efd-4456-bd50-c41a6d909aca-combined-ca-bundle\") pod \"b87a7c00-0efd-4456-bd50-c41a6d909aca\" (UID: \"b87a7c00-0efd-4456-bd50-c41a6d909aca\") "
Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.656166 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b87a7c00-0efd-4456-bd50-c41a6d909aca-httpd-config\") pod \"b87a7c00-0efd-4456-bd50-c41a6d909aca\" (UID: \"b87a7c00-0efd-4456-bd50-c41a6d909aca\") "
Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.656320 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b87a7c00-0efd-4456-bd50-c41a6d909aca-config\") pod \"b87a7c00-0efd-4456-bd50-c41a6d909aca\" (UID: \"b87a7c00-0efd-4456-bd50-c41a6d909aca\") "
Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.656416 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66nmd\" (UniqueName: \"kubernetes.io/projected/b87a7c00-0efd-4456-bd50-c41a6d909aca-kube-api-access-66nmd\") pod \"b87a7c00-0efd-4456-bd50-c41a6d909aca\" (UID: \"b87a7c00-0efd-4456-bd50-c41a6d909aca\") "
Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.656505 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b87a7c00-0efd-4456-bd50-c41a6d909aca-ovndb-tls-certs\") pod \"b87a7c00-0efd-4456-bd50-c41a6d909aca\" (UID: \"b87a7c00-0efd-4456-bd50-c41a6d909aca\") "
Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.657584 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm2xx\" (UniqueName: \"kubernetes.io/projected/17e5194d-9dcc-4a15-a67d-3a4a57d60c79-kube-api-access-dm2xx\") pod \"ceilometer-0\" (UID: \"17e5194d-9dcc-4a15-a67d-3a4a57d60c79\") " pod="openstack/ceilometer-0"
Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.657621 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17e5194d-9dcc-4a15-a67d-3a4a57d60c79-config-data\") pod \"ceilometer-0\" (UID: \"17e5194d-9dcc-4a15-a67d-3a4a57d60c79\") " pod="openstack/ceilometer-0"
Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.657723 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17e5194d-9dcc-4a15-a67d-3a4a57d60c79-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"17e5194d-9dcc-4a15-a67d-3a4a57d60c79\") " pod="openstack/ceilometer-0"
Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.657854 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/17e5194d-9dcc-4a15-a67d-3a4a57d60c79-log-httpd\") pod \"ceilometer-0\" (UID: \"17e5194d-9dcc-4a15-a67d-3a4a57d60c79\") " pod="openstack/ceilometer-0"
Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.657879 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/17e5194d-9dcc-4a15-a67d-3a4a57d60c79-run-httpd\") pod \"ceilometer-0\" (UID: \"17e5194d-9dcc-4a15-a67d-3a4a57d60c79\") " pod="openstack/ceilometer-0"
Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.657968 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/17e5194d-9dcc-4a15-a67d-3a4a57d60c79-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"17e5194d-9dcc-4a15-a67d-3a4a57d60c79\") " pod="openstack/ceilometer-0"
Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.658064 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17e5194d-9dcc-4a15-a67d-3a4a57d60c79-scripts\") pod \"ceilometer-0\" (UID: \"17e5194d-9dcc-4a15-a67d-3a4a57d60c79\") " pod="openstack/ceilometer-0"
Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.670571 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b87a7c00-0efd-4456-bd50-c41a6d909aca-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "b87a7c00-0efd-4456-bd50-c41a6d909aca" (UID: "b87a7c00-0efd-4456-bd50-c41a6d909aca"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.670663 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-75d6fd885d-fw6ln"
Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.687910 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b87a7c00-0efd-4456-bd50-c41a6d909aca-kube-api-access-66nmd" (OuterVolumeSpecName: "kube-api-access-66nmd") pod "b87a7c00-0efd-4456-bd50-c41a6d909aca" (UID: "b87a7c00-0efd-4456-bd50-c41a6d909aca").
InnerVolumeSpecName "kube-api-access-66nmd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.758820 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf-horizon-tls-certs\") pod \"6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf\" (UID: \"6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf\") " Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.758998 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjmtm\" (UniqueName: \"kubernetes.io/projected/6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf-kube-api-access-gjmtm\") pod \"6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf\" (UID: \"6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf\") " Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.759045 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf-scripts\") pod \"6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf\" (UID: \"6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf\") " Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.759091 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf-combined-ca-bundle\") pod \"6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf\" (UID: \"6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf\") " Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.759110 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf-logs\") pod \"6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf\" (UID: \"6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf\") " Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.759172 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf-horizon-secret-key\") pod \"6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf\" (UID: \"6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf\") " Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.759247 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf-config-data\") pod \"6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf\" (UID: \"6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf\") " Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.759546 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17e5194d-9dcc-4a15-a67d-3a4a57d60c79-scripts\") pod \"ceilometer-0\" (UID: \"17e5194d-9dcc-4a15-a67d-3a4a57d60c79\") " pod="openstack/ceilometer-0" Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.759636 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dm2xx\" (UniqueName: \"kubernetes.io/projected/17e5194d-9dcc-4a15-a67d-3a4a57d60c79-kube-api-access-dm2xx\") pod \"ceilometer-0\" (UID: \"17e5194d-9dcc-4a15-a67d-3a4a57d60c79\") " pod="openstack/ceilometer-0" Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.759659 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17e5194d-9dcc-4a15-a67d-3a4a57d60c79-config-data\") pod \"ceilometer-0\" (UID: \"17e5194d-9dcc-4a15-a67d-3a4a57d60c79\") " pod="openstack/ceilometer-0" Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.759700 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17e5194d-9dcc-4a15-a67d-3a4a57d60c79-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"17e5194d-9dcc-4a15-a67d-3a4a57d60c79\") " pod="openstack/ceilometer-0" Feb 17 
20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.759737 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/17e5194d-9dcc-4a15-a67d-3a4a57d60c79-log-httpd\") pod \"ceilometer-0\" (UID: \"17e5194d-9dcc-4a15-a67d-3a4a57d60c79\") " pod="openstack/ceilometer-0" Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.759758 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/17e5194d-9dcc-4a15-a67d-3a4a57d60c79-run-httpd\") pod \"ceilometer-0\" (UID: \"17e5194d-9dcc-4a15-a67d-3a4a57d60c79\") " pod="openstack/ceilometer-0" Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.759803 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/17e5194d-9dcc-4a15-a67d-3a4a57d60c79-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"17e5194d-9dcc-4a15-a67d-3a4a57d60c79\") " pod="openstack/ceilometer-0" Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.759847 4793 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b87a7c00-0efd-4456-bd50-c41a6d909aca-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.759858 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66nmd\" (UniqueName: \"kubernetes.io/projected/b87a7c00-0efd-4456-bd50-c41a6d909aca-kube-api-access-66nmd\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.763349 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf-logs" (OuterVolumeSpecName: "logs") pod "6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf" (UID: "6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.765404 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/17e5194d-9dcc-4a15-a67d-3a4a57d60c79-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"17e5194d-9dcc-4a15-a67d-3a4a57d60c79\") " pod="openstack/ceilometer-0" Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.765674 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/17e5194d-9dcc-4a15-a67d-3a4a57d60c79-log-httpd\") pod \"ceilometer-0\" (UID: \"17e5194d-9dcc-4a15-a67d-3a4a57d60c79\") " pod="openstack/ceilometer-0" Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.766619 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/17e5194d-9dcc-4a15-a67d-3a4a57d60c79-run-httpd\") pod \"ceilometer-0\" (UID: \"17e5194d-9dcc-4a15-a67d-3a4a57d60c79\") " pod="openstack/ceilometer-0" Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.768292 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf-kube-api-access-gjmtm" (OuterVolumeSpecName: "kube-api-access-gjmtm") pod "6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf" (UID: "6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf"). InnerVolumeSpecName "kube-api-access-gjmtm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.777860 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b87a7c00-0efd-4456-bd50-c41a6d909aca-config" (OuterVolumeSpecName: "config") pod "b87a7c00-0efd-4456-bd50-c41a6d909aca" (UID: "b87a7c00-0efd-4456-bd50-c41a6d909aca"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.787361 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf" (UID: "6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.787843 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dm2xx\" (UniqueName: \"kubernetes.io/projected/17e5194d-9dcc-4a15-a67d-3a4a57d60c79-kube-api-access-dm2xx\") pod \"ceilometer-0\" (UID: \"17e5194d-9dcc-4a15-a67d-3a4a57d60c79\") " pod="openstack/ceilometer-0" Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.788204 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17e5194d-9dcc-4a15-a67d-3a4a57d60c79-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"17e5194d-9dcc-4a15-a67d-3a4a57d60c79\") " pod="openstack/ceilometer-0" Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.791793 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17e5194d-9dcc-4a15-a67d-3a4a57d60c79-config-data\") pod \"ceilometer-0\" (UID: \"17e5194d-9dcc-4a15-a67d-3a4a57d60c79\") " pod="openstack/ceilometer-0" Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.802673 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17e5194d-9dcc-4a15-a67d-3a4a57d60c79-scripts\") pod \"ceilometer-0\" (UID: \"17e5194d-9dcc-4a15-a67d-3a4a57d60c79\") " pod="openstack/ceilometer-0" Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.821322 4793 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b87a7c00-0efd-4456-bd50-c41a6d909aca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b87a7c00-0efd-4456-bd50-c41a6d909aca" (UID: "b87a7c00-0efd-4456-bd50-c41a6d909aca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.830058 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf-scripts" (OuterVolumeSpecName: "scripts") pod "6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf" (UID: "6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.841919 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b87a7c00-0efd-4456-bd50-c41a6d909aca-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "b87a7c00-0efd-4456-bd50-c41a6d909aca" (UID: "b87a7c00-0efd-4456-bd50-c41a6d909aca"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.842027 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf" (UID: "6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.842565 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf-config-data" (OuterVolumeSpecName: "config-data") pod "6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf" (UID: "6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.861512 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b87a7c00-0efd-4456-bd50-c41a6d909aca-config\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.861681 4793 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b87a7c00-0efd-4456-bd50-c41a6d909aca-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.861759 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjmtm\" (UniqueName: \"kubernetes.io/projected/6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf-kube-api-access-gjmtm\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.861814 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.861875 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b87a7c00-0efd-4456-bd50-c41a6d909aca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.862138 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.862194 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf-logs\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.862252 4793 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.862304 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.861813 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf" (UID: "6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.931650 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 20:28:19 crc kubenswrapper[4793]: I0217 20:28:19.966796 4793 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:20 crc kubenswrapper[4793]: I0217 20:28:20.226374 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6f79fc46bf-72hgv" Feb 17 20:28:20 crc kubenswrapper[4793]: I0217 20:28:20.233034 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6f79fc46bf-72hgv" Feb 17 20:28:20 crc kubenswrapper[4793]: I0217 20:28:20.341109 4793 generic.go:334] "Generic (PLEG): container finished" podID="4740a37c-5ca4-4230-a803-5622eedf745e" containerID="3e64ce5727213b76b7b03f4c76bbb1eeb249f2558a1211d86bfe179f8c2a46ff" exitCode=0 Feb 17 20:28:20 crc kubenswrapper[4793]: I0217 20:28:20.341296 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-782tr" event={"ID":"4740a37c-5ca4-4230-a803-5622eedf745e","Type":"ContainerDied","Data":"3e64ce5727213b76b7b03f4c76bbb1eeb249f2558a1211d86bfe179f8c2a46ff"} Feb 17 20:28:20 crc kubenswrapper[4793]: I0217 20:28:20.346890 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-m7spg" event={"ID":"e9514926-d5cf-41ca-aeb2-41444d597e1a","Type":"ContainerStarted","Data":"3b5d0d7b807126fec419fe119b026a07c8076d0243752fd2da322bf17f0a87e6"} Feb 17 20:28:20 crc kubenswrapper[4793]: I0217 20:28:20.362996 4793 generic.go:334] "Generic (PLEG): container finished" podID="868c46d3-c502-4bf7-8553-d7ce9b2a466d" containerID="668d841ab9833fb36f0ef65dec516b0c70e08667e3daeb7d648c893349262a95" exitCode=0 Feb 17 20:28:20 crc kubenswrapper[4793]: I0217 20:28:20.363194 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2742-account-create-update-2qvj2" 
event={"ID":"868c46d3-c502-4bf7-8553-d7ce9b2a466d","Type":"ContainerDied","Data":"668d841ab9833fb36f0ef65dec516b0c70e08667e3daeb7d648c893349262a95"} Feb 17 20:28:20 crc kubenswrapper[4793]: I0217 20:28:20.371836 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-m7spg" podStartSLOduration=5.371812375 podStartE2EDuration="5.371812375s" podCreationTimestamp="2026-02-17 20:28:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:28:20.363206761 +0000 UTC m=+1175.654905082" watchObservedRunningTime="2026-02-17 20:28:20.371812375 +0000 UTC m=+1175.663510686" Feb 17 20:28:20 crc kubenswrapper[4793]: I0217 20:28:20.374445 4793 generic.go:334] "Generic (PLEG): container finished" podID="6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf" containerID="6bee6c7fb7d7d5d31d8eb33b06a65d5d8f2ee8e6a9f85944f7f5f5ad2a282a4a" exitCode=137 Feb 17 20:28:20 crc kubenswrapper[4793]: I0217 20:28:20.374508 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75d6fd885d-fw6ln" event={"ID":"6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf","Type":"ContainerDied","Data":"6bee6c7fb7d7d5d31d8eb33b06a65d5d8f2ee8e6a9f85944f7f5f5ad2a282a4a"} Feb 17 20:28:20 crc kubenswrapper[4793]: I0217 20:28:20.374539 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75d6fd885d-fw6ln" event={"ID":"6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf","Type":"ContainerDied","Data":"fbb2842ea9bd9aeb93cce6bdb3e96c392445be3eaee8deb4710c1b852e937f3d"} Feb 17 20:28:20 crc kubenswrapper[4793]: I0217 20:28:20.374559 4793 scope.go:117] "RemoveContainer" containerID="3e010c26342664c3558b2e0e211867c9c895e91944d0dd2d5c0257fc128d37e1" Feb 17 20:28:20 crc kubenswrapper[4793]: I0217 20:28:20.374720 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-75d6fd885d-fw6ln" Feb 17 20:28:20 crc kubenswrapper[4793]: I0217 20:28:20.388273 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7fdcdb7d8d-qdv8l" event={"ID":"b87a7c00-0efd-4456-bd50-c41a6d909aca","Type":"ContainerDied","Data":"d2905e783c7681f5e914abc4da3a0f2a77db3c0041fd38335e2af4773d5072ca"} Feb 17 20:28:20 crc kubenswrapper[4793]: I0217 20:28:20.388356 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7fdcdb7d8d-qdv8l" Feb 17 20:28:20 crc kubenswrapper[4793]: I0217 20:28:20.393356 4793 generic.go:334] "Generic (PLEG): container finished" podID="c908ae8c-adac-443d-9073-64b72a90660a" containerID="cc59c70bbc9c1e26e1511ed56b5293d1116d33d1d4f8b5355a0d6797e2d9905e" exitCode=0 Feb 17 20:28:20 crc kubenswrapper[4793]: I0217 20:28:20.393412 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-fflrc" event={"ID":"c908ae8c-adac-443d-9073-64b72a90660a","Type":"ContainerDied","Data":"cc59c70bbc9c1e26e1511ed56b5293d1116d33d1d4f8b5355a0d6797e2d9905e"} Feb 17 20:28:20 crc kubenswrapper[4793]: I0217 20:28:20.395380 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ebb4-account-create-update-qzc2g" event={"ID":"3c0ba79e-3947-4453-8d93-c7b4ea51aa86","Type":"ContainerStarted","Data":"31516eea28ddc1c5dc4a9f0a6269c69472e2a0e31595921e743d213004f1a261"} Feb 17 20:28:20 crc kubenswrapper[4793]: I0217 20:28:20.480830 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7fdcdb7d8d-qdv8l"] Feb 17 20:28:20 crc kubenswrapper[4793]: I0217 20:28:20.506942 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7fdcdb7d8d-qdv8l"] Feb 17 20:28:20 crc kubenswrapper[4793]: I0217 20:28:20.569139 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-75d6fd885d-fw6ln"] Feb 17 20:28:20 crc kubenswrapper[4793]: I0217 20:28:20.579554 
4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-75d6fd885d-fw6ln"] Feb 17 20:28:20 crc kubenswrapper[4793]: I0217 20:28:20.594206 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 20:28:20 crc kubenswrapper[4793]: I0217 20:28:20.639082 4793 scope.go:117] "RemoveContainer" containerID="6bee6c7fb7d7d5d31d8eb33b06a65d5d8f2ee8e6a9f85944f7f5f5ad2a282a4a" Feb 17 20:28:20 crc kubenswrapper[4793]: W0217 20:28:20.672096 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17e5194d_9dcc_4a15_a67d_3a4a57d60c79.slice/crio-311a8cc3abee2095996da7b2efb5a08a6c6d2674e8ba7b39b53332b830f9cd0f WatchSource:0}: Error finding container 311a8cc3abee2095996da7b2efb5a08a6c6d2674e8ba7b39b53332b830f9cd0f: Status 404 returned error can't find the container with id 311a8cc3abee2095996da7b2efb5a08a6c6d2674e8ba7b39b53332b830f9cd0f Feb 17 20:28:20 crc kubenswrapper[4793]: I0217 20:28:20.726411 4793 scope.go:117] "RemoveContainer" containerID="3e010c26342664c3558b2e0e211867c9c895e91944d0dd2d5c0257fc128d37e1" Feb 17 20:28:20 crc kubenswrapper[4793]: E0217 20:28:20.729885 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e010c26342664c3558b2e0e211867c9c895e91944d0dd2d5c0257fc128d37e1\": container with ID starting with 3e010c26342664c3558b2e0e211867c9c895e91944d0dd2d5c0257fc128d37e1 not found: ID does not exist" containerID="3e010c26342664c3558b2e0e211867c9c895e91944d0dd2d5c0257fc128d37e1" Feb 17 20:28:20 crc kubenswrapper[4793]: I0217 20:28:20.729938 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e010c26342664c3558b2e0e211867c9c895e91944d0dd2d5c0257fc128d37e1"} err="failed to get container status \"3e010c26342664c3558b2e0e211867c9c895e91944d0dd2d5c0257fc128d37e1\": rpc error: code = NotFound desc = could not find 
container \"3e010c26342664c3558b2e0e211867c9c895e91944d0dd2d5c0257fc128d37e1\": container with ID starting with 3e010c26342664c3558b2e0e211867c9c895e91944d0dd2d5c0257fc128d37e1 not found: ID does not exist" Feb 17 20:28:20 crc kubenswrapper[4793]: I0217 20:28:20.730000 4793 scope.go:117] "RemoveContainer" containerID="6bee6c7fb7d7d5d31d8eb33b06a65d5d8f2ee8e6a9f85944f7f5f5ad2a282a4a" Feb 17 20:28:20 crc kubenswrapper[4793]: E0217 20:28:20.732298 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bee6c7fb7d7d5d31d8eb33b06a65d5d8f2ee8e6a9f85944f7f5f5ad2a282a4a\": container with ID starting with 6bee6c7fb7d7d5d31d8eb33b06a65d5d8f2ee8e6a9f85944f7f5f5ad2a282a4a not found: ID does not exist" containerID="6bee6c7fb7d7d5d31d8eb33b06a65d5d8f2ee8e6a9f85944f7f5f5ad2a282a4a" Feb 17 20:28:20 crc kubenswrapper[4793]: I0217 20:28:20.732340 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bee6c7fb7d7d5d31d8eb33b06a65d5d8f2ee8e6a9f85944f7f5f5ad2a282a4a"} err="failed to get container status \"6bee6c7fb7d7d5d31d8eb33b06a65d5d8f2ee8e6a9f85944f7f5f5ad2a282a4a\": rpc error: code = NotFound desc = could not find container \"6bee6c7fb7d7d5d31d8eb33b06a65d5d8f2ee8e6a9f85944f7f5f5ad2a282a4a\": container with ID starting with 6bee6c7fb7d7d5d31d8eb33b06a65d5d8f2ee8e6a9f85944f7f5f5ad2a282a4a not found: ID does not exist" Feb 17 20:28:20 crc kubenswrapper[4793]: I0217 20:28:20.732370 4793 scope.go:117] "RemoveContainer" containerID="6cec9376237b5c03ebb426d0660b75d39712d41eec6bb5f1320e78a5048f2ad2" Feb 17 20:28:20 crc kubenswrapper[4793]: I0217 20:28:20.848229 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-782tr" Feb 17 20:28:20 crc kubenswrapper[4793]: I0217 20:28:20.853165 4793 scope.go:117] "RemoveContainer" containerID="0afeb54f9642c4dcc9db837e9576832f0d6ec4c6da7fb65b67818b8115fba470" Feb 17 20:28:20 crc kubenswrapper[4793]: I0217 20:28:20.853950 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-6a73-account-create-update-n7s7r" Feb 17 20:28:20 crc kubenswrapper[4793]: I0217 20:28:20.905964 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnlnk\" (UniqueName: \"kubernetes.io/projected/ad590bda-099d-49c9-9eb3-a21c732af87c-kube-api-access-rnlnk\") pod \"ad590bda-099d-49c9-9eb3-a21c732af87c\" (UID: \"ad590bda-099d-49c9-9eb3-a21c732af87c\") " Feb 17 20:28:20 crc kubenswrapper[4793]: I0217 20:28:20.906130 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad590bda-099d-49c9-9eb3-a21c732af87c-operator-scripts\") pod \"ad590bda-099d-49c9-9eb3-a21c732af87c\" (UID: \"ad590bda-099d-49c9-9eb3-a21c732af87c\") " Feb 17 20:28:20 crc kubenswrapper[4793]: I0217 20:28:20.906219 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kc9lm\" (UniqueName: \"kubernetes.io/projected/4740a37c-5ca4-4230-a803-5622eedf745e-kube-api-access-kc9lm\") pod \"4740a37c-5ca4-4230-a803-5622eedf745e\" (UID: \"4740a37c-5ca4-4230-a803-5622eedf745e\") " Feb 17 20:28:20 crc kubenswrapper[4793]: I0217 20:28:20.906265 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4740a37c-5ca4-4230-a803-5622eedf745e-operator-scripts\") pod \"4740a37c-5ca4-4230-a803-5622eedf745e\" (UID: \"4740a37c-5ca4-4230-a803-5622eedf745e\") " Feb 17 20:28:20 crc kubenswrapper[4793]: I0217 20:28:20.906599 4793 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad590bda-099d-49c9-9eb3-a21c732af87c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ad590bda-099d-49c9-9eb3-a21c732af87c" (UID: "ad590bda-099d-49c9-9eb3-a21c732af87c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:28:20 crc kubenswrapper[4793]: I0217 20:28:20.906757 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4740a37c-5ca4-4230-a803-5622eedf745e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4740a37c-5ca4-4230-a803-5622eedf745e" (UID: "4740a37c-5ca4-4230-a803-5622eedf745e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:28:20 crc kubenswrapper[4793]: I0217 20:28:20.907084 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad590bda-099d-49c9-9eb3-a21c732af87c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:20 crc kubenswrapper[4793]: I0217 20:28:20.907103 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4740a37c-5ca4-4230-a803-5622eedf745e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:20 crc kubenswrapper[4793]: I0217 20:28:20.910462 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4740a37c-5ca4-4230-a803-5622eedf745e-kube-api-access-kc9lm" (OuterVolumeSpecName: "kube-api-access-kc9lm") pod "4740a37c-5ca4-4230-a803-5622eedf745e" (UID: "4740a37c-5ca4-4230-a803-5622eedf745e"). InnerVolumeSpecName "kube-api-access-kc9lm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:28:20 crc kubenswrapper[4793]: I0217 20:28:20.920504 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad590bda-099d-49c9-9eb3-a21c732af87c-kube-api-access-rnlnk" (OuterVolumeSpecName: "kube-api-access-rnlnk") pod "ad590bda-099d-49c9-9eb3-a21c732af87c" (UID: "ad590bda-099d-49c9-9eb3-a21c732af87c"). InnerVolumeSpecName "kube-api-access-rnlnk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:28:21 crc kubenswrapper[4793]: I0217 20:28:21.009557 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnlnk\" (UniqueName: \"kubernetes.io/projected/ad590bda-099d-49c9-9eb3-a21c732af87c-kube-api-access-rnlnk\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:21 crc kubenswrapper[4793]: I0217 20:28:21.009967 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kc9lm\" (UniqueName: \"kubernetes.io/projected/4740a37c-5ca4-4230-a803-5622eedf745e-kube-api-access-kc9lm\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:21 crc kubenswrapper[4793]: I0217 20:28:21.405178 4793 generic.go:334] "Generic (PLEG): container finished" podID="e9514926-d5cf-41ca-aeb2-41444d597e1a" containerID="3b5d0d7b807126fec419fe119b026a07c8076d0243752fd2da322bf17f0a87e6" exitCode=0 Feb 17 20:28:21 crc kubenswrapper[4793]: I0217 20:28:21.405238 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-m7spg" event={"ID":"e9514926-d5cf-41ca-aeb2-41444d597e1a","Type":"ContainerDied","Data":"3b5d0d7b807126fec419fe119b026a07c8076d0243752fd2da322bf17f0a87e6"} Feb 17 20:28:21 crc kubenswrapper[4793]: I0217 20:28:21.406813 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6a73-account-create-update-n7s7r" event={"ID":"ad590bda-099d-49c9-9eb3-a21c732af87c","Type":"ContainerDied","Data":"e49f0dbc1c18f0e8b0d648b6a620760cb3b26f8383eacdf20772d12e4001057f"} Feb 17 20:28:21 crc 
kubenswrapper[4793]: I0217 20:28:21.406836 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-6a73-account-create-update-n7s7r" Feb 17 20:28:21 crc kubenswrapper[4793]: I0217 20:28:21.406853 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e49f0dbc1c18f0e8b0d648b6a620760cb3b26f8383eacdf20772d12e4001057f" Feb 17 20:28:21 crc kubenswrapper[4793]: I0217 20:28:21.411602 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"17e5194d-9dcc-4a15-a67d-3a4a57d60c79","Type":"ContainerStarted","Data":"2c7d7ae24923f07174e4419580d846cf81882a749515366cc9b4117200f25f95"} Feb 17 20:28:21 crc kubenswrapper[4793]: I0217 20:28:21.411644 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"17e5194d-9dcc-4a15-a67d-3a4a57d60c79","Type":"ContainerStarted","Data":"a48964a866b21be64204b1cc09b433db05bc8630b3c53114f564d2fb9f52011c"} Feb 17 20:28:21 crc kubenswrapper[4793]: I0217 20:28:21.411662 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"17e5194d-9dcc-4a15-a67d-3a4a57d60c79","Type":"ContainerStarted","Data":"311a8cc3abee2095996da7b2efb5a08a6c6d2674e8ba7b39b53332b830f9cd0f"} Feb 17 20:28:21 crc kubenswrapper[4793]: I0217 20:28:21.413259 4793 generic.go:334] "Generic (PLEG): container finished" podID="3c0ba79e-3947-4453-8d93-c7b4ea51aa86" containerID="31516eea28ddc1c5dc4a9f0a6269c69472e2a0e31595921e743d213004f1a261" exitCode=0 Feb 17 20:28:21 crc kubenswrapper[4793]: I0217 20:28:21.413319 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ebb4-account-create-update-qzc2g" event={"ID":"3c0ba79e-3947-4453-8d93-c7b4ea51aa86","Type":"ContainerDied","Data":"31516eea28ddc1c5dc4a9f0a6269c69472e2a0e31595921e743d213004f1a261"} Feb 17 20:28:21 crc kubenswrapper[4793]: I0217 20:28:21.415036 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-db-create-782tr" event={"ID":"4740a37c-5ca4-4230-a803-5622eedf745e","Type":"ContainerDied","Data":"4684228c09493749de42e147ec65863302ff8676b71e4973b65675f7efe7536e"} Feb 17 20:28:21 crc kubenswrapper[4793]: I0217 20:28:21.415069 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4684228c09493749de42e147ec65863302ff8676b71e4973b65675f7efe7536e" Feb 17 20:28:21 crc kubenswrapper[4793]: I0217 20:28:21.415107 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-782tr" Feb 17 20:28:21 crc kubenswrapper[4793]: I0217 20:28:21.561941 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf" path="/var/lib/kubelet/pods/6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf/volumes" Feb 17 20:28:21 crc kubenswrapper[4793]: I0217 20:28:21.562524 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b87a7c00-0efd-4456-bd50-c41a6d909aca" path="/var/lib/kubelet/pods/b87a7c00-0efd-4456-bd50-c41a6d909aca/volumes" Feb 17 20:28:21 crc kubenswrapper[4793]: I0217 20:28:21.563129 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dad05b98-c67b-44c9-bfff-4d0b39ad47b4" path="/var/lib/kubelet/pods/dad05b98-c67b-44c9-bfff-4d0b39ad47b4/volumes" Feb 17 20:28:21 crc kubenswrapper[4793]: I0217 20:28:21.920648 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2742-account-create-update-2qvj2" Feb 17 20:28:21 crc kubenswrapper[4793]: I0217 20:28:21.931010 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-fflrc" Feb 17 20:28:21 crc kubenswrapper[4793]: I0217 20:28:21.964943 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 20:28:21 crc kubenswrapper[4793]: I0217 20:28:21.964982 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 17 20:28:21 crc kubenswrapper[4793]: E0217 20:28:21.967738 4793 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8642250fd6494650238f39fe809ae38a4813c6fb101ece6c7d92ea9166b43afe is running failed: container process not found" containerID="8642250fd6494650238f39fe809ae38a4813c6fb101ece6c7d92ea9166b43afe" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 17 20:28:21 crc kubenswrapper[4793]: E0217 20:28:21.968887 4793 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8642250fd6494650238f39fe809ae38a4813c6fb101ece6c7d92ea9166b43afe is running failed: container process not found" containerID="8642250fd6494650238f39fe809ae38a4813c6fb101ece6c7d92ea9166b43afe" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 17 20:28:21 crc kubenswrapper[4793]: E0217 20:28:21.970371 4793 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8642250fd6494650238f39fe809ae38a4813c6fb101ece6c7d92ea9166b43afe is running failed: container process not found" containerID="8642250fd6494650238f39fe809ae38a4813c6fb101ece6c7d92ea9166b43afe" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 17 20:28:21 crc kubenswrapper[4793]: E0217 20:28:21.970427 4793 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
8642250fd6494650238f39fe809ae38a4813c6fb101ece6c7d92ea9166b43afe is running failed: container process not found" probeType="Startup" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" containerName="watcher-applier" Feb 17 20:28:22 crc kubenswrapper[4793]: I0217 20:28:22.030187 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5m8cb\" (UniqueName: \"kubernetes.io/projected/c908ae8c-adac-443d-9073-64b72a90660a-kube-api-access-5m8cb\") pod \"c908ae8c-adac-443d-9073-64b72a90660a\" (UID: \"c908ae8c-adac-443d-9073-64b72a90660a\") " Feb 17 20:28:22 crc kubenswrapper[4793]: I0217 20:28:22.030260 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c908ae8c-adac-443d-9073-64b72a90660a-operator-scripts\") pod \"c908ae8c-adac-443d-9073-64b72a90660a\" (UID: \"c908ae8c-adac-443d-9073-64b72a90660a\") " Feb 17 20:28:22 crc kubenswrapper[4793]: I0217 20:28:22.030445 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8sgdv\" (UniqueName: \"kubernetes.io/projected/868c46d3-c502-4bf7-8553-d7ce9b2a466d-kube-api-access-8sgdv\") pod \"868c46d3-c502-4bf7-8553-d7ce9b2a466d\" (UID: \"868c46d3-c502-4bf7-8553-d7ce9b2a466d\") " Feb 17 20:28:22 crc kubenswrapper[4793]: I0217 20:28:22.030608 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/868c46d3-c502-4bf7-8553-d7ce9b2a466d-operator-scripts\") pod \"868c46d3-c502-4bf7-8553-d7ce9b2a466d\" (UID: \"868c46d3-c502-4bf7-8553-d7ce9b2a466d\") " Feb 17 20:28:22 crc kubenswrapper[4793]: I0217 20:28:22.039225 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/868c46d3-c502-4bf7-8553-d7ce9b2a466d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"868c46d3-c502-4bf7-8553-d7ce9b2a466d" (UID: "868c46d3-c502-4bf7-8553-d7ce9b2a466d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:28:22 crc kubenswrapper[4793]: I0217 20:28:22.039765 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c908ae8c-adac-443d-9073-64b72a90660a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c908ae8c-adac-443d-9073-64b72a90660a" (UID: "c908ae8c-adac-443d-9073-64b72a90660a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:28:22 crc kubenswrapper[4793]: I0217 20:28:22.039863 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c908ae8c-adac-443d-9073-64b72a90660a-kube-api-access-5m8cb" (OuterVolumeSpecName: "kube-api-access-5m8cb") pod "c908ae8c-adac-443d-9073-64b72a90660a" (UID: "c908ae8c-adac-443d-9073-64b72a90660a"). InnerVolumeSpecName "kube-api-access-5m8cb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:28:22 crc kubenswrapper[4793]: I0217 20:28:22.042895 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/868c46d3-c502-4bf7-8553-d7ce9b2a466d-kube-api-access-8sgdv" (OuterVolumeSpecName: "kube-api-access-8sgdv") pod "868c46d3-c502-4bf7-8553-d7ce9b2a466d" (UID: "868c46d3-c502-4bf7-8553-d7ce9b2a466d"). InnerVolumeSpecName "kube-api-access-8sgdv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:28:22 crc kubenswrapper[4793]: I0217 20:28:22.134651 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8sgdv\" (UniqueName: \"kubernetes.io/projected/868c46d3-c502-4bf7-8553-d7ce9b2a466d-kube-api-access-8sgdv\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:22 crc kubenswrapper[4793]: I0217 20:28:22.134679 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/868c46d3-c502-4bf7-8553-d7ce9b2a466d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:22 crc kubenswrapper[4793]: I0217 20:28:22.134713 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5m8cb\" (UniqueName: \"kubernetes.io/projected/c908ae8c-adac-443d-9073-64b72a90660a-kube-api-access-5m8cb\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:22 crc kubenswrapper[4793]: I0217 20:28:22.134722 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c908ae8c-adac-443d-9073-64b72a90660a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:22 crc kubenswrapper[4793]: I0217 20:28:22.167066 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-ebb4-account-create-update-qzc2g" Feb 17 20:28:22 crc kubenswrapper[4793]: I0217 20:28:22.236260 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c0ba79e-3947-4453-8d93-c7b4ea51aa86-operator-scripts\") pod \"3c0ba79e-3947-4453-8d93-c7b4ea51aa86\" (UID: \"3c0ba79e-3947-4453-8d93-c7b4ea51aa86\") " Feb 17 20:28:22 crc kubenswrapper[4793]: I0217 20:28:22.236447 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6mjb\" (UniqueName: \"kubernetes.io/projected/3c0ba79e-3947-4453-8d93-c7b4ea51aa86-kube-api-access-x6mjb\") pod \"3c0ba79e-3947-4453-8d93-c7b4ea51aa86\" (UID: \"3c0ba79e-3947-4453-8d93-c7b4ea51aa86\") " Feb 17 20:28:22 crc kubenswrapper[4793]: I0217 20:28:22.236749 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c0ba79e-3947-4453-8d93-c7b4ea51aa86-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3c0ba79e-3947-4453-8d93-c7b4ea51aa86" (UID: "3c0ba79e-3947-4453-8d93-c7b4ea51aa86"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:28:22 crc kubenswrapper[4793]: I0217 20:28:22.236940 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c0ba79e-3947-4453-8d93-c7b4ea51aa86-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:22 crc kubenswrapper[4793]: I0217 20:28:22.256503 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c0ba79e-3947-4453-8d93-c7b4ea51aa86-kube-api-access-x6mjb" (OuterVolumeSpecName: "kube-api-access-x6mjb") pod "3c0ba79e-3947-4453-8d93-c7b4ea51aa86" (UID: "3c0ba79e-3947-4453-8d93-c7b4ea51aa86"). InnerVolumeSpecName "kube-api-access-x6mjb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:28:22 crc kubenswrapper[4793]: I0217 20:28:22.339042 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6mjb\" (UniqueName: \"kubernetes.io/projected/3c0ba79e-3947-4453-8d93-c7b4ea51aa86-kube-api-access-x6mjb\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:22 crc kubenswrapper[4793]: I0217 20:28:22.424331 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"17e5194d-9dcc-4a15-a67d-3a4a57d60c79","Type":"ContainerStarted","Data":"4e1f03a2b47c21712b26f32e71922d2ae0c87e536b4ee6a9db714eeb0c27049b"} Feb 17 20:28:22 crc kubenswrapper[4793]: I0217 20:28:22.425516 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-fflrc" event={"ID":"c908ae8c-adac-443d-9073-64b72a90660a","Type":"ContainerDied","Data":"5d95351cfdfbdc8b0a3cc9c86464ba4f354bbdd57791fa9d176bf167a3451af0"} Feb 17 20:28:22 crc kubenswrapper[4793]: I0217 20:28:22.425550 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d95351cfdfbdc8b0a3cc9c86464ba4f354bbdd57791fa9d176bf167a3451af0" Feb 17 20:28:22 crc kubenswrapper[4793]: I0217 20:28:22.425571 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-fflrc" Feb 17 20:28:22 crc kubenswrapper[4793]: I0217 20:28:22.426520 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ebb4-account-create-update-qzc2g" event={"ID":"3c0ba79e-3947-4453-8d93-c7b4ea51aa86","Type":"ContainerDied","Data":"7c4740a5d306ed635c262e8e772c8f32d0dee87679bcd88d8ac9a34f91f24faf"} Feb 17 20:28:22 crc kubenswrapper[4793]: I0217 20:28:22.426543 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-ebb4-account-create-update-qzc2g" Feb 17 20:28:22 crc kubenswrapper[4793]: I0217 20:28:22.426550 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c4740a5d306ed635c262e8e772c8f32d0dee87679bcd88d8ac9a34f91f24faf" Feb 17 20:28:22 crc kubenswrapper[4793]: I0217 20:28:22.428029 4793 generic.go:334] "Generic (PLEG): container finished" podID="06ecbc8e-aa9f-4025-883d-65e4c000d986" containerID="8642250fd6494650238f39fe809ae38a4813c6fb101ece6c7d92ea9166b43afe" exitCode=1 Feb 17 20:28:22 crc kubenswrapper[4793]: I0217 20:28:22.428074 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"06ecbc8e-aa9f-4025-883d-65e4c000d986","Type":"ContainerDied","Data":"8642250fd6494650238f39fe809ae38a4813c6fb101ece6c7d92ea9166b43afe"} Feb 17 20:28:22 crc kubenswrapper[4793]: I0217 20:28:22.428097 4793 scope.go:117] "RemoveContainer" containerID="482d219bc6ca6e52bff83b3958809678c883c2f0ec84a89a77cd763ddb26ce8d" Feb 17 20:28:22 crc kubenswrapper[4793]: I0217 20:28:22.428721 4793 scope.go:117] "RemoveContainer" containerID="8642250fd6494650238f39fe809ae38a4813c6fb101ece6c7d92ea9166b43afe" Feb 17 20:28:22 crc kubenswrapper[4793]: E0217 20:28:22.428996 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 40s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:28:22 crc kubenswrapper[4793]: I0217 20:28:22.431114 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-2742-account-create-update-2qvj2" Feb 17 20:28:22 crc kubenswrapper[4793]: I0217 20:28:22.432300 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2742-account-create-update-2qvj2" event={"ID":"868c46d3-c502-4bf7-8553-d7ce9b2a466d","Type":"ContainerDied","Data":"f01bda1fbeb85f372a4528751c19e62c9e91d23bc14e10fd226aba5cec512412"} Feb 17 20:28:22 crc kubenswrapper[4793]: I0217 20:28:22.432363 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f01bda1fbeb85f372a4528751c19e62c9e91d23bc14e10fd226aba5cec512412" Feb 17 20:28:22 crc kubenswrapper[4793]: I0217 20:28:22.747972 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-m7spg" Feb 17 20:28:22 crc kubenswrapper[4793]: I0217 20:28:22.851909 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9514926-d5cf-41ca-aeb2-41444d597e1a-operator-scripts\") pod \"e9514926-d5cf-41ca-aeb2-41444d597e1a\" (UID: \"e9514926-d5cf-41ca-aeb2-41444d597e1a\") " Feb 17 20:28:22 crc kubenswrapper[4793]: I0217 20:28:22.851982 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4cfg\" (UniqueName: \"kubernetes.io/projected/e9514926-d5cf-41ca-aeb2-41444d597e1a-kube-api-access-z4cfg\") pod \"e9514926-d5cf-41ca-aeb2-41444d597e1a\" (UID: \"e9514926-d5cf-41ca-aeb2-41444d597e1a\") " Feb 17 20:28:22 crc kubenswrapper[4793]: I0217 20:28:22.852902 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9514926-d5cf-41ca-aeb2-41444d597e1a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e9514926-d5cf-41ca-aeb2-41444d597e1a" (UID: "e9514926-d5cf-41ca-aeb2-41444d597e1a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:28:22 crc kubenswrapper[4793]: I0217 20:28:22.857888 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9514926-d5cf-41ca-aeb2-41444d597e1a-kube-api-access-z4cfg" (OuterVolumeSpecName: "kube-api-access-z4cfg") pod "e9514926-d5cf-41ca-aeb2-41444d597e1a" (UID: "e9514926-d5cf-41ca-aeb2-41444d597e1a"). InnerVolumeSpecName "kube-api-access-z4cfg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:28:22 crc kubenswrapper[4793]: I0217 20:28:22.954245 4793 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9514926-d5cf-41ca-aeb2-41444d597e1a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:22 crc kubenswrapper[4793]: I0217 20:28:22.954289 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4cfg\" (UniqueName: \"kubernetes.io/projected/e9514926-d5cf-41ca-aeb2-41444d597e1a-kube-api-access-z4cfg\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:23 crc kubenswrapper[4793]: I0217 20:28:23.440831 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-m7spg" event={"ID":"e9514926-d5cf-41ca-aeb2-41444d597e1a","Type":"ContainerDied","Data":"fc09ba88fe9392f8c460244d699d981035859c25ad4c5d4ebd82603c31848551"} Feb 17 20:28:23 crc kubenswrapper[4793]: I0217 20:28:23.440873 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc09ba88fe9392f8c460244d699d981035859c25ad4c5d4ebd82603c31848551" Feb 17 20:28:23 crc kubenswrapper[4793]: I0217 20:28:23.440848 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-m7spg" Feb 17 20:28:25 crc kubenswrapper[4793]: I0217 20:28:25.472758 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"17e5194d-9dcc-4a15-a67d-3a4a57d60c79","Type":"ContainerStarted","Data":"f9a4920bf60fcc401d551336b9fb3d17f20e49199377562f74992b26e95b66da"} Feb 17 20:28:25 crc kubenswrapper[4793]: I0217 20:28:25.502587 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.933303986 podStartE2EDuration="6.502567559s" podCreationTimestamp="2026-02-17 20:28:19 +0000 UTC" firstStartedPulling="2026-02-17 20:28:20.679612614 +0000 UTC m=+1175.971310925" lastFinishedPulling="2026-02-17 20:28:24.248876167 +0000 UTC m=+1179.540574498" observedRunningTime="2026-02-17 20:28:25.493324559 +0000 UTC m=+1180.785022880" watchObservedRunningTime="2026-02-17 20:28:25.502567559 +0000 UTC m=+1180.794265870" Feb 17 20:28:26 crc kubenswrapper[4793]: I0217 20:28:26.384756 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-j65pr"] Feb 17 20:28:26 crc kubenswrapper[4793]: E0217 20:28:26.385366 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9514926-d5cf-41ca-aeb2-41444d597e1a" containerName="mariadb-database-create" Feb 17 20:28:26 crc kubenswrapper[4793]: I0217 20:28:26.385382 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9514926-d5cf-41ca-aeb2-41444d597e1a" containerName="mariadb-database-create" Feb 17 20:28:26 crc kubenswrapper[4793]: E0217 20:28:26.385397 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad590bda-099d-49c9-9eb3-a21c732af87c" containerName="mariadb-account-create-update" Feb 17 20:28:26 crc kubenswrapper[4793]: I0217 20:28:26.385404 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad590bda-099d-49c9-9eb3-a21c732af87c" containerName="mariadb-account-create-update" Feb 17 20:28:26 
crc kubenswrapper[4793]: E0217 20:28:26.385415 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf" containerName="horizon-log" Feb 17 20:28:26 crc kubenswrapper[4793]: I0217 20:28:26.385421 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf" containerName="horizon-log" Feb 17 20:28:26 crc kubenswrapper[4793]: E0217 20:28:26.385433 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4740a37c-5ca4-4230-a803-5622eedf745e" containerName="mariadb-database-create" Feb 17 20:28:26 crc kubenswrapper[4793]: I0217 20:28:26.385439 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4740a37c-5ca4-4230-a803-5622eedf745e" containerName="mariadb-database-create" Feb 17 20:28:26 crc kubenswrapper[4793]: E0217 20:28:26.385447 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c908ae8c-adac-443d-9073-64b72a90660a" containerName="mariadb-database-create" Feb 17 20:28:26 crc kubenswrapper[4793]: I0217 20:28:26.385452 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c908ae8c-adac-443d-9073-64b72a90660a" containerName="mariadb-database-create" Feb 17 20:28:26 crc kubenswrapper[4793]: E0217 20:28:26.385472 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c0ba79e-3947-4453-8d93-c7b4ea51aa86" containerName="mariadb-account-create-update" Feb 17 20:28:26 crc kubenswrapper[4793]: I0217 20:28:26.385478 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c0ba79e-3947-4453-8d93-c7b4ea51aa86" containerName="mariadb-account-create-update" Feb 17 20:28:26 crc kubenswrapper[4793]: E0217 20:28:26.385487 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf" containerName="horizon" Feb 17 20:28:26 crc kubenswrapper[4793]: I0217 20:28:26.385493 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf" 
containerName="horizon" Feb 17 20:28:26 crc kubenswrapper[4793]: E0217 20:28:26.385505 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="868c46d3-c502-4bf7-8553-d7ce9b2a466d" containerName="mariadb-account-create-update" Feb 17 20:28:26 crc kubenswrapper[4793]: I0217 20:28:26.385512 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="868c46d3-c502-4bf7-8553-d7ce9b2a466d" containerName="mariadb-account-create-update" Feb 17 20:28:26 crc kubenswrapper[4793]: I0217 20:28:26.385677 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="c908ae8c-adac-443d-9073-64b72a90660a" containerName="mariadb-database-create" Feb 17 20:28:26 crc kubenswrapper[4793]: I0217 20:28:26.385704 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="868c46d3-c502-4bf7-8553-d7ce9b2a466d" containerName="mariadb-account-create-update" Feb 17 20:28:26 crc kubenswrapper[4793]: I0217 20:28:26.385714 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c0ba79e-3947-4453-8d93-c7b4ea51aa86" containerName="mariadb-account-create-update" Feb 17 20:28:26 crc kubenswrapper[4793]: I0217 20:28:26.385723 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf" containerName="horizon" Feb 17 20:28:26 crc kubenswrapper[4793]: I0217 20:28:26.385729 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="4740a37c-5ca4-4230-a803-5622eedf745e" containerName="mariadb-database-create" Feb 17 20:28:26 crc kubenswrapper[4793]: I0217 20:28:26.385739 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad590bda-099d-49c9-9eb3-a21c732af87c" containerName="mariadb-account-create-update" Feb 17 20:28:26 crc kubenswrapper[4793]: I0217 20:28:26.385751 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c302c0d-5edb-4f33-b2ff-7f31bd9c13bf" containerName="horizon-log" Feb 17 20:28:26 crc kubenswrapper[4793]: I0217 20:28:26.385763 4793 
memory_manager.go:354] "RemoveStaleState removing state" podUID="e9514926-d5cf-41ca-aeb2-41444d597e1a" containerName="mariadb-database-create" Feb 17 20:28:26 crc kubenswrapper[4793]: I0217 20:28:26.386338 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-j65pr" Feb 17 20:28:26 crc kubenswrapper[4793]: W0217 20:28:26.387662 4793 reflector.go:561] object-"openstack"/"nova-nova-dockercfg-m89zr": failed to list *v1.Secret: secrets "nova-nova-dockercfg-m89zr" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Feb 17 20:28:26 crc kubenswrapper[4793]: E0217 20:28:26.387713 4793 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-nova-dockercfg-m89zr\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"nova-nova-dockercfg-m89zr\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 20:28:26 crc kubenswrapper[4793]: W0217 20:28:26.387776 4793 reflector.go:561] object-"openstack"/"nova-cell0-conductor-scripts": failed to list *v1.Secret: secrets "nova-cell0-conductor-scripts" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Feb 17 20:28:26 crc kubenswrapper[4793]: E0217 20:28:26.387788 4793 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-cell0-conductor-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"nova-cell0-conductor-scripts\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" 
Feb 17 20:28:26 crc kubenswrapper[4793]: W0217 20:28:26.387976 4793 reflector.go:561] object-"openstack"/"nova-cell0-conductor-config-data": failed to list *v1.Secret: secrets "nova-cell0-conductor-config-data" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object
Feb 17 20:28:26 crc kubenswrapper[4793]: E0217 20:28:26.387997 4793 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-cell0-conductor-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"nova-cell0-conductor-config-data\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Feb 17 20:28:26 crc kubenswrapper[4793]: I0217 20:28:26.406353 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-j65pr"]
Feb 17 20:28:26 crc kubenswrapper[4793]: I0217 20:28:26.421601 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7625c8d-9409-461d-98e0-2a9507baa803-config-data\") pod \"nova-cell0-conductor-db-sync-j65pr\" (UID: \"c7625c8d-9409-461d-98e0-2a9507baa803\") " pod="openstack/nova-cell0-conductor-db-sync-j65pr"
Feb 17 20:28:26 crc kubenswrapper[4793]: I0217 20:28:26.421712 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b55mm\" (UniqueName: \"kubernetes.io/projected/c7625c8d-9409-461d-98e0-2a9507baa803-kube-api-access-b55mm\") pod \"nova-cell0-conductor-db-sync-j65pr\" (UID: \"c7625c8d-9409-461d-98e0-2a9507baa803\") " pod="openstack/nova-cell0-conductor-db-sync-j65pr"
Feb 17 20:28:26 crc kubenswrapper[4793]: I0217 20:28:26.421745 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7625c8d-9409-461d-98e0-2a9507baa803-scripts\") pod \"nova-cell0-conductor-db-sync-j65pr\" (UID: \"c7625c8d-9409-461d-98e0-2a9507baa803\") " pod="openstack/nova-cell0-conductor-db-sync-j65pr"
Feb 17 20:28:26 crc kubenswrapper[4793]: I0217 20:28:26.421787 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7625c8d-9409-461d-98e0-2a9507baa803-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-j65pr\" (UID: \"c7625c8d-9409-461d-98e0-2a9507baa803\") " pod="openstack/nova-cell0-conductor-db-sync-j65pr"
Feb 17 20:28:26 crc kubenswrapper[4793]: I0217 20:28:26.480334 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 17 20:28:26 crc kubenswrapper[4793]: I0217 20:28:26.505123 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 20:28:26 crc kubenswrapper[4793]: I0217 20:28:26.523608 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7625c8d-9409-461d-98e0-2a9507baa803-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-j65pr\" (UID: \"c7625c8d-9409-461d-98e0-2a9507baa803\") " pod="openstack/nova-cell0-conductor-db-sync-j65pr"
Feb 17 20:28:26 crc kubenswrapper[4793]: I0217 20:28:26.523747 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7625c8d-9409-461d-98e0-2a9507baa803-config-data\") pod \"nova-cell0-conductor-db-sync-j65pr\" (UID: \"c7625c8d-9409-461d-98e0-2a9507baa803\") " pod="openstack/nova-cell0-conductor-db-sync-j65pr"
Feb 17 20:28:26 crc kubenswrapper[4793]: I0217 20:28:26.523872 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b55mm\" (UniqueName: \"kubernetes.io/projected/c7625c8d-9409-461d-98e0-2a9507baa803-kube-api-access-b55mm\") pod \"nova-cell0-conductor-db-sync-j65pr\" (UID: \"c7625c8d-9409-461d-98e0-2a9507baa803\") " pod="openstack/nova-cell0-conductor-db-sync-j65pr"
Feb 17 20:28:26 crc kubenswrapper[4793]: I0217 20:28:26.523905 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7625c8d-9409-461d-98e0-2a9507baa803-scripts\") pod \"nova-cell0-conductor-db-sync-j65pr\" (UID: \"c7625c8d-9409-461d-98e0-2a9507baa803\") " pod="openstack/nova-cell0-conductor-db-sync-j65pr"
Feb 17 20:28:26 crc kubenswrapper[4793]: I0217 20:28:26.541668 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7625c8d-9409-461d-98e0-2a9507baa803-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-j65pr\" (UID: \"c7625c8d-9409-461d-98e0-2a9507baa803\") " pod="openstack/nova-cell0-conductor-db-sync-j65pr"
Feb 17 20:28:26 crc kubenswrapper[4793]: I0217 20:28:26.545128 4793 scope.go:117] "RemoveContainer" containerID="0916fc4505862a996052ca563d34794654dd2fa24df3e0ca3404479a8ca85700"
Feb 17 20:28:26 crc kubenswrapper[4793]: I0217 20:28:26.548423 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b55mm\" (UniqueName: \"kubernetes.io/projected/c7625c8d-9409-461d-98e0-2a9507baa803-kube-api-access-b55mm\") pod \"nova-cell0-conductor-db-sync-j65pr\" (UID: \"c7625c8d-9409-461d-98e0-2a9507baa803\") " pod="openstack/nova-cell0-conductor-db-sync-j65pr"
Feb 17 20:28:27 crc kubenswrapper[4793]: I0217 20:28:27.349878 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Feb 17 20:28:27 crc kubenswrapper[4793]: I0217 20:28:27.362087 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7625c8d-9409-461d-98e0-2a9507baa803-config-data\") pod \"nova-cell0-conductor-db-sync-j65pr\" (UID: \"c7625c8d-9409-461d-98e0-2a9507baa803\") " pod="openstack/nova-cell0-conductor-db-sync-j65pr"
Feb 17 20:28:27 crc kubenswrapper[4793]: I0217 20:28:27.380942 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-m89zr"
Feb 17 20:28:27 crc kubenswrapper[4793]: I0217 20:28:27.492311 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"aedc67e8-05ec-44a4-b1f2-a18d2fde80b2","Type":"ContainerStarted","Data":"ad0bb2533f61367434a0312bb6b3030d5fcb686e9704287777b19c5d0b7d7e60"}
Feb 17 20:28:27 crc kubenswrapper[4793]: E0217 20:28:27.526626 4793 secret.go:188] Couldn't get secret openstack/nova-cell0-conductor-scripts: failed to sync secret cache: timed out waiting for the condition
Feb 17 20:28:27 crc kubenswrapper[4793]: E0217 20:28:27.526767 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c7625c8d-9409-461d-98e0-2a9507baa803-scripts podName:c7625c8d-9409-461d-98e0-2a9507baa803 nodeName:}" failed. No retries permitted until 2026-02-17 20:28:28.026739708 +0000 UTC m=+1183.318438039 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/c7625c8d-9409-461d-98e0-2a9507baa803-scripts") pod "nova-cell0-conductor-db-sync-j65pr" (UID: "c7625c8d-9409-461d-98e0-2a9507baa803") : failed to sync secret cache: timed out waiting for the condition
Feb 17 20:28:27 crc kubenswrapper[4793]: I0217 20:28:27.572574 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Feb 17 20:28:28 crc kubenswrapper[4793]: I0217 20:28:28.064210 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7625c8d-9409-461d-98e0-2a9507baa803-scripts\") pod \"nova-cell0-conductor-db-sync-j65pr\" (UID: \"c7625c8d-9409-461d-98e0-2a9507baa803\") " pod="openstack/nova-cell0-conductor-db-sync-j65pr"
Feb 17 20:28:28 crc kubenswrapper[4793]: I0217 20:28:28.078809 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7625c8d-9409-461d-98e0-2a9507baa803-scripts\") pod \"nova-cell0-conductor-db-sync-j65pr\" (UID: \"c7625c8d-9409-461d-98e0-2a9507baa803\") " pod="openstack/nova-cell0-conductor-db-sync-j65pr"
Feb 17 20:28:28 crc kubenswrapper[4793]: I0217 20:28:28.218635 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-j65pr"
Feb 17 20:28:28 crc kubenswrapper[4793]: I0217 20:28:28.504616 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="17e5194d-9dcc-4a15-a67d-3a4a57d60c79" containerName="ceilometer-central-agent" containerID="cri-o://a48964a866b21be64204b1cc09b433db05bc8630b3c53114f564d2fb9f52011c" gracePeriod=30
Feb 17 20:28:28 crc kubenswrapper[4793]: I0217 20:28:28.505063 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="17e5194d-9dcc-4a15-a67d-3a4a57d60c79" containerName="sg-core" containerID="cri-o://4e1f03a2b47c21712b26f32e71922d2ae0c87e536b4ee6a9db714eeb0c27049b" gracePeriod=30
Feb 17 20:28:28 crc kubenswrapper[4793]: I0217 20:28:28.505114 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="17e5194d-9dcc-4a15-a67d-3a4a57d60c79" containerName="proxy-httpd" containerID="cri-o://f9a4920bf60fcc401d551336b9fb3d17f20e49199377562f74992b26e95b66da" gracePeriod=30
Feb 17 20:28:28 crc kubenswrapper[4793]: I0217 20:28:28.505129 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="17e5194d-9dcc-4a15-a67d-3a4a57d60c79" containerName="ceilometer-notification-agent" containerID="cri-o://2c7d7ae24923f07174e4419580d846cf81882a749515366cc9b4117200f25f95" gracePeriod=30
Feb 17 20:28:28 crc kubenswrapper[4793]: I0217 20:28:28.735763 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-j65pr"]
Feb 17 20:28:28 crc kubenswrapper[4793]: W0217 20:28:28.740436 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7625c8d_9409_461d_98e0_2a9507baa803.slice/crio-f82555372a6b1021af09a029ff6fd2328a60049591d1a51c426917a124e00b99 WatchSource:0}: Error finding container f82555372a6b1021af09a029ff6fd2328a60049591d1a51c426917a124e00b99: Status 404 returned error can't find the container with id f82555372a6b1021af09a029ff6fd2328a60049591d1a51c426917a124e00b99
Feb 17 20:28:29 crc kubenswrapper[4793]: I0217 20:28:29.517909 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-j65pr" event={"ID":"c7625c8d-9409-461d-98e0-2a9507baa803","Type":"ContainerStarted","Data":"f82555372a6b1021af09a029ff6fd2328a60049591d1a51c426917a124e00b99"}
Feb 17 20:28:29 crc kubenswrapper[4793]: I0217 20:28:29.526707 4793 generic.go:334] "Generic (PLEG): container finished" podID="17e5194d-9dcc-4a15-a67d-3a4a57d60c79" containerID="f9a4920bf60fcc401d551336b9fb3d17f20e49199377562f74992b26e95b66da" exitCode=0
Feb 17 20:28:29 crc kubenswrapper[4793]: I0217 20:28:29.526750 4793 generic.go:334] "Generic (PLEG): container finished" podID="17e5194d-9dcc-4a15-a67d-3a4a57d60c79" containerID="4e1f03a2b47c21712b26f32e71922d2ae0c87e536b4ee6a9db714eeb0c27049b" exitCode=2
Feb 17 20:28:29 crc kubenswrapper[4793]: I0217 20:28:29.526764 4793 generic.go:334] "Generic (PLEG): container finished" podID="17e5194d-9dcc-4a15-a67d-3a4a57d60c79" containerID="2c7d7ae24923f07174e4419580d846cf81882a749515366cc9b4117200f25f95" exitCode=0
Feb 17 20:28:29 crc kubenswrapper[4793]: I0217 20:28:29.526774 4793 generic.go:334] "Generic (PLEG): container finished" podID="17e5194d-9dcc-4a15-a67d-3a4a57d60c79" containerID="a48964a866b21be64204b1cc09b433db05bc8630b3c53114f564d2fb9f52011c" exitCode=0
Feb 17 20:28:29 crc kubenswrapper[4793]: I0217 20:28:29.526719 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"17e5194d-9dcc-4a15-a67d-3a4a57d60c79","Type":"ContainerDied","Data":"f9a4920bf60fcc401d551336b9fb3d17f20e49199377562f74992b26e95b66da"}
Feb 17 20:28:29 crc kubenswrapper[4793]: I0217 20:28:29.526817 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"17e5194d-9dcc-4a15-a67d-3a4a57d60c79","Type":"ContainerDied","Data":"4e1f03a2b47c21712b26f32e71922d2ae0c87e536b4ee6a9db714eeb0c27049b"}
Feb 17 20:28:29 crc kubenswrapper[4793]: I0217 20:28:29.526834 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"17e5194d-9dcc-4a15-a67d-3a4a57d60c79","Type":"ContainerDied","Data":"2c7d7ae24923f07174e4419580d846cf81882a749515366cc9b4117200f25f95"}
Feb 17 20:28:29 crc kubenswrapper[4793]: I0217 20:28:29.526848 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"17e5194d-9dcc-4a15-a67d-3a4a57d60c79","Type":"ContainerDied","Data":"a48964a866b21be64204b1cc09b433db05bc8630b3c53114f564d2fb9f52011c"}
Feb 17 20:28:29 crc kubenswrapper[4793]: I0217 20:28:29.526859 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"17e5194d-9dcc-4a15-a67d-3a4a57d60c79","Type":"ContainerDied","Data":"311a8cc3abee2095996da7b2efb5a08a6c6d2674e8ba7b39b53332b830f9cd0f"}
Feb 17 20:28:29 crc kubenswrapper[4793]: I0217 20:28:29.526870 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="311a8cc3abee2095996da7b2efb5a08a6c6d2674e8ba7b39b53332b830f9cd0f"
Feb 17 20:28:29 crc kubenswrapper[4793]: I0217 20:28:29.576820 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 20:28:29 crc kubenswrapper[4793]: I0217 20:28:29.701147 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/17e5194d-9dcc-4a15-a67d-3a4a57d60c79-log-httpd\") pod \"17e5194d-9dcc-4a15-a67d-3a4a57d60c79\" (UID: \"17e5194d-9dcc-4a15-a67d-3a4a57d60c79\") "
Feb 17 20:28:29 crc kubenswrapper[4793]: I0217 20:28:29.701284 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/17e5194d-9dcc-4a15-a67d-3a4a57d60c79-sg-core-conf-yaml\") pod \"17e5194d-9dcc-4a15-a67d-3a4a57d60c79\" (UID: \"17e5194d-9dcc-4a15-a67d-3a4a57d60c79\") "
Feb 17 20:28:29 crc kubenswrapper[4793]: I0217 20:28:29.701376 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17e5194d-9dcc-4a15-a67d-3a4a57d60c79-combined-ca-bundle\") pod \"17e5194d-9dcc-4a15-a67d-3a4a57d60c79\" (UID: \"17e5194d-9dcc-4a15-a67d-3a4a57d60c79\") "
Feb 17 20:28:29 crc kubenswrapper[4793]: I0217 20:28:29.701402 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/17e5194d-9dcc-4a15-a67d-3a4a57d60c79-run-httpd\") pod \"17e5194d-9dcc-4a15-a67d-3a4a57d60c79\" (UID: \"17e5194d-9dcc-4a15-a67d-3a4a57d60c79\") "
Feb 17 20:28:29 crc kubenswrapper[4793]: I0217 20:28:29.701468 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dm2xx\" (UniqueName: \"kubernetes.io/projected/17e5194d-9dcc-4a15-a67d-3a4a57d60c79-kube-api-access-dm2xx\") pod \"17e5194d-9dcc-4a15-a67d-3a4a57d60c79\" (UID: \"17e5194d-9dcc-4a15-a67d-3a4a57d60c79\") "
Feb 17 20:28:29 crc kubenswrapper[4793]: I0217 20:28:29.701493 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17e5194d-9dcc-4a15-a67d-3a4a57d60c79-scripts\") pod \"17e5194d-9dcc-4a15-a67d-3a4a57d60c79\" (UID: \"17e5194d-9dcc-4a15-a67d-3a4a57d60c79\") "
Feb 17 20:28:29 crc kubenswrapper[4793]: I0217 20:28:29.701588 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17e5194d-9dcc-4a15-a67d-3a4a57d60c79-config-data\") pod \"17e5194d-9dcc-4a15-a67d-3a4a57d60c79\" (UID: \"17e5194d-9dcc-4a15-a67d-3a4a57d60c79\") "
Feb 17 20:28:29 crc kubenswrapper[4793]: I0217 20:28:29.702079 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17e5194d-9dcc-4a15-a67d-3a4a57d60c79-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "17e5194d-9dcc-4a15-a67d-3a4a57d60c79" (UID: "17e5194d-9dcc-4a15-a67d-3a4a57d60c79"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 20:28:29 crc kubenswrapper[4793]: I0217 20:28:29.702302 4793 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/17e5194d-9dcc-4a15-a67d-3a4a57d60c79-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 17 20:28:29 crc kubenswrapper[4793]: I0217 20:28:29.703340 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17e5194d-9dcc-4a15-a67d-3a4a57d60c79-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "17e5194d-9dcc-4a15-a67d-3a4a57d60c79" (UID: "17e5194d-9dcc-4a15-a67d-3a4a57d60c79"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 20:28:29 crc kubenswrapper[4793]: I0217 20:28:29.721945 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17e5194d-9dcc-4a15-a67d-3a4a57d60c79-scripts" (OuterVolumeSpecName: "scripts") pod "17e5194d-9dcc-4a15-a67d-3a4a57d60c79" (UID: "17e5194d-9dcc-4a15-a67d-3a4a57d60c79"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 20:28:29 crc kubenswrapper[4793]: I0217 20:28:29.722152 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17e5194d-9dcc-4a15-a67d-3a4a57d60c79-kube-api-access-dm2xx" (OuterVolumeSpecName: "kube-api-access-dm2xx") pod "17e5194d-9dcc-4a15-a67d-3a4a57d60c79" (UID: "17e5194d-9dcc-4a15-a67d-3a4a57d60c79"). InnerVolumeSpecName "kube-api-access-dm2xx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 20:28:29 crc kubenswrapper[4793]: I0217 20:28:29.736860 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17e5194d-9dcc-4a15-a67d-3a4a57d60c79-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "17e5194d-9dcc-4a15-a67d-3a4a57d60c79" (UID: "17e5194d-9dcc-4a15-a67d-3a4a57d60c79"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 20:28:29 crc kubenswrapper[4793]: I0217 20:28:29.795272 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17e5194d-9dcc-4a15-a67d-3a4a57d60c79-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "17e5194d-9dcc-4a15-a67d-3a4a57d60c79" (UID: "17e5194d-9dcc-4a15-a67d-3a4a57d60c79"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 20:28:29 crc kubenswrapper[4793]: I0217 20:28:29.803816 4793 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/17e5194d-9dcc-4a15-a67d-3a4a57d60c79-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 17 20:28:29 crc kubenswrapper[4793]: I0217 20:28:29.803845 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17e5194d-9dcc-4a15-a67d-3a4a57d60c79-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 20:28:29 crc kubenswrapper[4793]: I0217 20:28:29.803855 4793 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/17e5194d-9dcc-4a15-a67d-3a4a57d60c79-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 17 20:28:29 crc kubenswrapper[4793]: I0217 20:28:29.803923 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dm2xx\" (UniqueName: \"kubernetes.io/projected/17e5194d-9dcc-4a15-a67d-3a4a57d60c79-kube-api-access-dm2xx\") on node \"crc\" DevicePath \"\""
Feb 17 20:28:29 crc kubenswrapper[4793]: I0217 20:28:29.803935 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17e5194d-9dcc-4a15-a67d-3a4a57d60c79-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 20:28:29 crc kubenswrapper[4793]: I0217 20:28:29.832897 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17e5194d-9dcc-4a15-a67d-3a4a57d60c79-config-data" (OuterVolumeSpecName: "config-data") pod "17e5194d-9dcc-4a15-a67d-3a4a57d60c79" (UID: "17e5194d-9dcc-4a15-a67d-3a4a57d60c79"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 20:28:29 crc kubenswrapper[4793]: I0217 20:28:29.905508 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17e5194d-9dcc-4a15-a67d-3a4a57d60c79-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 20:28:30 crc kubenswrapper[4793]: I0217 20:28:30.536898 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 20:28:30 crc kubenswrapper[4793]: I0217 20:28:30.572403 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 20:28:30 crc kubenswrapper[4793]: I0217 20:28:30.580492 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 20:28:30 crc kubenswrapper[4793]: I0217 20:28:30.601556 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 17 20:28:30 crc kubenswrapper[4793]: E0217 20:28:30.602026 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17e5194d-9dcc-4a15-a67d-3a4a57d60c79" containerName="sg-core"
Feb 17 20:28:30 crc kubenswrapper[4793]: I0217 20:28:30.602043 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="17e5194d-9dcc-4a15-a67d-3a4a57d60c79" containerName="sg-core"
Feb 17 20:28:30 crc kubenswrapper[4793]: E0217 20:28:30.602086 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17e5194d-9dcc-4a15-a67d-3a4a57d60c79" containerName="ceilometer-notification-agent"
Feb 17 20:28:30 crc kubenswrapper[4793]: I0217 20:28:30.602093 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="17e5194d-9dcc-4a15-a67d-3a4a57d60c79" containerName="ceilometer-notification-agent"
Feb 17 20:28:30 crc kubenswrapper[4793]: E0217 20:28:30.602103 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17e5194d-9dcc-4a15-a67d-3a4a57d60c79" containerName="ceilometer-central-agent"
Feb 17 20:28:30 crc kubenswrapper[4793]: I0217 20:28:30.602109 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="17e5194d-9dcc-4a15-a67d-3a4a57d60c79" containerName="ceilometer-central-agent"
Feb 17 20:28:30 crc kubenswrapper[4793]: E0217 20:28:30.602123 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17e5194d-9dcc-4a15-a67d-3a4a57d60c79" containerName="proxy-httpd"
Feb 17 20:28:30 crc kubenswrapper[4793]: I0217 20:28:30.602129 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="17e5194d-9dcc-4a15-a67d-3a4a57d60c79" containerName="proxy-httpd"
Feb 17 20:28:30 crc kubenswrapper[4793]: I0217 20:28:30.602292 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="17e5194d-9dcc-4a15-a67d-3a4a57d60c79" containerName="proxy-httpd"
Feb 17 20:28:30 crc kubenswrapper[4793]: I0217 20:28:30.602316 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="17e5194d-9dcc-4a15-a67d-3a4a57d60c79" containerName="ceilometer-notification-agent"
Feb 17 20:28:30 crc kubenswrapper[4793]: I0217 20:28:30.602324 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="17e5194d-9dcc-4a15-a67d-3a4a57d60c79" containerName="sg-core"
Feb 17 20:28:30 crc kubenswrapper[4793]: I0217 20:28:30.602335 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="17e5194d-9dcc-4a15-a67d-3a4a57d60c79" containerName="ceilometer-central-agent"
Feb 17 20:28:30 crc kubenswrapper[4793]: I0217 20:28:30.603952 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 20:28:30 crc kubenswrapper[4793]: I0217 20:28:30.608973 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 17 20:28:30 crc kubenswrapper[4793]: I0217 20:28:30.610720 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 17 20:28:30 crc kubenswrapper[4793]: I0217 20:28:30.612845 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 20:28:30 crc kubenswrapper[4793]: I0217 20:28:30.719317 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6b72cdd-33e1-4a5b-89a4-a05521d47d1c-log-httpd\") pod \"ceilometer-0\" (UID: \"e6b72cdd-33e1-4a5b-89a4-a05521d47d1c\") " pod="openstack/ceilometer-0"
Feb 17 20:28:30 crc kubenswrapper[4793]: I0217 20:28:30.719368 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxrsv\" (UniqueName: \"kubernetes.io/projected/e6b72cdd-33e1-4a5b-89a4-a05521d47d1c-kube-api-access-wxrsv\") pod \"ceilometer-0\" (UID: \"e6b72cdd-33e1-4a5b-89a4-a05521d47d1c\") " pod="openstack/ceilometer-0"
Feb 17 20:28:30 crc kubenswrapper[4793]: I0217 20:28:30.719583 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6b72cdd-33e1-4a5b-89a4-a05521d47d1c-run-httpd\") pod \"ceilometer-0\" (UID: \"e6b72cdd-33e1-4a5b-89a4-a05521d47d1c\") " pod="openstack/ceilometer-0"
Feb 17 20:28:30 crc kubenswrapper[4793]: I0217 20:28:30.719637 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6b72cdd-33e1-4a5b-89a4-a05521d47d1c-scripts\") pod \"ceilometer-0\" (UID: \"e6b72cdd-33e1-4a5b-89a4-a05521d47d1c\") " pod="openstack/ceilometer-0"
Feb 17 20:28:30 crc kubenswrapper[4793]: I0217 20:28:30.719757 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e6b72cdd-33e1-4a5b-89a4-a05521d47d1c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e6b72cdd-33e1-4a5b-89a4-a05521d47d1c\") " pod="openstack/ceilometer-0"
Feb 17 20:28:30 crc kubenswrapper[4793]: I0217 20:28:30.719817 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6b72cdd-33e1-4a5b-89a4-a05521d47d1c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e6b72cdd-33e1-4a5b-89a4-a05521d47d1c\") " pod="openstack/ceilometer-0"
Feb 17 20:28:30 crc kubenswrapper[4793]: I0217 20:28:30.719844 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6b72cdd-33e1-4a5b-89a4-a05521d47d1c-config-data\") pod \"ceilometer-0\" (UID: \"e6b72cdd-33e1-4a5b-89a4-a05521d47d1c\") " pod="openstack/ceilometer-0"
Feb 17 20:28:30 crc kubenswrapper[4793]: I0217 20:28:30.822051 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e6b72cdd-33e1-4a5b-89a4-a05521d47d1c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e6b72cdd-33e1-4a5b-89a4-a05521d47d1c\") " pod="openstack/ceilometer-0"
Feb 17 20:28:30 crc kubenswrapper[4793]: I0217 20:28:30.822467 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6b72cdd-33e1-4a5b-89a4-a05521d47d1c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e6b72cdd-33e1-4a5b-89a4-a05521d47d1c\") " pod="openstack/ceilometer-0"
Feb 17 20:28:30 crc kubenswrapper[4793]: I0217 20:28:30.822510 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6b72cdd-33e1-4a5b-89a4-a05521d47d1c-config-data\") pod \"ceilometer-0\" (UID: \"e6b72cdd-33e1-4a5b-89a4-a05521d47d1c\") " pod="openstack/ceilometer-0"
Feb 17 20:28:30 crc kubenswrapper[4793]: I0217 20:28:30.822559 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6b72cdd-33e1-4a5b-89a4-a05521d47d1c-log-httpd\") pod \"ceilometer-0\" (UID: \"e6b72cdd-33e1-4a5b-89a4-a05521d47d1c\") " pod="openstack/ceilometer-0"
Feb 17 20:28:30 crc kubenswrapper[4793]: I0217 20:28:30.822590 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxrsv\" (UniqueName: \"kubernetes.io/projected/e6b72cdd-33e1-4a5b-89a4-a05521d47d1c-kube-api-access-wxrsv\") pod \"ceilometer-0\" (UID: \"e6b72cdd-33e1-4a5b-89a4-a05521d47d1c\") " pod="openstack/ceilometer-0"
Feb 17 20:28:30 crc kubenswrapper[4793]: I0217 20:28:30.822630 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6b72cdd-33e1-4a5b-89a4-a05521d47d1c-run-httpd\") pod \"ceilometer-0\" (UID: \"e6b72cdd-33e1-4a5b-89a4-a05521d47d1c\") " pod="openstack/ceilometer-0"
Feb 17 20:28:30 crc kubenswrapper[4793]: I0217 20:28:30.822678 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6b72cdd-33e1-4a5b-89a4-a05521d47d1c-scripts\") pod \"ceilometer-0\" (UID: \"e6b72cdd-33e1-4a5b-89a4-a05521d47d1c\") " pod="openstack/ceilometer-0"
Feb 17 20:28:30 crc kubenswrapper[4793]: I0217 20:28:30.823301 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6b72cdd-33e1-4a5b-89a4-a05521d47d1c-log-httpd\") pod \"ceilometer-0\" (UID: \"e6b72cdd-33e1-4a5b-89a4-a05521d47d1c\") " pod="openstack/ceilometer-0"
Feb 17 20:28:30 crc kubenswrapper[4793]: I0217 20:28:30.823327 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6b72cdd-33e1-4a5b-89a4-a05521d47d1c-run-httpd\") pod \"ceilometer-0\" (UID: \"e6b72cdd-33e1-4a5b-89a4-a05521d47d1c\") " pod="openstack/ceilometer-0"
Feb 17 20:28:30 crc kubenswrapper[4793]: I0217 20:28:30.826911 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6b72cdd-33e1-4a5b-89a4-a05521d47d1c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e6b72cdd-33e1-4a5b-89a4-a05521d47d1c\") " pod="openstack/ceilometer-0"
Feb 17 20:28:30 crc kubenswrapper[4793]: I0217 20:28:30.827669 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e6b72cdd-33e1-4a5b-89a4-a05521d47d1c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e6b72cdd-33e1-4a5b-89a4-a05521d47d1c\") " pod="openstack/ceilometer-0"
Feb 17 20:28:30 crc kubenswrapper[4793]: I0217 20:28:30.827720 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6b72cdd-33e1-4a5b-89a4-a05521d47d1c-config-data\") pod \"ceilometer-0\" (UID: \"e6b72cdd-33e1-4a5b-89a4-a05521d47d1c\") " pod="openstack/ceilometer-0"
Feb 17 20:28:30 crc kubenswrapper[4793]: I0217 20:28:30.834229 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6b72cdd-33e1-4a5b-89a4-a05521d47d1c-scripts\") pod \"ceilometer-0\" (UID: \"e6b72cdd-33e1-4a5b-89a4-a05521d47d1c\") " pod="openstack/ceilometer-0"
Feb 17 20:28:30 crc kubenswrapper[4793]: I0217 20:28:30.845864 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxrsv\" (UniqueName: \"kubernetes.io/projected/e6b72cdd-33e1-4a5b-89a4-a05521d47d1c-kube-api-access-wxrsv\") pod \"ceilometer-0\" (UID: \"e6b72cdd-33e1-4a5b-89a4-a05521d47d1c\") " pod="openstack/ceilometer-0"
Feb 17 20:28:30 crc kubenswrapper[4793]: I0217 20:28:30.924174 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 20:28:31 crc kubenswrapper[4793]: I0217 20:28:31.430636 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 20:28:31 crc kubenswrapper[4793]: W0217 20:28:31.431433 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6b72cdd_33e1_4a5b_89a4_a05521d47d1c.slice/crio-7f7bba5b292570099eb8a130a3cdbd8bcc0118142e839d51bc821c89ad572141 WatchSource:0}: Error finding container 7f7bba5b292570099eb8a130a3cdbd8bcc0118142e839d51bc821c89ad572141: Status 404 returned error can't find the container with id 7f7bba5b292570099eb8a130a3cdbd8bcc0118142e839d51bc821c89ad572141
Feb 17 20:28:31 crc kubenswrapper[4793]: I0217 20:28:31.553143 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17e5194d-9dcc-4a15-a67d-3a4a57d60c79" path="/var/lib/kubelet/pods/17e5194d-9dcc-4a15-a67d-3a4a57d60c79/volumes"
Feb 17 20:28:31 crc kubenswrapper[4793]: I0217 20:28:31.553868 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6b72cdd-33e1-4a5b-89a4-a05521d47d1c","Type":"ContainerStarted","Data":"7f7bba5b292570099eb8a130a3cdbd8bcc0118142e839d51bc821c89ad572141"}
Feb 17 20:28:31 crc kubenswrapper[4793]: I0217 20:28:31.602871 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 17 20:28:31 crc kubenswrapper[4793]: I0217 20:28:31.603104 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="74d89443-6b8f-4757-951b-9b532755c158" containerName="glance-log" containerID="cri-o://e16209bb0f333729733a08fb8e69bb0af0da54c6b75bd74aec90c55f896f4bb7" gracePeriod=30
Feb 17 20:28:31 crc kubenswrapper[4793]: I0217 20:28:31.603487 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="74d89443-6b8f-4757-951b-9b532755c158" containerName="glance-httpd" containerID="cri-o://f8b6d9a85408906cc38d7c40fd39e029752f54465e1b976639f5aa4f8261f34e" gracePeriod=30
Feb 17 20:28:31 crc kubenswrapper[4793]: I0217 20:28:31.732213 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 17 20:28:31 crc kubenswrapper[4793]: I0217 20:28:31.901008 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0"
Feb 17 20:28:31 crc kubenswrapper[4793]: I0217 20:28:31.901059 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0"
Feb 17 20:28:31 crc kubenswrapper[4793]: I0217 20:28:31.933755 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0"
Feb 17 20:28:31 crc kubenswrapper[4793]: I0217 20:28:31.962360 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0"
Feb 17 20:28:31 crc kubenswrapper[4793]: I0217 20:28:31.962412 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-applier-0"
Feb 17 20:28:31 crc kubenswrapper[4793]: I0217 20:28:31.963328 4793 scope.go:117] "RemoveContainer" containerID="8642250fd6494650238f39fe809ae38a4813c6fb101ece6c7d92ea9166b43afe"
Feb 17 20:28:31 crc kubenswrapper[4793]: E0217 20:28:31.963570 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 40s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 20:28:32 crc kubenswrapper[4793]: I0217 20:28:32.513591 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 20:28:32 crc kubenswrapper[4793]: I0217 20:28:32.564217 4793 generic.go:334] "Generic (PLEG): container finished" podID="74d89443-6b8f-4757-951b-9b532755c158" containerID="f8b6d9a85408906cc38d7c40fd39e029752f54465e1b976639f5aa4f8261f34e" exitCode=0
Feb 17 20:28:32 crc kubenswrapper[4793]: I0217 20:28:32.564245 4793 generic.go:334] "Generic (PLEG): container finished" podID="74d89443-6b8f-4757-951b-9b532755c158" containerID="e16209bb0f333729733a08fb8e69bb0af0da54c6b75bd74aec90c55f896f4bb7" exitCode=143
Feb 17 20:28:32 crc kubenswrapper[4793]: I0217 20:28:32.564277 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"74d89443-6b8f-4757-951b-9b532755c158","Type":"ContainerDied","Data":"f8b6d9a85408906cc38d7c40fd39e029752f54465e1b976639f5aa4f8261f34e"}
Feb 17 20:28:32 crc kubenswrapper[4793]: I0217 20:28:32.564322 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"74d89443-6b8f-4757-951b-9b532755c158","Type":"ContainerDied","Data":"e16209bb0f333729733a08fb8e69bb0af0da54c6b75bd74aec90c55f896f4bb7"}
Feb 17 20:28:32 crc kubenswrapper[4793]: I0217 20:28:32.566083 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6b72cdd-33e1-4a5b-89a4-a05521d47d1c","Type":"ContainerStarted","Data":"31907f3db8c5d1318631caa256277e411b7e20dc764e6d335d69e9aeeb317705"}
Feb 17 20:28:32 crc kubenswrapper[4793]: I0217 20:28:32.566121 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6b72cdd-33e1-4a5b-89a4-a05521d47d1c","Type":"ContainerStarted","Data":"9abe649cec5c6fc5694f54caf896c169b6c6807e018b1d98b61a7685b8762844"}
Feb 17 20:28:32 crc kubenswrapper[4793]: I0217 20:28:32.603661 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0"
Feb 17 20:28:32 crc kubenswrapper[4793]: I0217 20:28:32.686065 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"]
Feb 17 20:28:34 crc kubenswrapper[4793]: I0217 20:28:34.588461 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="aedc67e8-05ec-44a4-b1f2-a18d2fde80b2" containerName="watcher-decision-engine" containerID="cri-o://ad0bb2533f61367434a0312bb6b3030d5fcb686e9704287777b19c5d0b7d7e60" gracePeriod=30
Feb 17 20:28:35 crc kubenswrapper[4793]: I0217 20:28:35.233419 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7d5f5ff8-cl56f"
Feb 17 20:28:35 crc kubenswrapper[4793]: I0217 20:28:35.373370 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7d5f5ff8-cl56f"
Feb 17 20:28:35 crc kubenswrapper[4793]: I0217 20:28:35.454899 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6754ff86f4-gttvc"]
Feb 17 20:28:35 crc kubenswrapper[4793]: I0217 20:28:35.455192 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-6754ff86f4-gttvc" podUID="66324f49-be26-4cca-a237-cf6a31ab771f" containerName="placement-log" containerID="cri-o://a159f40005f0fb7016e43bb8172d4d61432980ff231d5c4fdbf64d884b3e4d53" gracePeriod=30
Feb 17 20:28:35 crc kubenswrapper[4793]: I0217 20:28:35.455743 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-6754ff86f4-gttvc" podUID="66324f49-be26-4cca-a237-cf6a31ab771f" containerName="placement-api"
containerID="cri-o://82c7dd80439b9e2588c2e0e26682e953de661e158ea27498460087c68f291070" gracePeriod=30 Feb 17 20:28:35 crc kubenswrapper[4793]: I0217 20:28:35.599914 4793 generic.go:334] "Generic (PLEG): container finished" podID="66324f49-be26-4cca-a237-cf6a31ab771f" containerID="a159f40005f0fb7016e43bb8172d4d61432980ff231d5c4fdbf64d884b3e4d53" exitCode=143 Feb 17 20:28:35 crc kubenswrapper[4793]: I0217 20:28:35.599987 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6754ff86f4-gttvc" event={"ID":"66324f49-be26-4cca-a237-cf6a31ab771f","Type":"ContainerDied","Data":"a159f40005f0fb7016e43bb8172d4d61432980ff231d5c4fdbf64d884b3e4d53"} Feb 17 20:28:36 crc kubenswrapper[4793]: I0217 20:28:36.609939 4793 generic.go:334] "Generic (PLEG): container finished" podID="66324f49-be26-4cca-a237-cf6a31ab771f" containerID="82c7dd80439b9e2588c2e0e26682e953de661e158ea27498460087c68f291070" exitCode=0 Feb 17 20:28:36 crc kubenswrapper[4793]: I0217 20:28:36.609980 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6754ff86f4-gttvc" event={"ID":"66324f49-be26-4cca-a237-cf6a31ab771f","Type":"ContainerDied","Data":"82c7dd80439b9e2588c2e0e26682e953de661e158ea27498460087c68f291070"} Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.067925 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.115714 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6754ff86f4-gttvc" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.186795 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/74d89443-6b8f-4757-951b-9b532755c158-internal-tls-certs\") pod \"74d89443-6b8f-4757-951b-9b532755c158\" (UID: \"74d89443-6b8f-4757-951b-9b532755c158\") " Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.187007 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74d89443-6b8f-4757-951b-9b532755c158-combined-ca-bundle\") pod \"74d89443-6b8f-4757-951b-9b532755c158\" (UID: \"74d89443-6b8f-4757-951b-9b532755c158\") " Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.187026 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74d89443-6b8f-4757-951b-9b532755c158-scripts\") pod \"74d89443-6b8f-4757-951b-9b532755c158\" (UID: \"74d89443-6b8f-4757-951b-9b532755c158\") " Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.187123 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"74d89443-6b8f-4757-951b-9b532755c158\" (UID: \"74d89443-6b8f-4757-951b-9b532755c158\") " Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.187153 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsj2h\" (UniqueName: \"kubernetes.io/projected/74d89443-6b8f-4757-951b-9b532755c158-kube-api-access-qsj2h\") pod \"74d89443-6b8f-4757-951b-9b532755c158\" (UID: \"74d89443-6b8f-4757-951b-9b532755c158\") " Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.187176 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/74d89443-6b8f-4757-951b-9b532755c158-config-data\") pod \"74d89443-6b8f-4757-951b-9b532755c158\" (UID: \"74d89443-6b8f-4757-951b-9b532755c158\") " Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.187222 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74d89443-6b8f-4757-951b-9b532755c158-logs\") pod \"74d89443-6b8f-4757-951b-9b532755c158\" (UID: \"74d89443-6b8f-4757-951b-9b532755c158\") " Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.187236 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/74d89443-6b8f-4757-951b-9b532755c158-httpd-run\") pod \"74d89443-6b8f-4757-951b-9b532755c158\" (UID: \"74d89443-6b8f-4757-951b-9b532755c158\") " Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.189789 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74d89443-6b8f-4757-951b-9b532755c158-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "74d89443-6b8f-4757-951b-9b532755c158" (UID: "74d89443-6b8f-4757-951b-9b532755c158"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.189815 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74d89443-6b8f-4757-951b-9b532755c158-logs" (OuterVolumeSpecName: "logs") pod "74d89443-6b8f-4757-951b-9b532755c158" (UID: "74d89443-6b8f-4757-951b-9b532755c158"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.193823 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74d89443-6b8f-4757-951b-9b532755c158-kube-api-access-qsj2h" (OuterVolumeSpecName: "kube-api-access-qsj2h") pod "74d89443-6b8f-4757-951b-9b532755c158" (UID: "74d89443-6b8f-4757-951b-9b532755c158"). InnerVolumeSpecName "kube-api-access-qsj2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.195330 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "74d89443-6b8f-4757-951b-9b532755c158" (UID: "74d89443-6b8f-4757-951b-9b532755c158"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.208165 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74d89443-6b8f-4757-951b-9b532755c158-scripts" (OuterVolumeSpecName: "scripts") pod "74d89443-6b8f-4757-951b-9b532755c158" (UID: "74d89443-6b8f-4757-951b-9b532755c158"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.279321 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74d89443-6b8f-4757-951b-9b532755c158-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "74d89443-6b8f-4757-951b-9b532755c158" (UID: "74d89443-6b8f-4757-951b-9b532755c158"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.290274 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/66324f49-be26-4cca-a237-cf6a31ab771f-public-tls-certs\") pod \"66324f49-be26-4cca-a237-cf6a31ab771f\" (UID: \"66324f49-be26-4cca-a237-cf6a31ab771f\") " Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.290409 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66324f49-be26-4cca-a237-cf6a31ab771f-combined-ca-bundle\") pod \"66324f49-be26-4cca-a237-cf6a31ab771f\" (UID: \"66324f49-be26-4cca-a237-cf6a31ab771f\") " Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.290508 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66324f49-be26-4cca-a237-cf6a31ab771f-scripts\") pod \"66324f49-be26-4cca-a237-cf6a31ab771f\" (UID: \"66324f49-be26-4cca-a237-cf6a31ab771f\") " Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.290562 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66324f49-be26-4cca-a237-cf6a31ab771f-config-data\") pod \"66324f49-be26-4cca-a237-cf6a31ab771f\" (UID: \"66324f49-be26-4cca-a237-cf6a31ab771f\") " Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.290580 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znrw2\" (UniqueName: \"kubernetes.io/projected/66324f49-be26-4cca-a237-cf6a31ab771f-kube-api-access-znrw2\") pod \"66324f49-be26-4cca-a237-cf6a31ab771f\" (UID: \"66324f49-be26-4cca-a237-cf6a31ab771f\") " Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.290612 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/66324f49-be26-4cca-a237-cf6a31ab771f-internal-tls-certs\") pod \"66324f49-be26-4cca-a237-cf6a31ab771f\" (UID: \"66324f49-be26-4cca-a237-cf6a31ab771f\") " Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.290645 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66324f49-be26-4cca-a237-cf6a31ab771f-logs\") pod \"66324f49-be26-4cca-a237-cf6a31ab771f\" (UID: \"66324f49-be26-4cca-a237-cf6a31ab771f\") " Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.291062 4793 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.291080 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qsj2h\" (UniqueName: \"kubernetes.io/projected/74d89443-6b8f-4757-951b-9b532755c158-kube-api-access-qsj2h\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.291091 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74d89443-6b8f-4757-951b-9b532755c158-logs\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.291100 4793 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/74d89443-6b8f-4757-951b-9b532755c158-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.291108 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74d89443-6b8f-4757-951b-9b532755c158-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.291117 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/74d89443-6b8f-4757-951b-9b532755c158-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.292640 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66324f49-be26-4cca-a237-cf6a31ab771f-logs" (OuterVolumeSpecName: "logs") pod "66324f49-be26-4cca-a237-cf6a31ab771f" (UID: "66324f49-be26-4cca-a237-cf6a31ab771f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.295320 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66324f49-be26-4cca-a237-cf6a31ab771f-scripts" (OuterVolumeSpecName: "scripts") pod "66324f49-be26-4cca-a237-cf6a31ab771f" (UID: "66324f49-be26-4cca-a237-cf6a31ab771f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.297991 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66324f49-be26-4cca-a237-cf6a31ab771f-kube-api-access-znrw2" (OuterVolumeSpecName: "kube-api-access-znrw2") pod "66324f49-be26-4cca-a237-cf6a31ab771f" (UID: "66324f49-be26-4cca-a237-cf6a31ab771f"). InnerVolumeSpecName "kube-api-access-znrw2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.299769 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74d89443-6b8f-4757-951b-9b532755c158-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "74d89443-6b8f-4757-951b-9b532755c158" (UID: "74d89443-6b8f-4757-951b-9b532755c158"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.343127 4793 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.343865 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74d89443-6b8f-4757-951b-9b532755c158-config-data" (OuterVolumeSpecName: "config-data") pod "74d89443-6b8f-4757-951b-9b532755c158" (UID: "74d89443-6b8f-4757-951b-9b532755c158"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.384377 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66324f49-be26-4cca-a237-cf6a31ab771f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "66324f49-be26-4cca-a237-cf6a31ab771f" (UID: "66324f49-be26-4cca-a237-cf6a31ab771f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.388953 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66324f49-be26-4cca-a237-cf6a31ab771f-config-data" (OuterVolumeSpecName: "config-data") pod "66324f49-be26-4cca-a237-cf6a31ab771f" (UID: "66324f49-be26-4cca-a237-cf6a31ab771f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.393208 4793 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.393245 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74d89443-6b8f-4757-951b-9b532755c158-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.393288 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66324f49-be26-4cca-a237-cf6a31ab771f-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.393300 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66324f49-be26-4cca-a237-cf6a31ab771f-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.393312 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znrw2\" (UniqueName: \"kubernetes.io/projected/66324f49-be26-4cca-a237-cf6a31ab771f-kube-api-access-znrw2\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.393324 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66324f49-be26-4cca-a237-cf6a31ab771f-logs\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.393336 4793 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/74d89443-6b8f-4757-951b-9b532755c158-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.393376 4793 reconciler_common.go:293] "Volume detached for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66324f49-be26-4cca-a237-cf6a31ab771f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.469425 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66324f49-be26-4cca-a237-cf6a31ab771f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "66324f49-be26-4cca-a237-cf6a31ab771f" (UID: "66324f49-be26-4cca-a237-cf6a31ab771f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.473101 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66324f49-be26-4cca-a237-cf6a31ab771f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "66324f49-be26-4cca-a237-cf6a31ab771f" (UID: "66324f49-be26-4cca-a237-cf6a31ab771f"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.495414 4793 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/66324f49-be26-4cca-a237-cf6a31ab771f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.495456 4793 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/66324f49-be26-4cca-a237-cf6a31ab771f-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.545004 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.644300 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"74d89443-6b8f-4757-951b-9b532755c158","Type":"ContainerDied","Data":"0fcff6565647b966caef15546aae54e9dd26f2796efbd52296731ba238e6ce3a"} Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.644578 4793 scope.go:117] "RemoveContainer" containerID="f8b6d9a85408906cc38d7c40fd39e029752f54465e1b976639f5aa4f8261f34e" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.644924 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.659398 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6754ff86f4-gttvc" event={"ID":"66324f49-be26-4cca-a237-cf6a31ab771f","Type":"ContainerDied","Data":"72d15753e29ea196fb3c22db76265d3340787c95c300958c85bb9860a2a9a766"} Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.659430 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6754ff86f4-gttvc" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.674108 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6b72cdd-33e1-4a5b-89a4-a05521d47d1c","Type":"ContainerStarted","Data":"1783f95cd8e2f542d616f04a7849e553b7ef5000634bd76731b4e0e7753913d5"} Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.674924 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.688731 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.697155 4793 scope.go:117] "RemoveContainer" containerID="e16209bb0f333729733a08fb8e69bb0af0da54c6b75bd74aec90c55f896f4bb7" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.697351 4793 generic.go:334] "Generic (PLEG): container finished" podID="aedc67e8-05ec-44a4-b1f2-a18d2fde80b2" containerID="ad0bb2533f61367434a0312bb6b3030d5fcb686e9704287777b19c5d0b7d7e60" exitCode=0 Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.697575 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.697814 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"aedc67e8-05ec-44a4-b1f2-a18d2fde80b2","Type":"ContainerDied","Data":"ad0bb2533f61367434a0312bb6b3030d5fcb686e9704287777b19c5d0b7d7e60"} Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.697875 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"aedc67e8-05ec-44a4-b1f2-a18d2fde80b2","Type":"ContainerDied","Data":"08be7c22df6b5d5dbf619279440963b317cfbb1a74e78b93ea0859f96f5ddcf3"} Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.697907 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6754ff86f4-gttvc"] Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.702082 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aedc67e8-05ec-44a4-b1f2-a18d2fde80b2-combined-ca-bundle\") pod \"aedc67e8-05ec-44a4-b1f2-a18d2fde80b2\" (UID: \"aedc67e8-05ec-44a4-b1f2-a18d2fde80b2\") " Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.702151 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aedc67e8-05ec-44a4-b1f2-a18d2fde80b2-config-data\") pod \"aedc67e8-05ec-44a4-b1f2-a18d2fde80b2\" (UID: \"aedc67e8-05ec-44a4-b1f2-a18d2fde80b2\") " Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.702282 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aedc67e8-05ec-44a4-b1f2-a18d2fde80b2-logs\") pod \"aedc67e8-05ec-44a4-b1f2-a18d2fde80b2\" (UID: \"aedc67e8-05ec-44a4-b1f2-a18d2fde80b2\") " Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.702380 4793 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-27984\" (UniqueName: \"kubernetes.io/projected/aedc67e8-05ec-44a4-b1f2-a18d2fde80b2-kube-api-access-27984\") pod \"aedc67e8-05ec-44a4-b1f2-a18d2fde80b2\" (UID: \"aedc67e8-05ec-44a4-b1f2-a18d2fde80b2\") " Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.702482 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/aedc67e8-05ec-44a4-b1f2-a18d2fde80b2-custom-prometheus-ca\") pod \"aedc67e8-05ec-44a4-b1f2-a18d2fde80b2\" (UID: \"aedc67e8-05ec-44a4-b1f2-a18d2fde80b2\") " Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.702983 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aedc67e8-05ec-44a4-b1f2-a18d2fde80b2-logs" (OuterVolumeSpecName: "logs") pod "aedc67e8-05ec-44a4-b1f2-a18d2fde80b2" (UID: "aedc67e8-05ec-44a4-b1f2-a18d2fde80b2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.703207 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aedc67e8-05ec-44a4-b1f2-a18d2fde80b2-logs\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.705273 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-j65pr" event={"ID":"c7625c8d-9409-461d-98e0-2a9507baa803","Type":"ContainerStarted","Data":"3a94ea801a53a9cfb660a1aaa81a38ffb8f11fcb10ace5284d2f5d2d72eeb2ab"} Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.711746 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-6754ff86f4-gttvc"] Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.720293 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/aedc67e8-05ec-44a4-b1f2-a18d2fde80b2-kube-api-access-27984" (OuterVolumeSpecName: "kube-api-access-27984") pod "aedc67e8-05ec-44a4-b1f2-a18d2fde80b2" (UID: "aedc67e8-05ec-44a4-b1f2-a18d2fde80b2"). InnerVolumeSpecName "kube-api-access-27984". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.720620 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 20:28:39 crc kubenswrapper[4793]: E0217 20:28:39.721035 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66324f49-be26-4cca-a237-cf6a31ab771f" containerName="placement-log" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.721052 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="66324f49-be26-4cca-a237-cf6a31ab771f" containerName="placement-log" Feb 17 20:28:39 crc kubenswrapper[4793]: E0217 20:28:39.721070 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74d89443-6b8f-4757-951b-9b532755c158" containerName="glance-httpd" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.721075 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="74d89443-6b8f-4757-951b-9b532755c158" containerName="glance-httpd" Feb 17 20:28:39 crc kubenswrapper[4793]: E0217 20:28:39.721083 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aedc67e8-05ec-44a4-b1f2-a18d2fde80b2" containerName="watcher-decision-engine" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.721089 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="aedc67e8-05ec-44a4-b1f2-a18d2fde80b2" containerName="watcher-decision-engine" Feb 17 20:28:39 crc kubenswrapper[4793]: E0217 20:28:39.721097 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aedc67e8-05ec-44a4-b1f2-a18d2fde80b2" containerName="watcher-decision-engine" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.721104 4793 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="aedc67e8-05ec-44a4-b1f2-a18d2fde80b2" containerName="watcher-decision-engine" Feb 17 20:28:39 crc kubenswrapper[4793]: E0217 20:28:39.721116 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aedc67e8-05ec-44a4-b1f2-a18d2fde80b2" containerName="watcher-decision-engine" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.721122 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="aedc67e8-05ec-44a4-b1f2-a18d2fde80b2" containerName="watcher-decision-engine" Feb 17 20:28:39 crc kubenswrapper[4793]: E0217 20:28:39.721140 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74d89443-6b8f-4757-951b-9b532755c158" containerName="glance-log" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.721146 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="74d89443-6b8f-4757-951b-9b532755c158" containerName="glance-log" Feb 17 20:28:39 crc kubenswrapper[4793]: E0217 20:28:39.721156 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aedc67e8-05ec-44a4-b1f2-a18d2fde80b2" containerName="watcher-decision-engine" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.721162 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="aedc67e8-05ec-44a4-b1f2-a18d2fde80b2" containerName="watcher-decision-engine" Feb 17 20:28:39 crc kubenswrapper[4793]: E0217 20:28:39.721177 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66324f49-be26-4cca-a237-cf6a31ab771f" containerName="placement-api" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.721182 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="66324f49-be26-4cca-a237-cf6a31ab771f" containerName="placement-api" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.721351 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="74d89443-6b8f-4757-951b-9b532755c158" containerName="glance-httpd" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.721363 4793 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="74d89443-6b8f-4757-951b-9b532755c158" containerName="glance-log" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.721374 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="66324f49-be26-4cca-a237-cf6a31ab771f" containerName="placement-log" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.721383 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="aedc67e8-05ec-44a4-b1f2-a18d2fde80b2" containerName="watcher-decision-engine" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.721393 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="aedc67e8-05ec-44a4-b1f2-a18d2fde80b2" containerName="watcher-decision-engine" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.721402 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="66324f49-be26-4cca-a237-cf6a31ab771f" containerName="placement-api" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.721412 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="aedc67e8-05ec-44a4-b1f2-a18d2fde80b2" containerName="watcher-decision-engine" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.721420 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="aedc67e8-05ec-44a4-b1f2-a18d2fde80b2" containerName="watcher-decision-engine" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.726542 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.731343 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.739868 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.739958 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-j65pr" podStartSLOduration=3.4878767809999998 podStartE2EDuration="13.739938495s" podCreationTimestamp="2026-02-17 20:28:26 +0000 UTC" firstStartedPulling="2026-02-17 20:28:28.746294773 +0000 UTC m=+1184.037993114" lastFinishedPulling="2026-02-17 20:28:38.998356507 +0000 UTC m=+1194.290054828" observedRunningTime="2026-02-17 20:28:39.73287476 +0000 UTC m=+1195.024573081" watchObservedRunningTime="2026-02-17 20:28:39.739938495 +0000 UTC m=+1195.031636806" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.740090 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.751436 4793 scope.go:117] "RemoveContainer" containerID="82c7dd80439b9e2588c2e0e26682e953de661e158ea27498460087c68f291070" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.764819 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aedc67e8-05ec-44a4-b1f2-a18d2fde80b2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aedc67e8-05ec-44a4-b1f2-a18d2fde80b2" (UID: "aedc67e8-05ec-44a4-b1f2-a18d2fde80b2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.768088 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aedc67e8-05ec-44a4-b1f2-a18d2fde80b2-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "aedc67e8-05ec-44a4-b1f2-a18d2fde80b2" (UID: "aedc67e8-05ec-44a4-b1f2-a18d2fde80b2"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.795397 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aedc67e8-05ec-44a4-b1f2-a18d2fde80b2-config-data" (OuterVolumeSpecName: "config-data") pod "aedc67e8-05ec-44a4-b1f2-a18d2fde80b2" (UID: "aedc67e8-05ec-44a4-b1f2-a18d2fde80b2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.796710 4793 scope.go:117] "RemoveContainer" containerID="a159f40005f0fb7016e43bb8172d4d61432980ff231d5c4fdbf64d884b3e4d53" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.804996 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a5057fa-97b0-4d24-8002-e2a5b877ef4f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8a5057fa-97b0-4d24-8002-e2a5b877ef4f\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.805093 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a5057fa-97b0-4d24-8002-e2a5b877ef4f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8a5057fa-97b0-4d24-8002-e2a5b877ef4f\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.805141 4793 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"8a5057fa-97b0-4d24-8002-e2a5b877ef4f\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.805200 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a5057fa-97b0-4d24-8002-e2a5b877ef4f-logs\") pod \"glance-default-internal-api-0\" (UID: \"8a5057fa-97b0-4d24-8002-e2a5b877ef4f\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.805275 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8a5057fa-97b0-4d24-8002-e2a5b877ef4f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8a5057fa-97b0-4d24-8002-e2a5b877ef4f\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.805331 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a5057fa-97b0-4d24-8002-e2a5b877ef4f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8a5057fa-97b0-4d24-8002-e2a5b877ef4f\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.805516 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vfzc\" (UniqueName: \"kubernetes.io/projected/8a5057fa-97b0-4d24-8002-e2a5b877ef4f-kube-api-access-8vfzc\") pod \"glance-default-internal-api-0\" (UID: \"8a5057fa-97b0-4d24-8002-e2a5b877ef4f\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 
20:28:39.805596 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a5057fa-97b0-4d24-8002-e2a5b877ef4f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8a5057fa-97b0-4d24-8002-e2a5b877ef4f\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.805759 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aedc67e8-05ec-44a4-b1f2-a18d2fde80b2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.805780 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aedc67e8-05ec-44a4-b1f2-a18d2fde80b2-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.805793 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27984\" (UniqueName: \"kubernetes.io/projected/aedc67e8-05ec-44a4-b1f2-a18d2fde80b2-kube-api-access-27984\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.805808 4793 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/aedc67e8-05ec-44a4-b1f2-a18d2fde80b2-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.845492 4793 scope.go:117] "RemoveContainer" containerID="ad0bb2533f61367434a0312bb6b3030d5fcb686e9704287777b19c5d0b7d7e60" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.870900 4793 scope.go:117] "RemoveContainer" containerID="0916fc4505862a996052ca563d34794654dd2fa24df3e0ca3404479a8ca85700" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.907779 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8a5057fa-97b0-4d24-8002-e2a5b877ef4f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8a5057fa-97b0-4d24-8002-e2a5b877ef4f\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.907834 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"8a5057fa-97b0-4d24-8002-e2a5b877ef4f\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.907876 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a5057fa-97b0-4d24-8002-e2a5b877ef4f-logs\") pod \"glance-default-internal-api-0\" (UID: \"8a5057fa-97b0-4d24-8002-e2a5b877ef4f\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.907938 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8a5057fa-97b0-4d24-8002-e2a5b877ef4f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8a5057fa-97b0-4d24-8002-e2a5b877ef4f\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.907990 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a5057fa-97b0-4d24-8002-e2a5b877ef4f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8a5057fa-97b0-4d24-8002-e2a5b877ef4f\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.908014 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a5057fa-97b0-4d24-8002-e2a5b877ef4f-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"8a5057fa-97b0-4d24-8002-e2a5b877ef4f\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.908037 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vfzc\" (UniqueName: \"kubernetes.io/projected/8a5057fa-97b0-4d24-8002-e2a5b877ef4f-kube-api-access-8vfzc\") pod \"glance-default-internal-api-0\" (UID: \"8a5057fa-97b0-4d24-8002-e2a5b877ef4f\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.908108 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a5057fa-97b0-4d24-8002-e2a5b877ef4f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8a5057fa-97b0-4d24-8002-e2a5b877ef4f\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.908214 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"8a5057fa-97b0-4d24-8002-e2a5b877ef4f\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.911596 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a5057fa-97b0-4d24-8002-e2a5b877ef4f-logs\") pod \"glance-default-internal-api-0\" (UID: \"8a5057fa-97b0-4d24-8002-e2a5b877ef4f\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.912039 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a5057fa-97b0-4d24-8002-e2a5b877ef4f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: 
\"8a5057fa-97b0-4d24-8002-e2a5b877ef4f\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.912115 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8a5057fa-97b0-4d24-8002-e2a5b877ef4f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8a5057fa-97b0-4d24-8002-e2a5b877ef4f\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.916039 4793 scope.go:117] "RemoveContainer" containerID="ad0bb2533f61367434a0312bb6b3030d5fcb686e9704287777b19c5d0b7d7e60" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.916526 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a5057fa-97b0-4d24-8002-e2a5b877ef4f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8a5057fa-97b0-4d24-8002-e2a5b877ef4f\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.916607 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a5057fa-97b0-4d24-8002-e2a5b877ef4f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8a5057fa-97b0-4d24-8002-e2a5b877ef4f\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:28:39 crc kubenswrapper[4793]: E0217 20:28:39.917561 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad0bb2533f61367434a0312bb6b3030d5fcb686e9704287777b19c5d0b7d7e60\": container with ID starting with ad0bb2533f61367434a0312bb6b3030d5fcb686e9704287777b19c5d0b7d7e60 not found: ID does not exist" containerID="ad0bb2533f61367434a0312bb6b3030d5fcb686e9704287777b19c5d0b7d7e60" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.917600 4793 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ad0bb2533f61367434a0312bb6b3030d5fcb686e9704287777b19c5d0b7d7e60"} err="failed to get container status \"ad0bb2533f61367434a0312bb6b3030d5fcb686e9704287777b19c5d0b7d7e60\": rpc error: code = NotFound desc = could not find container \"ad0bb2533f61367434a0312bb6b3030d5fcb686e9704287777b19c5d0b7d7e60\": container with ID starting with ad0bb2533f61367434a0312bb6b3030d5fcb686e9704287777b19c5d0b7d7e60 not found: ID does not exist" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.917627 4793 scope.go:117] "RemoveContainer" containerID="0916fc4505862a996052ca563d34794654dd2fa24df3e0ca3404479a8ca85700" Feb 17 20:28:39 crc kubenswrapper[4793]: E0217 20:28:39.917937 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0916fc4505862a996052ca563d34794654dd2fa24df3e0ca3404479a8ca85700\": container with ID starting with 0916fc4505862a996052ca563d34794654dd2fa24df3e0ca3404479a8ca85700 not found: ID does not exist" containerID="0916fc4505862a996052ca563d34794654dd2fa24df3e0ca3404479a8ca85700" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.917961 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0916fc4505862a996052ca563d34794654dd2fa24df3e0ca3404479a8ca85700"} err="failed to get container status \"0916fc4505862a996052ca563d34794654dd2fa24df3e0ca3404479a8ca85700\": rpc error: code = NotFound desc = could not find container \"0916fc4505862a996052ca563d34794654dd2fa24df3e0ca3404479a8ca85700\": container with ID starting with 0916fc4505862a996052ca563d34794654dd2fa24df3e0ca3404479a8ca85700 not found: ID does not exist" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.919670 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a5057fa-97b0-4d24-8002-e2a5b877ef4f-config-data\") pod \"glance-default-internal-api-0\" (UID: 
\"8a5057fa-97b0-4d24-8002-e2a5b877ef4f\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.929968 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vfzc\" (UniqueName: \"kubernetes.io/projected/8a5057fa-97b0-4d24-8002-e2a5b877ef4f-kube-api-access-8vfzc\") pod \"glance-default-internal-api-0\" (UID: \"8a5057fa-97b0-4d24-8002-e2a5b877ef4f\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:28:39 crc kubenswrapper[4793]: I0217 20:28:39.941014 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"8a5057fa-97b0-4d24-8002-e2a5b877ef4f\") " pod="openstack/glance-default-internal-api-0" Feb 17 20:28:40 crc kubenswrapper[4793]: I0217 20:28:40.030815 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 17 20:28:40 crc kubenswrapper[4793]: I0217 20:28:40.044403 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 17 20:28:40 crc kubenswrapper[4793]: I0217 20:28:40.052967 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 20:28:40 crc kubenswrapper[4793]: I0217 20:28:40.054266 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 17 20:28:40 crc kubenswrapper[4793]: I0217 20:28:40.056135 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 17 20:28:40 crc kubenswrapper[4793]: I0217 20:28:40.065379 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 17 20:28:40 crc kubenswrapper[4793]: I0217 20:28:40.066873 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Feb 17 20:28:40 crc kubenswrapper[4793]: I0217 20:28:40.216212 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ebd99c2f-5001-4865-966e-ea11d0dfc392-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"ebd99c2f-5001-4865-966e-ea11d0dfc392\") " pod="openstack/watcher-decision-engine-0" Feb 17 20:28:40 crc kubenswrapper[4793]: I0217 20:28:40.216556 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngqlj\" (UniqueName: \"kubernetes.io/projected/ebd99c2f-5001-4865-966e-ea11d0dfc392-kube-api-access-ngqlj\") pod \"watcher-decision-engine-0\" (UID: \"ebd99c2f-5001-4865-966e-ea11d0dfc392\") " pod="openstack/watcher-decision-engine-0" Feb 17 20:28:40 crc kubenswrapper[4793]: I0217 20:28:40.216594 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebd99c2f-5001-4865-966e-ea11d0dfc392-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"ebd99c2f-5001-4865-966e-ea11d0dfc392\") " pod="openstack/watcher-decision-engine-0" Feb 17 20:28:40 crc kubenswrapper[4793]: I0217 20:28:40.216632 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebd99c2f-5001-4865-966e-ea11d0dfc392-config-data\") pod \"watcher-decision-engine-0\" (UID: \"ebd99c2f-5001-4865-966e-ea11d0dfc392\") " 
pod="openstack/watcher-decision-engine-0" Feb 17 20:28:40 crc kubenswrapper[4793]: I0217 20:28:40.218255 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebd99c2f-5001-4865-966e-ea11d0dfc392-logs\") pod \"watcher-decision-engine-0\" (UID: \"ebd99c2f-5001-4865-966e-ea11d0dfc392\") " pod="openstack/watcher-decision-engine-0" Feb 17 20:28:40 crc kubenswrapper[4793]: I0217 20:28:40.319781 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ebd99c2f-5001-4865-966e-ea11d0dfc392-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"ebd99c2f-5001-4865-966e-ea11d0dfc392\") " pod="openstack/watcher-decision-engine-0" Feb 17 20:28:40 crc kubenswrapper[4793]: I0217 20:28:40.319853 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngqlj\" (UniqueName: \"kubernetes.io/projected/ebd99c2f-5001-4865-966e-ea11d0dfc392-kube-api-access-ngqlj\") pod \"watcher-decision-engine-0\" (UID: \"ebd99c2f-5001-4865-966e-ea11d0dfc392\") " pod="openstack/watcher-decision-engine-0" Feb 17 20:28:40 crc kubenswrapper[4793]: I0217 20:28:40.319895 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebd99c2f-5001-4865-966e-ea11d0dfc392-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"ebd99c2f-5001-4865-966e-ea11d0dfc392\") " pod="openstack/watcher-decision-engine-0" Feb 17 20:28:40 crc kubenswrapper[4793]: I0217 20:28:40.319935 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebd99c2f-5001-4865-966e-ea11d0dfc392-config-data\") pod \"watcher-decision-engine-0\" (UID: \"ebd99c2f-5001-4865-966e-ea11d0dfc392\") " pod="openstack/watcher-decision-engine-0" Feb 17 20:28:40 crc 
kubenswrapper[4793]: I0217 20:28:40.319986 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebd99c2f-5001-4865-966e-ea11d0dfc392-logs\") pod \"watcher-decision-engine-0\" (UID: \"ebd99c2f-5001-4865-966e-ea11d0dfc392\") " pod="openstack/watcher-decision-engine-0" Feb 17 20:28:40 crc kubenswrapper[4793]: I0217 20:28:40.320479 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebd99c2f-5001-4865-966e-ea11d0dfc392-logs\") pod \"watcher-decision-engine-0\" (UID: \"ebd99c2f-5001-4865-966e-ea11d0dfc392\") " pod="openstack/watcher-decision-engine-0" Feb 17 20:28:40 crc kubenswrapper[4793]: I0217 20:28:40.325866 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebd99c2f-5001-4865-966e-ea11d0dfc392-config-data\") pod \"watcher-decision-engine-0\" (UID: \"ebd99c2f-5001-4865-966e-ea11d0dfc392\") " pod="openstack/watcher-decision-engine-0" Feb 17 20:28:40 crc kubenswrapper[4793]: I0217 20:28:40.326288 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ebd99c2f-5001-4865-966e-ea11d0dfc392-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"ebd99c2f-5001-4865-966e-ea11d0dfc392\") " pod="openstack/watcher-decision-engine-0" Feb 17 20:28:40 crc kubenswrapper[4793]: I0217 20:28:40.330375 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebd99c2f-5001-4865-966e-ea11d0dfc392-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"ebd99c2f-5001-4865-966e-ea11d0dfc392\") " pod="openstack/watcher-decision-engine-0" Feb 17 20:28:40 crc kubenswrapper[4793]: I0217 20:28:40.343294 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngqlj\" 
(UniqueName: \"kubernetes.io/projected/ebd99c2f-5001-4865-966e-ea11d0dfc392-kube-api-access-ngqlj\") pod \"watcher-decision-engine-0\" (UID: \"ebd99c2f-5001-4865-966e-ea11d0dfc392\") " pod="openstack/watcher-decision-engine-0" Feb 17 20:28:40 crc kubenswrapper[4793]: I0217 20:28:40.387566 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 17 20:28:40 crc kubenswrapper[4793]: I0217 20:28:40.576812 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 20:28:40 crc kubenswrapper[4793]: I0217 20:28:40.765446 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6b72cdd-33e1-4a5b-89a4-a05521d47d1c","Type":"ContainerStarted","Data":"be8de4b974767f3407a5d1a1393ee6cb6a808f37f7d9617a695983822640868a"} Feb 17 20:28:40 crc kubenswrapper[4793]: I0217 20:28:40.765594 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e6b72cdd-33e1-4a5b-89a4-a05521d47d1c" containerName="ceilometer-central-agent" containerID="cri-o://9abe649cec5c6fc5694f54caf896c169b6c6807e018b1d98b61a7685b8762844" gracePeriod=30 Feb 17 20:28:40 crc kubenswrapper[4793]: I0217 20:28:40.765837 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e6b72cdd-33e1-4a5b-89a4-a05521d47d1c" containerName="proxy-httpd" containerID="cri-o://be8de4b974767f3407a5d1a1393ee6cb6a808f37f7d9617a695983822640868a" gracePeriod=30 Feb 17 20:28:40 crc kubenswrapper[4793]: I0217 20:28:40.765862 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 20:28:40 crc kubenswrapper[4793]: I0217 20:28:40.765957 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e6b72cdd-33e1-4a5b-89a4-a05521d47d1c" containerName="sg-core" 
containerID="cri-o://1783f95cd8e2f542d616f04a7849e553b7ef5000634bd76731b4e0e7753913d5" gracePeriod=30 Feb 17 20:28:40 crc kubenswrapper[4793]: I0217 20:28:40.765999 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e6b72cdd-33e1-4a5b-89a4-a05521d47d1c" containerName="ceilometer-notification-agent" containerID="cri-o://31907f3db8c5d1318631caa256277e411b7e20dc764e6d335d69e9aeeb317705" gracePeriod=30 Feb 17 20:28:40 crc kubenswrapper[4793]: I0217 20:28:40.775483 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8a5057fa-97b0-4d24-8002-e2a5b877ef4f","Type":"ContainerStarted","Data":"c45db522f22007132058476200b0ae984e37865da64d96f33ff436307e520328"} Feb 17 20:28:40 crc kubenswrapper[4793]: I0217 20:28:40.785085 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.833945508 podStartE2EDuration="10.785065296s" podCreationTimestamp="2026-02-17 20:28:30 +0000 UTC" firstStartedPulling="2026-02-17 20:28:31.433333313 +0000 UTC m=+1186.725031624" lastFinishedPulling="2026-02-17 20:28:40.384453101 +0000 UTC m=+1195.676151412" observedRunningTime="2026-02-17 20:28:40.78442813 +0000 UTC m=+1196.076126441" watchObservedRunningTime="2026-02-17 20:28:40.785065296 +0000 UTC m=+1196.076763607" Feb 17 20:28:40 crc kubenswrapper[4793]: I0217 20:28:40.877420 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 17 20:28:41 crc kubenswrapper[4793]: I0217 20:28:41.353854 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 20:28:41 crc kubenswrapper[4793]: I0217 20:28:41.362147 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="6b181bfd-3989-4ba9-80b9-9b574ef0de20" containerName="glance-log" 
containerID="cri-o://ddf5a568baf3d455724f31b21d678360d213ce22a3605234ffa2c1973876fca8" gracePeriod=30 Feb 17 20:28:41 crc kubenswrapper[4793]: I0217 20:28:41.362723 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="6b181bfd-3989-4ba9-80b9-9b574ef0de20" containerName="glance-httpd" containerID="cri-o://b19191b5afa7eb3509dcd325da32aa21a18dfff9f48761377df1a265b9465a01" gracePeriod=30 Feb 17 20:28:41 crc kubenswrapper[4793]: I0217 20:28:41.606199 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66324f49-be26-4cca-a237-cf6a31ab771f" path="/var/lib/kubelet/pods/66324f49-be26-4cca-a237-cf6a31ab771f/volumes" Feb 17 20:28:41 crc kubenswrapper[4793]: I0217 20:28:41.607282 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74d89443-6b8f-4757-951b-9b532755c158" path="/var/lib/kubelet/pods/74d89443-6b8f-4757-951b-9b532755c158/volumes" Feb 17 20:28:41 crc kubenswrapper[4793]: I0217 20:28:41.608190 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aedc67e8-05ec-44a4-b1f2-a18d2fde80b2" path="/var/lib/kubelet/pods/aedc67e8-05ec-44a4-b1f2-a18d2fde80b2/volumes" Feb 17 20:28:41 crc kubenswrapper[4793]: I0217 20:28:41.800900 4793 generic.go:334] "Generic (PLEG): container finished" podID="6b181bfd-3989-4ba9-80b9-9b574ef0de20" containerID="ddf5a568baf3d455724f31b21d678360d213ce22a3605234ffa2c1973876fca8" exitCode=143 Feb 17 20:28:41 crc kubenswrapper[4793]: I0217 20:28:41.800965 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6b181bfd-3989-4ba9-80b9-9b574ef0de20","Type":"ContainerDied","Data":"ddf5a568baf3d455724f31b21d678360d213ce22a3605234ffa2c1973876fca8"} Feb 17 20:28:41 crc kubenswrapper[4793]: I0217 20:28:41.809986 4793 generic.go:334] "Generic (PLEG): container finished" podID="e6b72cdd-33e1-4a5b-89a4-a05521d47d1c" 
containerID="be8de4b974767f3407a5d1a1393ee6cb6a808f37f7d9617a695983822640868a" exitCode=0 Feb 17 20:28:41 crc kubenswrapper[4793]: I0217 20:28:41.810019 4793 generic.go:334] "Generic (PLEG): container finished" podID="e6b72cdd-33e1-4a5b-89a4-a05521d47d1c" containerID="1783f95cd8e2f542d616f04a7849e553b7ef5000634bd76731b4e0e7753913d5" exitCode=2 Feb 17 20:28:41 crc kubenswrapper[4793]: I0217 20:28:41.810067 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6b72cdd-33e1-4a5b-89a4-a05521d47d1c","Type":"ContainerDied","Data":"be8de4b974767f3407a5d1a1393ee6cb6a808f37f7d9617a695983822640868a"} Feb 17 20:28:41 crc kubenswrapper[4793]: I0217 20:28:41.810092 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6b72cdd-33e1-4a5b-89a4-a05521d47d1c","Type":"ContainerDied","Data":"1783f95cd8e2f542d616f04a7849e553b7ef5000634bd76731b4e0e7753913d5"} Feb 17 20:28:41 crc kubenswrapper[4793]: I0217 20:28:41.812039 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8a5057fa-97b0-4d24-8002-e2a5b877ef4f","Type":"ContainerStarted","Data":"77d67ff90beecc8c6e2024accc0645b335d3879a0042fe11b0cd7c6a49725957"} Feb 17 20:28:41 crc kubenswrapper[4793]: I0217 20:28:41.814882 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ebd99c2f-5001-4865-966e-ea11d0dfc392","Type":"ContainerStarted","Data":"0d9aeddd0b203d882441b32cbfc820066f9d5c12c0c87fc61ffc0076dbdfa59f"} Feb 17 20:28:41 crc kubenswrapper[4793]: I0217 20:28:41.814930 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ebd99c2f-5001-4865-966e-ea11d0dfc392","Type":"ContainerStarted","Data":"dbe3eb3510030d7bc37b3a16e6e7a43a3603b7cebda72c2280ce16f900b399e4"} Feb 17 20:28:41 crc kubenswrapper[4793]: I0217 20:28:41.845564 4793 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=1.845541938 podStartE2EDuration="1.845541938s" podCreationTimestamp="2026-02-17 20:28:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:28:41.842080212 +0000 UTC m=+1197.133778523" watchObservedRunningTime="2026-02-17 20:28:41.845541938 +0000 UTC m=+1197.137240249" Feb 17 20:28:42 crc kubenswrapper[4793]: I0217 20:28:42.825601 4793 generic.go:334] "Generic (PLEG): container finished" podID="e6b72cdd-33e1-4a5b-89a4-a05521d47d1c" containerID="31907f3db8c5d1318631caa256277e411b7e20dc764e6d335d69e9aeeb317705" exitCode=0 Feb 17 20:28:42 crc kubenswrapper[4793]: I0217 20:28:42.826860 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6b72cdd-33e1-4a5b-89a4-a05521d47d1c","Type":"ContainerDied","Data":"31907f3db8c5d1318631caa256277e411b7e20dc764e6d335d69e9aeeb317705"} Feb 17 20:28:42 crc kubenswrapper[4793]: I0217 20:28:42.831911 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8a5057fa-97b0-4d24-8002-e2a5b877ef4f","Type":"ContainerStarted","Data":"06e58e901cf723b7572b2a874e8901a256c4d15cc442a2d4baf972d82a6e3a62"} Feb 17 20:28:42 crc kubenswrapper[4793]: I0217 20:28:42.859043 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.859023441 podStartE2EDuration="3.859023441s" podCreationTimestamp="2026-02-17 20:28:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:28:42.847574117 +0000 UTC m=+1198.139272428" watchObservedRunningTime="2026-02-17 20:28:42.859023441 +0000 UTC m=+1198.150721752" Feb 17 20:28:43 crc kubenswrapper[4793]: I0217 20:28:43.535791 4793 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 20:28:43 crc kubenswrapper[4793]: I0217 20:28:43.618308 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b181bfd-3989-4ba9-80b9-9b574ef0de20-config-data\") pod \"6b181bfd-3989-4ba9-80b9-9b574ef0de20\" (UID: \"6b181bfd-3989-4ba9-80b9-9b574ef0de20\") " Feb 17 20:28:43 crc kubenswrapper[4793]: I0217 20:28:43.618382 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b181bfd-3989-4ba9-80b9-9b574ef0de20-public-tls-certs\") pod \"6b181bfd-3989-4ba9-80b9-9b574ef0de20\" (UID: \"6b181bfd-3989-4ba9-80b9-9b574ef0de20\") " Feb 17 20:28:43 crc kubenswrapper[4793]: I0217 20:28:43.618435 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b181bfd-3989-4ba9-80b9-9b574ef0de20-logs\") pod \"6b181bfd-3989-4ba9-80b9-9b574ef0de20\" (UID: \"6b181bfd-3989-4ba9-80b9-9b574ef0de20\") " Feb 17 20:28:43 crc kubenswrapper[4793]: I0217 20:28:43.618471 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b181bfd-3989-4ba9-80b9-9b574ef0de20-scripts\") pod \"6b181bfd-3989-4ba9-80b9-9b574ef0de20\" (UID: \"6b181bfd-3989-4ba9-80b9-9b574ef0de20\") " Feb 17 20:28:43 crc kubenswrapper[4793]: I0217 20:28:43.618523 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b181bfd-3989-4ba9-80b9-9b574ef0de20-combined-ca-bundle\") pod \"6b181bfd-3989-4ba9-80b9-9b574ef0de20\" (UID: \"6b181bfd-3989-4ba9-80b9-9b574ef0de20\") " Feb 17 20:28:43 crc kubenswrapper[4793]: I0217 20:28:43.618575 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage06-crc\") pod \"6b181bfd-3989-4ba9-80b9-9b574ef0de20\" (UID: \"6b181bfd-3989-4ba9-80b9-9b574ef0de20\") " Feb 17 20:28:43 crc kubenswrapper[4793]: I0217 20:28:43.618627 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6b181bfd-3989-4ba9-80b9-9b574ef0de20-httpd-run\") pod \"6b181bfd-3989-4ba9-80b9-9b574ef0de20\" (UID: \"6b181bfd-3989-4ba9-80b9-9b574ef0de20\") " Feb 17 20:28:43 crc kubenswrapper[4793]: I0217 20:28:43.618679 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvbw5\" (UniqueName: \"kubernetes.io/projected/6b181bfd-3989-4ba9-80b9-9b574ef0de20-kube-api-access-jvbw5\") pod \"6b181bfd-3989-4ba9-80b9-9b574ef0de20\" (UID: \"6b181bfd-3989-4ba9-80b9-9b574ef0de20\") " Feb 17 20:28:43 crc kubenswrapper[4793]: I0217 20:28:43.619651 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b181bfd-3989-4ba9-80b9-9b574ef0de20-logs" (OuterVolumeSpecName: "logs") pod "6b181bfd-3989-4ba9-80b9-9b574ef0de20" (UID: "6b181bfd-3989-4ba9-80b9-9b574ef0de20"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:28:43 crc kubenswrapper[4793]: I0217 20:28:43.620572 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b181bfd-3989-4ba9-80b9-9b574ef0de20-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6b181bfd-3989-4ba9-80b9-9b574ef0de20" (UID: "6b181bfd-3989-4ba9-80b9-9b574ef0de20"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:28:43 crc kubenswrapper[4793]: I0217 20:28:43.624458 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b181bfd-3989-4ba9-80b9-9b574ef0de20-scripts" (OuterVolumeSpecName: "scripts") pod "6b181bfd-3989-4ba9-80b9-9b574ef0de20" (UID: "6b181bfd-3989-4ba9-80b9-9b574ef0de20"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:28:43 crc kubenswrapper[4793]: I0217 20:28:43.624584 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b181bfd-3989-4ba9-80b9-9b574ef0de20-kube-api-access-jvbw5" (OuterVolumeSpecName: "kube-api-access-jvbw5") pod "6b181bfd-3989-4ba9-80b9-9b574ef0de20" (UID: "6b181bfd-3989-4ba9-80b9-9b574ef0de20"). InnerVolumeSpecName "kube-api-access-jvbw5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:28:43 crc kubenswrapper[4793]: I0217 20:28:43.643939 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "6b181bfd-3989-4ba9-80b9-9b574ef0de20" (UID: "6b181bfd-3989-4ba9-80b9-9b574ef0de20"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 17 20:28:43 crc kubenswrapper[4793]: I0217 20:28:43.694034 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b181bfd-3989-4ba9-80b9-9b574ef0de20-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6b181bfd-3989-4ba9-80b9-9b574ef0de20" (UID: "6b181bfd-3989-4ba9-80b9-9b574ef0de20"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:28:43 crc kubenswrapper[4793]: I0217 20:28:43.718711 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b181bfd-3989-4ba9-80b9-9b574ef0de20-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6b181bfd-3989-4ba9-80b9-9b574ef0de20" (UID: "6b181bfd-3989-4ba9-80b9-9b574ef0de20"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:28:43 crc kubenswrapper[4793]: I0217 20:28:43.721413 4793 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b181bfd-3989-4ba9-80b9-9b574ef0de20-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:43 crc kubenswrapper[4793]: I0217 20:28:43.721440 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b181bfd-3989-4ba9-80b9-9b574ef0de20-logs\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:43 crc kubenswrapper[4793]: I0217 20:28:43.721450 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b181bfd-3989-4ba9-80b9-9b574ef0de20-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:43 crc kubenswrapper[4793]: I0217 20:28:43.721458 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b181bfd-3989-4ba9-80b9-9b574ef0de20-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:43 crc kubenswrapper[4793]: I0217 20:28:43.721488 4793 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Feb 17 20:28:43 crc kubenswrapper[4793]: I0217 20:28:43.721498 4793 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/6b181bfd-3989-4ba9-80b9-9b574ef0de20-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:43 crc kubenswrapper[4793]: I0217 20:28:43.721506 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvbw5\" (UniqueName: \"kubernetes.io/projected/6b181bfd-3989-4ba9-80b9-9b574ef0de20-kube-api-access-jvbw5\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:43 crc kubenswrapper[4793]: I0217 20:28:43.744109 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b181bfd-3989-4ba9-80b9-9b574ef0de20-config-data" (OuterVolumeSpecName: "config-data") pod "6b181bfd-3989-4ba9-80b9-9b574ef0de20" (UID: "6b181bfd-3989-4ba9-80b9-9b574ef0de20"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:28:43 crc kubenswrapper[4793]: I0217 20:28:43.744371 4793 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Feb 17 20:28:43 crc kubenswrapper[4793]: I0217 20:28:43.823403 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b181bfd-3989-4ba9-80b9-9b574ef0de20-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:43 crc kubenswrapper[4793]: I0217 20:28:43.823439 4793 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:43 crc kubenswrapper[4793]: I0217 20:28:43.844758 4793 generic.go:334] "Generic (PLEG): container finished" podID="6b181bfd-3989-4ba9-80b9-9b574ef0de20" containerID="b19191b5afa7eb3509dcd325da32aa21a18dfff9f48761377df1a265b9465a01" exitCode=0 Feb 17 20:28:43 crc kubenswrapper[4793]: I0217 20:28:43.844794 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"6b181bfd-3989-4ba9-80b9-9b574ef0de20","Type":"ContainerDied","Data":"b19191b5afa7eb3509dcd325da32aa21a18dfff9f48761377df1a265b9465a01"} Feb 17 20:28:43 crc kubenswrapper[4793]: I0217 20:28:43.844834 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6b181bfd-3989-4ba9-80b9-9b574ef0de20","Type":"ContainerDied","Data":"f3211b4d13154e47a34665f25c5599cc379e9c8fbb952b3d65b072de67a4d1bf"} Feb 17 20:28:43 crc kubenswrapper[4793]: I0217 20:28:43.844856 4793 scope.go:117] "RemoveContainer" containerID="b19191b5afa7eb3509dcd325da32aa21a18dfff9f48761377df1a265b9465a01" Feb 17 20:28:43 crc kubenswrapper[4793]: I0217 20:28:43.844776 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 20:28:43 crc kubenswrapper[4793]: I0217 20:28:43.876177 4793 scope.go:117] "RemoveContainer" containerID="ddf5a568baf3d455724f31b21d678360d213ce22a3605234ffa2c1973876fca8" Feb 17 20:28:43 crc kubenswrapper[4793]: I0217 20:28:43.895227 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 20:28:43 crc kubenswrapper[4793]: I0217 20:28:43.905640 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 20:28:43 crc kubenswrapper[4793]: I0217 20:28:43.907888 4793 scope.go:117] "RemoveContainer" containerID="b19191b5afa7eb3509dcd325da32aa21a18dfff9f48761377df1a265b9465a01" Feb 17 20:28:43 crc kubenswrapper[4793]: E0217 20:28:43.910449 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b19191b5afa7eb3509dcd325da32aa21a18dfff9f48761377df1a265b9465a01\": container with ID starting with b19191b5afa7eb3509dcd325da32aa21a18dfff9f48761377df1a265b9465a01 not found: ID does not exist" containerID="b19191b5afa7eb3509dcd325da32aa21a18dfff9f48761377df1a265b9465a01" Feb 17 
20:28:43 crc kubenswrapper[4793]: I0217 20:28:43.910483 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b19191b5afa7eb3509dcd325da32aa21a18dfff9f48761377df1a265b9465a01"} err="failed to get container status \"b19191b5afa7eb3509dcd325da32aa21a18dfff9f48761377df1a265b9465a01\": rpc error: code = NotFound desc = could not find container \"b19191b5afa7eb3509dcd325da32aa21a18dfff9f48761377df1a265b9465a01\": container with ID starting with b19191b5afa7eb3509dcd325da32aa21a18dfff9f48761377df1a265b9465a01 not found: ID does not exist" Feb 17 20:28:43 crc kubenswrapper[4793]: I0217 20:28:43.910507 4793 scope.go:117] "RemoveContainer" containerID="ddf5a568baf3d455724f31b21d678360d213ce22a3605234ffa2c1973876fca8" Feb 17 20:28:43 crc kubenswrapper[4793]: E0217 20:28:43.911002 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddf5a568baf3d455724f31b21d678360d213ce22a3605234ffa2c1973876fca8\": container with ID starting with ddf5a568baf3d455724f31b21d678360d213ce22a3605234ffa2c1973876fca8 not found: ID does not exist" containerID="ddf5a568baf3d455724f31b21d678360d213ce22a3605234ffa2c1973876fca8" Feb 17 20:28:43 crc kubenswrapper[4793]: I0217 20:28:43.911023 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddf5a568baf3d455724f31b21d678360d213ce22a3605234ffa2c1973876fca8"} err="failed to get container status \"ddf5a568baf3d455724f31b21d678360d213ce22a3605234ffa2c1973876fca8\": rpc error: code = NotFound desc = could not find container \"ddf5a568baf3d455724f31b21d678360d213ce22a3605234ffa2c1973876fca8\": container with ID starting with ddf5a568baf3d455724f31b21d678360d213ce22a3605234ffa2c1973876fca8 not found: ID does not exist" Feb 17 20:28:43 crc kubenswrapper[4793]: I0217 20:28:43.934156 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 
20:28:43 crc kubenswrapper[4793]: E0217 20:28:43.934569 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b181bfd-3989-4ba9-80b9-9b574ef0de20" containerName="glance-log" Feb 17 20:28:43 crc kubenswrapper[4793]: I0217 20:28:43.934588 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b181bfd-3989-4ba9-80b9-9b574ef0de20" containerName="glance-log" Feb 17 20:28:43 crc kubenswrapper[4793]: E0217 20:28:43.934619 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b181bfd-3989-4ba9-80b9-9b574ef0de20" containerName="glance-httpd" Feb 17 20:28:43 crc kubenswrapper[4793]: I0217 20:28:43.934626 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b181bfd-3989-4ba9-80b9-9b574ef0de20" containerName="glance-httpd" Feb 17 20:28:43 crc kubenswrapper[4793]: I0217 20:28:43.934828 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b181bfd-3989-4ba9-80b9-9b574ef0de20" containerName="glance-log" Feb 17 20:28:43 crc kubenswrapper[4793]: I0217 20:28:43.934854 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b181bfd-3989-4ba9-80b9-9b574ef0de20" containerName="glance-httpd" Feb 17 20:28:43 crc kubenswrapper[4793]: I0217 20:28:43.935836 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 20:28:43 crc kubenswrapper[4793]: I0217 20:28:43.939413 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 17 20:28:43 crc kubenswrapper[4793]: I0217 20:28:43.939512 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 17 20:28:43 crc kubenswrapper[4793]: I0217 20:28:43.972855 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.027108 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/10956ec3-bc51-4d38-82ad-71a60bcf30db-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"10956ec3-bc51-4d38-82ad-71a60bcf30db\") " pod="openstack/glance-default-external-api-0" Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.027175 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"10956ec3-bc51-4d38-82ad-71a60bcf30db\") " pod="openstack/glance-default-external-api-0" Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.027495 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsqwk\" (UniqueName: \"kubernetes.io/projected/10956ec3-bc51-4d38-82ad-71a60bcf30db-kube-api-access-nsqwk\") pod \"glance-default-external-api-0\" (UID: \"10956ec3-bc51-4d38-82ad-71a60bcf30db\") " pod="openstack/glance-default-external-api-0" Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.027549 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/10956ec3-bc51-4d38-82ad-71a60bcf30db-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"10956ec3-bc51-4d38-82ad-71a60bcf30db\") " pod="openstack/glance-default-external-api-0" Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.027630 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/10956ec3-bc51-4d38-82ad-71a60bcf30db-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"10956ec3-bc51-4d38-82ad-71a60bcf30db\") " pod="openstack/glance-default-external-api-0" Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.027706 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10956ec3-bc51-4d38-82ad-71a60bcf30db-scripts\") pod \"glance-default-external-api-0\" (UID: \"10956ec3-bc51-4d38-82ad-71a60bcf30db\") " pod="openstack/glance-default-external-api-0" Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.027736 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10956ec3-bc51-4d38-82ad-71a60bcf30db-logs\") pod \"glance-default-external-api-0\" (UID: \"10956ec3-bc51-4d38-82ad-71a60bcf30db\") " pod="openstack/glance-default-external-api-0" Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.027796 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10956ec3-bc51-4d38-82ad-71a60bcf30db-config-data\") pod \"glance-default-external-api-0\" (UID: \"10956ec3-bc51-4d38-82ad-71a60bcf30db\") " pod="openstack/glance-default-external-api-0" Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.129799 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/10956ec3-bc51-4d38-82ad-71a60bcf30db-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"10956ec3-bc51-4d38-82ad-71a60bcf30db\") " pod="openstack/glance-default-external-api-0" Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.129867 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"10956ec3-bc51-4d38-82ad-71a60bcf30db\") " pod="openstack/glance-default-external-api-0" Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.129969 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsqwk\" (UniqueName: \"kubernetes.io/projected/10956ec3-bc51-4d38-82ad-71a60bcf30db-kube-api-access-nsqwk\") pod \"glance-default-external-api-0\" (UID: \"10956ec3-bc51-4d38-82ad-71a60bcf30db\") " pod="openstack/glance-default-external-api-0" Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.129989 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10956ec3-bc51-4d38-82ad-71a60bcf30db-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"10956ec3-bc51-4d38-82ad-71a60bcf30db\") " pod="openstack/glance-default-external-api-0" Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.130018 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/10956ec3-bc51-4d38-82ad-71a60bcf30db-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"10956ec3-bc51-4d38-82ad-71a60bcf30db\") " pod="openstack/glance-default-external-api-0" Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.130044 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/10956ec3-bc51-4d38-82ad-71a60bcf30db-scripts\") pod \"glance-default-external-api-0\" (UID: \"10956ec3-bc51-4d38-82ad-71a60bcf30db\") " pod="openstack/glance-default-external-api-0" Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.130060 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10956ec3-bc51-4d38-82ad-71a60bcf30db-logs\") pod \"glance-default-external-api-0\" (UID: \"10956ec3-bc51-4d38-82ad-71a60bcf30db\") " pod="openstack/glance-default-external-api-0" Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.130085 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10956ec3-bc51-4d38-82ad-71a60bcf30db-config-data\") pod \"glance-default-external-api-0\" (UID: \"10956ec3-bc51-4d38-82ad-71a60bcf30db\") " pod="openstack/glance-default-external-api-0" Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.131030 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"10956ec3-bc51-4d38-82ad-71a60bcf30db\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.132114 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10956ec3-bc51-4d38-82ad-71a60bcf30db-logs\") pod \"glance-default-external-api-0\" (UID: \"10956ec3-bc51-4d38-82ad-71a60bcf30db\") " pod="openstack/glance-default-external-api-0" Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.132917 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/10956ec3-bc51-4d38-82ad-71a60bcf30db-httpd-run\") pod 
\"glance-default-external-api-0\" (UID: \"10956ec3-bc51-4d38-82ad-71a60bcf30db\") " pod="openstack/glance-default-external-api-0" Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.134680 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/10956ec3-bc51-4d38-82ad-71a60bcf30db-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"10956ec3-bc51-4d38-82ad-71a60bcf30db\") " pod="openstack/glance-default-external-api-0" Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.135122 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10956ec3-bc51-4d38-82ad-71a60bcf30db-config-data\") pod \"glance-default-external-api-0\" (UID: \"10956ec3-bc51-4d38-82ad-71a60bcf30db\") " pod="openstack/glance-default-external-api-0" Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.137094 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10956ec3-bc51-4d38-82ad-71a60bcf30db-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"10956ec3-bc51-4d38-82ad-71a60bcf30db\") " pod="openstack/glance-default-external-api-0" Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.137653 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10956ec3-bc51-4d38-82ad-71a60bcf30db-scripts\") pod \"glance-default-external-api-0\" (UID: \"10956ec3-bc51-4d38-82ad-71a60bcf30db\") " pod="openstack/glance-default-external-api-0" Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.151317 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsqwk\" (UniqueName: \"kubernetes.io/projected/10956ec3-bc51-4d38-82ad-71a60bcf30db-kube-api-access-nsqwk\") pod \"glance-default-external-api-0\" (UID: \"10956ec3-bc51-4d38-82ad-71a60bcf30db\") " 
pod="openstack/glance-default-external-api-0" Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.183304 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"10956ec3-bc51-4d38-82ad-71a60bcf30db\") " pod="openstack/glance-default-external-api-0" Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.253426 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.383743 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.543165 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e6b72cdd-33e1-4a5b-89a4-a05521d47d1c-sg-core-conf-yaml\") pod \"e6b72cdd-33e1-4a5b-89a4-a05521d47d1c\" (UID: \"e6b72cdd-33e1-4a5b-89a4-a05521d47d1c\") " Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.543597 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxrsv\" (UniqueName: \"kubernetes.io/projected/e6b72cdd-33e1-4a5b-89a4-a05521d47d1c-kube-api-access-wxrsv\") pod \"e6b72cdd-33e1-4a5b-89a4-a05521d47d1c\" (UID: \"e6b72cdd-33e1-4a5b-89a4-a05521d47d1c\") " Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.543700 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6b72cdd-33e1-4a5b-89a4-a05521d47d1c-config-data\") pod \"e6b72cdd-33e1-4a5b-89a4-a05521d47d1c\" (UID: \"e6b72cdd-33e1-4a5b-89a4-a05521d47d1c\") " Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.543726 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/e6b72cdd-33e1-4a5b-89a4-a05521d47d1c-run-httpd\") pod \"e6b72cdd-33e1-4a5b-89a4-a05521d47d1c\" (UID: \"e6b72cdd-33e1-4a5b-89a4-a05521d47d1c\") " Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.543772 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6b72cdd-33e1-4a5b-89a4-a05521d47d1c-log-httpd\") pod \"e6b72cdd-33e1-4a5b-89a4-a05521d47d1c\" (UID: \"e6b72cdd-33e1-4a5b-89a4-a05521d47d1c\") " Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.543801 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6b72cdd-33e1-4a5b-89a4-a05521d47d1c-combined-ca-bundle\") pod \"e6b72cdd-33e1-4a5b-89a4-a05521d47d1c\" (UID: \"e6b72cdd-33e1-4a5b-89a4-a05521d47d1c\") " Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.543954 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6b72cdd-33e1-4a5b-89a4-a05521d47d1c-scripts\") pod \"e6b72cdd-33e1-4a5b-89a4-a05521d47d1c\" (UID: \"e6b72cdd-33e1-4a5b-89a4-a05521d47d1c\") " Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.544239 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6b72cdd-33e1-4a5b-89a4-a05521d47d1c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e6b72cdd-33e1-4a5b-89a4-a05521d47d1c" (UID: "e6b72cdd-33e1-4a5b-89a4-a05521d47d1c"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.544367 4793 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6b72cdd-33e1-4a5b-89a4-a05521d47d1c-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.544558 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6b72cdd-33e1-4a5b-89a4-a05521d47d1c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e6b72cdd-33e1-4a5b-89a4-a05521d47d1c" (UID: "e6b72cdd-33e1-4a5b-89a4-a05521d47d1c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.551172 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6b72cdd-33e1-4a5b-89a4-a05521d47d1c-kube-api-access-wxrsv" (OuterVolumeSpecName: "kube-api-access-wxrsv") pod "e6b72cdd-33e1-4a5b-89a4-a05521d47d1c" (UID: "e6b72cdd-33e1-4a5b-89a4-a05521d47d1c"). InnerVolumeSpecName "kube-api-access-wxrsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.553880 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6b72cdd-33e1-4a5b-89a4-a05521d47d1c-scripts" (OuterVolumeSpecName: "scripts") pod "e6b72cdd-33e1-4a5b-89a4-a05521d47d1c" (UID: "e6b72cdd-33e1-4a5b-89a4-a05521d47d1c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.621043 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6b72cdd-33e1-4a5b-89a4-a05521d47d1c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e6b72cdd-33e1-4a5b-89a4-a05521d47d1c" (UID: "e6b72cdd-33e1-4a5b-89a4-a05521d47d1c"). 
InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.648971 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6b72cdd-33e1-4a5b-89a4-a05521d47d1c-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.649015 4793 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e6b72cdd-33e1-4a5b-89a4-a05521d47d1c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.649025 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxrsv\" (UniqueName: \"kubernetes.io/projected/e6b72cdd-33e1-4a5b-89a4-a05521d47d1c-kube-api-access-wxrsv\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.649033 4793 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6b72cdd-33e1-4a5b-89a4-a05521d47d1c-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.675099 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6b72cdd-33e1-4a5b-89a4-a05521d47d1c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e6b72cdd-33e1-4a5b-89a4-a05521d47d1c" (UID: "e6b72cdd-33e1-4a5b-89a4-a05521d47d1c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.682979 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6b72cdd-33e1-4a5b-89a4-a05521d47d1c-config-data" (OuterVolumeSpecName: "config-data") pod "e6b72cdd-33e1-4a5b-89a4-a05521d47d1c" (UID: "e6b72cdd-33e1-4a5b-89a4-a05521d47d1c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.751074 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6b72cdd-33e1-4a5b-89a4-a05521d47d1c-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.751103 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6b72cdd-33e1-4a5b-89a4-a05521d47d1c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.829717 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 20:28:44 crc kubenswrapper[4793]: W0217 20:28:44.831082 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10956ec3_bc51_4d38_82ad_71a60bcf30db.slice/crio-823a3f11fa1b61313a592646a720053f2af3fd2e201cbdc1adbf0627cbff0ca3 WatchSource:0}: Error finding container 823a3f11fa1b61313a592646a720053f2af3fd2e201cbdc1adbf0627cbff0ca3: Status 404 returned error can't find the container with id 823a3f11fa1b61313a592646a720053f2af3fd2e201cbdc1adbf0627cbff0ca3 Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.862512 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"10956ec3-bc51-4d38-82ad-71a60bcf30db","Type":"ContainerStarted","Data":"823a3f11fa1b61313a592646a720053f2af3fd2e201cbdc1adbf0627cbff0ca3"} Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.871128 4793 generic.go:334] "Generic (PLEG): container finished" podID="e6b72cdd-33e1-4a5b-89a4-a05521d47d1c" containerID="9abe649cec5c6fc5694f54caf896c169b6c6807e018b1d98b61a7685b8762844" exitCode=0 Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.871176 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"e6b72cdd-33e1-4a5b-89a4-a05521d47d1c","Type":"ContainerDied","Data":"9abe649cec5c6fc5694f54caf896c169b6c6807e018b1d98b61a7685b8762844"} Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.871220 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6b72cdd-33e1-4a5b-89a4-a05521d47d1c","Type":"ContainerDied","Data":"7f7bba5b292570099eb8a130a3cdbd8bcc0118142e839d51bc821c89ad572141"} Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.871239 4793 scope.go:117] "RemoveContainer" containerID="be8de4b974767f3407a5d1a1393ee6cb6a808f37f7d9617a695983822640868a" Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.871292 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.908274 4793 scope.go:117] "RemoveContainer" containerID="1783f95cd8e2f542d616f04a7849e553b7ef5000634bd76731b4e0e7753913d5" Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.911493 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.922499 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.943554 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 20:28:44 crc kubenswrapper[4793]: E0217 20:28:44.944123 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6b72cdd-33e1-4a5b-89a4-a05521d47d1c" containerName="ceilometer-central-agent" Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.944142 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6b72cdd-33e1-4a5b-89a4-a05521d47d1c" containerName="ceilometer-central-agent" Feb 17 20:28:44 crc kubenswrapper[4793]: E0217 20:28:44.944157 4793 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e6b72cdd-33e1-4a5b-89a4-a05521d47d1c" containerName="sg-core" Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.944164 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6b72cdd-33e1-4a5b-89a4-a05521d47d1c" containerName="sg-core" Feb 17 20:28:44 crc kubenswrapper[4793]: E0217 20:28:44.944187 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6b72cdd-33e1-4a5b-89a4-a05521d47d1c" containerName="proxy-httpd" Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.944194 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6b72cdd-33e1-4a5b-89a4-a05521d47d1c" containerName="proxy-httpd" Feb 17 20:28:44 crc kubenswrapper[4793]: E0217 20:28:44.944211 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6b72cdd-33e1-4a5b-89a4-a05521d47d1c" containerName="ceilometer-notification-agent" Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.944218 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6b72cdd-33e1-4a5b-89a4-a05521d47d1c" containerName="ceilometer-notification-agent" Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.944414 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6b72cdd-33e1-4a5b-89a4-a05521d47d1c" containerName="ceilometer-central-agent" Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.944433 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6b72cdd-33e1-4a5b-89a4-a05521d47d1c" containerName="ceilometer-notification-agent" Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.944443 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6b72cdd-33e1-4a5b-89a4-a05521d47d1c" containerName="proxy-httpd" Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.944466 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6b72cdd-33e1-4a5b-89a4-a05521d47d1c" containerName="sg-core" Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.944669 4793 scope.go:117] "RemoveContainer" 
containerID="31907f3db8c5d1318631caa256277e411b7e20dc764e6d335d69e9aeeb317705" Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.946440 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.950532 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.950728 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.977925 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 20:28:44 crc kubenswrapper[4793]: I0217 20:28:44.991852 4793 scope.go:117] "RemoveContainer" containerID="9abe649cec5c6fc5694f54caf896c169b6c6807e018b1d98b61a7685b8762844" Feb 17 20:28:45 crc kubenswrapper[4793]: I0217 20:28:45.090168 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5rjk\" (UniqueName: \"kubernetes.io/projected/e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0-kube-api-access-d5rjk\") pod \"ceilometer-0\" (UID: \"e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0\") " pod="openstack/ceilometer-0" Feb 17 20:28:45 crc kubenswrapper[4793]: I0217 20:28:45.090481 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0-run-httpd\") pod \"ceilometer-0\" (UID: \"e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0\") " pod="openstack/ceilometer-0" Feb 17 20:28:45 crc kubenswrapper[4793]: I0217 20:28:45.090575 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0\") " pod="openstack/ceilometer-0" Feb 17 20:28:45 crc kubenswrapper[4793]: I0217 20:28:45.090782 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0\") " pod="openstack/ceilometer-0" Feb 17 20:28:45 crc kubenswrapper[4793]: I0217 20:28:45.090803 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0-config-data\") pod \"ceilometer-0\" (UID: \"e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0\") " pod="openstack/ceilometer-0" Feb 17 20:28:45 crc kubenswrapper[4793]: I0217 20:28:45.090974 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0-scripts\") pod \"ceilometer-0\" (UID: \"e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0\") " pod="openstack/ceilometer-0" Feb 17 20:28:45 crc kubenswrapper[4793]: I0217 20:28:45.091037 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0-log-httpd\") pod \"ceilometer-0\" (UID: \"e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0\") " pod="openstack/ceilometer-0" Feb 17 20:28:45 crc kubenswrapper[4793]: I0217 20:28:45.091247 4793 scope.go:117] "RemoveContainer" containerID="be8de4b974767f3407a5d1a1393ee6cb6a808f37f7d9617a695983822640868a" Feb 17 20:28:45 crc kubenswrapper[4793]: E0217 20:28:45.109846 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be8de4b974767f3407a5d1a1393ee6cb6a808f37f7d9617a695983822640868a\": container with 
ID starting with be8de4b974767f3407a5d1a1393ee6cb6a808f37f7d9617a695983822640868a not found: ID does not exist" containerID="be8de4b974767f3407a5d1a1393ee6cb6a808f37f7d9617a695983822640868a" Feb 17 20:28:45 crc kubenswrapper[4793]: I0217 20:28:45.109942 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be8de4b974767f3407a5d1a1393ee6cb6a808f37f7d9617a695983822640868a"} err="failed to get container status \"be8de4b974767f3407a5d1a1393ee6cb6a808f37f7d9617a695983822640868a\": rpc error: code = NotFound desc = could not find container \"be8de4b974767f3407a5d1a1393ee6cb6a808f37f7d9617a695983822640868a\": container with ID starting with be8de4b974767f3407a5d1a1393ee6cb6a808f37f7d9617a695983822640868a not found: ID does not exist" Feb 17 20:28:45 crc kubenswrapper[4793]: I0217 20:28:45.109966 4793 scope.go:117] "RemoveContainer" containerID="1783f95cd8e2f542d616f04a7849e553b7ef5000634bd76731b4e0e7753913d5" Feb 17 20:28:45 crc kubenswrapper[4793]: E0217 20:28:45.110874 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1783f95cd8e2f542d616f04a7849e553b7ef5000634bd76731b4e0e7753913d5\": container with ID starting with 1783f95cd8e2f542d616f04a7849e553b7ef5000634bd76731b4e0e7753913d5 not found: ID does not exist" containerID="1783f95cd8e2f542d616f04a7849e553b7ef5000634bd76731b4e0e7753913d5" Feb 17 20:28:45 crc kubenswrapper[4793]: I0217 20:28:45.110898 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1783f95cd8e2f542d616f04a7849e553b7ef5000634bd76731b4e0e7753913d5"} err="failed to get container status \"1783f95cd8e2f542d616f04a7849e553b7ef5000634bd76731b4e0e7753913d5\": rpc error: code = NotFound desc = could not find container \"1783f95cd8e2f542d616f04a7849e553b7ef5000634bd76731b4e0e7753913d5\": container with ID starting with 1783f95cd8e2f542d616f04a7849e553b7ef5000634bd76731b4e0e7753913d5 not 
found: ID does not exist" Feb 17 20:28:45 crc kubenswrapper[4793]: I0217 20:28:45.110926 4793 scope.go:117] "RemoveContainer" containerID="31907f3db8c5d1318631caa256277e411b7e20dc764e6d335d69e9aeeb317705" Feb 17 20:28:45 crc kubenswrapper[4793]: E0217 20:28:45.114842 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31907f3db8c5d1318631caa256277e411b7e20dc764e6d335d69e9aeeb317705\": container with ID starting with 31907f3db8c5d1318631caa256277e411b7e20dc764e6d335d69e9aeeb317705 not found: ID does not exist" containerID="31907f3db8c5d1318631caa256277e411b7e20dc764e6d335d69e9aeeb317705" Feb 17 20:28:45 crc kubenswrapper[4793]: I0217 20:28:45.114898 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31907f3db8c5d1318631caa256277e411b7e20dc764e6d335d69e9aeeb317705"} err="failed to get container status \"31907f3db8c5d1318631caa256277e411b7e20dc764e6d335d69e9aeeb317705\": rpc error: code = NotFound desc = could not find container \"31907f3db8c5d1318631caa256277e411b7e20dc764e6d335d69e9aeeb317705\": container with ID starting with 31907f3db8c5d1318631caa256277e411b7e20dc764e6d335d69e9aeeb317705 not found: ID does not exist" Feb 17 20:28:45 crc kubenswrapper[4793]: I0217 20:28:45.114924 4793 scope.go:117] "RemoveContainer" containerID="9abe649cec5c6fc5694f54caf896c169b6c6807e018b1d98b61a7685b8762844" Feb 17 20:28:45 crc kubenswrapper[4793]: E0217 20:28:45.121223 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9abe649cec5c6fc5694f54caf896c169b6c6807e018b1d98b61a7685b8762844\": container with ID starting with 9abe649cec5c6fc5694f54caf896c169b6c6807e018b1d98b61a7685b8762844 not found: ID does not exist" containerID="9abe649cec5c6fc5694f54caf896c169b6c6807e018b1d98b61a7685b8762844" Feb 17 20:28:45 crc kubenswrapper[4793]: I0217 20:28:45.121277 4793 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9abe649cec5c6fc5694f54caf896c169b6c6807e018b1d98b61a7685b8762844"} err="failed to get container status \"9abe649cec5c6fc5694f54caf896c169b6c6807e018b1d98b61a7685b8762844\": rpc error: code = NotFound desc = could not find container \"9abe649cec5c6fc5694f54caf896c169b6c6807e018b1d98b61a7685b8762844\": container with ID starting with 9abe649cec5c6fc5694f54caf896c169b6c6807e018b1d98b61a7685b8762844 not found: ID does not exist" Feb 17 20:28:45 crc kubenswrapper[4793]: I0217 20:28:45.192977 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0\") " pod="openstack/ceilometer-0" Feb 17 20:28:45 crc kubenswrapper[4793]: I0217 20:28:45.193012 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0-config-data\") pod \"ceilometer-0\" (UID: \"e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0\") " pod="openstack/ceilometer-0" Feb 17 20:28:45 crc kubenswrapper[4793]: I0217 20:28:45.193055 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0-scripts\") pod \"ceilometer-0\" (UID: \"e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0\") " pod="openstack/ceilometer-0" Feb 17 20:28:45 crc kubenswrapper[4793]: I0217 20:28:45.193076 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0-log-httpd\") pod \"ceilometer-0\" (UID: \"e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0\") " pod="openstack/ceilometer-0" Feb 17 20:28:45 crc kubenswrapper[4793]: I0217 20:28:45.193110 4793 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5rjk\" (UniqueName: \"kubernetes.io/projected/e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0-kube-api-access-d5rjk\") pod \"ceilometer-0\" (UID: \"e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0\") " pod="openstack/ceilometer-0" Feb 17 20:28:45 crc kubenswrapper[4793]: I0217 20:28:45.193132 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0-run-httpd\") pod \"ceilometer-0\" (UID: \"e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0\") " pod="openstack/ceilometer-0" Feb 17 20:28:45 crc kubenswrapper[4793]: I0217 20:28:45.193173 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0\") " pod="openstack/ceilometer-0" Feb 17 20:28:45 crc kubenswrapper[4793]: I0217 20:28:45.194032 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0-run-httpd\") pod \"ceilometer-0\" (UID: \"e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0\") " pod="openstack/ceilometer-0" Feb 17 20:28:45 crc kubenswrapper[4793]: I0217 20:28:45.194140 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0-log-httpd\") pod \"ceilometer-0\" (UID: \"e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0\") " pod="openstack/ceilometer-0" Feb 17 20:28:45 crc kubenswrapper[4793]: I0217 20:28:45.199478 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0\") " pod="openstack/ceilometer-0" Feb 17 20:28:45 crc kubenswrapper[4793]: I0217 20:28:45.199782 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0-config-data\") pod \"ceilometer-0\" (UID: \"e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0\") " pod="openstack/ceilometer-0" Feb 17 20:28:45 crc kubenswrapper[4793]: I0217 20:28:45.199782 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0-scripts\") pod \"ceilometer-0\" (UID: \"e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0\") " pod="openstack/ceilometer-0" Feb 17 20:28:45 crc kubenswrapper[4793]: I0217 20:28:45.209021 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0\") " pod="openstack/ceilometer-0" Feb 17 20:28:45 crc kubenswrapper[4793]: I0217 20:28:45.212373 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5rjk\" (UniqueName: \"kubernetes.io/projected/e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0-kube-api-access-d5rjk\") pod \"ceilometer-0\" (UID: \"e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0\") " pod="openstack/ceilometer-0" Feb 17 20:28:45 crc kubenswrapper[4793]: I0217 20:28:45.285135 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 20:28:45 crc kubenswrapper[4793]: I0217 20:28:45.566128 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b181bfd-3989-4ba9-80b9-9b574ef0de20" path="/var/lib/kubelet/pods/6b181bfd-3989-4ba9-80b9-9b574ef0de20/volumes" Feb 17 20:28:45 crc kubenswrapper[4793]: I0217 20:28:45.567446 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6b72cdd-33e1-4a5b-89a4-a05521d47d1c" path="/var/lib/kubelet/pods/e6b72cdd-33e1-4a5b-89a4-a05521d47d1c/volumes" Feb 17 20:28:45 crc kubenswrapper[4793]: I0217 20:28:45.789106 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 20:28:45 crc kubenswrapper[4793]: W0217 20:28:45.797444 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode301a07d_e0b6_4fb8_acf5_5e0025cb4bb0.slice/crio-b4f67de4dfcdd3c953833b81e5e4ed4cb4ebb31e48e6338697f876f4537ab2b4 WatchSource:0}: Error finding container b4f67de4dfcdd3c953833b81e5e4ed4cb4ebb31e48e6338697f876f4537ab2b4: Status 404 returned error can't find the container with id b4f67de4dfcdd3c953833b81e5e4ed4cb4ebb31e48e6338697f876f4537ab2b4 Feb 17 20:28:45 crc kubenswrapper[4793]: I0217 20:28:45.882301 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"10956ec3-bc51-4d38-82ad-71a60bcf30db","Type":"ContainerStarted","Data":"ca453d9b1b105c3d4db4f8019195b4140cd2071b7ae128c56de9836bcb836ca4"} Feb 17 20:28:45 crc kubenswrapper[4793]: I0217 20:28:45.884535 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0","Type":"ContainerStarted","Data":"b4f67de4dfcdd3c953833b81e5e4ed4cb4ebb31e48e6338697f876f4537ab2b4"} Feb 17 20:28:46 crc kubenswrapper[4793]: I0217 20:28:46.899248 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0","Type":"ContainerStarted","Data":"f167163fc24c6879110f5d77dd8b4bf0e7ff2ac5ef0fb43405fba7b6e5b588c9"} Feb 17 20:28:46 crc kubenswrapper[4793]: I0217 20:28:46.899875 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0","Type":"ContainerStarted","Data":"81d17a7837d6383afea6113f03f7cbca8fea74b1a6d3d8f7f13a18b2b4fd9e9b"} Feb 17 20:28:46 crc kubenswrapper[4793]: I0217 20:28:46.901771 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"10956ec3-bc51-4d38-82ad-71a60bcf30db","Type":"ContainerStarted","Data":"db03778a00584689501209cacec39065ff8883860c2857c968bdbe9904a55031"} Feb 17 20:28:46 crc kubenswrapper[4793]: I0217 20:28:46.936561 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.936539544 podStartE2EDuration="3.936539544s" podCreationTimestamp="2026-02-17 20:28:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:28:46.93397334 +0000 UTC m=+1202.225671701" watchObservedRunningTime="2026-02-17 20:28:46.936539544 +0000 UTC m=+1202.228237845" Feb 17 20:28:47 crc kubenswrapper[4793]: I0217 20:28:47.538817 4793 scope.go:117] "RemoveContainer" containerID="8642250fd6494650238f39fe809ae38a4813c6fb101ece6c7d92ea9166b43afe" Feb 17 20:28:47 crc kubenswrapper[4793]: E0217 20:28:47.539163 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 40s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:28:47 crc kubenswrapper[4793]: 
I0217 20:28:47.912763 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0","Type":"ContainerStarted","Data":"31fce51f5fbe0d3afe46ff8d13a86343b528960f53127f7d09582e36e043b963"} Feb 17 20:28:48 crc kubenswrapper[4793]: I0217 20:28:48.924005 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0","Type":"ContainerStarted","Data":"83172b0e0398abaec1e5a3b3bef79a3786c5de0631b8e77b3d3d8a2728ec4931"} Feb 17 20:28:48 crc kubenswrapper[4793]: I0217 20:28:48.925485 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 20:28:48 crc kubenswrapper[4793]: I0217 20:28:48.945547 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.753411553 podStartE2EDuration="4.945523866s" podCreationTimestamp="2026-02-17 20:28:44 +0000 UTC" firstStartedPulling="2026-02-17 20:28:45.799802877 +0000 UTC m=+1201.091501188" lastFinishedPulling="2026-02-17 20:28:47.99191519 +0000 UTC m=+1203.283613501" observedRunningTime="2026-02-17 20:28:48.9412641 +0000 UTC m=+1204.232962421" watchObservedRunningTime="2026-02-17 20:28:48.945523866 +0000 UTC m=+1204.237222177" Feb 17 20:28:50 crc kubenswrapper[4793]: I0217 20:28:50.053665 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 17 20:28:50 crc kubenswrapper[4793]: I0217 20:28:50.053722 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 17 20:28:50 crc kubenswrapper[4793]: I0217 20:28:50.088206 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 17 20:28:50 crc kubenswrapper[4793]: I0217 20:28:50.098500 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openstack/glance-default-internal-api-0" Feb 17 20:28:50 crc kubenswrapper[4793]: I0217 20:28:50.388760 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 17 20:28:50 crc kubenswrapper[4793]: I0217 20:28:50.415444 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Feb 17 20:28:50 crc kubenswrapper[4793]: I0217 20:28:50.942281 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 17 20:28:50 crc kubenswrapper[4793]: I0217 20:28:50.944042 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 17 20:28:50 crc kubenswrapper[4793]: I0217 20:28:50.944072 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 17 20:28:50 crc kubenswrapper[4793]: I0217 20:28:50.979375 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Feb 17 20:28:51 crc kubenswrapper[4793]: I0217 20:28:51.296566 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 20:28:51 crc kubenswrapper[4793]: I0217 20:28:51.952373 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0" containerName="ceilometer-central-agent" containerID="cri-o://81d17a7837d6383afea6113f03f7cbca8fea74b1a6d3d8f7f13a18b2b4fd9e9b" gracePeriod=30 Feb 17 20:28:51 crc kubenswrapper[4793]: I0217 20:28:51.952909 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0" containerName="sg-core" containerID="cri-o://31fce51f5fbe0d3afe46ff8d13a86343b528960f53127f7d09582e36e043b963" gracePeriod=30 Feb 17 20:28:51 crc 
kubenswrapper[4793]: I0217 20:28:51.952938 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0" containerName="proxy-httpd" containerID="cri-o://83172b0e0398abaec1e5a3b3bef79a3786c5de0631b8e77b3d3d8a2728ec4931" gracePeriod=30 Feb 17 20:28:51 crc kubenswrapper[4793]: I0217 20:28:51.952980 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0" containerName="ceilometer-notification-agent" containerID="cri-o://f167163fc24c6879110f5d77dd8b4bf0e7ff2ac5ef0fb43405fba7b6e5b588c9" gracePeriod=30 Feb 17 20:28:52 crc kubenswrapper[4793]: I0217 20:28:52.788562 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 17 20:28:52 crc kubenswrapper[4793]: I0217 20:28:52.793285 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 17 20:28:52 crc kubenswrapper[4793]: I0217 20:28:52.969181 4793 generic.go:334] "Generic (PLEG): container finished" podID="e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0" containerID="83172b0e0398abaec1e5a3b3bef79a3786c5de0631b8e77b3d3d8a2728ec4931" exitCode=0 Feb 17 20:28:52 crc kubenswrapper[4793]: I0217 20:28:52.969212 4793 generic.go:334] "Generic (PLEG): container finished" podID="e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0" containerID="31fce51f5fbe0d3afe46ff8d13a86343b528960f53127f7d09582e36e043b963" exitCode=2 Feb 17 20:28:52 crc kubenswrapper[4793]: I0217 20:28:52.969219 4793 generic.go:334] "Generic (PLEG): container finished" podID="e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0" containerID="f167163fc24c6879110f5d77dd8b4bf0e7ff2ac5ef0fb43405fba7b6e5b588c9" exitCode=0 Feb 17 20:28:52 crc kubenswrapper[4793]: I0217 20:28:52.969272 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0","Type":"ContainerDied","Data":"83172b0e0398abaec1e5a3b3bef79a3786c5de0631b8e77b3d3d8a2728ec4931"} Feb 17 20:28:52 crc kubenswrapper[4793]: I0217 20:28:52.969350 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0","Type":"ContainerDied","Data":"31fce51f5fbe0d3afe46ff8d13a86343b528960f53127f7d09582e36e043b963"} Feb 17 20:28:52 crc kubenswrapper[4793]: I0217 20:28:52.969366 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0","Type":"ContainerDied","Data":"f167163fc24c6879110f5d77dd8b4bf0e7ff2ac5ef0fb43405fba7b6e5b588c9"} Feb 17 20:28:52 crc kubenswrapper[4793]: I0217 20:28:52.971301 4793 generic.go:334] "Generic (PLEG): container finished" podID="c7625c8d-9409-461d-98e0-2a9507baa803" containerID="3a94ea801a53a9cfb660a1aaa81a38ffb8f11fcb10ace5284d2f5d2d72eeb2ab" exitCode=0 Feb 17 20:28:52 crc kubenswrapper[4793]: I0217 20:28:52.971387 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-j65pr" event={"ID":"c7625c8d-9409-461d-98e0-2a9507baa803","Type":"ContainerDied","Data":"3a94ea801a53a9cfb660a1aaa81a38ffb8f11fcb10ace5284d2f5d2d72eeb2ab"} Feb 17 20:28:53 crc kubenswrapper[4793]: I0217 20:28:53.819963 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 20:28:53 crc kubenswrapper[4793]: I0217 20:28:53.983373 4793 generic.go:334] "Generic (PLEG): container finished" podID="e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0" containerID="81d17a7837d6383afea6113f03f7cbca8fea74b1a6d3d8f7f13a18b2b4fd9e9b" exitCode=0 Feb 17 20:28:53 crc kubenswrapper[4793]: I0217 20:28:53.983506 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0","Type":"ContainerDied","Data":"81d17a7837d6383afea6113f03f7cbca8fea74b1a6d3d8f7f13a18b2b4fd9e9b"} Feb 17 20:28:53 crc kubenswrapper[4793]: I0217 20:28:53.983560 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0","Type":"ContainerDied","Data":"b4f67de4dfcdd3c953833b81e5e4ed4cb4ebb31e48e6338697f876f4537ab2b4"} Feb 17 20:28:53 crc kubenswrapper[4793]: I0217 20:28:53.983583 4793 scope.go:117] "RemoveContainer" containerID="83172b0e0398abaec1e5a3b3bef79a3786c5de0631b8e77b3d3d8a2728ec4931" Feb 17 20:28:53 crc kubenswrapper[4793]: I0217 20:28:53.983626 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.010260 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0-sg-core-conf-yaml\") pod \"e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0\" (UID: \"e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0\") " Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.010348 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0-run-httpd\") pod \"e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0\" (UID: \"e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0\") " Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.010374 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5rjk\" (UniqueName: \"kubernetes.io/projected/e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0-kube-api-access-d5rjk\") pod \"e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0\" (UID: \"e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0\") " Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.010407 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0-combined-ca-bundle\") pod \"e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0\" (UID: \"e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0\") " Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.010473 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0-config-data\") pod \"e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0\" (UID: \"e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0\") " Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.010487 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0-log-httpd\") pod \"e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0\" (UID: \"e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0\") " Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.010502 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0-scripts\") pod \"e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0\" (UID: \"e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0\") " Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.013972 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0" (UID: "e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.014301 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0" (UID: "e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.016083 4793 scope.go:117] "RemoveContainer" containerID="31fce51f5fbe0d3afe46ff8d13a86343b528960f53127f7d09582e36e043b963" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.017539 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0-kube-api-access-d5rjk" (OuterVolumeSpecName: "kube-api-access-d5rjk") pod "e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0" (UID: "e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0"). InnerVolumeSpecName "kube-api-access-d5rjk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.019311 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0-scripts" (OuterVolumeSpecName: "scripts") pod "e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0" (UID: "e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.049118 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0" (UID: "e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.116569 4793 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.116711 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.116726 4793 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.116738 4793 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:54 crc kubenswrapper[4793]: 
I0217 20:28:54.116748 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5rjk\" (UniqueName: \"kubernetes.io/projected/e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0-kube-api-access-d5rjk\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.157817 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0" (UID: "e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.179399 4793 scope.go:117] "RemoveContainer" containerID="f167163fc24c6879110f5d77dd8b4bf0e7ff2ac5ef0fb43405fba7b6e5b588c9" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.187152 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0-config-data" (OuterVolumeSpecName: "config-data") pod "e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0" (UID: "e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.218544 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.218964 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.230315 4793 scope.go:117] "RemoveContainer" containerID="81d17a7837d6383afea6113f03f7cbca8fea74b1a6d3d8f7f13a18b2b4fd9e9b" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.254925 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.254984 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.258884 4793 scope.go:117] "RemoveContainer" containerID="83172b0e0398abaec1e5a3b3bef79a3786c5de0631b8e77b3d3d8a2728ec4931" Feb 17 20:28:54 crc kubenswrapper[4793]: E0217 20:28:54.259338 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83172b0e0398abaec1e5a3b3bef79a3786c5de0631b8e77b3d3d8a2728ec4931\": container with ID starting with 83172b0e0398abaec1e5a3b3bef79a3786c5de0631b8e77b3d3d8a2728ec4931 not found: ID does not exist" containerID="83172b0e0398abaec1e5a3b3bef79a3786c5de0631b8e77b3d3d8a2728ec4931" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.259383 4793 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"83172b0e0398abaec1e5a3b3bef79a3786c5de0631b8e77b3d3d8a2728ec4931"} err="failed to get container status \"83172b0e0398abaec1e5a3b3bef79a3786c5de0631b8e77b3d3d8a2728ec4931\": rpc error: code = NotFound desc = could not find container \"83172b0e0398abaec1e5a3b3bef79a3786c5de0631b8e77b3d3d8a2728ec4931\": container with ID starting with 83172b0e0398abaec1e5a3b3bef79a3786c5de0631b8e77b3d3d8a2728ec4931 not found: ID does not exist" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.259413 4793 scope.go:117] "RemoveContainer" containerID="31fce51f5fbe0d3afe46ff8d13a86343b528960f53127f7d09582e36e043b963" Feb 17 20:28:54 crc kubenswrapper[4793]: E0217 20:28:54.259841 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31fce51f5fbe0d3afe46ff8d13a86343b528960f53127f7d09582e36e043b963\": container with ID starting with 31fce51f5fbe0d3afe46ff8d13a86343b528960f53127f7d09582e36e043b963 not found: ID does not exist" containerID="31fce51f5fbe0d3afe46ff8d13a86343b528960f53127f7d09582e36e043b963" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.259866 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31fce51f5fbe0d3afe46ff8d13a86343b528960f53127f7d09582e36e043b963"} err="failed to get container status \"31fce51f5fbe0d3afe46ff8d13a86343b528960f53127f7d09582e36e043b963\": rpc error: code = NotFound desc = could not find container \"31fce51f5fbe0d3afe46ff8d13a86343b528960f53127f7d09582e36e043b963\": container with ID starting with 31fce51f5fbe0d3afe46ff8d13a86343b528960f53127f7d09582e36e043b963 not found: ID does not exist" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.259882 4793 scope.go:117] "RemoveContainer" containerID="f167163fc24c6879110f5d77dd8b4bf0e7ff2ac5ef0fb43405fba7b6e5b588c9" Feb 17 20:28:54 crc kubenswrapper[4793]: E0217 20:28:54.260305 4793 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f167163fc24c6879110f5d77dd8b4bf0e7ff2ac5ef0fb43405fba7b6e5b588c9\": container with ID starting with f167163fc24c6879110f5d77dd8b4bf0e7ff2ac5ef0fb43405fba7b6e5b588c9 not found: ID does not exist" containerID="f167163fc24c6879110f5d77dd8b4bf0e7ff2ac5ef0fb43405fba7b6e5b588c9" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.260328 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f167163fc24c6879110f5d77dd8b4bf0e7ff2ac5ef0fb43405fba7b6e5b588c9"} err="failed to get container status \"f167163fc24c6879110f5d77dd8b4bf0e7ff2ac5ef0fb43405fba7b6e5b588c9\": rpc error: code = NotFound desc = could not find container \"f167163fc24c6879110f5d77dd8b4bf0e7ff2ac5ef0fb43405fba7b6e5b588c9\": container with ID starting with f167163fc24c6879110f5d77dd8b4bf0e7ff2ac5ef0fb43405fba7b6e5b588c9 not found: ID does not exist" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.260343 4793 scope.go:117] "RemoveContainer" containerID="81d17a7837d6383afea6113f03f7cbca8fea74b1a6d3d8f7f13a18b2b4fd9e9b" Feb 17 20:28:54 crc kubenswrapper[4793]: E0217 20:28:54.260705 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81d17a7837d6383afea6113f03f7cbca8fea74b1a6d3d8f7f13a18b2b4fd9e9b\": container with ID starting with 81d17a7837d6383afea6113f03f7cbca8fea74b1a6d3d8f7f13a18b2b4fd9e9b not found: ID does not exist" containerID="81d17a7837d6383afea6113f03f7cbca8fea74b1a6d3d8f7f13a18b2b4fd9e9b" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.260750 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81d17a7837d6383afea6113f03f7cbca8fea74b1a6d3d8f7f13a18b2b4fd9e9b"} err="failed to get container status \"81d17a7837d6383afea6113f03f7cbca8fea74b1a6d3d8f7f13a18b2b4fd9e9b\": rpc error: code = NotFound desc = could not find container 
\"81d17a7837d6383afea6113f03f7cbca8fea74b1a6d3d8f7f13a18b2b4fd9e9b\": container with ID starting with 81d17a7837d6383afea6113f03f7cbca8fea74b1a6d3d8f7f13a18b2b4fd9e9b not found: ID does not exist" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.272009 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-j65pr" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.295558 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.302290 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.370001 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.386731 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.407754 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 20:28:54 crc kubenswrapper[4793]: E0217 20:28:54.408277 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7625c8d-9409-461d-98e0-2a9507baa803" containerName="nova-cell0-conductor-db-sync" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.408297 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7625c8d-9409-461d-98e0-2a9507baa803" containerName="nova-cell0-conductor-db-sync" Feb 17 20:28:54 crc kubenswrapper[4793]: E0217 20:28:54.408324 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0" containerName="ceilometer-central-agent" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.408335 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0" 
containerName="ceilometer-central-agent" Feb 17 20:28:54 crc kubenswrapper[4793]: E0217 20:28:54.408351 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0" containerName="proxy-httpd" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.408359 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0" containerName="proxy-httpd" Feb 17 20:28:54 crc kubenswrapper[4793]: E0217 20:28:54.408378 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0" containerName="sg-core" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.408389 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0" containerName="sg-core" Feb 17 20:28:54 crc kubenswrapper[4793]: E0217 20:28:54.408413 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0" containerName="ceilometer-notification-agent" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.408422 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0" containerName="ceilometer-notification-agent" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.408678 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0" containerName="sg-core" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.408715 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0" containerName="ceilometer-central-agent" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.408736 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0" containerName="proxy-httpd" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.408755 4793 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c7625c8d-9409-461d-98e0-2a9507baa803" containerName="nova-cell0-conductor-db-sync" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.408774 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0" containerName="ceilometer-notification-agent" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.411113 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.412848 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.413172 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.417387 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.422345 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b55mm\" (UniqueName: \"kubernetes.io/projected/c7625c8d-9409-461d-98e0-2a9507baa803-kube-api-access-b55mm\") pod \"c7625c8d-9409-461d-98e0-2a9507baa803\" (UID: \"c7625c8d-9409-461d-98e0-2a9507baa803\") " Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.422511 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7625c8d-9409-461d-98e0-2a9507baa803-combined-ca-bundle\") pod \"c7625c8d-9409-461d-98e0-2a9507baa803\" (UID: \"c7625c8d-9409-461d-98e0-2a9507baa803\") " Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.422673 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7625c8d-9409-461d-98e0-2a9507baa803-scripts\") pod \"c7625c8d-9409-461d-98e0-2a9507baa803\" (UID: 
\"c7625c8d-9409-461d-98e0-2a9507baa803\") " Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.423289 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7625c8d-9409-461d-98e0-2a9507baa803-config-data\") pod \"c7625c8d-9409-461d-98e0-2a9507baa803\" (UID: \"c7625c8d-9409-461d-98e0-2a9507baa803\") " Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.432539 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7625c8d-9409-461d-98e0-2a9507baa803-kube-api-access-b55mm" (OuterVolumeSpecName: "kube-api-access-b55mm") pod "c7625c8d-9409-461d-98e0-2a9507baa803" (UID: "c7625c8d-9409-461d-98e0-2a9507baa803"). InnerVolumeSpecName "kube-api-access-b55mm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.437620 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7625c8d-9409-461d-98e0-2a9507baa803-scripts" (OuterVolumeSpecName: "scripts") pod "c7625c8d-9409-461d-98e0-2a9507baa803" (UID: "c7625c8d-9409-461d-98e0-2a9507baa803"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.472803 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7625c8d-9409-461d-98e0-2a9507baa803-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c7625c8d-9409-461d-98e0-2a9507baa803" (UID: "c7625c8d-9409-461d-98e0-2a9507baa803"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.477017 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7625c8d-9409-461d-98e0-2a9507baa803-config-data" (OuterVolumeSpecName: "config-data") pod "c7625c8d-9409-461d-98e0-2a9507baa803" (UID: "c7625c8d-9409-461d-98e0-2a9507baa803"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.525758 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed59c12a-71e1-4c06-a7c7-a6fbb904afb3-scripts\") pod \"ceilometer-0\" (UID: \"ed59c12a-71e1-4c06-a7c7-a6fbb904afb3\") " pod="openstack/ceilometer-0" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.525850 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed59c12a-71e1-4c06-a7c7-a6fbb904afb3-run-httpd\") pod \"ceilometer-0\" (UID: \"ed59c12a-71e1-4c06-a7c7-a6fbb904afb3\") " pod="openstack/ceilometer-0" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.526067 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed59c12a-71e1-4c06-a7c7-a6fbb904afb3-config-data\") pod \"ceilometer-0\" (UID: \"ed59c12a-71e1-4c06-a7c7-a6fbb904afb3\") " pod="openstack/ceilometer-0" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.526125 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed59c12a-71e1-4c06-a7c7-a6fbb904afb3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ed59c12a-71e1-4c06-a7c7-a6fbb904afb3\") " pod="openstack/ceilometer-0" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 
20:28:54.526307 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppxwp\" (UniqueName: \"kubernetes.io/projected/ed59c12a-71e1-4c06-a7c7-a6fbb904afb3-kube-api-access-ppxwp\") pod \"ceilometer-0\" (UID: \"ed59c12a-71e1-4c06-a7c7-a6fbb904afb3\") " pod="openstack/ceilometer-0" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.526360 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed59c12a-71e1-4c06-a7c7-a6fbb904afb3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ed59c12a-71e1-4c06-a7c7-a6fbb904afb3\") " pod="openstack/ceilometer-0" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.526375 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed59c12a-71e1-4c06-a7c7-a6fbb904afb3-log-httpd\") pod \"ceilometer-0\" (UID: \"ed59c12a-71e1-4c06-a7c7-a6fbb904afb3\") " pod="openstack/ceilometer-0" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.526469 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7625c8d-9409-461d-98e0-2a9507baa803-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.526492 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7625c8d-9409-461d-98e0-2a9507baa803-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.526503 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b55mm\" (UniqueName: \"kubernetes.io/projected/c7625c8d-9409-461d-98e0-2a9507baa803-kube-api-access-b55mm\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.526512 4793 reconciler_common.go:293] "Volume detached for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7625c8d-9409-461d-98e0-2a9507baa803-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.628080 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed59c12a-71e1-4c06-a7c7-a6fbb904afb3-run-httpd\") pod \"ceilometer-0\" (UID: \"ed59c12a-71e1-4c06-a7c7-a6fbb904afb3\") " pod="openstack/ceilometer-0" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.628192 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed59c12a-71e1-4c06-a7c7-a6fbb904afb3-config-data\") pod \"ceilometer-0\" (UID: \"ed59c12a-71e1-4c06-a7c7-a6fbb904afb3\") " pod="openstack/ceilometer-0" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.628211 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed59c12a-71e1-4c06-a7c7-a6fbb904afb3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ed59c12a-71e1-4c06-a7c7-a6fbb904afb3\") " pod="openstack/ceilometer-0" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.628267 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppxwp\" (UniqueName: \"kubernetes.io/projected/ed59c12a-71e1-4c06-a7c7-a6fbb904afb3-kube-api-access-ppxwp\") pod \"ceilometer-0\" (UID: \"ed59c12a-71e1-4c06-a7c7-a6fbb904afb3\") " pod="openstack/ceilometer-0" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.628301 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed59c12a-71e1-4c06-a7c7-a6fbb904afb3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ed59c12a-71e1-4c06-a7c7-a6fbb904afb3\") " pod="openstack/ceilometer-0" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 
20:28:54.628316 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed59c12a-71e1-4c06-a7c7-a6fbb904afb3-log-httpd\") pod \"ceilometer-0\" (UID: \"ed59c12a-71e1-4c06-a7c7-a6fbb904afb3\") " pod="openstack/ceilometer-0" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.628346 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed59c12a-71e1-4c06-a7c7-a6fbb904afb3-scripts\") pod \"ceilometer-0\" (UID: \"ed59c12a-71e1-4c06-a7c7-a6fbb904afb3\") " pod="openstack/ceilometer-0" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.629205 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed59c12a-71e1-4c06-a7c7-a6fbb904afb3-log-httpd\") pod \"ceilometer-0\" (UID: \"ed59c12a-71e1-4c06-a7c7-a6fbb904afb3\") " pod="openstack/ceilometer-0" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.629627 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed59c12a-71e1-4c06-a7c7-a6fbb904afb3-run-httpd\") pod \"ceilometer-0\" (UID: \"ed59c12a-71e1-4c06-a7c7-a6fbb904afb3\") " pod="openstack/ceilometer-0" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.632313 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed59c12a-71e1-4c06-a7c7-a6fbb904afb3-scripts\") pod \"ceilometer-0\" (UID: \"ed59c12a-71e1-4c06-a7c7-a6fbb904afb3\") " pod="openstack/ceilometer-0" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.633626 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed59c12a-71e1-4c06-a7c7-a6fbb904afb3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ed59c12a-71e1-4c06-a7c7-a6fbb904afb3\") " 
pod="openstack/ceilometer-0" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.633817 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed59c12a-71e1-4c06-a7c7-a6fbb904afb3-config-data\") pod \"ceilometer-0\" (UID: \"ed59c12a-71e1-4c06-a7c7-a6fbb904afb3\") " pod="openstack/ceilometer-0" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.645167 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed59c12a-71e1-4c06-a7c7-a6fbb904afb3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ed59c12a-71e1-4c06-a7c7-a6fbb904afb3\") " pod="openstack/ceilometer-0" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.647673 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppxwp\" (UniqueName: \"kubernetes.io/projected/ed59c12a-71e1-4c06-a7c7-a6fbb904afb3-kube-api-access-ppxwp\") pod \"ceilometer-0\" (UID: \"ed59c12a-71e1-4c06-a7c7-a6fbb904afb3\") " pod="openstack/ceilometer-0" Feb 17 20:28:54 crc kubenswrapper[4793]: I0217 20:28:54.863934 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 20:28:55 crc kubenswrapper[4793]: I0217 20:28:54.999203 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-j65pr"
Feb 17 20:28:55 crc kubenswrapper[4793]: I0217 20:28:55.017749 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-j65pr" event={"ID":"c7625c8d-9409-461d-98e0-2a9507baa803","Type":"ContainerDied","Data":"f82555372a6b1021af09a029ff6fd2328a60049591d1a51c426917a124e00b99"}
Feb 17 20:28:55 crc kubenswrapper[4793]: I0217 20:28:55.017795 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f82555372a6b1021af09a029ff6fd2328a60049591d1a51c426917a124e00b99"
Feb 17 20:28:55 crc kubenswrapper[4793]: I0217 20:28:55.017823 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Feb 17 20:28:55 crc kubenswrapper[4793]: I0217 20:28:55.018272 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Feb 17 20:28:55 crc kubenswrapper[4793]: I0217 20:28:55.136598 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 17 20:28:55 crc kubenswrapper[4793]: I0217 20:28:55.137794 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Feb 17 20:28:55 crc kubenswrapper[4793]: I0217 20:28:55.142165 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-m89zr"
Feb 17 20:28:55 crc kubenswrapper[4793]: I0217 20:28:55.142278 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Feb 17 20:28:55 crc kubenswrapper[4793]: I0217 20:28:55.146912 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 17 20:28:55 crc kubenswrapper[4793]: I0217 20:28:55.240890 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b77f41b-340b-4eed-a38a-cc9dac77786f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3b77f41b-340b-4eed-a38a-cc9dac77786f\") " pod="openstack/nova-cell0-conductor-0"
Feb 17 20:28:55 crc kubenswrapper[4793]: I0217 20:28:55.241197 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbjgq\" (UniqueName: \"kubernetes.io/projected/3b77f41b-340b-4eed-a38a-cc9dac77786f-kube-api-access-rbjgq\") pod \"nova-cell0-conductor-0\" (UID: \"3b77f41b-340b-4eed-a38a-cc9dac77786f\") " pod="openstack/nova-cell0-conductor-0"
Feb 17 20:28:55 crc kubenswrapper[4793]: I0217 20:28:55.241235 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b77f41b-340b-4eed-a38a-cc9dac77786f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3b77f41b-340b-4eed-a38a-cc9dac77786f\") " pod="openstack/nova-cell0-conductor-0"
Feb 17 20:28:55 crc kubenswrapper[4793]: I0217 20:28:55.342764 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbjgq\" (UniqueName: \"kubernetes.io/projected/3b77f41b-340b-4eed-a38a-cc9dac77786f-kube-api-access-rbjgq\") pod \"nova-cell0-conductor-0\" (UID: \"3b77f41b-340b-4eed-a38a-cc9dac77786f\") " pod="openstack/nova-cell0-conductor-0"
Feb 17 20:28:55 crc kubenswrapper[4793]: I0217 20:28:55.343569 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b77f41b-340b-4eed-a38a-cc9dac77786f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3b77f41b-340b-4eed-a38a-cc9dac77786f\") " pod="openstack/nova-cell0-conductor-0"
Feb 17 20:28:55 crc kubenswrapper[4793]: I0217 20:28:55.343945 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b77f41b-340b-4eed-a38a-cc9dac77786f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3b77f41b-340b-4eed-a38a-cc9dac77786f\") " pod="openstack/nova-cell0-conductor-0"
Feb 17 20:28:55 crc kubenswrapper[4793]: I0217 20:28:55.353420 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b77f41b-340b-4eed-a38a-cc9dac77786f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3b77f41b-340b-4eed-a38a-cc9dac77786f\") " pod="openstack/nova-cell0-conductor-0"
Feb 17 20:28:55 crc kubenswrapper[4793]: I0217 20:28:55.354262 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b77f41b-340b-4eed-a38a-cc9dac77786f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3b77f41b-340b-4eed-a38a-cc9dac77786f\") " pod="openstack/nova-cell0-conductor-0"
Feb 17 20:28:55 crc kubenswrapper[4793]: I0217 20:28:55.394400 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbjgq\" (UniqueName: \"kubernetes.io/projected/3b77f41b-340b-4eed-a38a-cc9dac77786f-kube-api-access-rbjgq\") pod \"nova-cell0-conductor-0\" (UID: \"3b77f41b-340b-4eed-a38a-cc9dac77786f\") " pod="openstack/nova-cell0-conductor-0"
Feb 17 20:28:55 crc kubenswrapper[4793]: I0217 20:28:55.463179 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Feb 17 20:28:55 crc kubenswrapper[4793]: I0217 20:28:55.495564 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 20:28:55 crc kubenswrapper[4793]: W0217 20:28:55.524939 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded59c12a_71e1_4c06_a7c7_a6fbb904afb3.slice/crio-704c49f8e047b3c360458475c37f35e4cf47f8374fbbe30c1db83da35a516104 WatchSource:0}: Error finding container 704c49f8e047b3c360458475c37f35e4cf47f8374fbbe30c1db83da35a516104: Status 404 returned error can't find the container with id 704c49f8e047b3c360458475c37f35e4cf47f8374fbbe30c1db83da35a516104
Feb 17 20:28:55 crc kubenswrapper[4793]: I0217 20:28:55.564807 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0" path="/var/lib/kubelet/pods/e301a07d-e0b6-4fb8-acf5-5e0025cb4bb0/volumes"
Feb 17 20:28:55 crc kubenswrapper[4793]: I0217 20:28:55.816562 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 17 20:28:55 crc kubenswrapper[4793]: W0217 20:28:55.822158 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b77f41b_340b_4eed_a38a_cc9dac77786f.slice/crio-8081a35df1de93971baf7a6d197426891939e0468ad35bd1605959492f983816 WatchSource:0}: Error finding container 8081a35df1de93971baf7a6d197426891939e0468ad35bd1605959492f983816: Status 404 returned error can't find the container with id 8081a35df1de93971baf7a6d197426891939e0468ad35bd1605959492f983816
Feb 17 20:28:56 crc kubenswrapper[4793]: I0217 20:28:56.018178 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed59c12a-71e1-4c06-a7c7-a6fbb904afb3","Type":"ContainerStarted","Data":"32f9f02da4dd5d35b6771e9b60479f49561cab1e6d3da381903d74502f1a56c2"}
Feb 17 20:28:56 crc kubenswrapper[4793]: I0217 20:28:56.018430 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed59c12a-71e1-4c06-a7c7-a6fbb904afb3","Type":"ContainerStarted","Data":"704c49f8e047b3c360458475c37f35e4cf47f8374fbbe30c1db83da35a516104"}
Feb 17 20:28:56 crc kubenswrapper[4793]: I0217 20:28:56.032872 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"3b77f41b-340b-4eed-a38a-cc9dac77786f","Type":"ContainerStarted","Data":"8081a35df1de93971baf7a6d197426891939e0468ad35bd1605959492f983816"}
Feb 17 20:28:56 crc kubenswrapper[4793]: I0217 20:28:56.033829 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Feb 17 20:28:56 crc kubenswrapper[4793]: I0217 20:28:56.053779 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=1.053757479 podStartE2EDuration="1.053757479s" podCreationTimestamp="2026-02-17 20:28:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:28:56.046452928 +0000 UTC m=+1211.338151239" watchObservedRunningTime="2026-02-17 20:28:56.053757479 +0000 UTC m=+1211.345455790"
Feb 17 20:28:57 crc kubenswrapper[4793]: I0217 20:28:57.043729 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed59c12a-71e1-4c06-a7c7-a6fbb904afb3","Type":"ContainerStarted","Data":"a09a66f63a91716b600f65cf4603232d01407e99a77478bd070989af396126ad"}
Feb 17 20:28:57 crc kubenswrapper[4793]: I0217 20:28:57.044301 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed59c12a-71e1-4c06-a7c7-a6fbb904afb3","Type":"ContainerStarted","Data":"886d28897251bfbc17d71fd50ee605752ee4ff31a769a319d0fc49751b2807d2"}
Feb 17 20:28:57 crc kubenswrapper[4793]: I0217 20:28:57.045631 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"3b77f41b-340b-4eed-a38a-cc9dac77786f","Type":"ContainerStarted","Data":"cf595bff5584ea7e733d39640b98e849310418272c5c577778e61f64f8396bf1"}
Feb 17 20:28:57 crc kubenswrapper[4793]: I0217 20:28:57.045647 4793 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 17 20:28:57 crc kubenswrapper[4793]: I0217 20:28:57.045702 4793 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 17 20:28:57 crc kubenswrapper[4793]: I0217 20:28:57.178587 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Feb 17 20:28:57 crc kubenswrapper[4793]: I0217 20:28:57.262438 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Feb 17 20:28:59 crc kubenswrapper[4793]: I0217 20:28:59.072104 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed59c12a-71e1-4c06-a7c7-a6fbb904afb3","Type":"ContainerStarted","Data":"a70ddd3c894c3bb2b87f13ece378dab7614dee0bd48bebd72c0166a2186abe33"}
Feb 17 20:28:59 crc kubenswrapper[4793]: I0217 20:28:59.097837 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.566508451 podStartE2EDuration="5.097816651s" podCreationTimestamp="2026-02-17 20:28:54 +0000 UTC" firstStartedPulling="2026-02-17 20:28:55.550917584 +0000 UTC m=+1210.842615895" lastFinishedPulling="2026-02-17 20:28:58.082225784 +0000 UTC m=+1213.373924095" observedRunningTime="2026-02-17 20:28:59.08892015 +0000 UTC m=+1214.380618471" watchObservedRunningTime="2026-02-17 20:28:59.097816651 +0000 UTC m=+1214.389514972"
Feb 17 20:29:00 crc kubenswrapper[4793]: I0217 20:29:00.087222 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 17 20:29:02 crc kubenswrapper[4793]: I0217 20:29:02.539325 4793 scope.go:117] "RemoveContainer" containerID="8642250fd6494650238f39fe809ae38a4813c6fb101ece6c7d92ea9166b43afe"
Feb 17 20:29:03 crc kubenswrapper[4793]: I0217 20:29:03.123283 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"06ecbc8e-aa9f-4025-883d-65e4c000d986","Type":"ContainerStarted","Data":"931e47d759f3adcee541e6fe68bc8ebef51e63ece70add63ea24cdb6c500de3e"}
Feb 17 20:29:05 crc kubenswrapper[4793]: I0217 20:29:05.508142 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Feb 17 20:29:05 crc kubenswrapper[4793]: I0217 20:29:05.991858 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-mz7ss"]
Feb 17 20:29:05 crc kubenswrapper[4793]: I0217 20:29:05.993293 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mz7ss"
Feb 17 20:29:05 crc kubenswrapper[4793]: I0217 20:29:05.995759 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Feb 17 20:29:05 crc kubenswrapper[4793]: I0217 20:29:05.996295 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.003775 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-mz7ss"]
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.089951 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v22g4\" (UniqueName: \"kubernetes.io/projected/6f3ef9a1-5f91-4493-8070-1b6125e6f1b7-kube-api-access-v22g4\") pod \"nova-cell0-cell-mapping-mz7ss\" (UID: \"6f3ef9a1-5f91-4493-8070-1b6125e6f1b7\") " pod="openstack/nova-cell0-cell-mapping-mz7ss"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.090001 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f3ef9a1-5f91-4493-8070-1b6125e6f1b7-scripts\") pod \"nova-cell0-cell-mapping-mz7ss\" (UID: \"6f3ef9a1-5f91-4493-8070-1b6125e6f1b7\") " pod="openstack/nova-cell0-cell-mapping-mz7ss"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.090022 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f3ef9a1-5f91-4493-8070-1b6125e6f1b7-config-data\") pod \"nova-cell0-cell-mapping-mz7ss\" (UID: \"6f3ef9a1-5f91-4493-8070-1b6125e6f1b7\") " pod="openstack/nova-cell0-cell-mapping-mz7ss"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.091085 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f3ef9a1-5f91-4493-8070-1b6125e6f1b7-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-mz7ss\" (UID: \"6f3ef9a1-5f91-4493-8070-1b6125e6f1b7\") " pod="openstack/nova-cell0-cell-mapping-mz7ss"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.170266 4793 generic.go:334] "Generic (PLEG): container finished" podID="06ecbc8e-aa9f-4025-883d-65e4c000d986" containerID="931e47d759f3adcee541e6fe68bc8ebef51e63ece70add63ea24cdb6c500de3e" exitCode=1
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.170310 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"06ecbc8e-aa9f-4025-883d-65e4c000d986","Type":"ContainerDied","Data":"931e47d759f3adcee541e6fe68bc8ebef51e63ece70add63ea24cdb6c500de3e"}
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.170341 4793 scope.go:117] "RemoveContainer" containerID="8642250fd6494650238f39fe809ae38a4813c6fb101ece6c7d92ea9166b43afe"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.170995 4793 scope.go:117] "RemoveContainer" containerID="931e47d759f3adcee541e6fe68bc8ebef51e63ece70add63ea24cdb6c500de3e"
Feb 17 20:29:06 crc kubenswrapper[4793]: E0217 20:29:06.171235 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.192520 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f3ef9a1-5f91-4493-8070-1b6125e6f1b7-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-mz7ss\" (UID: \"6f3ef9a1-5f91-4493-8070-1b6125e6f1b7\") " pod="openstack/nova-cell0-cell-mapping-mz7ss"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.192698 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v22g4\" (UniqueName: \"kubernetes.io/projected/6f3ef9a1-5f91-4493-8070-1b6125e6f1b7-kube-api-access-v22g4\") pod \"nova-cell0-cell-mapping-mz7ss\" (UID: \"6f3ef9a1-5f91-4493-8070-1b6125e6f1b7\") " pod="openstack/nova-cell0-cell-mapping-mz7ss"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.192725 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f3ef9a1-5f91-4493-8070-1b6125e6f1b7-scripts\") pod \"nova-cell0-cell-mapping-mz7ss\" (UID: \"6f3ef9a1-5f91-4493-8070-1b6125e6f1b7\") " pod="openstack/nova-cell0-cell-mapping-mz7ss"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.192746 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f3ef9a1-5f91-4493-8070-1b6125e6f1b7-config-data\") pod \"nova-cell0-cell-mapping-mz7ss\" (UID: \"6f3ef9a1-5f91-4493-8070-1b6125e6f1b7\") " pod="openstack/nova-cell0-cell-mapping-mz7ss"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.194513 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.195992 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.203709 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f3ef9a1-5f91-4493-8070-1b6125e6f1b7-scripts\") pod \"nova-cell0-cell-mapping-mz7ss\" (UID: \"6f3ef9a1-5f91-4493-8070-1b6125e6f1b7\") " pod="openstack/nova-cell0-cell-mapping-mz7ss"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.208037 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f3ef9a1-5f91-4493-8070-1b6125e6f1b7-config-data\") pod \"nova-cell0-cell-mapping-mz7ss\" (UID: \"6f3ef9a1-5f91-4493-8070-1b6125e6f1b7\") " pod="openstack/nova-cell0-cell-mapping-mz7ss"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.209265 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f3ef9a1-5f91-4493-8070-1b6125e6f1b7-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-mz7ss\" (UID: \"6f3ef9a1-5f91-4493-8070-1b6125e6f1b7\") " pod="openstack/nova-cell0-cell-mapping-mz7ss"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.223111 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.229057 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.300341 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e2bb214-3f4a-45a9-bdc5-01d75750d494-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e2bb214-3f4a-45a9-bdc5-01d75750d494\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.300384 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw8bb\" (UniqueName: \"kubernetes.io/projected/6e2bb214-3f4a-45a9-bdc5-01d75750d494-kube-api-access-kw8bb\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e2bb214-3f4a-45a9-bdc5-01d75750d494\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.300467 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e2bb214-3f4a-45a9-bdc5-01d75750d494-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e2bb214-3f4a-45a9-bdc5-01d75750d494\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.309299 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v22g4\" (UniqueName: \"kubernetes.io/projected/6f3ef9a1-5f91-4493-8070-1b6125e6f1b7-kube-api-access-v22g4\") pod \"nova-cell0-cell-mapping-mz7ss\" (UID: \"6f3ef9a1-5f91-4493-8070-1b6125e6f1b7\") " pod="openstack/nova-cell0-cell-mapping-mz7ss"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.311593 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mz7ss"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.324597 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.326420 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.342001 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.342730 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.382335 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.383473 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.403131 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.404462 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fa127ec-e778-4eb1-b65d-477e849110ca-config-data\") pod \"nova-api-0\" (UID: \"5fa127ec-e778-4eb1-b65d-477e849110ca\") " pod="openstack/nova-api-0"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.404498 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e2bb214-3f4a-45a9-bdc5-01d75750d494-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e2bb214-3f4a-45a9-bdc5-01d75750d494\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.404547 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wl94\" (UniqueName: \"kubernetes.io/projected/5fa127ec-e778-4eb1-b65d-477e849110ca-kube-api-access-9wl94\") pod \"nova-api-0\" (UID: \"5fa127ec-e778-4eb1-b65d-477e849110ca\") " pod="openstack/nova-api-0"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.404570 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fa127ec-e778-4eb1-b65d-477e849110ca-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5fa127ec-e778-4eb1-b65d-477e849110ca\") " pod="openstack/nova-api-0"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.404644 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e2bb214-3f4a-45a9-bdc5-01d75750d494-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e2bb214-3f4a-45a9-bdc5-01d75750d494\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.404667 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw8bb\" (UniqueName: \"kubernetes.io/projected/6e2bb214-3f4a-45a9-bdc5-01d75750d494-kube-api-access-kw8bb\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e2bb214-3f4a-45a9-bdc5-01d75750d494\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.404738 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5fa127ec-e778-4eb1-b65d-477e849110ca-logs\") pod \"nova-api-0\" (UID: \"5fa127ec-e778-4eb1-b65d-477e849110ca\") " pod="openstack/nova-api-0"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.423064 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.436476 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e2bb214-3f4a-45a9-bdc5-01d75750d494-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e2bb214-3f4a-45a9-bdc5-01d75750d494\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.467468 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e2bb214-3f4a-45a9-bdc5-01d75750d494-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e2bb214-3f4a-45a9-bdc5-01d75750d494\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.467465 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw8bb\" (UniqueName: \"kubernetes.io/projected/6e2bb214-3f4a-45a9-bdc5-01d75750d494-kube-api-access-kw8bb\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e2bb214-3f4a-45a9-bdc5-01d75750d494\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.481905 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.508328 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvck8\" (UniqueName: \"kubernetes.io/projected/152dd5ef-8cba-4731-9eec-8803f3785679-kube-api-access-wvck8\") pod \"nova-scheduler-0\" (UID: \"152dd5ef-8cba-4731-9eec-8803f3785679\") " pod="openstack/nova-scheduler-0"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.508416 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/152dd5ef-8cba-4731-9eec-8803f3785679-config-data\") pod \"nova-scheduler-0\" (UID: \"152dd5ef-8cba-4731-9eec-8803f3785679\") " pod="openstack/nova-scheduler-0"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.508471 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5fa127ec-e778-4eb1-b65d-477e849110ca-logs\") pod \"nova-api-0\" (UID: \"5fa127ec-e778-4eb1-b65d-477e849110ca\") " pod="openstack/nova-api-0"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.508506 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fa127ec-e778-4eb1-b65d-477e849110ca-config-data\") pod \"nova-api-0\" (UID: \"5fa127ec-e778-4eb1-b65d-477e849110ca\") " pod="openstack/nova-api-0"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.508547 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/152dd5ef-8cba-4731-9eec-8803f3785679-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"152dd5ef-8cba-4731-9eec-8803f3785679\") " pod="openstack/nova-scheduler-0"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.508571 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wl94\" (UniqueName: \"kubernetes.io/projected/5fa127ec-e778-4eb1-b65d-477e849110ca-kube-api-access-9wl94\") pod \"nova-api-0\" (UID: \"5fa127ec-e778-4eb1-b65d-477e849110ca\") " pod="openstack/nova-api-0"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.508598 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fa127ec-e778-4eb1-b65d-477e849110ca-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5fa127ec-e778-4eb1-b65d-477e849110ca\") " pod="openstack/nova-api-0"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.509224 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5fa127ec-e778-4eb1-b65d-477e849110ca-logs\") pod \"nova-api-0\" (UID: \"5fa127ec-e778-4eb1-b65d-477e849110ca\") " pod="openstack/nova-api-0"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.515936 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fa127ec-e778-4eb1-b65d-477e849110ca-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5fa127ec-e778-4eb1-b65d-477e849110ca\") " pod="openstack/nova-api-0"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.531549 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fa127ec-e778-4eb1-b65d-477e849110ca-config-data\") pod \"nova-api-0\" (UID: \"5fa127ec-e778-4eb1-b65d-477e849110ca\") " pod="openstack/nova-api-0"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.555187 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wl94\" (UniqueName: \"kubernetes.io/projected/5fa127ec-e778-4eb1-b65d-477e849110ca-kube-api-access-9wl94\") pod \"nova-api-0\" (UID: \"5fa127ec-e778-4eb1-b65d-477e849110ca\") " pod="openstack/nova-api-0"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.568580 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.571249 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.579292 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.591073 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.612743 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/152dd5ef-8cba-4731-9eec-8803f3785679-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"152dd5ef-8cba-4731-9eec-8803f3785679\") " pod="openstack/nova-scheduler-0"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.612835 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6dgk\" (UniqueName: \"kubernetes.io/projected/4ba22f5c-6d29-4c7d-b560-79a98ca81234-kube-api-access-r6dgk\") pod \"nova-metadata-0\" (UID: \"4ba22f5c-6d29-4c7d-b560-79a98ca81234\") " pod="openstack/nova-metadata-0"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.612864 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ba22f5c-6d29-4c7d-b560-79a98ca81234-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4ba22f5c-6d29-4c7d-b560-79a98ca81234\") " pod="openstack/nova-metadata-0"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.612909 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvck8\" (UniqueName: \"kubernetes.io/projected/152dd5ef-8cba-4731-9eec-8803f3785679-kube-api-access-wvck8\") pod \"nova-scheduler-0\" (UID: \"152dd5ef-8cba-4731-9eec-8803f3785679\") " pod="openstack/nova-scheduler-0"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.612973 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/152dd5ef-8cba-4731-9eec-8803f3785679-config-data\") pod \"nova-scheduler-0\" (UID: \"152dd5ef-8cba-4731-9eec-8803f3785679\") " pod="openstack/nova-scheduler-0"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.612993 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ba22f5c-6d29-4c7d-b560-79a98ca81234-logs\") pod \"nova-metadata-0\" (UID: \"4ba22f5c-6d29-4c7d-b560-79a98ca81234\") " pod="openstack/nova-metadata-0"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.613054 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ba22f5c-6d29-4c7d-b560-79a98ca81234-config-data\") pod \"nova-metadata-0\" (UID: \"4ba22f5c-6d29-4c7d-b560-79a98ca81234\") " pod="openstack/nova-metadata-0"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.618837 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68fc58f487-92zf8"]
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.621340 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68fc58f487-92zf8"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.629471 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68fc58f487-92zf8"]
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.635136 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/152dd5ef-8cba-4731-9eec-8803f3785679-config-data\") pod \"nova-scheduler-0\" (UID: \"152dd5ef-8cba-4731-9eec-8803f3785679\") " pod="openstack/nova-scheduler-0"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.646828 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvck8\" (UniqueName: \"kubernetes.io/projected/152dd5ef-8cba-4731-9eec-8803f3785679-kube-api-access-wvck8\") pod \"nova-scheduler-0\" (UID: \"152dd5ef-8cba-4731-9eec-8803f3785679\") " pod="openstack/nova-scheduler-0"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.649007 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/152dd5ef-8cba-4731-9eec-8803f3785679-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"152dd5ef-8cba-4731-9eec-8803f3785679\") " pod="openstack/nova-scheduler-0"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.726680 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ff06b45-da92-4c62-a974-7a51d30a16ed-dns-swift-storage-0\") pod \"dnsmasq-dns-68fc58f487-92zf8\" (UID: \"9ff06b45-da92-4c62-a974-7a51d30a16ed\") " pod="openstack/dnsmasq-dns-68fc58f487-92zf8"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.727442 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ba22f5c-6d29-4c7d-b560-79a98ca81234-logs\") pod \"nova-metadata-0\" (UID: \"4ba22f5c-6d29-4c7d-b560-79a98ca81234\") " pod="openstack/nova-metadata-0"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.727480 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ff06b45-da92-4c62-a974-7a51d30a16ed-dns-svc\") pod \"dnsmasq-dns-68fc58f487-92zf8\" (UID: \"9ff06b45-da92-4c62-a974-7a51d30a16ed\") " pod="openstack/dnsmasq-dns-68fc58f487-92zf8"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.727517 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc2d8\" (UniqueName: \"kubernetes.io/projected/9ff06b45-da92-4c62-a974-7a51d30a16ed-kube-api-access-sc2d8\") pod \"dnsmasq-dns-68fc58f487-92zf8\" (UID: \"9ff06b45-da92-4c62-a974-7a51d30a16ed\") " pod="openstack/dnsmasq-dns-68fc58f487-92zf8"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.727602 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ba22f5c-6d29-4c7d-b560-79a98ca81234-config-data\") pod \"nova-metadata-0\" (UID: \"4ba22f5c-6d29-4c7d-b560-79a98ca81234\") " pod="openstack/nova-metadata-0"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.727633 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ff06b45-da92-4c62-a974-7a51d30a16ed-config\") pod \"dnsmasq-dns-68fc58f487-92zf8\" (UID: \"9ff06b45-da92-4c62-a974-7a51d30a16ed\") " pod="openstack/dnsmasq-dns-68fc58f487-92zf8"
Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.727725 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ff06b45-da92-4c62-a974-7a51d30a16ed-ovsdbserver-nb\") pod \"dnsmasq-dns-68fc58f487-92zf8\" (UID: \"9ff06b45-da92-4c62-a974-7a51d30a16ed\")
" pod="openstack/dnsmasq-dns-68fc58f487-92zf8" Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.727760 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ff06b45-da92-4c62-a974-7a51d30a16ed-ovsdbserver-sb\") pod \"dnsmasq-dns-68fc58f487-92zf8\" (UID: \"9ff06b45-da92-4c62-a974-7a51d30a16ed\") " pod="openstack/dnsmasq-dns-68fc58f487-92zf8" Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.729417 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6dgk\" (UniqueName: \"kubernetes.io/projected/4ba22f5c-6d29-4c7d-b560-79a98ca81234-kube-api-access-r6dgk\") pod \"nova-metadata-0\" (UID: \"4ba22f5c-6d29-4c7d-b560-79a98ca81234\") " pod="openstack/nova-metadata-0" Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.729526 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ba22f5c-6d29-4c7d-b560-79a98ca81234-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4ba22f5c-6d29-4c7d-b560-79a98ca81234\") " pod="openstack/nova-metadata-0" Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.731187 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ba22f5c-6d29-4c7d-b560-79a98ca81234-logs\") pod \"nova-metadata-0\" (UID: \"4ba22f5c-6d29-4c7d-b560-79a98ca81234\") " pod="openstack/nova-metadata-0" Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.737260 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ba22f5c-6d29-4c7d-b560-79a98ca81234-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4ba22f5c-6d29-4c7d-b560-79a98ca81234\") " pod="openstack/nova-metadata-0" Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.737772 4793 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ba22f5c-6d29-4c7d-b560-79a98ca81234-config-data\") pod \"nova-metadata-0\" (UID: \"4ba22f5c-6d29-4c7d-b560-79a98ca81234\") " pod="openstack/nova-metadata-0" Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.749621 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6dgk\" (UniqueName: \"kubernetes.io/projected/4ba22f5c-6d29-4c7d-b560-79a98ca81234-kube-api-access-r6dgk\") pod \"nova-metadata-0\" (UID: \"4ba22f5c-6d29-4c7d-b560-79a98ca81234\") " pod="openstack/nova-metadata-0" Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.832323 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ff06b45-da92-4c62-a974-7a51d30a16ed-dns-swift-storage-0\") pod \"dnsmasq-dns-68fc58f487-92zf8\" (UID: \"9ff06b45-da92-4c62-a974-7a51d30a16ed\") " pod="openstack/dnsmasq-dns-68fc58f487-92zf8" Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.832405 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ff06b45-da92-4c62-a974-7a51d30a16ed-dns-svc\") pod \"dnsmasq-dns-68fc58f487-92zf8\" (UID: \"9ff06b45-da92-4c62-a974-7a51d30a16ed\") " pod="openstack/dnsmasq-dns-68fc58f487-92zf8" Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.832440 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc2d8\" (UniqueName: \"kubernetes.io/projected/9ff06b45-da92-4c62-a974-7a51d30a16ed-kube-api-access-sc2d8\") pod \"dnsmasq-dns-68fc58f487-92zf8\" (UID: \"9ff06b45-da92-4c62-a974-7a51d30a16ed\") " pod="openstack/dnsmasq-dns-68fc58f487-92zf8" Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.832515 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9ff06b45-da92-4c62-a974-7a51d30a16ed-config\") pod \"dnsmasq-dns-68fc58f487-92zf8\" (UID: \"9ff06b45-da92-4c62-a974-7a51d30a16ed\") " pod="openstack/dnsmasq-dns-68fc58f487-92zf8" Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.832570 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ff06b45-da92-4c62-a974-7a51d30a16ed-ovsdbserver-nb\") pod \"dnsmasq-dns-68fc58f487-92zf8\" (UID: \"9ff06b45-da92-4c62-a974-7a51d30a16ed\") " pod="openstack/dnsmasq-dns-68fc58f487-92zf8" Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.832605 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ff06b45-da92-4c62-a974-7a51d30a16ed-ovsdbserver-sb\") pod \"dnsmasq-dns-68fc58f487-92zf8\" (UID: \"9ff06b45-da92-4c62-a974-7a51d30a16ed\") " pod="openstack/dnsmasq-dns-68fc58f487-92zf8" Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.833504 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ff06b45-da92-4c62-a974-7a51d30a16ed-dns-svc\") pod \"dnsmasq-dns-68fc58f487-92zf8\" (UID: \"9ff06b45-da92-4c62-a974-7a51d30a16ed\") " pod="openstack/dnsmasq-dns-68fc58f487-92zf8" Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.833521 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ff06b45-da92-4c62-a974-7a51d30a16ed-ovsdbserver-sb\") pod \"dnsmasq-dns-68fc58f487-92zf8\" (UID: \"9ff06b45-da92-4c62-a974-7a51d30a16ed\") " pod="openstack/dnsmasq-dns-68fc58f487-92zf8" Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.833501 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ff06b45-da92-4c62-a974-7a51d30a16ed-dns-swift-storage-0\") pod 
\"dnsmasq-dns-68fc58f487-92zf8\" (UID: \"9ff06b45-da92-4c62-a974-7a51d30a16ed\") " pod="openstack/dnsmasq-dns-68fc58f487-92zf8" Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.833970 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ff06b45-da92-4c62-a974-7a51d30a16ed-ovsdbserver-nb\") pod \"dnsmasq-dns-68fc58f487-92zf8\" (UID: \"9ff06b45-da92-4c62-a974-7a51d30a16ed\") " pod="openstack/dnsmasq-dns-68fc58f487-92zf8" Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.834281 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ff06b45-da92-4c62-a974-7a51d30a16ed-config\") pod \"dnsmasq-dns-68fc58f487-92zf8\" (UID: \"9ff06b45-da92-4c62-a974-7a51d30a16ed\") " pod="openstack/dnsmasq-dns-68fc58f487-92zf8" Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.837183 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.854225 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc2d8\" (UniqueName: \"kubernetes.io/projected/9ff06b45-da92-4c62-a974-7a51d30a16ed-kube-api-access-sc2d8\") pod \"dnsmasq-dns-68fc58f487-92zf8\" (UID: \"9ff06b45-da92-4c62-a974-7a51d30a16ed\") " pod="openstack/dnsmasq-dns-68fc58f487-92zf8" Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.873231 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.893489 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.954340 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68fc58f487-92zf8" Feb 17 20:29:06 crc kubenswrapper[4793]: I0217 20:29:06.963189 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 17 20:29:07 crc kubenswrapper[4793]: I0217 20:29:07.016397 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-mz7ss"] Feb 17 20:29:07 crc kubenswrapper[4793]: I0217 20:29:07.190525 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 20:29:07 crc kubenswrapper[4793]: I0217 20:29:07.194430 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mz7ss" event={"ID":"6f3ef9a1-5f91-4493-8070-1b6125e6f1b7","Type":"ContainerStarted","Data":"34d13ce25b4b0f124f85aa9888040de8403771e4741010490dfb0051cc13a6a9"} Feb 17 20:29:07 crc kubenswrapper[4793]: I0217 20:29:07.196544 4793 scope.go:117] "RemoveContainer" containerID="931e47d759f3adcee541e6fe68bc8ebef51e63ece70add63ea24cdb6c500de3e" Feb 17 20:29:07 crc kubenswrapper[4793]: E0217 20:29:07.196751 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:29:07 crc kubenswrapper[4793]: I0217 20:29:07.442212 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 20:29:07 crc kubenswrapper[4793]: W0217 20:29:07.804951 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ba22f5c_6d29_4c7d_b560_79a98ca81234.slice/crio-4b322fa705a8d02fc06850d115715013f443865945518a164be3b90190781877 WatchSource:0}: Error finding container 
4b322fa705a8d02fc06850d115715013f443865945518a164be3b90190781877: Status 404 returned error can't find the container with id 4b322fa705a8d02fc06850d115715013f443865945518a164be3b90190781877 Feb 17 20:29:07 crc kubenswrapper[4793]: I0217 20:29:07.812451 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 20:29:07 crc kubenswrapper[4793]: W0217 20:29:07.854049 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod152dd5ef_8cba_4731_9eec_8803f3785679.slice/crio-5515c98827e5d1203b3e181de7011a43b93a34b962cb5bb6ecae151119f5d8ac WatchSource:0}: Error finding container 5515c98827e5d1203b3e181de7011a43b93a34b962cb5bb6ecae151119f5d8ac: Status 404 returned error can't find the container with id 5515c98827e5d1203b3e181de7011a43b93a34b962cb5bb6ecae151119f5d8ac Feb 17 20:29:07 crc kubenswrapper[4793]: I0217 20:29:07.856798 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7wxh2"] Feb 17 20:29:07 crc kubenswrapper[4793]: I0217 20:29:07.858270 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7wxh2" Feb 17 20:29:07 crc kubenswrapper[4793]: I0217 20:29:07.862182 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 17 20:29:07 crc kubenswrapper[4793]: I0217 20:29:07.862393 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 17 20:29:07 crc kubenswrapper[4793]: I0217 20:29:07.877374 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 20:29:07 crc kubenswrapper[4793]: I0217 20:29:07.898866 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68fc58f487-92zf8"] Feb 17 20:29:07 crc kubenswrapper[4793]: I0217 20:29:07.908467 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7wxh2"] Feb 17 20:29:07 crc kubenswrapper[4793]: I0217 20:29:07.976325 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dl8pb\" (UniqueName: \"kubernetes.io/projected/f26f2ccf-e459-4a2b-9cc8-c06d7165c94b-kube-api-access-dl8pb\") pod \"nova-cell1-conductor-db-sync-7wxh2\" (UID: \"f26f2ccf-e459-4a2b-9cc8-c06d7165c94b\") " pod="openstack/nova-cell1-conductor-db-sync-7wxh2" Feb 17 20:29:07 crc kubenswrapper[4793]: I0217 20:29:07.976393 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f26f2ccf-e459-4a2b-9cc8-c06d7165c94b-config-data\") pod \"nova-cell1-conductor-db-sync-7wxh2\" (UID: \"f26f2ccf-e459-4a2b-9cc8-c06d7165c94b\") " pod="openstack/nova-cell1-conductor-db-sync-7wxh2" Feb 17 20:29:07 crc kubenswrapper[4793]: I0217 20:29:07.976486 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/f26f2ccf-e459-4a2b-9cc8-c06d7165c94b-scripts\") pod \"nova-cell1-conductor-db-sync-7wxh2\" (UID: \"f26f2ccf-e459-4a2b-9cc8-c06d7165c94b\") " pod="openstack/nova-cell1-conductor-db-sync-7wxh2" Feb 17 20:29:07 crc kubenswrapper[4793]: I0217 20:29:07.976570 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f26f2ccf-e459-4a2b-9cc8-c06d7165c94b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-7wxh2\" (UID: \"f26f2ccf-e459-4a2b-9cc8-c06d7165c94b\") " pod="openstack/nova-cell1-conductor-db-sync-7wxh2" Feb 17 20:29:08 crc kubenswrapper[4793]: I0217 20:29:08.078710 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f26f2ccf-e459-4a2b-9cc8-c06d7165c94b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-7wxh2\" (UID: \"f26f2ccf-e459-4a2b-9cc8-c06d7165c94b\") " pod="openstack/nova-cell1-conductor-db-sync-7wxh2" Feb 17 20:29:08 crc kubenswrapper[4793]: I0217 20:29:08.078857 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dl8pb\" (UniqueName: \"kubernetes.io/projected/f26f2ccf-e459-4a2b-9cc8-c06d7165c94b-kube-api-access-dl8pb\") pod \"nova-cell1-conductor-db-sync-7wxh2\" (UID: \"f26f2ccf-e459-4a2b-9cc8-c06d7165c94b\") " pod="openstack/nova-cell1-conductor-db-sync-7wxh2" Feb 17 20:29:08 crc kubenswrapper[4793]: I0217 20:29:08.078893 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f26f2ccf-e459-4a2b-9cc8-c06d7165c94b-config-data\") pod \"nova-cell1-conductor-db-sync-7wxh2\" (UID: \"f26f2ccf-e459-4a2b-9cc8-c06d7165c94b\") " pod="openstack/nova-cell1-conductor-db-sync-7wxh2" Feb 17 20:29:08 crc kubenswrapper[4793]: I0217 20:29:08.078929 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/f26f2ccf-e459-4a2b-9cc8-c06d7165c94b-scripts\") pod \"nova-cell1-conductor-db-sync-7wxh2\" (UID: \"f26f2ccf-e459-4a2b-9cc8-c06d7165c94b\") " pod="openstack/nova-cell1-conductor-db-sync-7wxh2" Feb 17 20:29:08 crc kubenswrapper[4793]: I0217 20:29:08.083276 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f26f2ccf-e459-4a2b-9cc8-c06d7165c94b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-7wxh2\" (UID: \"f26f2ccf-e459-4a2b-9cc8-c06d7165c94b\") " pod="openstack/nova-cell1-conductor-db-sync-7wxh2" Feb 17 20:29:08 crc kubenswrapper[4793]: I0217 20:29:08.083577 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f26f2ccf-e459-4a2b-9cc8-c06d7165c94b-scripts\") pod \"nova-cell1-conductor-db-sync-7wxh2\" (UID: \"f26f2ccf-e459-4a2b-9cc8-c06d7165c94b\") " pod="openstack/nova-cell1-conductor-db-sync-7wxh2" Feb 17 20:29:08 crc kubenswrapper[4793]: I0217 20:29:08.084099 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f26f2ccf-e459-4a2b-9cc8-c06d7165c94b-config-data\") pod \"nova-cell1-conductor-db-sync-7wxh2\" (UID: \"f26f2ccf-e459-4a2b-9cc8-c06d7165c94b\") " pod="openstack/nova-cell1-conductor-db-sync-7wxh2" Feb 17 20:29:08 crc kubenswrapper[4793]: I0217 20:29:08.094404 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dl8pb\" (UniqueName: \"kubernetes.io/projected/f26f2ccf-e459-4a2b-9cc8-c06d7165c94b-kube-api-access-dl8pb\") pod \"nova-cell1-conductor-db-sync-7wxh2\" (UID: \"f26f2ccf-e459-4a2b-9cc8-c06d7165c94b\") " pod="openstack/nova-cell1-conductor-db-sync-7wxh2" Feb 17 20:29:08 crc kubenswrapper[4793]: I0217 20:29:08.209804 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"4ba22f5c-6d29-4c7d-b560-79a98ca81234","Type":"ContainerStarted","Data":"4b322fa705a8d02fc06850d115715013f443865945518a164be3b90190781877"} Feb 17 20:29:08 crc kubenswrapper[4793]: I0217 20:29:08.211588 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mz7ss" event={"ID":"6f3ef9a1-5f91-4493-8070-1b6125e6f1b7","Type":"ContainerStarted","Data":"f0e98817664600a87bf052a7a181e0a01e8bb3ff6b51578e14672719efedc416"} Feb 17 20:29:08 crc kubenswrapper[4793]: I0217 20:29:08.217959 4793 generic.go:334] "Generic (PLEG): container finished" podID="9ff06b45-da92-4c62-a974-7a51d30a16ed" containerID="716312813c7a3a9bad9f6b90a61ec138b603d8ae4ad234f9204ad2991d62c17f" exitCode=0 Feb 17 20:29:08 crc kubenswrapper[4793]: I0217 20:29:08.218021 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68fc58f487-92zf8" event={"ID":"9ff06b45-da92-4c62-a974-7a51d30a16ed","Type":"ContainerDied","Data":"716312813c7a3a9bad9f6b90a61ec138b603d8ae4ad234f9204ad2991d62c17f"} Feb 17 20:29:08 crc kubenswrapper[4793]: I0217 20:29:08.218046 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68fc58f487-92zf8" event={"ID":"9ff06b45-da92-4c62-a974-7a51d30a16ed","Type":"ContainerStarted","Data":"5f931c2d1791fedaea3ee1cdad57f8256b58023a418fbb529d74dd430540769f"} Feb 17 20:29:08 crc kubenswrapper[4793]: I0217 20:29:08.221744 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"152dd5ef-8cba-4731-9eec-8803f3785679","Type":"ContainerStarted","Data":"5515c98827e5d1203b3e181de7011a43b93a34b962cb5bb6ecae151119f5d8ac"} Feb 17 20:29:08 crc kubenswrapper[4793]: I0217 20:29:08.229197 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-mz7ss" podStartSLOduration=3.229184797 podStartE2EDuration="3.229184797s" podCreationTimestamp="2026-02-17 20:29:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:29:08.224291995 +0000 UTC m=+1223.515990306" watchObservedRunningTime="2026-02-17 20:29:08.229184797 +0000 UTC m=+1223.520883098" Feb 17 20:29:08 crc kubenswrapper[4793]: I0217 20:29:08.232209 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7wxh2" Feb 17 20:29:08 crc kubenswrapper[4793]: I0217 20:29:08.249026 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6e2bb214-3f4a-45a9-bdc5-01d75750d494","Type":"ContainerStarted","Data":"d74fd59d2babd09cf2df34cc73696ce94aac4745ef65208f7e4126861dd4c97f"} Feb 17 20:29:08 crc kubenswrapper[4793]: I0217 20:29:08.254017 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5fa127ec-e778-4eb1-b65d-477e849110ca","Type":"ContainerStarted","Data":"5b46abb1d3b6fd597c441a640270bb9c55ad07f1c81b52b01ada780456eb5f7f"} Feb 17 20:29:08 crc kubenswrapper[4793]: I0217 20:29:08.892201 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7wxh2"] Feb 17 20:29:09 crc kubenswrapper[4793]: I0217 20:29:09.269739 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68fc58f487-92zf8" event={"ID":"9ff06b45-da92-4c62-a974-7a51d30a16ed","Type":"ContainerStarted","Data":"614feb6a377835fbafd17aef4eb9562fad0dea7793dced49c92e124b37cd0d7f"} Feb 17 20:29:09 crc kubenswrapper[4793]: I0217 20:29:09.270100 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-68fc58f487-92zf8" Feb 17 20:29:09 crc kubenswrapper[4793]: I0217 20:29:09.301975 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-68fc58f487-92zf8" podStartSLOduration=3.301958635 podStartE2EDuration="3.301958635s" podCreationTimestamp="2026-02-17 20:29:06 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:29:09.301880013 +0000 UTC m=+1224.593578334" watchObservedRunningTime="2026-02-17 20:29:09.301958635 +0000 UTC m=+1224.593656946" Feb 17 20:29:09 crc kubenswrapper[4793]: I0217 20:29:09.904872 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 20:29:09 crc kubenswrapper[4793]: I0217 20:29:09.928061 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 20:29:10 crc kubenswrapper[4793]: I0217 20:29:10.303716 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7wxh2" event={"ID":"f26f2ccf-e459-4a2b-9cc8-c06d7165c94b","Type":"ContainerStarted","Data":"77a6974c35fa53fbc01138f30d44d8cea5ad9769a020874bcd4d4d85e909989a"} Feb 17 20:29:11 crc kubenswrapper[4793]: I0217 20:29:11.965801 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 20:29:11 crc kubenswrapper[4793]: I0217 20:29:11.966912 4793 scope.go:117] "RemoveContainer" containerID="931e47d759f3adcee541e6fe68bc8ebef51e63ece70add63ea24cdb6c500de3e" Feb 17 20:29:11 crc kubenswrapper[4793]: E0217 20:29:11.967242 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:29:11 crc kubenswrapper[4793]: I0217 20:29:11.967465 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 20:29:11 crc kubenswrapper[4793]: I0217 20:29:11.967492 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/watcher-applier-0" Feb 17 20:29:12 crc kubenswrapper[4793]: I0217 20:29:12.322115 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"152dd5ef-8cba-4731-9eec-8803f3785679","Type":"ContainerStarted","Data":"f02035fed945a167eb0f066a6fdaa7db26c8b11e5a6ef08cd448ecb6868656bf"} Feb 17 20:29:12 crc kubenswrapper[4793]: I0217 20:29:12.323831 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6e2bb214-3f4a-45a9-bdc5-01d75750d494","Type":"ContainerStarted","Data":"bbea6c983542a633f05b4da44d62ff2358fd2033212474a5abb090bb6f305449"} Feb 17 20:29:12 crc kubenswrapper[4793]: I0217 20:29:12.323955 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="6e2bb214-3f4a-45a9-bdc5-01d75750d494" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://bbea6c983542a633f05b4da44d62ff2358fd2033212474a5abb090bb6f305449" gracePeriod=30 Feb 17 20:29:12 crc kubenswrapper[4793]: I0217 20:29:12.326251 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5fa127ec-e778-4eb1-b65d-477e849110ca","Type":"ContainerStarted","Data":"4b7b5c521a33c0f1925fb6096aad8318ddf2a87fc5bcf9474791a77e9a2e3ccc"} Feb 17 20:29:12 crc kubenswrapper[4793]: I0217 20:29:12.326274 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5fa127ec-e778-4eb1-b65d-477e849110ca","Type":"ContainerStarted","Data":"b553f8bf43a95d33aea88056adf0dffebf9b3c2c4babd900ffdd0b49b7f0a296"} Feb 17 20:29:12 crc kubenswrapper[4793]: I0217 20:29:12.332157 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4ba22f5c-6d29-4c7d-b560-79a98ca81234","Type":"ContainerStarted","Data":"a9c0c6110ca228611a8a12abd517b0cd7f7d3b9ef76d738e89ac06e2d249e200"} Feb 17 20:29:12 crc kubenswrapper[4793]: I0217 20:29:12.332181 4793 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4ba22f5c-6d29-4c7d-b560-79a98ca81234","Type":"ContainerStarted","Data":"974b842645620d9ba55040249040c9d245a926f35ee8c704d37240ccad36b670"} Feb 17 20:29:12 crc kubenswrapper[4793]: I0217 20:29:12.332262 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4ba22f5c-6d29-4c7d-b560-79a98ca81234" containerName="nova-metadata-log" containerID="cri-o://974b842645620d9ba55040249040c9d245a926f35ee8c704d37240ccad36b670" gracePeriod=30 Feb 17 20:29:12 crc kubenswrapper[4793]: I0217 20:29:12.332337 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4ba22f5c-6d29-4c7d-b560-79a98ca81234" containerName="nova-metadata-metadata" containerID="cri-o://a9c0c6110ca228611a8a12abd517b0cd7f7d3b9ef76d738e89ac06e2d249e200" gracePeriod=30 Feb 17 20:29:12 crc kubenswrapper[4793]: I0217 20:29:12.343305 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.026855898 podStartE2EDuration="6.343291539s" podCreationTimestamp="2026-02-17 20:29:06 +0000 UTC" firstStartedPulling="2026-02-17 20:29:07.885779673 +0000 UTC m=+1223.177477984" lastFinishedPulling="2026-02-17 20:29:11.202215294 +0000 UTC m=+1226.493913625" observedRunningTime="2026-02-17 20:29:12.342191742 +0000 UTC m=+1227.633890053" watchObservedRunningTime="2026-02-17 20:29:12.343291539 +0000 UTC m=+1227.634989850" Feb 17 20:29:12 crc kubenswrapper[4793]: I0217 20:29:12.343293 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7wxh2" event={"ID":"f26f2ccf-e459-4a2b-9cc8-c06d7165c94b","Type":"ContainerStarted","Data":"a87fbadb1f85d2f9d8404a012dbbaee7330bcf614fcd8bc0839bd9e112aab960"} Feb 17 20:29:12 crc kubenswrapper[4793]: I0217 20:29:12.343626 4793 scope.go:117] "RemoveContainer" 
containerID="931e47d759f3adcee541e6fe68bc8ebef51e63ece70add63ea24cdb6c500de3e" Feb 17 20:29:12 crc kubenswrapper[4793]: E0217 20:29:12.344068 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:29:12 crc kubenswrapper[4793]: I0217 20:29:12.370234 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.999046387 podStartE2EDuration="6.370215828s" podCreationTimestamp="2026-02-17 20:29:06 +0000 UTC" firstStartedPulling="2026-02-17 20:29:07.83092718 +0000 UTC m=+1223.122625491" lastFinishedPulling="2026-02-17 20:29:11.202096611 +0000 UTC m=+1226.493794932" observedRunningTime="2026-02-17 20:29:12.363623294 +0000 UTC m=+1227.655321605" watchObservedRunningTime="2026-02-17 20:29:12.370215828 +0000 UTC m=+1227.661914139" Feb 17 20:29:12 crc kubenswrapper[4793]: I0217 20:29:12.389281 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.611997189 podStartE2EDuration="6.389271332s" podCreationTimestamp="2026-02-17 20:29:06 +0000 UTC" firstStartedPulling="2026-02-17 20:29:07.462388032 +0000 UTC m=+1222.754086343" lastFinishedPulling="2026-02-17 20:29:11.239662185 +0000 UTC m=+1226.531360486" observedRunningTime="2026-02-17 20:29:12.388063352 +0000 UTC m=+1227.679761663" watchObservedRunningTime="2026-02-17 20:29:12.389271332 +0000 UTC m=+1227.680969643" Feb 17 20:29:12 crc kubenswrapper[4793]: I0217 20:29:12.409536 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.420377349 podStartE2EDuration="6.409517525s" podCreationTimestamp="2026-02-17 20:29:06 
+0000 UTC" firstStartedPulling="2026-02-17 20:29:07.201371477 +0000 UTC m=+1222.493069788" lastFinishedPulling="2026-02-17 20:29:11.190511653 +0000 UTC m=+1226.482209964" observedRunningTime="2026-02-17 20:29:12.402638914 +0000 UTC m=+1227.694337245" watchObservedRunningTime="2026-02-17 20:29:12.409517525 +0000 UTC m=+1227.701215836" Feb 17 20:29:12 crc kubenswrapper[4793]: I0217 20:29:12.981176 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 20:29:12 crc kubenswrapper[4793]: I0217 20:29:12.999875 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-7wxh2" podStartSLOduration=5.999852764 podStartE2EDuration="5.999852764s" podCreationTimestamp="2026-02-17 20:29:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:29:12.423066472 +0000 UTC m=+1227.714764803" watchObservedRunningTime="2026-02-17 20:29:12.999852764 +0000 UTC m=+1228.291551075" Feb 17 20:29:13 crc kubenswrapper[4793]: I0217 20:29:13.128516 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ba22f5c-6d29-4c7d-b560-79a98ca81234-logs\") pod \"4ba22f5c-6d29-4c7d-b560-79a98ca81234\" (UID: \"4ba22f5c-6d29-4c7d-b560-79a98ca81234\") " Feb 17 20:29:13 crc kubenswrapper[4793]: I0217 20:29:13.128739 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ba22f5c-6d29-4c7d-b560-79a98ca81234-config-data\") pod \"4ba22f5c-6d29-4c7d-b560-79a98ca81234\" (UID: \"4ba22f5c-6d29-4c7d-b560-79a98ca81234\") " Feb 17 20:29:13 crc kubenswrapper[4793]: I0217 20:29:13.128798 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6dgk\" (UniqueName: 
\"kubernetes.io/projected/4ba22f5c-6d29-4c7d-b560-79a98ca81234-kube-api-access-r6dgk\") pod \"4ba22f5c-6d29-4c7d-b560-79a98ca81234\" (UID: \"4ba22f5c-6d29-4c7d-b560-79a98ca81234\") " Feb 17 20:29:13 crc kubenswrapper[4793]: I0217 20:29:13.128862 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ba22f5c-6d29-4c7d-b560-79a98ca81234-combined-ca-bundle\") pod \"4ba22f5c-6d29-4c7d-b560-79a98ca81234\" (UID: \"4ba22f5c-6d29-4c7d-b560-79a98ca81234\") " Feb 17 20:29:13 crc kubenswrapper[4793]: I0217 20:29:13.130305 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ba22f5c-6d29-4c7d-b560-79a98ca81234-logs" (OuterVolumeSpecName: "logs") pod "4ba22f5c-6d29-4c7d-b560-79a98ca81234" (UID: "4ba22f5c-6d29-4c7d-b560-79a98ca81234"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:29:13 crc kubenswrapper[4793]: I0217 20:29:13.137986 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ba22f5c-6d29-4c7d-b560-79a98ca81234-kube-api-access-r6dgk" (OuterVolumeSpecName: "kube-api-access-r6dgk") pod "4ba22f5c-6d29-4c7d-b560-79a98ca81234" (UID: "4ba22f5c-6d29-4c7d-b560-79a98ca81234"). InnerVolumeSpecName "kube-api-access-r6dgk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:29:13 crc kubenswrapper[4793]: I0217 20:29:13.157723 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ba22f5c-6d29-4c7d-b560-79a98ca81234-config-data" (OuterVolumeSpecName: "config-data") pod "4ba22f5c-6d29-4c7d-b560-79a98ca81234" (UID: "4ba22f5c-6d29-4c7d-b560-79a98ca81234"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:29:13 crc kubenswrapper[4793]: I0217 20:29:13.171781 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ba22f5c-6d29-4c7d-b560-79a98ca81234-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4ba22f5c-6d29-4c7d-b560-79a98ca81234" (UID: "4ba22f5c-6d29-4c7d-b560-79a98ca81234"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:29:13 crc kubenswrapper[4793]: I0217 20:29:13.232128 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6dgk\" (UniqueName: \"kubernetes.io/projected/4ba22f5c-6d29-4c7d-b560-79a98ca81234-kube-api-access-r6dgk\") on node \"crc\" DevicePath \"\"" Feb 17 20:29:13 crc kubenswrapper[4793]: I0217 20:29:13.232288 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ba22f5c-6d29-4c7d-b560-79a98ca81234-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 20:29:13 crc kubenswrapper[4793]: I0217 20:29:13.232303 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ba22f5c-6d29-4c7d-b560-79a98ca81234-logs\") on node \"crc\" DevicePath \"\"" Feb 17 20:29:13 crc kubenswrapper[4793]: I0217 20:29:13.232316 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ba22f5c-6d29-4c7d-b560-79a98ca81234-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 20:29:13 crc kubenswrapper[4793]: I0217 20:29:13.355389 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 20:29:13 crc kubenswrapper[4793]: I0217 20:29:13.355447 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4ba22f5c-6d29-4c7d-b560-79a98ca81234","Type":"ContainerDied","Data":"a9c0c6110ca228611a8a12abd517b0cd7f7d3b9ef76d738e89ac06e2d249e200"} Feb 17 20:29:13 crc kubenswrapper[4793]: I0217 20:29:13.355879 4793 scope.go:117] "RemoveContainer" containerID="a9c0c6110ca228611a8a12abd517b0cd7f7d3b9ef76d738e89ac06e2d249e200" Feb 17 20:29:13 crc kubenswrapper[4793]: I0217 20:29:13.355289 4793 generic.go:334] "Generic (PLEG): container finished" podID="4ba22f5c-6d29-4c7d-b560-79a98ca81234" containerID="a9c0c6110ca228611a8a12abd517b0cd7f7d3b9ef76d738e89ac06e2d249e200" exitCode=0 Feb 17 20:29:13 crc kubenswrapper[4793]: I0217 20:29:13.376420 4793 generic.go:334] "Generic (PLEG): container finished" podID="4ba22f5c-6d29-4c7d-b560-79a98ca81234" containerID="974b842645620d9ba55040249040c9d245a926f35ee8c704d37240ccad36b670" exitCode=143 Feb 17 20:29:13 crc kubenswrapper[4793]: I0217 20:29:13.377785 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4ba22f5c-6d29-4c7d-b560-79a98ca81234","Type":"ContainerDied","Data":"974b842645620d9ba55040249040c9d245a926f35ee8c704d37240ccad36b670"} Feb 17 20:29:13 crc kubenswrapper[4793]: I0217 20:29:13.377814 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4ba22f5c-6d29-4c7d-b560-79a98ca81234","Type":"ContainerDied","Data":"4b322fa705a8d02fc06850d115715013f443865945518a164be3b90190781877"} Feb 17 20:29:13 crc kubenswrapper[4793]: I0217 20:29:13.384899 4793 scope.go:117] "RemoveContainer" containerID="974b842645620d9ba55040249040c9d245a926f35ee8c704d37240ccad36b670" Feb 17 20:29:13 crc kubenswrapper[4793]: I0217 20:29:13.403213 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 20:29:13 crc 
kubenswrapper[4793]: I0217 20:29:13.413959 4793 scope.go:117] "RemoveContainer" containerID="a9c0c6110ca228611a8a12abd517b0cd7f7d3b9ef76d738e89ac06e2d249e200" Feb 17 20:29:13 crc kubenswrapper[4793]: E0217 20:29:13.414474 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9c0c6110ca228611a8a12abd517b0cd7f7d3b9ef76d738e89ac06e2d249e200\": container with ID starting with a9c0c6110ca228611a8a12abd517b0cd7f7d3b9ef76d738e89ac06e2d249e200 not found: ID does not exist" containerID="a9c0c6110ca228611a8a12abd517b0cd7f7d3b9ef76d738e89ac06e2d249e200" Feb 17 20:29:13 crc kubenswrapper[4793]: I0217 20:29:13.414517 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9c0c6110ca228611a8a12abd517b0cd7f7d3b9ef76d738e89ac06e2d249e200"} err="failed to get container status \"a9c0c6110ca228611a8a12abd517b0cd7f7d3b9ef76d738e89ac06e2d249e200\": rpc error: code = NotFound desc = could not find container \"a9c0c6110ca228611a8a12abd517b0cd7f7d3b9ef76d738e89ac06e2d249e200\": container with ID starting with a9c0c6110ca228611a8a12abd517b0cd7f7d3b9ef76d738e89ac06e2d249e200 not found: ID does not exist" Feb 17 20:29:13 crc kubenswrapper[4793]: I0217 20:29:13.414542 4793 scope.go:117] "RemoveContainer" containerID="974b842645620d9ba55040249040c9d245a926f35ee8c704d37240ccad36b670" Feb 17 20:29:13 crc kubenswrapper[4793]: E0217 20:29:13.414851 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"974b842645620d9ba55040249040c9d245a926f35ee8c704d37240ccad36b670\": container with ID starting with 974b842645620d9ba55040249040c9d245a926f35ee8c704d37240ccad36b670 not found: ID does not exist" containerID="974b842645620d9ba55040249040c9d245a926f35ee8c704d37240ccad36b670" Feb 17 20:29:13 crc kubenswrapper[4793]: I0217 20:29:13.414880 4793 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"974b842645620d9ba55040249040c9d245a926f35ee8c704d37240ccad36b670"} err="failed to get container status \"974b842645620d9ba55040249040c9d245a926f35ee8c704d37240ccad36b670\": rpc error: code = NotFound desc = could not find container \"974b842645620d9ba55040249040c9d245a926f35ee8c704d37240ccad36b670\": container with ID starting with 974b842645620d9ba55040249040c9d245a926f35ee8c704d37240ccad36b670 not found: ID does not exist" Feb 17 20:29:13 crc kubenswrapper[4793]: I0217 20:29:13.414900 4793 scope.go:117] "RemoveContainer" containerID="a9c0c6110ca228611a8a12abd517b0cd7f7d3b9ef76d738e89ac06e2d249e200" Feb 17 20:29:13 crc kubenswrapper[4793]: I0217 20:29:13.415457 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9c0c6110ca228611a8a12abd517b0cd7f7d3b9ef76d738e89ac06e2d249e200"} err="failed to get container status \"a9c0c6110ca228611a8a12abd517b0cd7f7d3b9ef76d738e89ac06e2d249e200\": rpc error: code = NotFound desc = could not find container \"a9c0c6110ca228611a8a12abd517b0cd7f7d3b9ef76d738e89ac06e2d249e200\": container with ID starting with a9c0c6110ca228611a8a12abd517b0cd7f7d3b9ef76d738e89ac06e2d249e200 not found: ID does not exist" Feb 17 20:29:13 crc kubenswrapper[4793]: I0217 20:29:13.415485 4793 scope.go:117] "RemoveContainer" containerID="974b842645620d9ba55040249040c9d245a926f35ee8c704d37240ccad36b670" Feb 17 20:29:13 crc kubenswrapper[4793]: I0217 20:29:13.415749 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"974b842645620d9ba55040249040c9d245a926f35ee8c704d37240ccad36b670"} err="failed to get container status \"974b842645620d9ba55040249040c9d245a926f35ee8c704d37240ccad36b670\": rpc error: code = NotFound desc = could not find container \"974b842645620d9ba55040249040c9d245a926f35ee8c704d37240ccad36b670\": container with ID starting with 974b842645620d9ba55040249040c9d245a926f35ee8c704d37240ccad36b670 not found: ID does not 
exist" Feb 17 20:29:13 crc kubenswrapper[4793]: I0217 20:29:13.423795 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 20:29:13 crc kubenswrapper[4793]: I0217 20:29:13.442819 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 17 20:29:13 crc kubenswrapper[4793]: E0217 20:29:13.443393 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ba22f5c-6d29-4c7d-b560-79a98ca81234" containerName="nova-metadata-log" Feb 17 20:29:13 crc kubenswrapper[4793]: I0217 20:29:13.443420 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba22f5c-6d29-4c7d-b560-79a98ca81234" containerName="nova-metadata-log" Feb 17 20:29:13 crc kubenswrapper[4793]: E0217 20:29:13.443433 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ba22f5c-6d29-4c7d-b560-79a98ca81234" containerName="nova-metadata-metadata" Feb 17 20:29:13 crc kubenswrapper[4793]: I0217 20:29:13.443443 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba22f5c-6d29-4c7d-b560-79a98ca81234" containerName="nova-metadata-metadata" Feb 17 20:29:13 crc kubenswrapper[4793]: I0217 20:29:13.443775 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ba22f5c-6d29-4c7d-b560-79a98ca81234" containerName="nova-metadata-metadata" Feb 17 20:29:13 crc kubenswrapper[4793]: I0217 20:29:13.443840 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ba22f5c-6d29-4c7d-b560-79a98ca81234" containerName="nova-metadata-log" Feb 17 20:29:13 crc kubenswrapper[4793]: I0217 20:29:13.445221 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 20:29:13 crc kubenswrapper[4793]: I0217 20:29:13.450423 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 17 20:29:13 crc kubenswrapper[4793]: I0217 20:29:13.450881 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 17 20:29:13 crc kubenswrapper[4793]: I0217 20:29:13.477017 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 20:29:13 crc kubenswrapper[4793]: I0217 20:29:13.548678 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwpd7\" (UniqueName: \"kubernetes.io/projected/d9caf054-8f56-4cd9-9ebc-84f41ce4beae-kube-api-access-vwpd7\") pod \"nova-metadata-0\" (UID: \"d9caf054-8f56-4cd9-9ebc-84f41ce4beae\") " pod="openstack/nova-metadata-0" Feb 17 20:29:13 crc kubenswrapper[4793]: I0217 20:29:13.548826 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9caf054-8f56-4cd9-9ebc-84f41ce4beae-config-data\") pod \"nova-metadata-0\" (UID: \"d9caf054-8f56-4cd9-9ebc-84f41ce4beae\") " pod="openstack/nova-metadata-0" Feb 17 20:29:13 crc kubenswrapper[4793]: I0217 20:29:13.549050 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9caf054-8f56-4cd9-9ebc-84f41ce4beae-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d9caf054-8f56-4cd9-9ebc-84f41ce4beae\") " pod="openstack/nova-metadata-0" Feb 17 20:29:13 crc kubenswrapper[4793]: I0217 20:29:13.549163 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9caf054-8f56-4cd9-9ebc-84f41ce4beae-logs\") pod \"nova-metadata-0\" (UID: 
\"d9caf054-8f56-4cd9-9ebc-84f41ce4beae\") " pod="openstack/nova-metadata-0" Feb 17 20:29:13 crc kubenswrapper[4793]: I0217 20:29:13.549197 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9caf054-8f56-4cd9-9ebc-84f41ce4beae-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d9caf054-8f56-4cd9-9ebc-84f41ce4beae\") " pod="openstack/nova-metadata-0" Feb 17 20:29:13 crc kubenswrapper[4793]: I0217 20:29:13.554112 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ba22f5c-6d29-4c7d-b560-79a98ca81234" path="/var/lib/kubelet/pods/4ba22f5c-6d29-4c7d-b560-79a98ca81234/volumes" Feb 17 20:29:13 crc kubenswrapper[4793]: I0217 20:29:13.650903 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9caf054-8f56-4cd9-9ebc-84f41ce4beae-logs\") pod \"nova-metadata-0\" (UID: \"d9caf054-8f56-4cd9-9ebc-84f41ce4beae\") " pod="openstack/nova-metadata-0" Feb 17 20:29:13 crc kubenswrapper[4793]: I0217 20:29:13.650958 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9caf054-8f56-4cd9-9ebc-84f41ce4beae-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d9caf054-8f56-4cd9-9ebc-84f41ce4beae\") " pod="openstack/nova-metadata-0" Feb 17 20:29:13 crc kubenswrapper[4793]: I0217 20:29:13.650995 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwpd7\" (UniqueName: \"kubernetes.io/projected/d9caf054-8f56-4cd9-9ebc-84f41ce4beae-kube-api-access-vwpd7\") pod \"nova-metadata-0\" (UID: \"d9caf054-8f56-4cd9-9ebc-84f41ce4beae\") " pod="openstack/nova-metadata-0" Feb 17 20:29:13 crc kubenswrapper[4793]: I0217 20:29:13.651104 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d9caf054-8f56-4cd9-9ebc-84f41ce4beae-config-data\") pod \"nova-metadata-0\" (UID: \"d9caf054-8f56-4cd9-9ebc-84f41ce4beae\") " pod="openstack/nova-metadata-0" Feb 17 20:29:13 crc kubenswrapper[4793]: I0217 20:29:13.651260 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9caf054-8f56-4cd9-9ebc-84f41ce4beae-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d9caf054-8f56-4cd9-9ebc-84f41ce4beae\") " pod="openstack/nova-metadata-0" Feb 17 20:29:13 crc kubenswrapper[4793]: I0217 20:29:13.653067 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9caf054-8f56-4cd9-9ebc-84f41ce4beae-logs\") pod \"nova-metadata-0\" (UID: \"d9caf054-8f56-4cd9-9ebc-84f41ce4beae\") " pod="openstack/nova-metadata-0" Feb 17 20:29:13 crc kubenswrapper[4793]: I0217 20:29:13.657429 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9caf054-8f56-4cd9-9ebc-84f41ce4beae-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d9caf054-8f56-4cd9-9ebc-84f41ce4beae\") " pod="openstack/nova-metadata-0" Feb 17 20:29:13 crc kubenswrapper[4793]: I0217 20:29:13.657538 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9caf054-8f56-4cd9-9ebc-84f41ce4beae-config-data\") pod \"nova-metadata-0\" (UID: \"d9caf054-8f56-4cd9-9ebc-84f41ce4beae\") " pod="openstack/nova-metadata-0" Feb 17 20:29:13 crc kubenswrapper[4793]: I0217 20:29:13.659121 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9caf054-8f56-4cd9-9ebc-84f41ce4beae-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d9caf054-8f56-4cd9-9ebc-84f41ce4beae\") " pod="openstack/nova-metadata-0" Feb 17 20:29:13 crc 
kubenswrapper[4793]: I0217 20:29:13.672352 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwpd7\" (UniqueName: \"kubernetes.io/projected/d9caf054-8f56-4cd9-9ebc-84f41ce4beae-kube-api-access-vwpd7\") pod \"nova-metadata-0\" (UID: \"d9caf054-8f56-4cd9-9ebc-84f41ce4beae\") " pod="openstack/nova-metadata-0" Feb 17 20:29:13 crc kubenswrapper[4793]: I0217 20:29:13.765431 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 20:29:14 crc kubenswrapper[4793]: I0217 20:29:14.303819 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 20:29:14 crc kubenswrapper[4793]: W0217 20:29:14.303883 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd9caf054_8f56_4cd9_9ebc_84f41ce4beae.slice/crio-fabaa91e9029e26f7be74d30b99b7e89d4186c72fec05122f68d7dba862c9ec0 WatchSource:0}: Error finding container fabaa91e9029e26f7be74d30b99b7e89d4186c72fec05122f68d7dba862c9ec0: Status 404 returned error can't find the container with id fabaa91e9029e26f7be74d30b99b7e89d4186c72fec05122f68d7dba862c9ec0 Feb 17 20:29:14 crc kubenswrapper[4793]: I0217 20:29:14.396017 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d9caf054-8f56-4cd9-9ebc-84f41ce4beae","Type":"ContainerStarted","Data":"fabaa91e9029e26f7be74d30b99b7e89d4186c72fec05122f68d7dba862c9ec0"} Feb 17 20:29:15 crc kubenswrapper[4793]: I0217 20:29:15.408875 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d9caf054-8f56-4cd9-9ebc-84f41ce4beae","Type":"ContainerStarted","Data":"0469c0e88625cc2866228cd24af0da41617f328f2925fd684c70ac79d707f9c5"} Feb 17 20:29:15 crc kubenswrapper[4793]: I0217 20:29:15.409312 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"d9caf054-8f56-4cd9-9ebc-84f41ce4beae","Type":"ContainerStarted","Data":"e405491dfa2e060830f5907efb5bfec6e0534f9475ae086e7b70a348575edfa4"} Feb 17 20:29:15 crc kubenswrapper[4793]: I0217 20:29:15.434314 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.434302918 podStartE2EDuration="2.434302918s" podCreationTimestamp="2026-02-17 20:29:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:29:15.42836422 +0000 UTC m=+1230.720062531" watchObservedRunningTime="2026-02-17 20:29:15.434302918 +0000 UTC m=+1230.726001229" Feb 17 20:29:16 crc kubenswrapper[4793]: I0217 20:29:16.483210 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 17 20:29:16 crc kubenswrapper[4793]: I0217 20:29:16.839057 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 20:29:16 crc kubenswrapper[4793]: I0217 20:29:16.839135 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 20:29:16 crc kubenswrapper[4793]: I0217 20:29:16.876511 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 17 20:29:16 crc kubenswrapper[4793]: I0217 20:29:16.876645 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 17 20:29:16 crc kubenswrapper[4793]: I0217 20:29:16.911885 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 17 20:29:16 crc kubenswrapper[4793]: I0217 20:29:16.956499 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-68fc58f487-92zf8" Feb 17 20:29:17 crc kubenswrapper[4793]: I0217 20:29:17.040328 4793 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/dnsmasq-dns-574778f449-sqg2x"] Feb 17 20:29:17 crc kubenswrapper[4793]: I0217 20:29:17.040546 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-574778f449-sqg2x" podUID="604ca62b-98a7-4023-b6fc-de75724d84a9" containerName="dnsmasq-dns" containerID="cri-o://3c42d3e549ccd83488531b1e37faff192c9d17b62d69db2b3e7961d00b232379" gracePeriod=10 Feb 17 20:29:17 crc kubenswrapper[4793]: I0217 20:29:17.443185 4793 generic.go:334] "Generic (PLEG): container finished" podID="604ca62b-98a7-4023-b6fc-de75724d84a9" containerID="3c42d3e549ccd83488531b1e37faff192c9d17b62d69db2b3e7961d00b232379" exitCode=0 Feb 17 20:29:17 crc kubenswrapper[4793]: I0217 20:29:17.443274 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-574778f449-sqg2x" event={"ID":"604ca62b-98a7-4023-b6fc-de75724d84a9","Type":"ContainerDied","Data":"3c42d3e549ccd83488531b1e37faff192c9d17b62d69db2b3e7961d00b232379"} Feb 17 20:29:17 crc kubenswrapper[4793]: I0217 20:29:17.447417 4793 generic.go:334] "Generic (PLEG): container finished" podID="6f3ef9a1-5f91-4493-8070-1b6125e6f1b7" containerID="f0e98817664600a87bf052a7a181e0a01e8bb3ff6b51578e14672719efedc416" exitCode=0 Feb 17 20:29:17 crc kubenswrapper[4793]: I0217 20:29:17.448503 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mz7ss" event={"ID":"6f3ef9a1-5f91-4493-8070-1b6125e6f1b7","Type":"ContainerDied","Data":"f0e98817664600a87bf052a7a181e0a01e8bb3ff6b51578e14672719efedc416"} Feb 17 20:29:17 crc kubenswrapper[4793]: I0217 20:29:17.499079 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 17 20:29:17 crc kubenswrapper[4793]: I0217 20:29:17.599439 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-574778f449-sqg2x" Feb 17 20:29:17 crc kubenswrapper[4793]: I0217 20:29:17.744031 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/604ca62b-98a7-4023-b6fc-de75724d84a9-dns-svc\") pod \"604ca62b-98a7-4023-b6fc-de75724d84a9\" (UID: \"604ca62b-98a7-4023-b6fc-de75724d84a9\") " Feb 17 20:29:17 crc kubenswrapper[4793]: I0217 20:29:17.744123 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/604ca62b-98a7-4023-b6fc-de75724d84a9-config\") pod \"604ca62b-98a7-4023-b6fc-de75724d84a9\" (UID: \"604ca62b-98a7-4023-b6fc-de75724d84a9\") " Feb 17 20:29:17 crc kubenswrapper[4793]: I0217 20:29:17.744160 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/604ca62b-98a7-4023-b6fc-de75724d84a9-ovsdbserver-nb\") pod \"604ca62b-98a7-4023-b6fc-de75724d84a9\" (UID: \"604ca62b-98a7-4023-b6fc-de75724d84a9\") " Feb 17 20:29:17 crc kubenswrapper[4793]: I0217 20:29:17.744195 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/604ca62b-98a7-4023-b6fc-de75724d84a9-dns-swift-storage-0\") pod \"604ca62b-98a7-4023-b6fc-de75724d84a9\" (UID: \"604ca62b-98a7-4023-b6fc-de75724d84a9\") " Feb 17 20:29:17 crc kubenswrapper[4793]: I0217 20:29:17.744254 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwmlk\" (UniqueName: \"kubernetes.io/projected/604ca62b-98a7-4023-b6fc-de75724d84a9-kube-api-access-pwmlk\") pod \"604ca62b-98a7-4023-b6fc-de75724d84a9\" (UID: \"604ca62b-98a7-4023-b6fc-de75724d84a9\") " Feb 17 20:29:17 crc kubenswrapper[4793]: I0217 20:29:17.744273 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/604ca62b-98a7-4023-b6fc-de75724d84a9-ovsdbserver-sb\") pod \"604ca62b-98a7-4023-b6fc-de75724d84a9\" (UID: \"604ca62b-98a7-4023-b6fc-de75724d84a9\") " Feb 17 20:29:17 crc kubenswrapper[4793]: I0217 20:29:17.767102 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/604ca62b-98a7-4023-b6fc-de75724d84a9-kube-api-access-pwmlk" (OuterVolumeSpecName: "kube-api-access-pwmlk") pod "604ca62b-98a7-4023-b6fc-de75724d84a9" (UID: "604ca62b-98a7-4023-b6fc-de75724d84a9"). InnerVolumeSpecName "kube-api-access-pwmlk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:29:17 crc kubenswrapper[4793]: I0217 20:29:17.812385 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/604ca62b-98a7-4023-b6fc-de75724d84a9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "604ca62b-98a7-4023-b6fc-de75724d84a9" (UID: "604ca62b-98a7-4023-b6fc-de75724d84a9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:29:17 crc kubenswrapper[4793]: I0217 20:29:17.815068 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/604ca62b-98a7-4023-b6fc-de75724d84a9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "604ca62b-98a7-4023-b6fc-de75724d84a9" (UID: "604ca62b-98a7-4023-b6fc-de75724d84a9"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:29:17 crc kubenswrapper[4793]: I0217 20:29:17.870935 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwmlk\" (UniqueName: \"kubernetes.io/projected/604ca62b-98a7-4023-b6fc-de75724d84a9-kube-api-access-pwmlk\") on node \"crc\" DevicePath \"\"" Feb 17 20:29:17 crc kubenswrapper[4793]: I0217 20:29:17.870966 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/604ca62b-98a7-4023-b6fc-de75724d84a9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 20:29:17 crc kubenswrapper[4793]: I0217 20:29:17.870975 4793 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/604ca62b-98a7-4023-b6fc-de75724d84a9-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 20:29:17 crc kubenswrapper[4793]: I0217 20:29:17.902177 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/604ca62b-98a7-4023-b6fc-de75724d84a9-config" (OuterVolumeSpecName: "config") pod "604ca62b-98a7-4023-b6fc-de75724d84a9" (UID: "604ca62b-98a7-4023-b6fc-de75724d84a9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:29:17 crc kubenswrapper[4793]: I0217 20:29:17.919248 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/604ca62b-98a7-4023-b6fc-de75724d84a9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "604ca62b-98a7-4023-b6fc-de75724d84a9" (UID: "604ca62b-98a7-4023-b6fc-de75724d84a9"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:29:17 crc kubenswrapper[4793]: I0217 20:29:17.926122 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5fa127ec-e778-4eb1-b65d-477e849110ca" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.212:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 20:29:17 crc kubenswrapper[4793]: I0217 20:29:17.926433 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5fa127ec-e778-4eb1-b65d-477e849110ca" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.212:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 20:29:17 crc kubenswrapper[4793]: I0217 20:29:17.928109 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/604ca62b-98a7-4023-b6fc-de75724d84a9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "604ca62b-98a7-4023-b6fc-de75724d84a9" (UID: "604ca62b-98a7-4023-b6fc-de75724d84a9"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:29:17 crc kubenswrapper[4793]: I0217 20:29:17.974073 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/604ca62b-98a7-4023-b6fc-de75724d84a9-config\") on node \"crc\" DevicePath \"\"" Feb 17 20:29:17 crc kubenswrapper[4793]: I0217 20:29:17.974333 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/604ca62b-98a7-4023-b6fc-de75724d84a9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 20:29:17 crc kubenswrapper[4793]: I0217 20:29:17.974442 4793 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/604ca62b-98a7-4023-b6fc-de75724d84a9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 20:29:18 crc kubenswrapper[4793]: I0217 20:29:18.457852 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-574778f449-sqg2x" event={"ID":"604ca62b-98a7-4023-b6fc-de75724d84a9","Type":"ContainerDied","Data":"22be82bc414950b755dc32e2b39df7e05196aae63b498fe7acea0d9d73c6637e"} Feb 17 20:29:18 crc kubenswrapper[4793]: I0217 20:29:18.457904 4793 scope.go:117] "RemoveContainer" containerID="3c42d3e549ccd83488531b1e37faff192c9d17b62d69db2b3e7961d00b232379" Feb 17 20:29:18 crc kubenswrapper[4793]: I0217 20:29:18.458192 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-574778f449-sqg2x" Feb 17 20:29:18 crc kubenswrapper[4793]: I0217 20:29:18.488767 4793 scope.go:117] "RemoveContainer" containerID="7f09835dd2cf6679726eb3d72b0180e5308fc3a10ebfba5a8337fb2c49213861" Feb 17 20:29:18 crc kubenswrapper[4793]: I0217 20:29:18.495761 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-574778f449-sqg2x"] Feb 17 20:29:18 crc kubenswrapper[4793]: I0217 20:29:18.503872 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-574778f449-sqg2x"] Feb 17 20:29:18 crc kubenswrapper[4793]: I0217 20:29:18.765746 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 20:29:18 crc kubenswrapper[4793]: I0217 20:29:18.766087 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 20:29:18 crc kubenswrapper[4793]: I0217 20:29:18.872735 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mz7ss" Feb 17 20:29:18 crc kubenswrapper[4793]: I0217 20:29:18.996798 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f3ef9a1-5f91-4493-8070-1b6125e6f1b7-combined-ca-bundle\") pod \"6f3ef9a1-5f91-4493-8070-1b6125e6f1b7\" (UID: \"6f3ef9a1-5f91-4493-8070-1b6125e6f1b7\") " Feb 17 20:29:18 crc kubenswrapper[4793]: I0217 20:29:18.997074 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f3ef9a1-5f91-4493-8070-1b6125e6f1b7-config-data\") pod \"6f3ef9a1-5f91-4493-8070-1b6125e6f1b7\" (UID: \"6f3ef9a1-5f91-4493-8070-1b6125e6f1b7\") " Feb 17 20:29:18 crc kubenswrapper[4793]: I0217 20:29:18.997214 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f3ef9a1-5f91-4493-8070-1b6125e6f1b7-scripts\") pod \"6f3ef9a1-5f91-4493-8070-1b6125e6f1b7\" (UID: \"6f3ef9a1-5f91-4493-8070-1b6125e6f1b7\") " Feb 17 20:29:18 crc kubenswrapper[4793]: I0217 20:29:18.997350 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v22g4\" (UniqueName: \"kubernetes.io/projected/6f3ef9a1-5f91-4493-8070-1b6125e6f1b7-kube-api-access-v22g4\") pod \"6f3ef9a1-5f91-4493-8070-1b6125e6f1b7\" (UID: \"6f3ef9a1-5f91-4493-8070-1b6125e6f1b7\") " Feb 17 20:29:19 crc kubenswrapper[4793]: I0217 20:29:19.009041 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f3ef9a1-5f91-4493-8070-1b6125e6f1b7-scripts" (OuterVolumeSpecName: "scripts") pod "6f3ef9a1-5f91-4493-8070-1b6125e6f1b7" (UID: "6f3ef9a1-5f91-4493-8070-1b6125e6f1b7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:29:19 crc kubenswrapper[4793]: I0217 20:29:19.009410 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f3ef9a1-5f91-4493-8070-1b6125e6f1b7-kube-api-access-v22g4" (OuterVolumeSpecName: "kube-api-access-v22g4") pod "6f3ef9a1-5f91-4493-8070-1b6125e6f1b7" (UID: "6f3ef9a1-5f91-4493-8070-1b6125e6f1b7"). InnerVolumeSpecName "kube-api-access-v22g4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:29:19 crc kubenswrapper[4793]: I0217 20:29:19.026897 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f3ef9a1-5f91-4493-8070-1b6125e6f1b7-config-data" (OuterVolumeSpecName: "config-data") pod "6f3ef9a1-5f91-4493-8070-1b6125e6f1b7" (UID: "6f3ef9a1-5f91-4493-8070-1b6125e6f1b7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:29:19 crc kubenswrapper[4793]: I0217 20:29:19.090130 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f3ef9a1-5f91-4493-8070-1b6125e6f1b7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6f3ef9a1-5f91-4493-8070-1b6125e6f1b7" (UID: "6f3ef9a1-5f91-4493-8070-1b6125e6f1b7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:29:19 crc kubenswrapper[4793]: I0217 20:29:19.101841 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f3ef9a1-5f91-4493-8070-1b6125e6f1b7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 20:29:19 crc kubenswrapper[4793]: I0217 20:29:19.101867 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f3ef9a1-5f91-4493-8070-1b6125e6f1b7-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 20:29:19 crc kubenswrapper[4793]: I0217 20:29:19.101875 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f3ef9a1-5f91-4493-8070-1b6125e6f1b7-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 20:29:19 crc kubenswrapper[4793]: I0217 20:29:19.101884 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v22g4\" (UniqueName: \"kubernetes.io/projected/6f3ef9a1-5f91-4493-8070-1b6125e6f1b7-kube-api-access-v22g4\") on node \"crc\" DevicePath \"\"" Feb 17 20:29:19 crc kubenswrapper[4793]: I0217 20:29:19.485850 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mz7ss" Feb 17 20:29:19 crc kubenswrapper[4793]: I0217 20:29:19.486344 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mz7ss" event={"ID":"6f3ef9a1-5f91-4493-8070-1b6125e6f1b7","Type":"ContainerDied","Data":"34d13ce25b4b0f124f85aa9888040de8403771e4741010490dfb0051cc13a6a9"} Feb 17 20:29:19 crc kubenswrapper[4793]: I0217 20:29:19.486417 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34d13ce25b4b0f124f85aa9888040de8403771e4741010490dfb0051cc13a6a9" Feb 17 20:29:19 crc kubenswrapper[4793]: I0217 20:29:19.554975 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="604ca62b-98a7-4023-b6fc-de75724d84a9" path="/var/lib/kubelet/pods/604ca62b-98a7-4023-b6fc-de75724d84a9/volumes" Feb 17 20:29:19 crc kubenswrapper[4793]: I0217 20:29:19.636991 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 17 20:29:19 crc kubenswrapper[4793]: I0217 20:29:19.637592 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5fa127ec-e778-4eb1-b65d-477e849110ca" containerName="nova-api-log" containerID="cri-o://b553f8bf43a95d33aea88056adf0dffebf9b3c2c4babd900ffdd0b49b7f0a296" gracePeriod=30 Feb 17 20:29:19 crc kubenswrapper[4793]: I0217 20:29:19.637843 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5fa127ec-e778-4eb1-b65d-477e849110ca" containerName="nova-api-api" containerID="cri-o://4b7b5c521a33c0f1925fb6096aad8318ddf2a87fc5bcf9474791a77e9a2e3ccc" gracePeriod=30 Feb 17 20:29:19 crc kubenswrapper[4793]: I0217 20:29:19.648761 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 20:29:19 crc kubenswrapper[4793]: I0217 20:29:19.667141 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 
20:29:19 crc kubenswrapper[4793]: I0217 20:29:19.667352 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d9caf054-8f56-4cd9-9ebc-84f41ce4beae" containerName="nova-metadata-log" containerID="cri-o://e405491dfa2e060830f5907efb5bfec6e0534f9475ae086e7b70a348575edfa4" gracePeriod=30 Feb 17 20:29:19 crc kubenswrapper[4793]: I0217 20:29:19.667494 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d9caf054-8f56-4cd9-9ebc-84f41ce4beae" containerName="nova-metadata-metadata" containerID="cri-o://0469c0e88625cc2866228cd24af0da41617f328f2925fd684c70ac79d707f9c5" gracePeriod=30 Feb 17 20:29:20 crc kubenswrapper[4793]: I0217 20:29:20.102235 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 20:29:20 crc kubenswrapper[4793]: I0217 20:29:20.102546 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 20:29:20 crc kubenswrapper[4793]: I0217 20:29:20.306146 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 20:29:20 crc kubenswrapper[4793]: I0217 20:29:20.427488 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9caf054-8f56-4cd9-9ebc-84f41ce4beae-config-data\") pod \"d9caf054-8f56-4cd9-9ebc-84f41ce4beae\" (UID: \"d9caf054-8f56-4cd9-9ebc-84f41ce4beae\") " Feb 17 20:29:20 crc kubenswrapper[4793]: I0217 20:29:20.427574 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9caf054-8f56-4cd9-9ebc-84f41ce4beae-combined-ca-bundle\") pod \"d9caf054-8f56-4cd9-9ebc-84f41ce4beae\" (UID: \"d9caf054-8f56-4cd9-9ebc-84f41ce4beae\") " Feb 17 20:29:20 crc kubenswrapper[4793]: I0217 20:29:20.427643 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9caf054-8f56-4cd9-9ebc-84f41ce4beae-logs\") pod \"d9caf054-8f56-4cd9-9ebc-84f41ce4beae\" (UID: \"d9caf054-8f56-4cd9-9ebc-84f41ce4beae\") " Feb 17 20:29:20 crc kubenswrapper[4793]: I0217 20:29:20.427796 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9caf054-8f56-4cd9-9ebc-84f41ce4beae-nova-metadata-tls-certs\") pod \"d9caf054-8f56-4cd9-9ebc-84f41ce4beae\" (UID: \"d9caf054-8f56-4cd9-9ebc-84f41ce4beae\") " Feb 17 20:29:20 crc kubenswrapper[4793]: I0217 20:29:20.427843 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwpd7\" (UniqueName: \"kubernetes.io/projected/d9caf054-8f56-4cd9-9ebc-84f41ce4beae-kube-api-access-vwpd7\") pod \"d9caf054-8f56-4cd9-9ebc-84f41ce4beae\" (UID: \"d9caf054-8f56-4cd9-9ebc-84f41ce4beae\") " Feb 17 20:29:20 crc kubenswrapper[4793]: I0217 20:29:20.428516 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/d9caf054-8f56-4cd9-9ebc-84f41ce4beae-logs" (OuterVolumeSpecName: "logs") pod "d9caf054-8f56-4cd9-9ebc-84f41ce4beae" (UID: "d9caf054-8f56-4cd9-9ebc-84f41ce4beae"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:29:20 crc kubenswrapper[4793]: I0217 20:29:20.448147 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9caf054-8f56-4cd9-9ebc-84f41ce4beae-kube-api-access-vwpd7" (OuterVolumeSpecName: "kube-api-access-vwpd7") pod "d9caf054-8f56-4cd9-9ebc-84f41ce4beae" (UID: "d9caf054-8f56-4cd9-9ebc-84f41ce4beae"). InnerVolumeSpecName "kube-api-access-vwpd7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:29:20 crc kubenswrapper[4793]: I0217 20:29:20.465173 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9caf054-8f56-4cd9-9ebc-84f41ce4beae-config-data" (OuterVolumeSpecName: "config-data") pod "d9caf054-8f56-4cd9-9ebc-84f41ce4beae" (UID: "d9caf054-8f56-4cd9-9ebc-84f41ce4beae"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:29:20 crc kubenswrapper[4793]: I0217 20:29:20.495298 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9caf054-8f56-4cd9-9ebc-84f41ce4beae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d9caf054-8f56-4cd9-9ebc-84f41ce4beae" (UID: "d9caf054-8f56-4cd9-9ebc-84f41ce4beae"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:29:20 crc kubenswrapper[4793]: I0217 20:29:20.509504 4793 generic.go:334] "Generic (PLEG): container finished" podID="d9caf054-8f56-4cd9-9ebc-84f41ce4beae" containerID="0469c0e88625cc2866228cd24af0da41617f328f2925fd684c70ac79d707f9c5" exitCode=0 Feb 17 20:29:20 crc kubenswrapper[4793]: I0217 20:29:20.509543 4793 generic.go:334] "Generic (PLEG): container finished" podID="d9caf054-8f56-4cd9-9ebc-84f41ce4beae" containerID="e405491dfa2e060830f5907efb5bfec6e0534f9475ae086e7b70a348575edfa4" exitCode=143 Feb 17 20:29:20 crc kubenswrapper[4793]: I0217 20:29:20.509628 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d9caf054-8f56-4cd9-9ebc-84f41ce4beae","Type":"ContainerDied","Data":"0469c0e88625cc2866228cd24af0da41617f328f2925fd684c70ac79d707f9c5"} Feb 17 20:29:20 crc kubenswrapper[4793]: I0217 20:29:20.509663 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d9caf054-8f56-4cd9-9ebc-84f41ce4beae","Type":"ContainerDied","Data":"e405491dfa2e060830f5907efb5bfec6e0534f9475ae086e7b70a348575edfa4"} Feb 17 20:29:20 crc kubenswrapper[4793]: I0217 20:29:20.509677 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d9caf054-8f56-4cd9-9ebc-84f41ce4beae","Type":"ContainerDied","Data":"fabaa91e9029e26f7be74d30b99b7e89d4186c72fec05122f68d7dba862c9ec0"} Feb 17 20:29:20 crc kubenswrapper[4793]: I0217 20:29:20.509720 4793 scope.go:117] "RemoveContainer" containerID="0469c0e88625cc2866228cd24af0da41617f328f2925fd684c70ac79d707f9c5" Feb 17 20:29:20 crc kubenswrapper[4793]: I0217 20:29:20.509860 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 20:29:20 crc kubenswrapper[4793]: I0217 20:29:20.529655 4793 generic.go:334] "Generic (PLEG): container finished" podID="5fa127ec-e778-4eb1-b65d-477e849110ca" containerID="b553f8bf43a95d33aea88056adf0dffebf9b3c2c4babd900ffdd0b49b7f0a296" exitCode=143 Feb 17 20:29:20 crc kubenswrapper[4793]: I0217 20:29:20.529889 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="152dd5ef-8cba-4731-9eec-8803f3785679" containerName="nova-scheduler-scheduler" containerID="cri-o://f02035fed945a167eb0f066a6fdaa7db26c8b11e5a6ef08cd448ecb6868656bf" gracePeriod=30 Feb 17 20:29:20 crc kubenswrapper[4793]: I0217 20:29:20.530275 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5fa127ec-e778-4eb1-b65d-477e849110ca","Type":"ContainerDied","Data":"b553f8bf43a95d33aea88056adf0dffebf9b3c2c4babd900ffdd0b49b7f0a296"} Feb 17 20:29:20 crc kubenswrapper[4793]: I0217 20:29:20.530740 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9caf054-8f56-4cd9-9ebc-84f41ce4beae-logs\") on node \"crc\" DevicePath \"\"" Feb 17 20:29:20 crc kubenswrapper[4793]: I0217 20:29:20.530771 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwpd7\" (UniqueName: \"kubernetes.io/projected/d9caf054-8f56-4cd9-9ebc-84f41ce4beae-kube-api-access-vwpd7\") on node \"crc\" DevicePath \"\"" Feb 17 20:29:20 crc kubenswrapper[4793]: I0217 20:29:20.530786 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9caf054-8f56-4cd9-9ebc-84f41ce4beae-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 20:29:20 crc kubenswrapper[4793]: I0217 20:29:20.530799 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9caf054-8f56-4cd9-9ebc-84f41ce4beae-combined-ca-bundle\") 
on node \"crc\" DevicePath \"\"" Feb 17 20:29:20 crc kubenswrapper[4793]: I0217 20:29:20.540820 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9caf054-8f56-4cd9-9ebc-84f41ce4beae-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "d9caf054-8f56-4cd9-9ebc-84f41ce4beae" (UID: "d9caf054-8f56-4cd9-9ebc-84f41ce4beae"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:29:20 crc kubenswrapper[4793]: I0217 20:29:20.632541 4793 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9caf054-8f56-4cd9-9ebc-84f41ce4beae-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 20:29:20 crc kubenswrapper[4793]: I0217 20:29:20.634609 4793 scope.go:117] "RemoveContainer" containerID="e405491dfa2e060830f5907efb5bfec6e0534f9475ae086e7b70a348575edfa4" Feb 17 20:29:20 crc kubenswrapper[4793]: I0217 20:29:20.658878 4793 scope.go:117] "RemoveContainer" containerID="0469c0e88625cc2866228cd24af0da41617f328f2925fd684c70ac79d707f9c5" Feb 17 20:29:20 crc kubenswrapper[4793]: E0217 20:29:20.659281 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0469c0e88625cc2866228cd24af0da41617f328f2925fd684c70ac79d707f9c5\": container with ID starting with 0469c0e88625cc2866228cd24af0da41617f328f2925fd684c70ac79d707f9c5 not found: ID does not exist" containerID="0469c0e88625cc2866228cd24af0da41617f328f2925fd684c70ac79d707f9c5" Feb 17 20:29:20 crc kubenswrapper[4793]: I0217 20:29:20.659316 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0469c0e88625cc2866228cd24af0da41617f328f2925fd684c70ac79d707f9c5"} err="failed to get container status \"0469c0e88625cc2866228cd24af0da41617f328f2925fd684c70ac79d707f9c5\": rpc error: code = NotFound desc = could not find container 
\"0469c0e88625cc2866228cd24af0da41617f328f2925fd684c70ac79d707f9c5\": container with ID starting with 0469c0e88625cc2866228cd24af0da41617f328f2925fd684c70ac79d707f9c5 not found: ID does not exist" Feb 17 20:29:20 crc kubenswrapper[4793]: I0217 20:29:20.659336 4793 scope.go:117] "RemoveContainer" containerID="e405491dfa2e060830f5907efb5bfec6e0534f9475ae086e7b70a348575edfa4" Feb 17 20:29:20 crc kubenswrapper[4793]: E0217 20:29:20.659708 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e405491dfa2e060830f5907efb5bfec6e0534f9475ae086e7b70a348575edfa4\": container with ID starting with e405491dfa2e060830f5907efb5bfec6e0534f9475ae086e7b70a348575edfa4 not found: ID does not exist" containerID="e405491dfa2e060830f5907efb5bfec6e0534f9475ae086e7b70a348575edfa4" Feb 17 20:29:20 crc kubenswrapper[4793]: I0217 20:29:20.659732 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e405491dfa2e060830f5907efb5bfec6e0534f9475ae086e7b70a348575edfa4"} err="failed to get container status \"e405491dfa2e060830f5907efb5bfec6e0534f9475ae086e7b70a348575edfa4\": rpc error: code = NotFound desc = could not find container \"e405491dfa2e060830f5907efb5bfec6e0534f9475ae086e7b70a348575edfa4\": container with ID starting with e405491dfa2e060830f5907efb5bfec6e0534f9475ae086e7b70a348575edfa4 not found: ID does not exist" Feb 17 20:29:20 crc kubenswrapper[4793]: I0217 20:29:20.659745 4793 scope.go:117] "RemoveContainer" containerID="0469c0e88625cc2866228cd24af0da41617f328f2925fd684c70ac79d707f9c5" Feb 17 20:29:20 crc kubenswrapper[4793]: I0217 20:29:20.659967 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0469c0e88625cc2866228cd24af0da41617f328f2925fd684c70ac79d707f9c5"} err="failed to get container status \"0469c0e88625cc2866228cd24af0da41617f328f2925fd684c70ac79d707f9c5\": rpc error: code = NotFound desc = could not find 
container \"0469c0e88625cc2866228cd24af0da41617f328f2925fd684c70ac79d707f9c5\": container with ID starting with 0469c0e88625cc2866228cd24af0da41617f328f2925fd684c70ac79d707f9c5 not found: ID does not exist" Feb 17 20:29:20 crc kubenswrapper[4793]: I0217 20:29:20.659986 4793 scope.go:117] "RemoveContainer" containerID="e405491dfa2e060830f5907efb5bfec6e0534f9475ae086e7b70a348575edfa4" Feb 17 20:29:20 crc kubenswrapper[4793]: I0217 20:29:20.660201 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e405491dfa2e060830f5907efb5bfec6e0534f9475ae086e7b70a348575edfa4"} err="failed to get container status \"e405491dfa2e060830f5907efb5bfec6e0534f9475ae086e7b70a348575edfa4\": rpc error: code = NotFound desc = could not find container \"e405491dfa2e060830f5907efb5bfec6e0534f9475ae086e7b70a348575edfa4\": container with ID starting with e405491dfa2e060830f5907efb5bfec6e0534f9475ae086e7b70a348575edfa4 not found: ID does not exist" Feb 17 20:29:20 crc kubenswrapper[4793]: I0217 20:29:20.845429 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 20:29:20 crc kubenswrapper[4793]: I0217 20:29:20.859526 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 20:29:20 crc kubenswrapper[4793]: I0217 20:29:20.871552 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 17 20:29:20 crc kubenswrapper[4793]: E0217 20:29:20.872061 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9caf054-8f56-4cd9-9ebc-84f41ce4beae" containerName="nova-metadata-metadata" Feb 17 20:29:20 crc kubenswrapper[4793]: I0217 20:29:20.872090 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9caf054-8f56-4cd9-9ebc-84f41ce4beae" containerName="nova-metadata-metadata" Feb 17 20:29:20 crc kubenswrapper[4793]: E0217 20:29:20.872107 4793 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="604ca62b-98a7-4023-b6fc-de75724d84a9" containerName="dnsmasq-dns" Feb 17 20:29:20 crc kubenswrapper[4793]: I0217 20:29:20.872116 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="604ca62b-98a7-4023-b6fc-de75724d84a9" containerName="dnsmasq-dns" Feb 17 20:29:20 crc kubenswrapper[4793]: E0217 20:29:20.872148 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="604ca62b-98a7-4023-b6fc-de75724d84a9" containerName="init" Feb 17 20:29:20 crc kubenswrapper[4793]: I0217 20:29:20.872156 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="604ca62b-98a7-4023-b6fc-de75724d84a9" containerName="init" Feb 17 20:29:20 crc kubenswrapper[4793]: E0217 20:29:20.872181 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9caf054-8f56-4cd9-9ebc-84f41ce4beae" containerName="nova-metadata-log" Feb 17 20:29:20 crc kubenswrapper[4793]: I0217 20:29:20.872189 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9caf054-8f56-4cd9-9ebc-84f41ce4beae" containerName="nova-metadata-log" Feb 17 20:29:20 crc kubenswrapper[4793]: E0217 20:29:20.872209 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f3ef9a1-5f91-4493-8070-1b6125e6f1b7" containerName="nova-manage" Feb 17 20:29:20 crc kubenswrapper[4793]: I0217 20:29:20.872216 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f3ef9a1-5f91-4493-8070-1b6125e6f1b7" containerName="nova-manage" Feb 17 20:29:20 crc kubenswrapper[4793]: I0217 20:29:20.872433 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9caf054-8f56-4cd9-9ebc-84f41ce4beae" containerName="nova-metadata-metadata" Feb 17 20:29:20 crc kubenswrapper[4793]: I0217 20:29:20.872467 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="604ca62b-98a7-4023-b6fc-de75724d84a9" containerName="dnsmasq-dns" Feb 17 20:29:20 crc kubenswrapper[4793]: I0217 20:29:20.872501 4793 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d9caf054-8f56-4cd9-9ebc-84f41ce4beae" containerName="nova-metadata-log" Feb 17 20:29:20 crc kubenswrapper[4793]: I0217 20:29:20.872516 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f3ef9a1-5f91-4493-8070-1b6125e6f1b7" containerName="nova-manage" Feb 17 20:29:20 crc kubenswrapper[4793]: I0217 20:29:20.873867 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 20:29:20 crc kubenswrapper[4793]: I0217 20:29:20.876372 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 17 20:29:20 crc kubenswrapper[4793]: I0217 20:29:20.876605 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 17 20:29:20 crc kubenswrapper[4793]: I0217 20:29:20.883716 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 20:29:21 crc kubenswrapper[4793]: I0217 20:29:21.039939 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e9f812f-0a83-48bb-9048-97706114af57-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0e9f812f-0a83-48bb-9048-97706114af57\") " pod="openstack/nova-metadata-0" Feb 17 20:29:21 crc kubenswrapper[4793]: I0217 20:29:21.040054 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e9f812f-0a83-48bb-9048-97706114af57-logs\") pod \"nova-metadata-0\" (UID: \"0e9f812f-0a83-48bb-9048-97706114af57\") " pod="openstack/nova-metadata-0" Feb 17 20:29:21 crc kubenswrapper[4793]: I0217 20:29:21.040115 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e9f812f-0a83-48bb-9048-97706114af57-config-data\") pod \"nova-metadata-0\" 
(UID: \"0e9f812f-0a83-48bb-9048-97706114af57\") " pod="openstack/nova-metadata-0" Feb 17 20:29:21 crc kubenswrapper[4793]: I0217 20:29:21.040138 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e9f812f-0a83-48bb-9048-97706114af57-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0e9f812f-0a83-48bb-9048-97706114af57\") " pod="openstack/nova-metadata-0" Feb 17 20:29:21 crc kubenswrapper[4793]: I0217 20:29:21.040162 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bptcq\" (UniqueName: \"kubernetes.io/projected/0e9f812f-0a83-48bb-9048-97706114af57-kube-api-access-bptcq\") pod \"nova-metadata-0\" (UID: \"0e9f812f-0a83-48bb-9048-97706114af57\") " pod="openstack/nova-metadata-0" Feb 17 20:29:21 crc kubenswrapper[4793]: I0217 20:29:21.141652 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e9f812f-0a83-48bb-9048-97706114af57-logs\") pod \"nova-metadata-0\" (UID: \"0e9f812f-0a83-48bb-9048-97706114af57\") " pod="openstack/nova-metadata-0" Feb 17 20:29:21 crc kubenswrapper[4793]: I0217 20:29:21.142016 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e9f812f-0a83-48bb-9048-97706114af57-config-data\") pod \"nova-metadata-0\" (UID: \"0e9f812f-0a83-48bb-9048-97706114af57\") " pod="openstack/nova-metadata-0" Feb 17 20:29:21 crc kubenswrapper[4793]: I0217 20:29:21.142036 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e9f812f-0a83-48bb-9048-97706114af57-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0e9f812f-0a83-48bb-9048-97706114af57\") " pod="openstack/nova-metadata-0" Feb 17 20:29:21 crc kubenswrapper[4793]: I0217 20:29:21.142059 4793 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bptcq\" (UniqueName: \"kubernetes.io/projected/0e9f812f-0a83-48bb-9048-97706114af57-kube-api-access-bptcq\") pod \"nova-metadata-0\" (UID: \"0e9f812f-0a83-48bb-9048-97706114af57\") " pod="openstack/nova-metadata-0" Feb 17 20:29:21 crc kubenswrapper[4793]: I0217 20:29:21.142170 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e9f812f-0a83-48bb-9048-97706114af57-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0e9f812f-0a83-48bb-9048-97706114af57\") " pod="openstack/nova-metadata-0" Feb 17 20:29:21 crc kubenswrapper[4793]: I0217 20:29:21.142651 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e9f812f-0a83-48bb-9048-97706114af57-logs\") pod \"nova-metadata-0\" (UID: \"0e9f812f-0a83-48bb-9048-97706114af57\") " pod="openstack/nova-metadata-0" Feb 17 20:29:21 crc kubenswrapper[4793]: I0217 20:29:21.145766 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e9f812f-0a83-48bb-9048-97706114af57-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0e9f812f-0a83-48bb-9048-97706114af57\") " pod="openstack/nova-metadata-0" Feb 17 20:29:21 crc kubenswrapper[4793]: I0217 20:29:21.145977 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e9f812f-0a83-48bb-9048-97706114af57-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0e9f812f-0a83-48bb-9048-97706114af57\") " pod="openstack/nova-metadata-0" Feb 17 20:29:21 crc kubenswrapper[4793]: I0217 20:29:21.147483 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e9f812f-0a83-48bb-9048-97706114af57-config-data\") pod 
\"nova-metadata-0\" (UID: \"0e9f812f-0a83-48bb-9048-97706114af57\") " pod="openstack/nova-metadata-0" Feb 17 20:29:21 crc kubenswrapper[4793]: I0217 20:29:21.159229 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bptcq\" (UniqueName: \"kubernetes.io/projected/0e9f812f-0a83-48bb-9048-97706114af57-kube-api-access-bptcq\") pod \"nova-metadata-0\" (UID: \"0e9f812f-0a83-48bb-9048-97706114af57\") " pod="openstack/nova-metadata-0" Feb 17 20:29:21 crc kubenswrapper[4793]: I0217 20:29:21.218853 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 20:29:21 crc kubenswrapper[4793]: I0217 20:29:21.551945 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9caf054-8f56-4cd9-9ebc-84f41ce4beae" path="/var/lib/kubelet/pods/d9caf054-8f56-4cd9-9ebc-84f41ce4beae/volumes" Feb 17 20:29:21 crc kubenswrapper[4793]: I0217 20:29:21.670626 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 20:29:21 crc kubenswrapper[4793]: E0217 20:29:21.878738 4793 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f02035fed945a167eb0f066a6fdaa7db26c8b11e5a6ef08cd448ecb6868656bf" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 17 20:29:21 crc kubenswrapper[4793]: E0217 20:29:21.879990 4793 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f02035fed945a167eb0f066a6fdaa7db26c8b11e5a6ef08cd448ecb6868656bf" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 17 20:29:21 crc kubenswrapper[4793]: E0217 20:29:21.881103 4793 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command 
error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f02035fed945a167eb0f066a6fdaa7db26c8b11e5a6ef08cd448ecb6868656bf" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 17 20:29:21 crc kubenswrapper[4793]: E0217 20:29:21.881142 4793 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="152dd5ef-8cba-4731-9eec-8803f3785679" containerName="nova-scheduler-scheduler"
Feb 17 20:29:22 crc kubenswrapper[4793]: I0217 20:29:22.554120 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0e9f812f-0a83-48bb-9048-97706114af57","Type":"ContainerStarted","Data":"18f786dafceb9cb0d27161bb922172dc160918808b267228db38b2b181ffb78e"}
Feb 17 20:29:22 crc kubenswrapper[4793]: I0217 20:29:22.554160 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0e9f812f-0a83-48bb-9048-97706114af57","Type":"ContainerStarted","Data":"4ad8a8b35545febc34f2e63e9be137cd48171692dd02140f400d3f9d0feb1807"}
Feb 17 20:29:22 crc kubenswrapper[4793]: I0217 20:29:22.554171 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0e9f812f-0a83-48bb-9048-97706114af57","Type":"ContainerStarted","Data":"0bb71534e116159173514520818d8e1a3afb74d2928238ddbffbd3e0f20d62fa"}
Feb 17 20:29:22 crc kubenswrapper[4793]: I0217 20:29:22.555715 4793 generic.go:334] "Generic (PLEG): container finished" podID="f26f2ccf-e459-4a2b-9cc8-c06d7165c94b" containerID="a87fbadb1f85d2f9d8404a012dbbaee7330bcf614fcd8bc0839bd9e112aab960" exitCode=0
Feb 17 20:29:22 crc kubenswrapper[4793]: I0217 20:29:22.555760 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7wxh2" event={"ID":"f26f2ccf-e459-4a2b-9cc8-c06d7165c94b","Type":"ContainerDied","Data":"a87fbadb1f85d2f9d8404a012dbbaee7330bcf614fcd8bc0839bd9e112aab960"}
Feb 17 20:29:22 crc kubenswrapper[4793]: I0217 20:29:22.578191 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.578169396 podStartE2EDuration="2.578169396s" podCreationTimestamp="2026-02-17 20:29:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:29:22.573044799 +0000 UTC m=+1237.864743110" watchObservedRunningTime="2026-02-17 20:29:22.578169396 +0000 UTC m=+1237.869867707"
Feb 17 20:29:23 crc kubenswrapper[4793]: I0217 20:29:23.228439 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 17 20:29:23 crc kubenswrapper[4793]: I0217 20:29:23.381599 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5fa127ec-e778-4eb1-b65d-477e849110ca-logs\") pod \"5fa127ec-e778-4eb1-b65d-477e849110ca\" (UID: \"5fa127ec-e778-4eb1-b65d-477e849110ca\") "
Feb 17 20:29:23 crc kubenswrapper[4793]: I0217 20:29:23.381729 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wl94\" (UniqueName: \"kubernetes.io/projected/5fa127ec-e778-4eb1-b65d-477e849110ca-kube-api-access-9wl94\") pod \"5fa127ec-e778-4eb1-b65d-477e849110ca\" (UID: \"5fa127ec-e778-4eb1-b65d-477e849110ca\") "
Feb 17 20:29:23 crc kubenswrapper[4793]: I0217 20:29:23.381934 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fa127ec-e778-4eb1-b65d-477e849110ca-combined-ca-bundle\") pod \"5fa127ec-e778-4eb1-b65d-477e849110ca\" (UID: \"5fa127ec-e778-4eb1-b65d-477e849110ca\") "
Feb 17 20:29:23 crc kubenswrapper[4793]: I0217 20:29:23.381978 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fa127ec-e778-4eb1-b65d-477e849110ca-config-data\") pod \"5fa127ec-e778-4eb1-b65d-477e849110ca\" (UID: \"5fa127ec-e778-4eb1-b65d-477e849110ca\") "
Feb 17 20:29:23 crc kubenswrapper[4793]: I0217 20:29:23.382205 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fa127ec-e778-4eb1-b65d-477e849110ca-logs" (OuterVolumeSpecName: "logs") pod "5fa127ec-e778-4eb1-b65d-477e849110ca" (UID: "5fa127ec-e778-4eb1-b65d-477e849110ca"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 20:29:23 crc kubenswrapper[4793]: I0217 20:29:23.382922 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5fa127ec-e778-4eb1-b65d-477e849110ca-logs\") on node \"crc\" DevicePath \"\""
Feb 17 20:29:23 crc kubenswrapper[4793]: I0217 20:29:23.391378 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fa127ec-e778-4eb1-b65d-477e849110ca-kube-api-access-9wl94" (OuterVolumeSpecName: "kube-api-access-9wl94") pod "5fa127ec-e778-4eb1-b65d-477e849110ca" (UID: "5fa127ec-e778-4eb1-b65d-477e849110ca"). InnerVolumeSpecName "kube-api-access-9wl94". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 20:29:23 crc kubenswrapper[4793]: E0217 20:29:23.408011 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fa127ec-e778-4eb1-b65d-477e849110ca-config-data podName:5fa127ec-e778-4eb1-b65d-477e849110ca nodeName:}" failed. No retries permitted until 2026-02-17 20:29:23.907973758 +0000 UTC m=+1239.199672109 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/5fa127ec-e778-4eb1-b65d-477e849110ca-config-data") pod "5fa127ec-e778-4eb1-b65d-477e849110ca" (UID: "5fa127ec-e778-4eb1-b65d-477e849110ca") : error deleting /var/lib/kubelet/pods/5fa127ec-e778-4eb1-b65d-477e849110ca/volume-subpaths: remove /var/lib/kubelet/pods/5fa127ec-e778-4eb1-b65d-477e849110ca/volume-subpaths: no such file or directory
Feb 17 20:29:23 crc kubenswrapper[4793]: I0217 20:29:23.412191 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fa127ec-e778-4eb1-b65d-477e849110ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5fa127ec-e778-4eb1-b65d-477e849110ca" (UID: "5fa127ec-e778-4eb1-b65d-477e849110ca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 20:29:23 crc kubenswrapper[4793]: I0217 20:29:23.497164 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fa127ec-e778-4eb1-b65d-477e849110ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 20:29:23 crc kubenswrapper[4793]: I0217 20:29:23.497196 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wl94\" (UniqueName: \"kubernetes.io/projected/5fa127ec-e778-4eb1-b65d-477e849110ca-kube-api-access-9wl94\") on node \"crc\" DevicePath \"\""
Feb 17 20:29:23 crc kubenswrapper[4793]: I0217 20:29:23.575537 4793 generic.go:334] "Generic (PLEG): container finished" podID="5fa127ec-e778-4eb1-b65d-477e849110ca" containerID="4b7b5c521a33c0f1925fb6096aad8318ddf2a87fc5bcf9474791a77e9a2e3ccc" exitCode=0
Feb 17 20:29:23 crc kubenswrapper[4793]: I0217 20:29:23.575742 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 17 20:29:23 crc kubenswrapper[4793]: I0217 20:29:23.576380 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5fa127ec-e778-4eb1-b65d-477e849110ca","Type":"ContainerDied","Data":"4b7b5c521a33c0f1925fb6096aad8318ddf2a87fc5bcf9474791a77e9a2e3ccc"}
Feb 17 20:29:23 crc kubenswrapper[4793]: I0217 20:29:23.576416 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5fa127ec-e778-4eb1-b65d-477e849110ca","Type":"ContainerDied","Data":"5b46abb1d3b6fd597c441a640270bb9c55ad07f1c81b52b01ada780456eb5f7f"}
Feb 17 20:29:23 crc kubenswrapper[4793]: I0217 20:29:23.576455 4793 scope.go:117] "RemoveContainer" containerID="4b7b5c521a33c0f1925fb6096aad8318ddf2a87fc5bcf9474791a77e9a2e3ccc"
Feb 17 20:29:23 crc kubenswrapper[4793]: I0217 20:29:23.608722 4793 scope.go:117] "RemoveContainer" containerID="b553f8bf43a95d33aea88056adf0dffebf9b3c2c4babd900ffdd0b49b7f0a296"
Feb 17 20:29:23 crc kubenswrapper[4793]: I0217 20:29:23.635517 4793 scope.go:117] "RemoveContainer" containerID="4b7b5c521a33c0f1925fb6096aad8318ddf2a87fc5bcf9474791a77e9a2e3ccc"
Feb 17 20:29:23 crc kubenswrapper[4793]: E0217 20:29:23.636013 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b7b5c521a33c0f1925fb6096aad8318ddf2a87fc5bcf9474791a77e9a2e3ccc\": container with ID starting with 4b7b5c521a33c0f1925fb6096aad8318ddf2a87fc5bcf9474791a77e9a2e3ccc not found: ID does not exist" containerID="4b7b5c521a33c0f1925fb6096aad8318ddf2a87fc5bcf9474791a77e9a2e3ccc"
Feb 17 20:29:23 crc kubenswrapper[4793]: I0217 20:29:23.636040 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b7b5c521a33c0f1925fb6096aad8318ddf2a87fc5bcf9474791a77e9a2e3ccc"} err="failed to get container status \"4b7b5c521a33c0f1925fb6096aad8318ddf2a87fc5bcf9474791a77e9a2e3ccc\": rpc error: code = NotFound desc = could not find container \"4b7b5c521a33c0f1925fb6096aad8318ddf2a87fc5bcf9474791a77e9a2e3ccc\": container with ID starting with 4b7b5c521a33c0f1925fb6096aad8318ddf2a87fc5bcf9474791a77e9a2e3ccc not found: ID does not exist"
Feb 17 20:29:23 crc kubenswrapper[4793]: I0217 20:29:23.636063 4793 scope.go:117] "RemoveContainer" containerID="b553f8bf43a95d33aea88056adf0dffebf9b3c2c4babd900ffdd0b49b7f0a296"
Feb 17 20:29:23 crc kubenswrapper[4793]: E0217 20:29:23.636524 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b553f8bf43a95d33aea88056adf0dffebf9b3c2c4babd900ffdd0b49b7f0a296\": container with ID starting with b553f8bf43a95d33aea88056adf0dffebf9b3c2c4babd900ffdd0b49b7f0a296 not found: ID does not exist" containerID="b553f8bf43a95d33aea88056adf0dffebf9b3c2c4babd900ffdd0b49b7f0a296"
Feb 17 20:29:23 crc kubenswrapper[4793]: I0217 20:29:23.636552 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b553f8bf43a95d33aea88056adf0dffebf9b3c2c4babd900ffdd0b49b7f0a296"} err="failed to get container status \"b553f8bf43a95d33aea88056adf0dffebf9b3c2c4babd900ffdd0b49b7f0a296\": rpc error: code = NotFound desc = could not find container \"b553f8bf43a95d33aea88056adf0dffebf9b3c2c4babd900ffdd0b49b7f0a296\": container with ID starting with b553f8bf43a95d33aea88056adf0dffebf9b3c2c4babd900ffdd0b49b7f0a296 not found: ID does not exist"
Feb 17 20:29:23 crc kubenswrapper[4793]: I0217 20:29:23.940465 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7wxh2"
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.007864 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fa127ec-e778-4eb1-b65d-477e849110ca-config-data\") pod \"5fa127ec-e778-4eb1-b65d-477e849110ca\" (UID: \"5fa127ec-e778-4eb1-b65d-477e849110ca\") "
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.012679 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fa127ec-e778-4eb1-b65d-477e849110ca-config-data" (OuterVolumeSpecName: "config-data") pod "5fa127ec-e778-4eb1-b65d-477e849110ca" (UID: "5fa127ec-e778-4eb1-b65d-477e849110ca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.077724 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.109460 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dl8pb\" (UniqueName: \"kubernetes.io/projected/f26f2ccf-e459-4a2b-9cc8-c06d7165c94b-kube-api-access-dl8pb\") pod \"f26f2ccf-e459-4a2b-9cc8-c06d7165c94b\" (UID: \"f26f2ccf-e459-4a2b-9cc8-c06d7165c94b\") "
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.109533 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f26f2ccf-e459-4a2b-9cc8-c06d7165c94b-scripts\") pod \"f26f2ccf-e459-4a2b-9cc8-c06d7165c94b\" (UID: \"f26f2ccf-e459-4a2b-9cc8-c06d7165c94b\") "
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.109622 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f26f2ccf-e459-4a2b-9cc8-c06d7165c94b-config-data\") pod \"f26f2ccf-e459-4a2b-9cc8-c06d7165c94b\" (UID: \"f26f2ccf-e459-4a2b-9cc8-c06d7165c94b\") "
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.109697 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f26f2ccf-e459-4a2b-9cc8-c06d7165c94b-combined-ca-bundle\") pod \"f26f2ccf-e459-4a2b-9cc8-c06d7165c94b\" (UID: \"f26f2ccf-e459-4a2b-9cc8-c06d7165c94b\") "
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.113450 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fa127ec-e778-4eb1-b65d-477e849110ca-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.117787 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f26f2ccf-e459-4a2b-9cc8-c06d7165c94b-kube-api-access-dl8pb" (OuterVolumeSpecName: "kube-api-access-dl8pb") pod "f26f2ccf-e459-4a2b-9cc8-c06d7165c94b" (UID: "f26f2ccf-e459-4a2b-9cc8-c06d7165c94b"). InnerVolumeSpecName "kube-api-access-dl8pb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.120904 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f26f2ccf-e459-4a2b-9cc8-c06d7165c94b-scripts" (OuterVolumeSpecName: "scripts") pod "f26f2ccf-e459-4a2b-9cc8-c06d7165c94b" (UID: "f26f2ccf-e459-4a2b-9cc8-c06d7165c94b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.149144 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f26f2ccf-e459-4a2b-9cc8-c06d7165c94b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f26f2ccf-e459-4a2b-9cc8-c06d7165c94b" (UID: "f26f2ccf-e459-4a2b-9cc8-c06d7165c94b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.152009 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f26f2ccf-e459-4a2b-9cc8-c06d7165c94b-config-data" (OuterVolumeSpecName: "config-data") pod "f26f2ccf-e459-4a2b-9cc8-c06d7165c94b" (UID: "f26f2ccf-e459-4a2b-9cc8-c06d7165c94b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.210647 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.214509 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/152dd5ef-8cba-4731-9eec-8803f3785679-combined-ca-bundle\") pod \"152dd5ef-8cba-4731-9eec-8803f3785679\" (UID: \"152dd5ef-8cba-4731-9eec-8803f3785679\") "
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.214550 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/152dd5ef-8cba-4731-9eec-8803f3785679-config-data\") pod \"152dd5ef-8cba-4731-9eec-8803f3785679\" (UID: \"152dd5ef-8cba-4731-9eec-8803f3785679\") "
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.214608 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvck8\" (UniqueName: \"kubernetes.io/projected/152dd5ef-8cba-4731-9eec-8803f3785679-kube-api-access-wvck8\") pod \"152dd5ef-8cba-4731-9eec-8803f3785679\" (UID: \"152dd5ef-8cba-4731-9eec-8803f3785679\") "
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.215135 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dl8pb\" (UniqueName: \"kubernetes.io/projected/f26f2ccf-e459-4a2b-9cc8-c06d7165c94b-kube-api-access-dl8pb\") on node \"crc\" DevicePath \"\""
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.215155 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f26f2ccf-e459-4a2b-9cc8-c06d7165c94b-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.215166 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f26f2ccf-e459-4a2b-9cc8-c06d7165c94b-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.215174 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f26f2ccf-e459-4a2b-9cc8-c06d7165c94b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.220744 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.221284 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/152dd5ef-8cba-4731-9eec-8803f3785679-kube-api-access-wvck8" (OuterVolumeSpecName: "kube-api-access-wvck8") pod "152dd5ef-8cba-4731-9eec-8803f3785679" (UID: "152dd5ef-8cba-4731-9eec-8803f3785679"). InnerVolumeSpecName "kube-api-access-wvck8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.239900 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 17 20:29:24 crc kubenswrapper[4793]: E0217 20:29:24.240291 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="152dd5ef-8cba-4731-9eec-8803f3785679" containerName="nova-scheduler-scheduler"
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.240303 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="152dd5ef-8cba-4731-9eec-8803f3785679" containerName="nova-scheduler-scheduler"
Feb 17 20:29:24 crc kubenswrapper[4793]: E0217 20:29:24.240323 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f26f2ccf-e459-4a2b-9cc8-c06d7165c94b" containerName="nova-cell1-conductor-db-sync"
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.240331 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f26f2ccf-e459-4a2b-9cc8-c06d7165c94b" containerName="nova-cell1-conductor-db-sync"
Feb 17 20:29:24 crc kubenswrapper[4793]: E0217 20:29:24.240357 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fa127ec-e778-4eb1-b65d-477e849110ca" containerName="nova-api-api"
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.240367 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fa127ec-e778-4eb1-b65d-477e849110ca" containerName="nova-api-api"
Feb 17 20:29:24 crc kubenswrapper[4793]: E0217 20:29:24.240404 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fa127ec-e778-4eb1-b65d-477e849110ca" containerName="nova-api-log"
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.240411 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fa127ec-e778-4eb1-b65d-477e849110ca" containerName="nova-api-log"
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.240584 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fa127ec-e778-4eb1-b65d-477e849110ca" containerName="nova-api-api"
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.240593 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fa127ec-e778-4eb1-b65d-477e849110ca" containerName="nova-api-log"
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.240608 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f26f2ccf-e459-4a2b-9cc8-c06d7165c94b" containerName="nova-cell1-conductor-db-sync"
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.240620 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="152dd5ef-8cba-4731-9eec-8803f3785679" containerName="nova-scheduler-scheduler"
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.241546 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/152dd5ef-8cba-4731-9eec-8803f3785679-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "152dd5ef-8cba-4731-9eec-8803f3785679" (UID: "152dd5ef-8cba-4731-9eec-8803f3785679"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.243410 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.248389 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.254337 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.264852 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/152dd5ef-8cba-4731-9eec-8803f3785679-config-data" (OuterVolumeSpecName: "config-data") pod "152dd5ef-8cba-4731-9eec-8803f3785679" (UID: "152dd5ef-8cba-4731-9eec-8803f3785679"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.317153 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/152dd5ef-8cba-4731-9eec-8803f3785679-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.317380 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/152dd5ef-8cba-4731-9eec-8803f3785679-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.317389 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvck8\" (UniqueName: \"kubernetes.io/projected/152dd5ef-8cba-4731-9eec-8803f3785679-kube-api-access-wvck8\") on node \"crc\" DevicePath \"\""
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.419179 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59782ccb-ca76-4c65-9782-096a6122636d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"59782ccb-ca76-4c65-9782-096a6122636d\") " pod="openstack/nova-api-0"
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.419498 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59782ccb-ca76-4c65-9782-096a6122636d-logs\") pod \"nova-api-0\" (UID: \"59782ccb-ca76-4c65-9782-096a6122636d\") " pod="openstack/nova-api-0"
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.419745 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv42m\" (UniqueName: \"kubernetes.io/projected/59782ccb-ca76-4c65-9782-096a6122636d-kube-api-access-jv42m\") pod \"nova-api-0\" (UID: \"59782ccb-ca76-4c65-9782-096a6122636d\") " pod="openstack/nova-api-0"
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.419854 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59782ccb-ca76-4c65-9782-096a6122636d-config-data\") pod \"nova-api-0\" (UID: \"59782ccb-ca76-4c65-9782-096a6122636d\") " pod="openstack/nova-api-0"
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.521652 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59782ccb-ca76-4c65-9782-096a6122636d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"59782ccb-ca76-4c65-9782-096a6122636d\") " pod="openstack/nova-api-0"
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.521737 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59782ccb-ca76-4c65-9782-096a6122636d-logs\") pod \"nova-api-0\" (UID: \"59782ccb-ca76-4c65-9782-096a6122636d\") " pod="openstack/nova-api-0"
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.521811 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jv42m\" (UniqueName: \"kubernetes.io/projected/59782ccb-ca76-4c65-9782-096a6122636d-kube-api-access-jv42m\") pod \"nova-api-0\" (UID: \"59782ccb-ca76-4c65-9782-096a6122636d\") " pod="openstack/nova-api-0"
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.521848 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59782ccb-ca76-4c65-9782-096a6122636d-config-data\") pod \"nova-api-0\" (UID: \"59782ccb-ca76-4c65-9782-096a6122636d\") " pod="openstack/nova-api-0"
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.522277 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59782ccb-ca76-4c65-9782-096a6122636d-logs\") pod \"nova-api-0\" (UID: \"59782ccb-ca76-4c65-9782-096a6122636d\") " pod="openstack/nova-api-0"
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.526986 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59782ccb-ca76-4c65-9782-096a6122636d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"59782ccb-ca76-4c65-9782-096a6122636d\") " pod="openstack/nova-api-0"
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.528574 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59782ccb-ca76-4c65-9782-096a6122636d-config-data\") pod \"nova-api-0\" (UID: \"59782ccb-ca76-4c65-9782-096a6122636d\") " pod="openstack/nova-api-0"
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.539125 4793 scope.go:117] "RemoveContainer" containerID="931e47d759f3adcee541e6fe68bc8ebef51e63ece70add63ea24cdb6c500de3e"
Feb 17 20:29:24 crc kubenswrapper[4793]: E0217 20:29:24.539420 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.542613 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jv42m\" (UniqueName: \"kubernetes.io/projected/59782ccb-ca76-4c65-9782-096a6122636d-kube-api-access-jv42m\") pod \"nova-api-0\" (UID: \"59782ccb-ca76-4c65-9782-096a6122636d\") " pod="openstack/nova-api-0"
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.570885 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.585810 4793 generic.go:334] "Generic (PLEG): container finished" podID="152dd5ef-8cba-4731-9eec-8803f3785679" containerID="f02035fed945a167eb0f066a6fdaa7db26c8b11e5a6ef08cd448ecb6868656bf" exitCode=0
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.585910 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"152dd5ef-8cba-4731-9eec-8803f3785679","Type":"ContainerDied","Data":"f02035fed945a167eb0f066a6fdaa7db26c8b11e5a6ef08cd448ecb6868656bf"}
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.585958 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"152dd5ef-8cba-4731-9eec-8803f3785679","Type":"ContainerDied","Data":"5515c98827e5d1203b3e181de7011a43b93a34b962cb5bb6ecae151119f5d8ac"}
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.585979 4793 scope.go:117] "RemoveContainer" containerID="f02035fed945a167eb0f066a6fdaa7db26c8b11e5a6ef08cd448ecb6868656bf"
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.586107 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.595939 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7wxh2" event={"ID":"f26f2ccf-e459-4a2b-9cc8-c06d7165c94b","Type":"ContainerDied","Data":"77a6974c35fa53fbc01138f30d44d8cea5ad9769a020874bcd4d4d85e909989a"}
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.595989 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77a6974c35fa53fbc01138f30d44d8cea5ad9769a020874bcd4d4d85e909989a"
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.596071 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7wxh2"
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.676769 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.679765 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.681354 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.734466 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.748501 4793 scope.go:117] "RemoveContainer" containerID="f02035fed945a167eb0f066a6fdaa7db26c8b11e5a6ef08cd448ecb6868656bf"
Feb 17 20:29:24 crc kubenswrapper[4793]: E0217 20:29:24.751245 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f02035fed945a167eb0f066a6fdaa7db26c8b11e5a6ef08cd448ecb6868656bf\": container with ID starting with f02035fed945a167eb0f066a6fdaa7db26c8b11e5a6ef08cd448ecb6868656bf not found: ID does not exist" containerID="f02035fed945a167eb0f066a6fdaa7db26c8b11e5a6ef08cd448ecb6868656bf"
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.751294 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f02035fed945a167eb0f066a6fdaa7db26c8b11e5a6ef08cd448ecb6868656bf"} err="failed to get container status \"f02035fed945a167eb0f066a6fdaa7db26c8b11e5a6ef08cd448ecb6868656bf\": rpc error: code = NotFound desc = could not find container \"f02035fed945a167eb0f066a6fdaa7db26c8b11e5a6ef08cd448ecb6868656bf\": container with ID starting with f02035fed945a167eb0f066a6fdaa7db26c8b11e5a6ef08cd448ecb6868656bf not found: ID does not exist"
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.784827 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.792268 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.812347 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.814126 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.820393 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.831340 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.833927 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrcj4\" (UniqueName: \"kubernetes.io/projected/d3d13ca2-941b-4abb-a49b-27fb90f34c5e-kube-api-access-qrcj4\") pod \"nova-cell1-conductor-0\" (UID: \"d3d13ca2-941b-4abb-a49b-27fb90f34c5e\") " pod="openstack/nova-cell1-conductor-0"
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.834189 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3d13ca2-941b-4abb-a49b-27fb90f34c5e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d3d13ca2-941b-4abb-a49b-27fb90f34c5e\") " pod="openstack/nova-cell1-conductor-0"
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.834424 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3d13ca2-941b-4abb-a49b-27fb90f34c5e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d3d13ca2-941b-4abb-a49b-27fb90f34c5e\") " pod="openstack/nova-cell1-conductor-0"
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.872312 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.936060 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3d13ca2-941b-4abb-a49b-27fb90f34c5e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d3d13ca2-941b-4abb-a49b-27fb90f34c5e\") " pod="openstack/nova-cell1-conductor-0"
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.936148 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3d13ca2-941b-4abb-a49b-27fb90f34c5e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d3d13ca2-941b-4abb-a49b-27fb90f34c5e\") " pod="openstack/nova-cell1-conductor-0"
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.936205 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrcj4\" (UniqueName: \"kubernetes.io/projected/d3d13ca2-941b-4abb-a49b-27fb90f34c5e-kube-api-access-qrcj4\") pod \"nova-cell1-conductor-0\" (UID: \"d3d13ca2-941b-4abb-a49b-27fb90f34c5e\") " pod="openstack/nova-cell1-conductor-0"
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.936229 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pgl6\" (UniqueName: \"kubernetes.io/projected/454345ac-3d2b-41e1-bcb3-03c6d1e83e16-kube-api-access-4pgl6\") pod \"nova-scheduler-0\" (UID: \"454345ac-3d2b-41e1-bcb3-03c6d1e83e16\") " pod="openstack/nova-scheduler-0"
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.936249 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/454345ac-3d2b-41e1-bcb3-03c6d1e83e16-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"454345ac-3d2b-41e1-bcb3-03c6d1e83e16\") " pod="openstack/nova-scheduler-0"
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.936268 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/454345ac-3d2b-41e1-bcb3-03c6d1e83e16-config-data\") pod \"nova-scheduler-0\" (UID: \"454345ac-3d2b-41e1-bcb3-03c6d1e83e16\") " pod="openstack/nova-scheduler-0"
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.941894 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3d13ca2-941b-4abb-a49b-27fb90f34c5e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d3d13ca2-941b-4abb-a49b-27fb90f34c5e\") " pod="openstack/nova-cell1-conductor-0"
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.947562 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3d13ca2-941b-4abb-a49b-27fb90f34c5e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d3d13ca2-941b-4abb-a49b-27fb90f34c5e\") " pod="openstack/nova-cell1-conductor-0"
Feb 17 20:29:24 crc kubenswrapper[4793]: I0217 20:29:24.954705 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrcj4\" (UniqueName: \"kubernetes.io/projected/d3d13ca2-941b-4abb-a49b-27fb90f34c5e-kube-api-access-qrcj4\") pod \"nova-cell1-conductor-0\" (UID: \"d3d13ca2-941b-4abb-a49b-27fb90f34c5e\") " pod="openstack/nova-cell1-conductor-0"
Feb 17 20:29:25 crc kubenswrapper[4793]: I0217 20:29:25.038245 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pgl6\" (UniqueName: \"kubernetes.io/projected/454345ac-3d2b-41e1-bcb3-03c6d1e83e16-kube-api-access-4pgl6\") pod \"nova-scheduler-0\" (UID: \"454345ac-3d2b-41e1-bcb3-03c6d1e83e16\") " pod="openstack/nova-scheduler-0"
Feb 17 20:29:25 crc kubenswrapper[4793]: I0217 20:29:25.038296 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/454345ac-3d2b-41e1-bcb3-03c6d1e83e16-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"454345ac-3d2b-41e1-bcb3-03c6d1e83e16\") " pod="openstack/nova-scheduler-0"
Feb 17 20:29:25 crc kubenswrapper[4793]: I0217 20:29:25.038327 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/454345ac-3d2b-41e1-bcb3-03c6d1e83e16-config-data\") pod \"nova-scheduler-0\" (UID: \"454345ac-3d2b-41e1-bcb3-03c6d1e83e16\") " pod="openstack/nova-scheduler-0"
Feb 17 20:29:25 crc kubenswrapper[4793]: I0217 20:29:25.048472 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/454345ac-3d2b-41e1-bcb3-03c6d1e83e16-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"454345ac-3d2b-41e1-bcb3-03c6d1e83e16\") " pod="openstack/nova-scheduler-0"
Feb 17 20:29:25 crc kubenswrapper[4793]: I0217 20:29:25.049187 4793 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 17 20:29:25 crc kubenswrapper[4793]: I0217 20:29:25.050192 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/454345ac-3d2b-41e1-bcb3-03c6d1e83e16-config-data\") pod \"nova-scheduler-0\" (UID: \"454345ac-3d2b-41e1-bcb3-03c6d1e83e16\") " pod="openstack/nova-scheduler-0" Feb 17 20:29:25 crc kubenswrapper[4793]: I0217 20:29:25.069862 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pgl6\" (UniqueName: \"kubernetes.io/projected/454345ac-3d2b-41e1-bcb3-03c6d1e83e16-kube-api-access-4pgl6\") pod \"nova-scheduler-0\" (UID: \"454345ac-3d2b-41e1-bcb3-03c6d1e83e16\") " pod="openstack/nova-scheduler-0" Feb 17 20:29:25 crc kubenswrapper[4793]: I0217 20:29:25.150177 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 20:29:25 crc kubenswrapper[4793]: I0217 20:29:25.174132 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 20:29:25 crc kubenswrapper[4793]: I0217 20:29:25.552746 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="152dd5ef-8cba-4731-9eec-8803f3785679" path="/var/lib/kubelet/pods/152dd5ef-8cba-4731-9eec-8803f3785679/volumes" Feb 17 20:29:25 crc kubenswrapper[4793]: I0217 20:29:25.554777 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fa127ec-e778-4eb1-b65d-477e849110ca" path="/var/lib/kubelet/pods/5fa127ec-e778-4eb1-b65d-477e849110ca/volumes" Feb 17 20:29:25 crc kubenswrapper[4793]: W0217 20:29:25.611318 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3d13ca2_941b_4abb_a49b_27fb90f34c5e.slice/crio-6073611a36201f4c19b3264c88877426908be7962375aa1dd16b01889dc7d2eb WatchSource:0}: Error finding container 
6073611a36201f4c19b3264c88877426908be7962375aa1dd16b01889dc7d2eb: Status 404 returned error can't find the container with id 6073611a36201f4c19b3264c88877426908be7962375aa1dd16b01889dc7d2eb Feb 17 20:29:25 crc kubenswrapper[4793]: I0217 20:29:25.612049 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 17 20:29:25 crc kubenswrapper[4793]: I0217 20:29:25.614190 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"59782ccb-ca76-4c65-9782-096a6122636d","Type":"ContainerStarted","Data":"4672a37616a48879f59fd01b775964f155e7d2c499f30ec69edde0d20918726c"} Feb 17 20:29:25 crc kubenswrapper[4793]: I0217 20:29:25.614215 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"59782ccb-ca76-4c65-9782-096a6122636d","Type":"ContainerStarted","Data":"fd980cbb7022f4c835596247301705668852aef591c9bc29bf53e1dbf6c17dd5"} Feb 17 20:29:25 crc kubenswrapper[4793]: I0217 20:29:25.734771 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 20:29:25 crc kubenswrapper[4793]: W0217 20:29:25.752567 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod454345ac_3d2b_41e1_bcb3_03c6d1e83e16.slice/crio-8c1622b507b3be5a23c8a13ed0cb37de57dcdcfe9d83537c4fcdcf518864755f WatchSource:0}: Error finding container 8c1622b507b3be5a23c8a13ed0cb37de57dcdcfe9d83537c4fcdcf518864755f: Status 404 returned error can't find the container with id 8c1622b507b3be5a23c8a13ed0cb37de57dcdcfe9d83537c4fcdcf518864755f Feb 17 20:29:26 crc kubenswrapper[4793]: I0217 20:29:26.218993 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 20:29:26 crc kubenswrapper[4793]: I0217 20:29:26.219317 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 20:29:26 crc 
kubenswrapper[4793]: I0217 20:29:26.625940 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"454345ac-3d2b-41e1-bcb3-03c6d1e83e16","Type":"ContainerStarted","Data":"42fef2ddac784bb0cdace652df28c8954be655bc1c847cedbe03929051e00dff"} Feb 17 20:29:26 crc kubenswrapper[4793]: I0217 20:29:26.625999 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"454345ac-3d2b-41e1-bcb3-03c6d1e83e16","Type":"ContainerStarted","Data":"8c1622b507b3be5a23c8a13ed0cb37de57dcdcfe9d83537c4fcdcf518864755f"} Feb 17 20:29:26 crc kubenswrapper[4793]: I0217 20:29:26.628788 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d3d13ca2-941b-4abb-a49b-27fb90f34c5e","Type":"ContainerStarted","Data":"5f591b78d36bcb675f28858f714cc3a3a5821caf53bdace18a984b43ebdc0521"} Feb 17 20:29:26 crc kubenswrapper[4793]: I0217 20:29:26.628824 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d3d13ca2-941b-4abb-a49b-27fb90f34c5e","Type":"ContainerStarted","Data":"6073611a36201f4c19b3264c88877426908be7962375aa1dd16b01889dc7d2eb"} Feb 17 20:29:26 crc kubenswrapper[4793]: I0217 20:29:26.629423 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 17 20:29:26 crc kubenswrapper[4793]: I0217 20:29:26.632783 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"59782ccb-ca76-4c65-9782-096a6122636d","Type":"ContainerStarted","Data":"70d60013492c327021c6529644ea6e2b0cd551f38a8a1c0273c0f806843f39a4"} Feb 17 20:29:26 crc kubenswrapper[4793]: I0217 20:29:26.650821 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.650796711 podStartE2EDuration="2.650796711s" podCreationTimestamp="2026-02-17 20:29:24 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:29:26.640978778 +0000 UTC m=+1241.932677089" watchObservedRunningTime="2026-02-17 20:29:26.650796711 +0000 UTC m=+1241.942495032" Feb 17 20:29:26 crc kubenswrapper[4793]: I0217 20:29:26.662995 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.662979273 podStartE2EDuration="2.662979273s" podCreationTimestamp="2026-02-17 20:29:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:29:26.654065502 +0000 UTC m=+1241.945763833" watchObservedRunningTime="2026-02-17 20:29:26.662979273 +0000 UTC m=+1241.954677584" Feb 17 20:29:26 crc kubenswrapper[4793]: I0217 20:29:26.678828 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.678808956 podStartE2EDuration="2.678808956s" podCreationTimestamp="2026-02-17 20:29:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:29:26.670970111 +0000 UTC m=+1241.962668422" watchObservedRunningTime="2026-02-17 20:29:26.678808956 +0000 UTC m=+1241.970507267" Feb 17 20:29:29 crc kubenswrapper[4793]: I0217 20:29:29.028640 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 20:29:29 crc kubenswrapper[4793]: I0217 20:29:29.028869 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="abe64ae2-15f9-402d-984d-ea8f94bd480f" containerName="kube-state-metrics" containerID="cri-o://b234972c617411246dff345746d75169b3198d916b5bc4dc4d4ea17efa86c4e5" gracePeriod=30 Feb 17 20:29:29 crc kubenswrapper[4793]: I0217 20:29:29.613531 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 17 20:29:29 crc kubenswrapper[4793]: I0217 20:29:29.699017 4793 generic.go:334] "Generic (PLEG): container finished" podID="abe64ae2-15f9-402d-984d-ea8f94bd480f" containerID="b234972c617411246dff345746d75169b3198d916b5bc4dc4d4ea17efa86c4e5" exitCode=2 Feb 17 20:29:29 crc kubenswrapper[4793]: I0217 20:29:29.699081 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"abe64ae2-15f9-402d-984d-ea8f94bd480f","Type":"ContainerDied","Data":"b234972c617411246dff345746d75169b3198d916b5bc4dc4d4ea17efa86c4e5"} Feb 17 20:29:29 crc kubenswrapper[4793]: I0217 20:29:29.699114 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"abe64ae2-15f9-402d-984d-ea8f94bd480f","Type":"ContainerDied","Data":"a28219df7c591e38f659e59cfdee6650cc33499d4442e0e3f8e9521dfa408e76"} Feb 17 20:29:29 crc kubenswrapper[4793]: I0217 20:29:29.699139 4793 scope.go:117] "RemoveContainer" containerID="b234972c617411246dff345746d75169b3198d916b5bc4dc4d4ea17efa86c4e5" Feb 17 20:29:29 crc kubenswrapper[4793]: I0217 20:29:29.699351 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 17 20:29:29 crc kubenswrapper[4793]: I0217 20:29:29.737193 4793 scope.go:117] "RemoveContainer" containerID="b234972c617411246dff345746d75169b3198d916b5bc4dc4d4ea17efa86c4e5" Feb 17 20:29:29 crc kubenswrapper[4793]: E0217 20:29:29.737628 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b234972c617411246dff345746d75169b3198d916b5bc4dc4d4ea17efa86c4e5\": container with ID starting with b234972c617411246dff345746d75169b3198d916b5bc4dc4d4ea17efa86c4e5 not found: ID does not exist" containerID="b234972c617411246dff345746d75169b3198d916b5bc4dc4d4ea17efa86c4e5" Feb 17 20:29:29 crc kubenswrapper[4793]: I0217 20:29:29.737657 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b234972c617411246dff345746d75169b3198d916b5bc4dc4d4ea17efa86c4e5"} err="failed to get container status \"b234972c617411246dff345746d75169b3198d916b5bc4dc4d4ea17efa86c4e5\": rpc error: code = NotFound desc = could not find container \"b234972c617411246dff345746d75169b3198d916b5bc4dc4d4ea17efa86c4e5\": container with ID starting with b234972c617411246dff345746d75169b3198d916b5bc4dc4d4ea17efa86c4e5 not found: ID does not exist" Feb 17 20:29:29 crc kubenswrapper[4793]: I0217 20:29:29.753415 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49ksq\" (UniqueName: \"kubernetes.io/projected/abe64ae2-15f9-402d-984d-ea8f94bd480f-kube-api-access-49ksq\") pod \"abe64ae2-15f9-402d-984d-ea8f94bd480f\" (UID: \"abe64ae2-15f9-402d-984d-ea8f94bd480f\") " Feb 17 20:29:29 crc kubenswrapper[4793]: I0217 20:29:29.759115 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abe64ae2-15f9-402d-984d-ea8f94bd480f-kube-api-access-49ksq" (OuterVolumeSpecName: "kube-api-access-49ksq") pod "abe64ae2-15f9-402d-984d-ea8f94bd480f" (UID: 
"abe64ae2-15f9-402d-984d-ea8f94bd480f"). InnerVolumeSpecName "kube-api-access-49ksq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:29:29 crc kubenswrapper[4793]: I0217 20:29:29.855244 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49ksq\" (UniqueName: \"kubernetes.io/projected/abe64ae2-15f9-402d-984d-ea8f94bd480f-kube-api-access-49ksq\") on node \"crc\" DevicePath \"\"" Feb 17 20:29:30 crc kubenswrapper[4793]: I0217 20:29:30.042763 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 20:29:30 crc kubenswrapper[4793]: I0217 20:29:30.056810 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 20:29:30 crc kubenswrapper[4793]: I0217 20:29:30.068372 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 20:29:30 crc kubenswrapper[4793]: E0217 20:29:30.068872 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abe64ae2-15f9-402d-984d-ea8f94bd480f" containerName="kube-state-metrics" Feb 17 20:29:30 crc kubenswrapper[4793]: I0217 20:29:30.068890 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="abe64ae2-15f9-402d-984d-ea8f94bd480f" containerName="kube-state-metrics" Feb 17 20:29:30 crc kubenswrapper[4793]: I0217 20:29:30.069094 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="abe64ae2-15f9-402d-984d-ea8f94bd480f" containerName="kube-state-metrics" Feb 17 20:29:30 crc kubenswrapper[4793]: I0217 20:29:30.069781 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 17 20:29:30 crc kubenswrapper[4793]: I0217 20:29:30.072167 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 17 20:29:30 crc kubenswrapper[4793]: I0217 20:29:30.072278 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 17 20:29:30 crc kubenswrapper[4793]: I0217 20:29:30.079458 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 20:29:30 crc kubenswrapper[4793]: I0217 20:29:30.084287 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 17 20:29:30 crc kubenswrapper[4793]: I0217 20:29:30.151159 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 17 20:29:30 crc kubenswrapper[4793]: I0217 20:29:30.163558 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7r8s\" (UniqueName: \"kubernetes.io/projected/8be5ebd1-58b5-40e0-949f-1479050446e0-kube-api-access-m7r8s\") pod \"kube-state-metrics-0\" (UID: \"8be5ebd1-58b5-40e0-949f-1479050446e0\") " pod="openstack/kube-state-metrics-0" Feb 17 20:29:30 crc kubenswrapper[4793]: I0217 20:29:30.163620 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/8be5ebd1-58b5-40e0-949f-1479050446e0-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"8be5ebd1-58b5-40e0-949f-1479050446e0\") " pod="openstack/kube-state-metrics-0" Feb 17 20:29:30 crc kubenswrapper[4793]: I0217 20:29:30.163707 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8be5ebd1-58b5-40e0-949f-1479050446e0-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"8be5ebd1-58b5-40e0-949f-1479050446e0\") " pod="openstack/kube-state-metrics-0" Feb 17 20:29:30 crc kubenswrapper[4793]: I0217 20:29:30.163884 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8be5ebd1-58b5-40e0-949f-1479050446e0-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"8be5ebd1-58b5-40e0-949f-1479050446e0\") " pod="openstack/kube-state-metrics-0" Feb 17 20:29:30 crc kubenswrapper[4793]: I0217 20:29:30.265792 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8be5ebd1-58b5-40e0-949f-1479050446e0-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"8be5ebd1-58b5-40e0-949f-1479050446e0\") " pod="openstack/kube-state-metrics-0" Feb 17 20:29:30 crc kubenswrapper[4793]: I0217 20:29:30.265873 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7r8s\" (UniqueName: \"kubernetes.io/projected/8be5ebd1-58b5-40e0-949f-1479050446e0-kube-api-access-m7r8s\") pod \"kube-state-metrics-0\" (UID: \"8be5ebd1-58b5-40e0-949f-1479050446e0\") " pod="openstack/kube-state-metrics-0" Feb 17 20:29:30 crc kubenswrapper[4793]: I0217 20:29:30.265907 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/8be5ebd1-58b5-40e0-949f-1479050446e0-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"8be5ebd1-58b5-40e0-949f-1479050446e0\") " pod="openstack/kube-state-metrics-0" Feb 17 20:29:30 crc kubenswrapper[4793]: I0217 20:29:30.265992 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8be5ebd1-58b5-40e0-949f-1479050446e0-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"8be5ebd1-58b5-40e0-949f-1479050446e0\") " pod="openstack/kube-state-metrics-0" Feb 17 20:29:30 crc kubenswrapper[4793]: I0217 20:29:30.270492 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8be5ebd1-58b5-40e0-949f-1479050446e0-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"8be5ebd1-58b5-40e0-949f-1479050446e0\") " pod="openstack/kube-state-metrics-0" Feb 17 20:29:30 crc kubenswrapper[4793]: I0217 20:29:30.271556 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/8be5ebd1-58b5-40e0-949f-1479050446e0-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"8be5ebd1-58b5-40e0-949f-1479050446e0\") " pod="openstack/kube-state-metrics-0" Feb 17 20:29:30 crc kubenswrapper[4793]: I0217 20:29:30.279354 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/8be5ebd1-58b5-40e0-949f-1479050446e0-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"8be5ebd1-58b5-40e0-949f-1479050446e0\") " pod="openstack/kube-state-metrics-0" Feb 17 20:29:30 crc kubenswrapper[4793]: I0217 20:29:30.290241 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7r8s\" (UniqueName: \"kubernetes.io/projected/8be5ebd1-58b5-40e0-949f-1479050446e0-kube-api-access-m7r8s\") pod \"kube-state-metrics-0\" (UID: \"8be5ebd1-58b5-40e0-949f-1479050446e0\") " pod="openstack/kube-state-metrics-0" Feb 17 20:29:30 crc kubenswrapper[4793]: I0217 20:29:30.398010 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 17 20:29:30 crc kubenswrapper[4793]: W0217 20:29:30.872746 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8be5ebd1_58b5_40e0_949f_1479050446e0.slice/crio-812b13b7433b751d4614c0dd0cc0adb5cf182c88e2cdcf14d58bed6964a43b7b WatchSource:0}: Error finding container 812b13b7433b751d4614c0dd0cc0adb5cf182c88e2cdcf14d58bed6964a43b7b: Status 404 returned error can't find the container with id 812b13b7433b751d4614c0dd0cc0adb5cf182c88e2cdcf14d58bed6964a43b7b Feb 17 20:29:30 crc kubenswrapper[4793]: I0217 20:29:30.873403 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 20:29:31 crc kubenswrapper[4793]: I0217 20:29:31.048184 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 20:29:31 crc kubenswrapper[4793]: I0217 20:29:31.048753 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ed59c12a-71e1-4c06-a7c7-a6fbb904afb3" containerName="ceilometer-central-agent" containerID="cri-o://32f9f02da4dd5d35b6771e9b60479f49561cab1e6d3da381903d74502f1a56c2" gracePeriod=30 Feb 17 20:29:31 crc kubenswrapper[4793]: I0217 20:29:31.048850 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ed59c12a-71e1-4c06-a7c7-a6fbb904afb3" containerName="sg-core" containerID="cri-o://a09a66f63a91716b600f65cf4603232d01407e99a77478bd070989af396126ad" gracePeriod=30 Feb 17 20:29:31 crc kubenswrapper[4793]: I0217 20:29:31.048870 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ed59c12a-71e1-4c06-a7c7-a6fbb904afb3" containerName="ceilometer-notification-agent" containerID="cri-o://886d28897251bfbc17d71fd50ee605752ee4ff31a769a319d0fc49751b2807d2" gracePeriod=30 Feb 17 20:29:31 crc 
kubenswrapper[4793]: I0217 20:29:31.048811 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ed59c12a-71e1-4c06-a7c7-a6fbb904afb3" containerName="proxy-httpd" containerID="cri-o://a70ddd3c894c3bb2b87f13ece378dab7614dee0bd48bebd72c0166a2186abe33" gracePeriod=30 Feb 17 20:29:31 crc kubenswrapper[4793]: I0217 20:29:31.219442 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 17 20:29:31 crc kubenswrapper[4793]: I0217 20:29:31.219495 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 17 20:29:31 crc kubenswrapper[4793]: I0217 20:29:31.550675 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abe64ae2-15f9-402d-984d-ea8f94bd480f" path="/var/lib/kubelet/pods/abe64ae2-15f9-402d-984d-ea8f94bd480f/volumes" Feb 17 20:29:31 crc kubenswrapper[4793]: I0217 20:29:31.728366 4793 generic.go:334] "Generic (PLEG): container finished" podID="ed59c12a-71e1-4c06-a7c7-a6fbb904afb3" containerID="a70ddd3c894c3bb2b87f13ece378dab7614dee0bd48bebd72c0166a2186abe33" exitCode=0 Feb 17 20:29:31 crc kubenswrapper[4793]: I0217 20:29:31.728442 4793 generic.go:334] "Generic (PLEG): container finished" podID="ed59c12a-71e1-4c06-a7c7-a6fbb904afb3" containerID="a09a66f63a91716b600f65cf4603232d01407e99a77478bd070989af396126ad" exitCode=2 Feb 17 20:29:31 crc kubenswrapper[4793]: I0217 20:29:31.728460 4793 generic.go:334] "Generic (PLEG): container finished" podID="ed59c12a-71e1-4c06-a7c7-a6fbb904afb3" containerID="32f9f02da4dd5d35b6771e9b60479f49561cab1e6d3da381903d74502f1a56c2" exitCode=0 Feb 17 20:29:31 crc kubenswrapper[4793]: I0217 20:29:31.728506 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed59c12a-71e1-4c06-a7c7-a6fbb904afb3","Type":"ContainerDied","Data":"a70ddd3c894c3bb2b87f13ece378dab7614dee0bd48bebd72c0166a2186abe33"} Feb 17 20:29:31 crc 
kubenswrapper[4793]: I0217 20:29:31.728572 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed59c12a-71e1-4c06-a7c7-a6fbb904afb3","Type":"ContainerDied","Data":"a09a66f63a91716b600f65cf4603232d01407e99a77478bd070989af396126ad"} Feb 17 20:29:31 crc kubenswrapper[4793]: I0217 20:29:31.728596 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed59c12a-71e1-4c06-a7c7-a6fbb904afb3","Type":"ContainerDied","Data":"32f9f02da4dd5d35b6771e9b60479f49561cab1e6d3da381903d74502f1a56c2"} Feb 17 20:29:31 crc kubenswrapper[4793]: I0217 20:29:31.730840 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"8be5ebd1-58b5-40e0-949f-1479050446e0","Type":"ContainerStarted","Data":"c244fc219988b5166fa84232c39b0933f63fffca94ea494f03d19ce1eaa0c6d8"} Feb 17 20:29:31 crc kubenswrapper[4793]: I0217 20:29:31.730919 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"8be5ebd1-58b5-40e0-949f-1479050446e0","Type":"ContainerStarted","Data":"812b13b7433b751d4614c0dd0cc0adb5cf182c88e2cdcf14d58bed6964a43b7b"} Feb 17 20:29:31 crc kubenswrapper[4793]: I0217 20:29:31.731386 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 17 20:29:31 crc kubenswrapper[4793]: I0217 20:29:31.758554 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.317698637 podStartE2EDuration="1.758533971s" podCreationTimestamp="2026-02-17 20:29:30 +0000 UTC" firstStartedPulling="2026-02-17 20:29:30.878191005 +0000 UTC m=+1246.169889316" lastFinishedPulling="2026-02-17 20:29:31.319026339 +0000 UTC m=+1246.610724650" observedRunningTime="2026-02-17 20:29:31.754844369 +0000 UTC m=+1247.046542690" watchObservedRunningTime="2026-02-17 20:29:31.758533971 +0000 UTC m=+1247.050232282" Feb 17 20:29:32 crc 
kubenswrapper[4793]: I0217 20:29:32.267881 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0e9f812f-0a83-48bb-9048-97706114af57" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.218:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 20:29:32 crc kubenswrapper[4793]: I0217 20:29:32.267909 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0e9f812f-0a83-48bb-9048-97706114af57" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.218:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 20:29:34 crc kubenswrapper[4793]: I0217 20:29:34.571493 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 20:29:34 crc kubenswrapper[4793]: I0217 20:29:34.571790 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 20:29:35 crc kubenswrapper[4793]: I0217 20:29:35.151498 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 17 20:29:35 crc kubenswrapper[4793]: I0217 20:29:35.178321 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 17 20:29:35 crc kubenswrapper[4793]: I0217 20:29:35.660099 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="59782ccb-ca76-4c65-9782-096a6122636d" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.219:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 20:29:35 crc kubenswrapper[4793]: I0217 20:29:35.660096 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="59782ccb-ca76-4c65-9782-096a6122636d" containerName="nova-api-api" 
probeResult="failure" output="Get \"http://10.217.0.219:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 20:29:35 crc kubenswrapper[4793]: I0217 20:29:35.805000 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.419255 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.538679 4793 scope.go:117] "RemoveContainer" containerID="931e47d759f3adcee541e6fe68bc8ebef51e63ece70add63ea24cdb6c500de3e" Feb 17 20:29:38 crc kubenswrapper[4793]: E0217 20:29:38.539009 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.543063 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed59c12a-71e1-4c06-a7c7-a6fbb904afb3-config-data\") pod \"ed59c12a-71e1-4c06-a7c7-a6fbb904afb3\" (UID: \"ed59c12a-71e1-4c06-a7c7-a6fbb904afb3\") " Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.543119 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed59c12a-71e1-4c06-a7c7-a6fbb904afb3-scripts\") pod \"ed59c12a-71e1-4c06-a7c7-a6fbb904afb3\" (UID: \"ed59c12a-71e1-4c06-a7c7-a6fbb904afb3\") " Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.543320 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/ed59c12a-71e1-4c06-a7c7-a6fbb904afb3-run-httpd\") pod \"ed59c12a-71e1-4c06-a7c7-a6fbb904afb3\" (UID: \"ed59c12a-71e1-4c06-a7c7-a6fbb904afb3\") " Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.543365 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed59c12a-71e1-4c06-a7c7-a6fbb904afb3-combined-ca-bundle\") pod \"ed59c12a-71e1-4c06-a7c7-a6fbb904afb3\" (UID: \"ed59c12a-71e1-4c06-a7c7-a6fbb904afb3\") " Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.543392 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed59c12a-71e1-4c06-a7c7-a6fbb904afb3-sg-core-conf-yaml\") pod \"ed59c12a-71e1-4c06-a7c7-a6fbb904afb3\" (UID: \"ed59c12a-71e1-4c06-a7c7-a6fbb904afb3\") " Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.543445 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed59c12a-71e1-4c06-a7c7-a6fbb904afb3-log-httpd\") pod \"ed59c12a-71e1-4c06-a7c7-a6fbb904afb3\" (UID: \"ed59c12a-71e1-4c06-a7c7-a6fbb904afb3\") " Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.543510 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppxwp\" (UniqueName: \"kubernetes.io/projected/ed59c12a-71e1-4c06-a7c7-a6fbb904afb3-kube-api-access-ppxwp\") pod \"ed59c12a-71e1-4c06-a7c7-a6fbb904afb3\" (UID: \"ed59c12a-71e1-4c06-a7c7-a6fbb904afb3\") " Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.543777 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed59c12a-71e1-4c06-a7c7-a6fbb904afb3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ed59c12a-71e1-4c06-a7c7-a6fbb904afb3" (UID: "ed59c12a-71e1-4c06-a7c7-a6fbb904afb3"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.543864 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed59c12a-71e1-4c06-a7c7-a6fbb904afb3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ed59c12a-71e1-4c06-a7c7-a6fbb904afb3" (UID: "ed59c12a-71e1-4c06-a7c7-a6fbb904afb3"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.544085 4793 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed59c12a-71e1-4c06-a7c7-a6fbb904afb3-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.544103 4793 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed59c12a-71e1-4c06-a7c7-a6fbb904afb3-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.557494 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed59c12a-71e1-4c06-a7c7-a6fbb904afb3-scripts" (OuterVolumeSpecName: "scripts") pod "ed59c12a-71e1-4c06-a7c7-a6fbb904afb3" (UID: "ed59c12a-71e1-4c06-a7c7-a6fbb904afb3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.561900 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed59c12a-71e1-4c06-a7c7-a6fbb904afb3-kube-api-access-ppxwp" (OuterVolumeSpecName: "kube-api-access-ppxwp") pod "ed59c12a-71e1-4c06-a7c7-a6fbb904afb3" (UID: "ed59c12a-71e1-4c06-a7c7-a6fbb904afb3"). InnerVolumeSpecName "kube-api-access-ppxwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.571206 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed59c12a-71e1-4c06-a7c7-a6fbb904afb3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ed59c12a-71e1-4c06-a7c7-a6fbb904afb3" (UID: "ed59c12a-71e1-4c06-a7c7-a6fbb904afb3"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.636369 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed59c12a-71e1-4c06-a7c7-a6fbb904afb3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ed59c12a-71e1-4c06-a7c7-a6fbb904afb3" (UID: "ed59c12a-71e1-4c06-a7c7-a6fbb904afb3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.646812 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed59c12a-71e1-4c06-a7c7-a6fbb904afb3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.646837 4793 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed59c12a-71e1-4c06-a7c7-a6fbb904afb3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.646846 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppxwp\" (UniqueName: \"kubernetes.io/projected/ed59c12a-71e1-4c06-a7c7-a6fbb904afb3-kube-api-access-ppxwp\") on node \"crc\" DevicePath \"\"" Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.646856 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed59c12a-71e1-4c06-a7c7-a6fbb904afb3-scripts\") on 
node \"crc\" DevicePath \"\"" Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.662898 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed59c12a-71e1-4c06-a7c7-a6fbb904afb3-config-data" (OuterVolumeSpecName: "config-data") pod "ed59c12a-71e1-4c06-a7c7-a6fbb904afb3" (UID: "ed59c12a-71e1-4c06-a7c7-a6fbb904afb3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.748416 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed59c12a-71e1-4c06-a7c7-a6fbb904afb3-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.795218 4793 generic.go:334] "Generic (PLEG): container finished" podID="ed59c12a-71e1-4c06-a7c7-a6fbb904afb3" containerID="886d28897251bfbc17d71fd50ee605752ee4ff31a769a319d0fc49751b2807d2" exitCode=0 Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.795257 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed59c12a-71e1-4c06-a7c7-a6fbb904afb3","Type":"ContainerDied","Data":"886d28897251bfbc17d71fd50ee605752ee4ff31a769a319d0fc49751b2807d2"} Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.795282 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed59c12a-71e1-4c06-a7c7-a6fbb904afb3","Type":"ContainerDied","Data":"704c49f8e047b3c360458475c37f35e4cf47f8374fbbe30c1db83da35a516104"} Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.795299 4793 scope.go:117] "RemoveContainer" containerID="a70ddd3c894c3bb2b87f13ece378dab7614dee0bd48bebd72c0166a2186abe33" Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.795411 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.830667 4793 scope.go:117] "RemoveContainer" containerID="a09a66f63a91716b600f65cf4603232d01407e99a77478bd070989af396126ad" Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.834037 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.854120 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.860357 4793 scope.go:117] "RemoveContainer" containerID="886d28897251bfbc17d71fd50ee605752ee4ff31a769a319d0fc49751b2807d2" Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.874438 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 20:29:38 crc kubenswrapper[4793]: E0217 20:29:38.875350 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed59c12a-71e1-4c06-a7c7-a6fbb904afb3" containerName="ceilometer-central-agent" Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.875432 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed59c12a-71e1-4c06-a7c7-a6fbb904afb3" containerName="ceilometer-central-agent" Feb 17 20:29:38 crc kubenswrapper[4793]: E0217 20:29:38.875500 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed59c12a-71e1-4c06-a7c7-a6fbb904afb3" containerName="ceilometer-notification-agent" Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.875571 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed59c12a-71e1-4c06-a7c7-a6fbb904afb3" containerName="ceilometer-notification-agent" Feb 17 20:29:38 crc kubenswrapper[4793]: E0217 20:29:38.875642 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed59c12a-71e1-4c06-a7c7-a6fbb904afb3" containerName="proxy-httpd" Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.875722 4793 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="ed59c12a-71e1-4c06-a7c7-a6fbb904afb3" containerName="proxy-httpd" Feb 17 20:29:38 crc kubenswrapper[4793]: E0217 20:29:38.875797 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed59c12a-71e1-4c06-a7c7-a6fbb904afb3" containerName="sg-core" Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.875852 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed59c12a-71e1-4c06-a7c7-a6fbb904afb3" containerName="sg-core" Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.876075 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed59c12a-71e1-4c06-a7c7-a6fbb904afb3" containerName="ceilometer-central-agent" Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.876139 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed59c12a-71e1-4c06-a7c7-a6fbb904afb3" containerName="proxy-httpd" Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.876212 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed59c12a-71e1-4c06-a7c7-a6fbb904afb3" containerName="ceilometer-notification-agent" Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.876285 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed59c12a-71e1-4c06-a7c7-a6fbb904afb3" containerName="sg-core" Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.878216 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.885403 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.885604 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.885764 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.889084 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.898621 4793 scope.go:117] "RemoveContainer" containerID="32f9f02da4dd5d35b6771e9b60479f49561cab1e6d3da381903d74502f1a56c2" Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.926299 4793 scope.go:117] "RemoveContainer" containerID="a70ddd3c894c3bb2b87f13ece378dab7614dee0bd48bebd72c0166a2186abe33" Feb 17 20:29:38 crc kubenswrapper[4793]: E0217 20:29:38.926823 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a70ddd3c894c3bb2b87f13ece378dab7614dee0bd48bebd72c0166a2186abe33\": container with ID starting with a70ddd3c894c3bb2b87f13ece378dab7614dee0bd48bebd72c0166a2186abe33 not found: ID does not exist" containerID="a70ddd3c894c3bb2b87f13ece378dab7614dee0bd48bebd72c0166a2186abe33" Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.926878 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a70ddd3c894c3bb2b87f13ece378dab7614dee0bd48bebd72c0166a2186abe33"} err="failed to get container status \"a70ddd3c894c3bb2b87f13ece378dab7614dee0bd48bebd72c0166a2186abe33\": rpc error: code = NotFound desc = could not find container \"a70ddd3c894c3bb2b87f13ece378dab7614dee0bd48bebd72c0166a2186abe33\": 
container with ID starting with a70ddd3c894c3bb2b87f13ece378dab7614dee0bd48bebd72c0166a2186abe33 not found: ID does not exist" Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.926913 4793 scope.go:117] "RemoveContainer" containerID="a09a66f63a91716b600f65cf4603232d01407e99a77478bd070989af396126ad" Feb 17 20:29:38 crc kubenswrapper[4793]: E0217 20:29:38.927308 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a09a66f63a91716b600f65cf4603232d01407e99a77478bd070989af396126ad\": container with ID starting with a09a66f63a91716b600f65cf4603232d01407e99a77478bd070989af396126ad not found: ID does not exist" containerID="a09a66f63a91716b600f65cf4603232d01407e99a77478bd070989af396126ad" Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.927342 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a09a66f63a91716b600f65cf4603232d01407e99a77478bd070989af396126ad"} err="failed to get container status \"a09a66f63a91716b600f65cf4603232d01407e99a77478bd070989af396126ad\": rpc error: code = NotFound desc = could not find container \"a09a66f63a91716b600f65cf4603232d01407e99a77478bd070989af396126ad\": container with ID starting with a09a66f63a91716b600f65cf4603232d01407e99a77478bd070989af396126ad not found: ID does not exist" Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.927368 4793 scope.go:117] "RemoveContainer" containerID="886d28897251bfbc17d71fd50ee605752ee4ff31a769a319d0fc49751b2807d2" Feb 17 20:29:38 crc kubenswrapper[4793]: E0217 20:29:38.927742 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"886d28897251bfbc17d71fd50ee605752ee4ff31a769a319d0fc49751b2807d2\": container with ID starting with 886d28897251bfbc17d71fd50ee605752ee4ff31a769a319d0fc49751b2807d2 not found: ID does not exist" 
containerID="886d28897251bfbc17d71fd50ee605752ee4ff31a769a319d0fc49751b2807d2" Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.927776 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"886d28897251bfbc17d71fd50ee605752ee4ff31a769a319d0fc49751b2807d2"} err="failed to get container status \"886d28897251bfbc17d71fd50ee605752ee4ff31a769a319d0fc49751b2807d2\": rpc error: code = NotFound desc = could not find container \"886d28897251bfbc17d71fd50ee605752ee4ff31a769a319d0fc49751b2807d2\": container with ID starting with 886d28897251bfbc17d71fd50ee605752ee4ff31a769a319d0fc49751b2807d2 not found: ID does not exist" Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.927797 4793 scope.go:117] "RemoveContainer" containerID="32f9f02da4dd5d35b6771e9b60479f49561cab1e6d3da381903d74502f1a56c2" Feb 17 20:29:38 crc kubenswrapper[4793]: E0217 20:29:38.928126 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32f9f02da4dd5d35b6771e9b60479f49561cab1e6d3da381903d74502f1a56c2\": container with ID starting with 32f9f02da4dd5d35b6771e9b60479f49561cab1e6d3da381903d74502f1a56c2 not found: ID does not exist" containerID="32f9f02da4dd5d35b6771e9b60479f49561cab1e6d3da381903d74502f1a56c2" Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.928158 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32f9f02da4dd5d35b6771e9b60479f49561cab1e6d3da381903d74502f1a56c2"} err="failed to get container status \"32f9f02da4dd5d35b6771e9b60479f49561cab1e6d3da381903d74502f1a56c2\": rpc error: code = NotFound desc = could not find container \"32f9f02da4dd5d35b6771e9b60479f49561cab1e6d3da381903d74502f1a56c2\": container with ID starting with 32f9f02da4dd5d35b6771e9b60479f49561cab1e6d3da381903d74502f1a56c2 not found: ID does not exist" Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.951788 4793 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82bf9c51-70a8-42f2-9e28-9be9ee9284ef-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"82bf9c51-70a8-42f2-9e28-9be9ee9284ef\") " pod="openstack/ceilometer-0" Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.951871 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/82bf9c51-70a8-42f2-9e28-9be9ee9284ef-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"82bf9c51-70a8-42f2-9e28-9be9ee9284ef\") " pod="openstack/ceilometer-0" Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.951897 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/82bf9c51-70a8-42f2-9e28-9be9ee9284ef-log-httpd\") pod \"ceilometer-0\" (UID: \"82bf9c51-70a8-42f2-9e28-9be9ee9284ef\") " pod="openstack/ceilometer-0" Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.951950 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2zg6\" (UniqueName: \"kubernetes.io/projected/82bf9c51-70a8-42f2-9e28-9be9ee9284ef-kube-api-access-w2zg6\") pod \"ceilometer-0\" (UID: \"82bf9c51-70a8-42f2-9e28-9be9ee9284ef\") " pod="openstack/ceilometer-0" Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.952212 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82bf9c51-70a8-42f2-9e28-9be9ee9284ef-config-data\") pod \"ceilometer-0\" (UID: \"82bf9c51-70a8-42f2-9e28-9be9ee9284ef\") " pod="openstack/ceilometer-0" Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.952457 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/82bf9c51-70a8-42f2-9e28-9be9ee9284ef-run-httpd\") pod \"ceilometer-0\" (UID: \"82bf9c51-70a8-42f2-9e28-9be9ee9284ef\") " pod="openstack/ceilometer-0" Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.952657 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/82bf9c51-70a8-42f2-9e28-9be9ee9284ef-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"82bf9c51-70a8-42f2-9e28-9be9ee9284ef\") " pod="openstack/ceilometer-0" Feb 17 20:29:38 crc kubenswrapper[4793]: I0217 20:29:38.952795 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82bf9c51-70a8-42f2-9e28-9be9ee9284ef-scripts\") pod \"ceilometer-0\" (UID: \"82bf9c51-70a8-42f2-9e28-9be9ee9284ef\") " pod="openstack/ceilometer-0" Feb 17 20:29:39 crc kubenswrapper[4793]: I0217 20:29:39.054040 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82bf9c51-70a8-42f2-9e28-9be9ee9284ef-config-data\") pod \"ceilometer-0\" (UID: \"82bf9c51-70a8-42f2-9e28-9be9ee9284ef\") " pod="openstack/ceilometer-0" Feb 17 20:29:39 crc kubenswrapper[4793]: I0217 20:29:39.054106 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/82bf9c51-70a8-42f2-9e28-9be9ee9284ef-run-httpd\") pod \"ceilometer-0\" (UID: \"82bf9c51-70a8-42f2-9e28-9be9ee9284ef\") " pod="openstack/ceilometer-0" Feb 17 20:29:39 crc kubenswrapper[4793]: I0217 20:29:39.054144 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/82bf9c51-70a8-42f2-9e28-9be9ee9284ef-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"82bf9c51-70a8-42f2-9e28-9be9ee9284ef\") " pod="openstack/ceilometer-0" Feb 17 20:29:39 
crc kubenswrapper[4793]: I0217 20:29:39.054167 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82bf9c51-70a8-42f2-9e28-9be9ee9284ef-scripts\") pod \"ceilometer-0\" (UID: \"82bf9c51-70a8-42f2-9e28-9be9ee9284ef\") " pod="openstack/ceilometer-0" Feb 17 20:29:39 crc kubenswrapper[4793]: I0217 20:29:39.054218 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82bf9c51-70a8-42f2-9e28-9be9ee9284ef-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"82bf9c51-70a8-42f2-9e28-9be9ee9284ef\") " pod="openstack/ceilometer-0" Feb 17 20:29:39 crc kubenswrapper[4793]: I0217 20:29:39.054252 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/82bf9c51-70a8-42f2-9e28-9be9ee9284ef-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"82bf9c51-70a8-42f2-9e28-9be9ee9284ef\") " pod="openstack/ceilometer-0" Feb 17 20:29:39 crc kubenswrapper[4793]: I0217 20:29:39.054267 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/82bf9c51-70a8-42f2-9e28-9be9ee9284ef-log-httpd\") pod \"ceilometer-0\" (UID: \"82bf9c51-70a8-42f2-9e28-9be9ee9284ef\") " pod="openstack/ceilometer-0" Feb 17 20:29:39 crc kubenswrapper[4793]: I0217 20:29:39.054298 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2zg6\" (UniqueName: \"kubernetes.io/projected/82bf9c51-70a8-42f2-9e28-9be9ee9284ef-kube-api-access-w2zg6\") pod \"ceilometer-0\" (UID: \"82bf9c51-70a8-42f2-9e28-9be9ee9284ef\") " pod="openstack/ceilometer-0" Feb 17 20:29:39 crc kubenswrapper[4793]: I0217 20:29:39.054621 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/82bf9c51-70a8-42f2-9e28-9be9ee9284ef-run-httpd\") pod \"ceilometer-0\" (UID: \"82bf9c51-70a8-42f2-9e28-9be9ee9284ef\") " pod="openstack/ceilometer-0" Feb 17 20:29:39 crc kubenswrapper[4793]: I0217 20:29:39.054983 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/82bf9c51-70a8-42f2-9e28-9be9ee9284ef-log-httpd\") pod \"ceilometer-0\" (UID: \"82bf9c51-70a8-42f2-9e28-9be9ee9284ef\") " pod="openstack/ceilometer-0" Feb 17 20:29:39 crc kubenswrapper[4793]: I0217 20:29:39.065677 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/82bf9c51-70a8-42f2-9e28-9be9ee9284ef-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"82bf9c51-70a8-42f2-9e28-9be9ee9284ef\") " pod="openstack/ceilometer-0" Feb 17 20:29:39 crc kubenswrapper[4793]: I0217 20:29:39.065787 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82bf9c51-70a8-42f2-9e28-9be9ee9284ef-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"82bf9c51-70a8-42f2-9e28-9be9ee9284ef\") " pod="openstack/ceilometer-0" Feb 17 20:29:39 crc kubenswrapper[4793]: I0217 20:29:39.066135 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82bf9c51-70a8-42f2-9e28-9be9ee9284ef-scripts\") pod \"ceilometer-0\" (UID: \"82bf9c51-70a8-42f2-9e28-9be9ee9284ef\") " pod="openstack/ceilometer-0" Feb 17 20:29:39 crc kubenswrapper[4793]: I0217 20:29:39.066900 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82bf9c51-70a8-42f2-9e28-9be9ee9284ef-config-data\") pod \"ceilometer-0\" (UID: \"82bf9c51-70a8-42f2-9e28-9be9ee9284ef\") " pod="openstack/ceilometer-0" Feb 17 20:29:39 crc kubenswrapper[4793]: I0217 20:29:39.069173 4793 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/82bf9c51-70a8-42f2-9e28-9be9ee9284ef-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"82bf9c51-70a8-42f2-9e28-9be9ee9284ef\") " pod="openstack/ceilometer-0" Feb 17 20:29:39 crc kubenswrapper[4793]: I0217 20:29:39.073120 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2zg6\" (UniqueName: \"kubernetes.io/projected/82bf9c51-70a8-42f2-9e28-9be9ee9284ef-kube-api-access-w2zg6\") pod \"ceilometer-0\" (UID: \"82bf9c51-70a8-42f2-9e28-9be9ee9284ef\") " pod="openstack/ceilometer-0" Feb 17 20:29:39 crc kubenswrapper[4793]: I0217 20:29:39.210906 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 20:29:39 crc kubenswrapper[4793]: I0217 20:29:39.580087 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed59c12a-71e1-4c06-a7c7-a6fbb904afb3" path="/var/lib/kubelet/pods/ed59c12a-71e1-4c06-a7c7-a6fbb904afb3/volumes" Feb 17 20:29:39 crc kubenswrapper[4793]: I0217 20:29:39.726090 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 20:29:39 crc kubenswrapper[4793]: W0217 20:29:39.737219 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82bf9c51_70a8_42f2_9e28_9be9ee9284ef.slice/crio-892881a77f82f51f63632202cbf4bc9ffc591b458109225fbadf2882773e796f WatchSource:0}: Error finding container 892881a77f82f51f63632202cbf4bc9ffc591b458109225fbadf2882773e796f: Status 404 returned error can't find the container with id 892881a77f82f51f63632202cbf4bc9ffc591b458109225fbadf2882773e796f Feb 17 20:29:39 crc kubenswrapper[4793]: I0217 20:29:39.804431 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"82bf9c51-70a8-42f2-9e28-9be9ee9284ef","Type":"ContainerStarted","Data":"892881a77f82f51f63632202cbf4bc9ffc591b458109225fbadf2882773e796f"} Feb 17 20:29:40 crc kubenswrapper[4793]: I0217 20:29:40.420298 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 17 20:29:40 crc kubenswrapper[4793]: I0217 20:29:40.817793 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"82bf9c51-70a8-42f2-9e28-9be9ee9284ef","Type":"ContainerStarted","Data":"adac381a5a534d66b4dd89d1965a4404837813734b7f43b9943424ffddb034ec"} Feb 17 20:29:40 crc kubenswrapper[4793]: I0217 20:29:40.818199 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"82bf9c51-70a8-42f2-9e28-9be9ee9284ef","Type":"ContainerStarted","Data":"48d7fd4e28504ecda522289936fcf7493f90b8e9c995efa0b20bf74b72fd7fa1"} Feb 17 20:29:41 crc kubenswrapper[4793]: I0217 20:29:41.225649 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 17 20:29:41 crc kubenswrapper[4793]: I0217 20:29:41.230047 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 17 20:29:41 crc kubenswrapper[4793]: I0217 20:29:41.258879 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 17 20:29:41 crc kubenswrapper[4793]: I0217 20:29:41.830933 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"82bf9c51-70a8-42f2-9e28-9be9ee9284ef","Type":"ContainerStarted","Data":"7e0306894ed86bff0b33e4605a356f1d08bf2b6f86d6d9c74cd0128939a434f1"} Feb 17 20:29:41 crc kubenswrapper[4793]: I0217 20:29:41.843912 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 17 20:29:42 crc kubenswrapper[4793]: I0217 20:29:42.842355 4793 generic.go:334] "Generic (PLEG): 
container finished" podID="6e2bb214-3f4a-45a9-bdc5-01d75750d494" containerID="bbea6c983542a633f05b4da44d62ff2358fd2033212474a5abb090bb6f305449" exitCode=137 Feb 17 20:29:42 crc kubenswrapper[4793]: I0217 20:29:42.842515 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6e2bb214-3f4a-45a9-bdc5-01d75750d494","Type":"ContainerDied","Data":"bbea6c983542a633f05b4da44d62ff2358fd2033212474a5abb090bb6f305449"} Feb 17 20:29:42 crc kubenswrapper[4793]: I0217 20:29:42.842820 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6e2bb214-3f4a-45a9-bdc5-01d75750d494","Type":"ContainerDied","Data":"d74fd59d2babd09cf2df34cc73696ce94aac4745ef65208f7e4126861dd4c97f"} Feb 17 20:29:42 crc kubenswrapper[4793]: I0217 20:29:42.842838 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d74fd59d2babd09cf2df34cc73696ce94aac4745ef65208f7e4126861dd4c97f" Feb 17 20:29:42 crc kubenswrapper[4793]: I0217 20:29:42.854194 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 20:29:42 crc kubenswrapper[4793]: I0217 20:29:42.931491 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e2bb214-3f4a-45a9-bdc5-01d75750d494-combined-ca-bundle\") pod \"6e2bb214-3f4a-45a9-bdc5-01d75750d494\" (UID: \"6e2bb214-3f4a-45a9-bdc5-01d75750d494\") " Feb 17 20:29:42 crc kubenswrapper[4793]: I0217 20:29:42.931935 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kw8bb\" (UniqueName: \"kubernetes.io/projected/6e2bb214-3f4a-45a9-bdc5-01d75750d494-kube-api-access-kw8bb\") pod \"6e2bb214-3f4a-45a9-bdc5-01d75750d494\" (UID: \"6e2bb214-3f4a-45a9-bdc5-01d75750d494\") " Feb 17 20:29:42 crc kubenswrapper[4793]: I0217 20:29:42.931996 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e2bb214-3f4a-45a9-bdc5-01d75750d494-config-data\") pod \"6e2bb214-3f4a-45a9-bdc5-01d75750d494\" (UID: \"6e2bb214-3f4a-45a9-bdc5-01d75750d494\") " Feb 17 20:29:42 crc kubenswrapper[4793]: I0217 20:29:42.955653 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e2bb214-3f4a-45a9-bdc5-01d75750d494-kube-api-access-kw8bb" (OuterVolumeSpecName: "kube-api-access-kw8bb") pod "6e2bb214-3f4a-45a9-bdc5-01d75750d494" (UID: "6e2bb214-3f4a-45a9-bdc5-01d75750d494"). InnerVolumeSpecName "kube-api-access-kw8bb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:29:42 crc kubenswrapper[4793]: I0217 20:29:42.961826 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e2bb214-3f4a-45a9-bdc5-01d75750d494-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6e2bb214-3f4a-45a9-bdc5-01d75750d494" (UID: "6e2bb214-3f4a-45a9-bdc5-01d75750d494"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:29:42 crc kubenswrapper[4793]: I0217 20:29:42.964817 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e2bb214-3f4a-45a9-bdc5-01d75750d494-config-data" (OuterVolumeSpecName: "config-data") pod "6e2bb214-3f4a-45a9-bdc5-01d75750d494" (UID: "6e2bb214-3f4a-45a9-bdc5-01d75750d494"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:29:43 crc kubenswrapper[4793]: I0217 20:29:43.035633 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e2bb214-3f4a-45a9-bdc5-01d75750d494-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 20:29:43 crc kubenswrapper[4793]: I0217 20:29:43.035738 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e2bb214-3f4a-45a9-bdc5-01d75750d494-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 20:29:43 crc kubenswrapper[4793]: I0217 20:29:43.035766 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kw8bb\" (UniqueName: \"kubernetes.io/projected/6e2bb214-3f4a-45a9-bdc5-01d75750d494-kube-api-access-kw8bb\") on node \"crc\" DevicePath \"\"" Feb 17 20:29:43 crc kubenswrapper[4793]: I0217 20:29:43.890948 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"82bf9c51-70a8-42f2-9e28-9be9ee9284ef","Type":"ContainerStarted","Data":"2323b790225bcae3e1da84ce5190df2e758fa1b0e7589f11ad43c3ac84b695c0"} Feb 17 20:29:43 crc kubenswrapper[4793]: I0217 20:29:43.891008 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 20:29:43 crc kubenswrapper[4793]: I0217 20:29:43.890958 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 20:29:43 crc kubenswrapper[4793]: I0217 20:29:43.922112 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.963724999 podStartE2EDuration="5.922095587s" podCreationTimestamp="2026-02-17 20:29:38 +0000 UTC" firstStartedPulling="2026-02-17 20:29:39.739814313 +0000 UTC m=+1255.031512624" lastFinishedPulling="2026-02-17 20:29:42.698184901 +0000 UTC m=+1257.989883212" observedRunningTime="2026-02-17 20:29:43.918291883 +0000 UTC m=+1259.209990194" watchObservedRunningTime="2026-02-17 20:29:43.922095587 +0000 UTC m=+1259.213793898" Feb 17 20:29:43 crc kubenswrapper[4793]: I0217 20:29:43.935855 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 20:29:43 crc kubenswrapper[4793]: I0217 20:29:43.943771 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 20:29:43 crc kubenswrapper[4793]: I0217 20:29:43.960660 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 20:29:43 crc kubenswrapper[4793]: E0217 20:29:43.961210 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e2bb214-3f4a-45a9-bdc5-01d75750d494" containerName="nova-cell1-novncproxy-novncproxy" Feb 17 20:29:43 crc kubenswrapper[4793]: I0217 20:29:43.961234 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e2bb214-3f4a-45a9-bdc5-01d75750d494" containerName="nova-cell1-novncproxy-novncproxy" Feb 17 20:29:43 crc kubenswrapper[4793]: I0217 20:29:43.961430 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e2bb214-3f4a-45a9-bdc5-01d75750d494" containerName="nova-cell1-novncproxy-novncproxy" Feb 17 20:29:43 crc kubenswrapper[4793]: I0217 20:29:43.962136 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 20:29:43 crc kubenswrapper[4793]: I0217 20:29:43.965079 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 17 20:29:43 crc kubenswrapper[4793]: I0217 20:29:43.965316 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 17 20:29:43 crc kubenswrapper[4793]: I0217 20:29:43.965440 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 17 20:29:43 crc kubenswrapper[4793]: I0217 20:29:43.975017 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 20:29:44 crc kubenswrapper[4793]: I0217 20:29:44.056769 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a996735f-6c26-4277-a888-1431e93e4d9f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a996735f-6c26-4277-a888-1431e93e4d9f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 20:29:44 crc kubenswrapper[4793]: I0217 20:29:44.056829 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/a996735f-6c26-4277-a888-1431e93e4d9f-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a996735f-6c26-4277-a888-1431e93e4d9f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 20:29:44 crc kubenswrapper[4793]: I0217 20:29:44.056872 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/a996735f-6c26-4277-a888-1431e93e4d9f-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a996735f-6c26-4277-a888-1431e93e4d9f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 
20:29:44 crc kubenswrapper[4793]: I0217 20:29:44.056965 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqk2t\" (UniqueName: \"kubernetes.io/projected/a996735f-6c26-4277-a888-1431e93e4d9f-kube-api-access-cqk2t\") pod \"nova-cell1-novncproxy-0\" (UID: \"a996735f-6c26-4277-a888-1431e93e4d9f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 20:29:44 crc kubenswrapper[4793]: I0217 20:29:44.057005 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a996735f-6c26-4277-a888-1431e93e4d9f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a996735f-6c26-4277-a888-1431e93e4d9f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 20:29:44 crc kubenswrapper[4793]: I0217 20:29:44.159224 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a996735f-6c26-4277-a888-1431e93e4d9f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a996735f-6c26-4277-a888-1431e93e4d9f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 20:29:44 crc kubenswrapper[4793]: I0217 20:29:44.159291 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/a996735f-6c26-4277-a888-1431e93e4d9f-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a996735f-6c26-4277-a888-1431e93e4d9f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 20:29:44 crc kubenswrapper[4793]: I0217 20:29:44.159343 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/a996735f-6c26-4277-a888-1431e93e4d9f-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a996735f-6c26-4277-a888-1431e93e4d9f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 20:29:44 crc 
kubenswrapper[4793]: I0217 20:29:44.159468 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqk2t\" (UniqueName: \"kubernetes.io/projected/a996735f-6c26-4277-a888-1431e93e4d9f-kube-api-access-cqk2t\") pod \"nova-cell1-novncproxy-0\" (UID: \"a996735f-6c26-4277-a888-1431e93e4d9f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 20:29:44 crc kubenswrapper[4793]: I0217 20:29:44.159513 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a996735f-6c26-4277-a888-1431e93e4d9f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a996735f-6c26-4277-a888-1431e93e4d9f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 20:29:44 crc kubenswrapper[4793]: I0217 20:29:44.166276 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a996735f-6c26-4277-a888-1431e93e4d9f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a996735f-6c26-4277-a888-1431e93e4d9f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 20:29:44 crc kubenswrapper[4793]: I0217 20:29:44.166328 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/a996735f-6c26-4277-a888-1431e93e4d9f-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a996735f-6c26-4277-a888-1431e93e4d9f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 20:29:44 crc kubenswrapper[4793]: I0217 20:29:44.166614 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a996735f-6c26-4277-a888-1431e93e4d9f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a996735f-6c26-4277-a888-1431e93e4d9f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 20:29:44 crc kubenswrapper[4793]: I0217 20:29:44.167787 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/a996735f-6c26-4277-a888-1431e93e4d9f-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a996735f-6c26-4277-a888-1431e93e4d9f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 20:29:44 crc kubenswrapper[4793]: I0217 20:29:44.177171 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqk2t\" (UniqueName: \"kubernetes.io/projected/a996735f-6c26-4277-a888-1431e93e4d9f-kube-api-access-cqk2t\") pod \"nova-cell1-novncproxy-0\" (UID: \"a996735f-6c26-4277-a888-1431e93e4d9f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 20:29:44 crc kubenswrapper[4793]: I0217 20:29:44.291263 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 20:29:45 crc kubenswrapper[4793]: I0217 20:29:44.587600 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 17 20:29:45 crc kubenswrapper[4793]: I0217 20:29:44.589175 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 17 20:29:45 crc kubenswrapper[4793]: I0217 20:29:44.594734 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 17 20:29:45 crc kubenswrapper[4793]: I0217 20:29:44.618303 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 17 20:29:45 crc kubenswrapper[4793]: I0217 20:29:44.779948 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 20:29:45 crc kubenswrapper[4793]: I0217 20:29:44.902599 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a996735f-6c26-4277-a888-1431e93e4d9f","Type":"ContainerStarted","Data":"47fb92b305a4cebea1fcd678bacd6b14a90fbf72d11ef961902f22952e3c3314"} Feb 17 20:29:45 crc kubenswrapper[4793]: I0217 
20:29:44.903263 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 17 20:29:45 crc kubenswrapper[4793]: I0217 20:29:44.913760 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 17 20:29:45 crc kubenswrapper[4793]: I0217 20:29:45.084973 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6cb4f6457c-ffpdz"] Feb 17 20:29:45 crc kubenswrapper[4793]: I0217 20:29:45.086638 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb4f6457c-ffpdz" Feb 17 20:29:45 crc kubenswrapper[4793]: I0217 20:29:45.122887 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cb4f6457c-ffpdz"] Feb 17 20:29:45 crc kubenswrapper[4793]: I0217 20:29:45.186745 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24051b7d-6e2d-41f9-b38b-e5afb3937af9-dns-svc\") pod \"dnsmasq-dns-6cb4f6457c-ffpdz\" (UID: \"24051b7d-6e2d-41f9-b38b-e5afb3937af9\") " pod="openstack/dnsmasq-dns-6cb4f6457c-ffpdz" Feb 17 20:29:45 crc kubenswrapper[4793]: I0217 20:29:45.186782 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxw8v\" (UniqueName: \"kubernetes.io/projected/24051b7d-6e2d-41f9-b38b-e5afb3937af9-kube-api-access-vxw8v\") pod \"dnsmasq-dns-6cb4f6457c-ffpdz\" (UID: \"24051b7d-6e2d-41f9-b38b-e5afb3937af9\") " pod="openstack/dnsmasq-dns-6cb4f6457c-ffpdz" Feb 17 20:29:45 crc kubenswrapper[4793]: I0217 20:29:45.186814 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24051b7d-6e2d-41f9-b38b-e5afb3937af9-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb4f6457c-ffpdz\" (UID: \"24051b7d-6e2d-41f9-b38b-e5afb3937af9\") " 
pod="openstack/dnsmasq-dns-6cb4f6457c-ffpdz" Feb 17 20:29:45 crc kubenswrapper[4793]: I0217 20:29:45.186833 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/24051b7d-6e2d-41f9-b38b-e5afb3937af9-dns-swift-storage-0\") pod \"dnsmasq-dns-6cb4f6457c-ffpdz\" (UID: \"24051b7d-6e2d-41f9-b38b-e5afb3937af9\") " pod="openstack/dnsmasq-dns-6cb4f6457c-ffpdz" Feb 17 20:29:45 crc kubenswrapper[4793]: I0217 20:29:45.187033 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24051b7d-6e2d-41f9-b38b-e5afb3937af9-config\") pod \"dnsmasq-dns-6cb4f6457c-ffpdz\" (UID: \"24051b7d-6e2d-41f9-b38b-e5afb3937af9\") " pod="openstack/dnsmasq-dns-6cb4f6457c-ffpdz" Feb 17 20:29:45 crc kubenswrapper[4793]: I0217 20:29:45.187111 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24051b7d-6e2d-41f9-b38b-e5afb3937af9-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb4f6457c-ffpdz\" (UID: \"24051b7d-6e2d-41f9-b38b-e5afb3937af9\") " pod="openstack/dnsmasq-dns-6cb4f6457c-ffpdz" Feb 17 20:29:45 crc kubenswrapper[4793]: I0217 20:29:45.288481 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24051b7d-6e2d-41f9-b38b-e5afb3937af9-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb4f6457c-ffpdz\" (UID: \"24051b7d-6e2d-41f9-b38b-e5afb3937af9\") " pod="openstack/dnsmasq-dns-6cb4f6457c-ffpdz" Feb 17 20:29:45 crc kubenswrapper[4793]: I0217 20:29:45.288620 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxw8v\" (UniqueName: \"kubernetes.io/projected/24051b7d-6e2d-41f9-b38b-e5afb3937af9-kube-api-access-vxw8v\") pod \"dnsmasq-dns-6cb4f6457c-ffpdz\" (UID: 
\"24051b7d-6e2d-41f9-b38b-e5afb3937af9\") " pod="openstack/dnsmasq-dns-6cb4f6457c-ffpdz" Feb 17 20:29:45 crc kubenswrapper[4793]: I0217 20:29:45.288641 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24051b7d-6e2d-41f9-b38b-e5afb3937af9-dns-svc\") pod \"dnsmasq-dns-6cb4f6457c-ffpdz\" (UID: \"24051b7d-6e2d-41f9-b38b-e5afb3937af9\") " pod="openstack/dnsmasq-dns-6cb4f6457c-ffpdz" Feb 17 20:29:45 crc kubenswrapper[4793]: I0217 20:29:45.288664 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24051b7d-6e2d-41f9-b38b-e5afb3937af9-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb4f6457c-ffpdz\" (UID: \"24051b7d-6e2d-41f9-b38b-e5afb3937af9\") " pod="openstack/dnsmasq-dns-6cb4f6457c-ffpdz" Feb 17 20:29:45 crc kubenswrapper[4793]: I0217 20:29:45.288698 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/24051b7d-6e2d-41f9-b38b-e5afb3937af9-dns-swift-storage-0\") pod \"dnsmasq-dns-6cb4f6457c-ffpdz\" (UID: \"24051b7d-6e2d-41f9-b38b-e5afb3937af9\") " pod="openstack/dnsmasq-dns-6cb4f6457c-ffpdz" Feb 17 20:29:45 crc kubenswrapper[4793]: I0217 20:29:45.288739 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24051b7d-6e2d-41f9-b38b-e5afb3937af9-config\") pod \"dnsmasq-dns-6cb4f6457c-ffpdz\" (UID: \"24051b7d-6e2d-41f9-b38b-e5afb3937af9\") " pod="openstack/dnsmasq-dns-6cb4f6457c-ffpdz" Feb 17 20:29:45 crc kubenswrapper[4793]: I0217 20:29:45.290226 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24051b7d-6e2d-41f9-b38b-e5afb3937af9-config\") pod \"dnsmasq-dns-6cb4f6457c-ffpdz\" (UID: \"24051b7d-6e2d-41f9-b38b-e5afb3937af9\") " pod="openstack/dnsmasq-dns-6cb4f6457c-ffpdz" Feb 17 
20:29:45 crc kubenswrapper[4793]: I0217 20:29:45.290228 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24051b7d-6e2d-41f9-b38b-e5afb3937af9-dns-svc\") pod \"dnsmasq-dns-6cb4f6457c-ffpdz\" (UID: \"24051b7d-6e2d-41f9-b38b-e5afb3937af9\") " pod="openstack/dnsmasq-dns-6cb4f6457c-ffpdz" Feb 17 20:29:45 crc kubenswrapper[4793]: I0217 20:29:45.290266 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24051b7d-6e2d-41f9-b38b-e5afb3937af9-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb4f6457c-ffpdz\" (UID: \"24051b7d-6e2d-41f9-b38b-e5afb3937af9\") " pod="openstack/dnsmasq-dns-6cb4f6457c-ffpdz" Feb 17 20:29:45 crc kubenswrapper[4793]: I0217 20:29:45.291595 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/24051b7d-6e2d-41f9-b38b-e5afb3937af9-dns-swift-storage-0\") pod \"dnsmasq-dns-6cb4f6457c-ffpdz\" (UID: \"24051b7d-6e2d-41f9-b38b-e5afb3937af9\") " pod="openstack/dnsmasq-dns-6cb4f6457c-ffpdz" Feb 17 20:29:45 crc kubenswrapper[4793]: I0217 20:29:45.292197 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24051b7d-6e2d-41f9-b38b-e5afb3937af9-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb4f6457c-ffpdz\" (UID: \"24051b7d-6e2d-41f9-b38b-e5afb3937af9\") " pod="openstack/dnsmasq-dns-6cb4f6457c-ffpdz" Feb 17 20:29:45 crc kubenswrapper[4793]: I0217 20:29:45.325481 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxw8v\" (UniqueName: \"kubernetes.io/projected/24051b7d-6e2d-41f9-b38b-e5afb3937af9-kube-api-access-vxw8v\") pod \"dnsmasq-dns-6cb4f6457c-ffpdz\" (UID: \"24051b7d-6e2d-41f9-b38b-e5afb3937af9\") " pod="openstack/dnsmasq-dns-6cb4f6457c-ffpdz" Feb 17 20:29:45 crc kubenswrapper[4793]: I0217 20:29:45.431472 4793 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb4f6457c-ffpdz" Feb 17 20:29:45 crc kubenswrapper[4793]: I0217 20:29:45.562142 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e2bb214-3f4a-45a9-bdc5-01d75750d494" path="/var/lib/kubelet/pods/6e2bb214-3f4a-45a9-bdc5-01d75750d494/volumes" Feb 17 20:29:45 crc kubenswrapper[4793]: I0217 20:29:45.839572 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cb4f6457c-ffpdz"] Feb 17 20:29:45 crc kubenswrapper[4793]: I0217 20:29:45.918294 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb4f6457c-ffpdz" event={"ID":"24051b7d-6e2d-41f9-b38b-e5afb3937af9","Type":"ContainerStarted","Data":"50f845e2d811259ab3edda05523a296190dda632e30e329af9a529f90a2d192a"} Feb 17 20:29:45 crc kubenswrapper[4793]: I0217 20:29:45.924049 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a996735f-6c26-4277-a888-1431e93e4d9f","Type":"ContainerStarted","Data":"9008d6a6955339c64560cffe8ef2aa4ab1cabf9fdfb1b755510df732655ed6b9"} Feb 17 20:29:45 crc kubenswrapper[4793]: I0217 20:29:45.958010 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.957992435 podStartE2EDuration="2.957992435s" podCreationTimestamp="2026-02-17 20:29:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:29:45.954013606 +0000 UTC m=+1261.245711927" watchObservedRunningTime="2026-02-17 20:29:45.957992435 +0000 UTC m=+1261.249690746" Feb 17 20:29:46 crc kubenswrapper[4793]: I0217 20:29:46.931552 4793 generic.go:334] "Generic (PLEG): container finished" podID="24051b7d-6e2d-41f9-b38b-e5afb3937af9" containerID="230e80f95b84e476421f8af18e8658b8d127727464a442cc06d1ad632b99e031" exitCode=0 Feb 17 20:29:46 crc kubenswrapper[4793]: I0217 
20:29:46.932038 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb4f6457c-ffpdz" event={"ID":"24051b7d-6e2d-41f9-b38b-e5afb3937af9","Type":"ContainerDied","Data":"230e80f95b84e476421f8af18e8658b8d127727464a442cc06d1ad632b99e031"} Feb 17 20:29:47 crc kubenswrapper[4793]: I0217 20:29:47.595051 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 17 20:29:47 crc kubenswrapper[4793]: I0217 20:29:47.942186 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb4f6457c-ffpdz" event={"ID":"24051b7d-6e2d-41f9-b38b-e5afb3937af9","Type":"ContainerStarted","Data":"40377059c240f331d382e77328c9033e9d4a3da20c0da77e8e3b0cf6804a24b0"} Feb 17 20:29:47 crc kubenswrapper[4793]: I0217 20:29:47.942327 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="59782ccb-ca76-4c65-9782-096a6122636d" containerName="nova-api-log" containerID="cri-o://4672a37616a48879f59fd01b775964f155e7d2c499f30ec69edde0d20918726c" gracePeriod=30 Feb 17 20:29:47 crc kubenswrapper[4793]: I0217 20:29:47.942399 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="59782ccb-ca76-4c65-9782-096a6122636d" containerName="nova-api-api" containerID="cri-o://70d60013492c327021c6529644ea6e2b0cd551f38a8a1c0273c0f806843f39a4" gracePeriod=30 Feb 17 20:29:47 crc kubenswrapper[4793]: I0217 20:29:47.967453 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6cb4f6457c-ffpdz" podStartSLOduration=2.9674314859999997 podStartE2EDuration="2.967431486s" podCreationTimestamp="2026-02-17 20:29:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:29:47.961852817 +0000 UTC m=+1263.253551148" watchObservedRunningTime="2026-02-17 20:29:47.967431486 +0000 UTC m=+1263.259129797" Feb 17 
20:29:48 crc kubenswrapper[4793]: I0217 20:29:48.952712 4793 generic.go:334] "Generic (PLEG): container finished" podID="59782ccb-ca76-4c65-9782-096a6122636d" containerID="70d60013492c327021c6529644ea6e2b0cd551f38a8a1c0273c0f806843f39a4" exitCode=0 Feb 17 20:29:48 crc kubenswrapper[4793]: I0217 20:29:48.952744 4793 generic.go:334] "Generic (PLEG): container finished" podID="59782ccb-ca76-4c65-9782-096a6122636d" containerID="4672a37616a48879f59fd01b775964f155e7d2c499f30ec69edde0d20918726c" exitCode=143 Feb 17 20:29:48 crc kubenswrapper[4793]: I0217 20:29:48.953157 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"59782ccb-ca76-4c65-9782-096a6122636d","Type":"ContainerDied","Data":"70d60013492c327021c6529644ea6e2b0cd551f38a8a1c0273c0f806843f39a4"} Feb 17 20:29:48 crc kubenswrapper[4793]: I0217 20:29:48.953210 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"59782ccb-ca76-4c65-9782-096a6122636d","Type":"ContainerDied","Data":"4672a37616a48879f59fd01b775964f155e7d2c499f30ec69edde0d20918726c"} Feb 17 20:29:48 crc kubenswrapper[4793]: I0217 20:29:48.953620 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6cb4f6457c-ffpdz" Feb 17 20:29:49 crc kubenswrapper[4793]: I0217 20:29:49.292760 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 17 20:29:49 crc kubenswrapper[4793]: I0217 20:29:49.308922 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 20:29:49 crc kubenswrapper[4793]: I0217 20:29:49.423441 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59782ccb-ca76-4c65-9782-096a6122636d-logs\") pod \"59782ccb-ca76-4c65-9782-096a6122636d\" (UID: \"59782ccb-ca76-4c65-9782-096a6122636d\") " Feb 17 20:29:49 crc kubenswrapper[4793]: I0217 20:29:49.423479 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jv42m\" (UniqueName: \"kubernetes.io/projected/59782ccb-ca76-4c65-9782-096a6122636d-kube-api-access-jv42m\") pod \"59782ccb-ca76-4c65-9782-096a6122636d\" (UID: \"59782ccb-ca76-4c65-9782-096a6122636d\") " Feb 17 20:29:49 crc kubenswrapper[4793]: I0217 20:29:49.423711 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59782ccb-ca76-4c65-9782-096a6122636d-config-data\") pod \"59782ccb-ca76-4c65-9782-096a6122636d\" (UID: \"59782ccb-ca76-4c65-9782-096a6122636d\") " Feb 17 20:29:49 crc kubenswrapper[4793]: I0217 20:29:49.423739 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59782ccb-ca76-4c65-9782-096a6122636d-combined-ca-bundle\") pod \"59782ccb-ca76-4c65-9782-096a6122636d\" (UID: \"59782ccb-ca76-4c65-9782-096a6122636d\") " Feb 17 20:29:49 crc kubenswrapper[4793]: I0217 20:29:49.423971 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59782ccb-ca76-4c65-9782-096a6122636d-logs" (OuterVolumeSpecName: "logs") pod "59782ccb-ca76-4c65-9782-096a6122636d" (UID: "59782ccb-ca76-4c65-9782-096a6122636d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:29:49 crc kubenswrapper[4793]: I0217 20:29:49.424262 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59782ccb-ca76-4c65-9782-096a6122636d-logs\") on node \"crc\" DevicePath \"\"" Feb 17 20:29:49 crc kubenswrapper[4793]: I0217 20:29:49.443571 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59782ccb-ca76-4c65-9782-096a6122636d-kube-api-access-jv42m" (OuterVolumeSpecName: "kube-api-access-jv42m") pod "59782ccb-ca76-4c65-9782-096a6122636d" (UID: "59782ccb-ca76-4c65-9782-096a6122636d"). InnerVolumeSpecName "kube-api-access-jv42m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:29:49 crc kubenswrapper[4793]: I0217 20:29:49.477042 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59782ccb-ca76-4c65-9782-096a6122636d-config-data" (OuterVolumeSpecName: "config-data") pod "59782ccb-ca76-4c65-9782-096a6122636d" (UID: "59782ccb-ca76-4c65-9782-096a6122636d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:29:49 crc kubenswrapper[4793]: I0217 20:29:49.499943 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59782ccb-ca76-4c65-9782-096a6122636d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "59782ccb-ca76-4c65-9782-096a6122636d" (UID: "59782ccb-ca76-4c65-9782-096a6122636d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:29:49 crc kubenswrapper[4793]: I0217 20:29:49.503941 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 20:29:49 crc kubenswrapper[4793]: I0217 20:29:49.504683 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="82bf9c51-70a8-42f2-9e28-9be9ee9284ef" containerName="ceilometer-central-agent" containerID="cri-o://48d7fd4e28504ecda522289936fcf7493f90b8e9c995efa0b20bf74b72fd7fa1" gracePeriod=30 Feb 17 20:29:49 crc kubenswrapper[4793]: I0217 20:29:49.505656 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="82bf9c51-70a8-42f2-9e28-9be9ee9284ef" containerName="proxy-httpd" containerID="cri-o://2323b790225bcae3e1da84ce5190df2e758fa1b0e7589f11ad43c3ac84b695c0" gracePeriod=30 Feb 17 20:29:49 crc kubenswrapper[4793]: I0217 20:29:49.505711 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="82bf9c51-70a8-42f2-9e28-9be9ee9284ef" containerName="ceilometer-notification-agent" containerID="cri-o://adac381a5a534d66b4dd89d1965a4404837813734b7f43b9943424ffddb034ec" gracePeriod=30 Feb 17 20:29:49 crc kubenswrapper[4793]: I0217 20:29:49.505783 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="82bf9c51-70a8-42f2-9e28-9be9ee9284ef" containerName="sg-core" containerID="cri-o://7e0306894ed86bff0b33e4605a356f1d08bf2b6f86d6d9c74cd0128939a434f1" gracePeriod=30 Feb 17 20:29:49 crc kubenswrapper[4793]: I0217 20:29:49.526130 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59782ccb-ca76-4c65-9782-096a6122636d-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 20:29:49 crc kubenswrapper[4793]: I0217 20:29:49.526159 4793 reconciler_common.go:293] "Volume detached for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59782ccb-ca76-4c65-9782-096a6122636d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 20:29:49 crc kubenswrapper[4793]: I0217 20:29:49.526169 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jv42m\" (UniqueName: \"kubernetes.io/projected/59782ccb-ca76-4c65-9782-096a6122636d-kube-api-access-jv42m\") on node \"crc\" DevicePath \"\"" Feb 17 20:29:49 crc kubenswrapper[4793]: I0217 20:29:49.540602 4793 scope.go:117] "RemoveContainer" containerID="931e47d759f3adcee541e6fe68bc8ebef51e63ece70add63ea24cdb6c500de3e" Feb 17 20:29:49 crc kubenswrapper[4793]: E0217 20:29:49.540865 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:29:49 crc kubenswrapper[4793]: I0217 20:29:49.969373 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"59782ccb-ca76-4c65-9782-096a6122636d","Type":"ContainerDied","Data":"fd980cbb7022f4c835596247301705668852aef591c9bc29bf53e1dbf6c17dd5"} Feb 17 20:29:49 crc kubenswrapper[4793]: I0217 20:29:49.969450 4793 scope.go:117] "RemoveContainer" containerID="70d60013492c327021c6529644ea6e2b0cd551f38a8a1c0273c0f806843f39a4" Feb 17 20:29:49 crc kubenswrapper[4793]: I0217 20:29:49.969447 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 20:29:49 crc kubenswrapper[4793]: I0217 20:29:49.973687 4793 generic.go:334] "Generic (PLEG): container finished" podID="82bf9c51-70a8-42f2-9e28-9be9ee9284ef" containerID="2323b790225bcae3e1da84ce5190df2e758fa1b0e7589f11ad43c3ac84b695c0" exitCode=0 Feb 17 20:29:49 crc kubenswrapper[4793]: I0217 20:29:49.973737 4793 generic.go:334] "Generic (PLEG): container finished" podID="82bf9c51-70a8-42f2-9e28-9be9ee9284ef" containerID="7e0306894ed86bff0b33e4605a356f1d08bf2b6f86d6d9c74cd0128939a434f1" exitCode=2 Feb 17 20:29:49 crc kubenswrapper[4793]: I0217 20:29:49.973744 4793 generic.go:334] "Generic (PLEG): container finished" podID="82bf9c51-70a8-42f2-9e28-9be9ee9284ef" containerID="48d7fd4e28504ecda522289936fcf7493f90b8e9c995efa0b20bf74b72fd7fa1" exitCode=0 Feb 17 20:29:49 crc kubenswrapper[4793]: I0217 20:29:49.973830 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"82bf9c51-70a8-42f2-9e28-9be9ee9284ef","Type":"ContainerDied","Data":"2323b790225bcae3e1da84ce5190df2e758fa1b0e7589f11ad43c3ac84b695c0"} Feb 17 20:29:49 crc kubenswrapper[4793]: I0217 20:29:49.973879 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"82bf9c51-70a8-42f2-9e28-9be9ee9284ef","Type":"ContainerDied","Data":"7e0306894ed86bff0b33e4605a356f1d08bf2b6f86d6d9c74cd0128939a434f1"} Feb 17 20:29:49 crc kubenswrapper[4793]: I0217 20:29:49.973893 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"82bf9c51-70a8-42f2-9e28-9be9ee9284ef","Type":"ContainerDied","Data":"48d7fd4e28504ecda522289936fcf7493f90b8e9c995efa0b20bf74b72fd7fa1"} Feb 17 20:29:50 crc kubenswrapper[4793]: I0217 20:29:50.006605 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 17 20:29:50 crc kubenswrapper[4793]: I0217 20:29:50.007682 4793 scope.go:117] "RemoveContainer" 
containerID="4672a37616a48879f59fd01b775964f155e7d2c499f30ec69edde0d20918726c" Feb 17 20:29:50 crc kubenswrapper[4793]: I0217 20:29:50.027012 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 17 20:29:50 crc kubenswrapper[4793]: I0217 20:29:50.047370 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 17 20:29:50 crc kubenswrapper[4793]: E0217 20:29:50.047826 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59782ccb-ca76-4c65-9782-096a6122636d" containerName="nova-api-log" Feb 17 20:29:50 crc kubenswrapper[4793]: I0217 20:29:50.047841 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="59782ccb-ca76-4c65-9782-096a6122636d" containerName="nova-api-log" Feb 17 20:29:50 crc kubenswrapper[4793]: E0217 20:29:50.047857 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59782ccb-ca76-4c65-9782-096a6122636d" containerName="nova-api-api" Feb 17 20:29:50 crc kubenswrapper[4793]: I0217 20:29:50.047864 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="59782ccb-ca76-4c65-9782-096a6122636d" containerName="nova-api-api" Feb 17 20:29:50 crc kubenswrapper[4793]: I0217 20:29:50.048100 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="59782ccb-ca76-4c65-9782-096a6122636d" containerName="nova-api-log" Feb 17 20:29:50 crc kubenswrapper[4793]: I0217 20:29:50.048114 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="59782ccb-ca76-4c65-9782-096a6122636d" containerName="nova-api-api" Feb 17 20:29:50 crc kubenswrapper[4793]: I0217 20:29:50.049187 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 20:29:50 crc kubenswrapper[4793]: I0217 20:29:50.051577 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 17 20:29:50 crc kubenswrapper[4793]: I0217 20:29:50.051827 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 17 20:29:50 crc kubenswrapper[4793]: I0217 20:29:50.051924 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 17 20:29:50 crc kubenswrapper[4793]: I0217 20:29:50.062175 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 20:29:50 crc kubenswrapper[4793]: I0217 20:29:50.101565 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 20:29:50 crc kubenswrapper[4793]: I0217 20:29:50.101659 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 20:29:50 crc kubenswrapper[4793]: I0217 20:29:50.138985 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrq6m\" (UniqueName: \"kubernetes.io/projected/55f15754-c913-412a-a53c-ee7eebad65a0-kube-api-access-rrq6m\") pod \"nova-api-0\" (UID: \"55f15754-c913-412a-a53c-ee7eebad65a0\") " pod="openstack/nova-api-0" Feb 17 20:29:50 crc kubenswrapper[4793]: I0217 20:29:50.139050 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55f15754-c913-412a-a53c-ee7eebad65a0-internal-tls-certs\") pod \"nova-api-0\" (UID: \"55f15754-c913-412a-a53c-ee7eebad65a0\") " pod="openstack/nova-api-0" Feb 17 20:29:50 crc kubenswrapper[4793]: I0217 20:29:50.139091 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55f15754-c913-412a-a53c-ee7eebad65a0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"55f15754-c913-412a-a53c-ee7eebad65a0\") " pod="openstack/nova-api-0" Feb 17 20:29:50 crc kubenswrapper[4793]: I0217 20:29:50.139118 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55f15754-c913-412a-a53c-ee7eebad65a0-config-data\") pod \"nova-api-0\" (UID: \"55f15754-c913-412a-a53c-ee7eebad65a0\") " pod="openstack/nova-api-0" Feb 17 20:29:50 crc kubenswrapper[4793]: I0217 20:29:50.139142 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55f15754-c913-412a-a53c-ee7eebad65a0-logs\") pod \"nova-api-0\" (UID: \"55f15754-c913-412a-a53c-ee7eebad65a0\") " pod="openstack/nova-api-0" Feb 17 20:29:50 crc kubenswrapper[4793]: I0217 20:29:50.139209 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55f15754-c913-412a-a53c-ee7eebad65a0-public-tls-certs\") pod \"nova-api-0\" (UID: \"55f15754-c913-412a-a53c-ee7eebad65a0\") " pod="openstack/nova-api-0" Feb 17 20:29:50 crc kubenswrapper[4793]: I0217 20:29:50.241795 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55f15754-c913-412a-a53c-ee7eebad65a0-internal-tls-certs\") pod \"nova-api-0\" (UID: 
\"55f15754-c913-412a-a53c-ee7eebad65a0\") " pod="openstack/nova-api-0" Feb 17 20:29:50 crc kubenswrapper[4793]: I0217 20:29:50.241986 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55f15754-c913-412a-a53c-ee7eebad65a0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"55f15754-c913-412a-a53c-ee7eebad65a0\") " pod="openstack/nova-api-0" Feb 17 20:29:50 crc kubenswrapper[4793]: I0217 20:29:50.242058 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55f15754-c913-412a-a53c-ee7eebad65a0-config-data\") pod \"nova-api-0\" (UID: \"55f15754-c913-412a-a53c-ee7eebad65a0\") " pod="openstack/nova-api-0" Feb 17 20:29:50 crc kubenswrapper[4793]: I0217 20:29:50.243201 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55f15754-c913-412a-a53c-ee7eebad65a0-logs\") pod \"nova-api-0\" (UID: \"55f15754-c913-412a-a53c-ee7eebad65a0\") " pod="openstack/nova-api-0" Feb 17 20:29:50 crc kubenswrapper[4793]: I0217 20:29:50.243365 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55f15754-c913-412a-a53c-ee7eebad65a0-public-tls-certs\") pod \"nova-api-0\" (UID: \"55f15754-c913-412a-a53c-ee7eebad65a0\") " pod="openstack/nova-api-0" Feb 17 20:29:50 crc kubenswrapper[4793]: I0217 20:29:50.243522 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrq6m\" (UniqueName: \"kubernetes.io/projected/55f15754-c913-412a-a53c-ee7eebad65a0-kube-api-access-rrq6m\") pod \"nova-api-0\" (UID: \"55f15754-c913-412a-a53c-ee7eebad65a0\") " pod="openstack/nova-api-0" Feb 17 20:29:50 crc kubenswrapper[4793]: I0217 20:29:50.248831 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/55f15754-c913-412a-a53c-ee7eebad65a0-internal-tls-certs\") pod \"nova-api-0\" (UID: \"55f15754-c913-412a-a53c-ee7eebad65a0\") " pod="openstack/nova-api-0" Feb 17 20:29:50 crc kubenswrapper[4793]: I0217 20:29:50.249828 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55f15754-c913-412a-a53c-ee7eebad65a0-logs\") pod \"nova-api-0\" (UID: \"55f15754-c913-412a-a53c-ee7eebad65a0\") " pod="openstack/nova-api-0" Feb 17 20:29:50 crc kubenswrapper[4793]: I0217 20:29:50.256219 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55f15754-c913-412a-a53c-ee7eebad65a0-public-tls-certs\") pod \"nova-api-0\" (UID: \"55f15754-c913-412a-a53c-ee7eebad65a0\") " pod="openstack/nova-api-0" Feb 17 20:29:50 crc kubenswrapper[4793]: I0217 20:29:50.274930 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55f15754-c913-412a-a53c-ee7eebad65a0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"55f15754-c913-412a-a53c-ee7eebad65a0\") " pod="openstack/nova-api-0" Feb 17 20:29:50 crc kubenswrapper[4793]: I0217 20:29:50.281245 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrq6m\" (UniqueName: \"kubernetes.io/projected/55f15754-c913-412a-a53c-ee7eebad65a0-kube-api-access-rrq6m\") pod \"nova-api-0\" (UID: \"55f15754-c913-412a-a53c-ee7eebad65a0\") " pod="openstack/nova-api-0" Feb 17 20:29:50 crc kubenswrapper[4793]: I0217 20:29:50.281902 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55f15754-c913-412a-a53c-ee7eebad65a0-config-data\") pod \"nova-api-0\" (UID: \"55f15754-c913-412a-a53c-ee7eebad65a0\") " pod="openstack/nova-api-0" Feb 17 20:29:50 crc kubenswrapper[4793]: I0217 20:29:50.375150 4793 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 20:29:50 crc kubenswrapper[4793]: W0217 20:29:50.950521 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55f15754_c913_412a_a53c_ee7eebad65a0.slice/crio-971c7cd82b6dd95f13fbe0011588fb20ab6a664ae97f2d516c9799728cc0d53d WatchSource:0}: Error finding container 971c7cd82b6dd95f13fbe0011588fb20ab6a664ae97f2d516c9799728cc0d53d: Status 404 returned error can't find the container with id 971c7cd82b6dd95f13fbe0011588fb20ab6a664ae97f2d516c9799728cc0d53d Feb 17 20:29:50 crc kubenswrapper[4793]: I0217 20:29:50.957145 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 20:29:50 crc kubenswrapper[4793]: I0217 20:29:50.985414 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"55f15754-c913-412a-a53c-ee7eebad65a0","Type":"ContainerStarted","Data":"971c7cd82b6dd95f13fbe0011588fb20ab6a664ae97f2d516c9799728cc0d53d"} Feb 17 20:29:51 crc kubenswrapper[4793]: I0217 20:29:51.549136 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59782ccb-ca76-4c65-9782-096a6122636d" path="/var/lib/kubelet/pods/59782ccb-ca76-4c65-9782-096a6122636d/volumes" Feb 17 20:29:51 crc kubenswrapper[4793]: I0217 20:29:51.572529 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 20:29:51 crc kubenswrapper[4793]: I0217 20:29:51.722470 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/82bf9c51-70a8-42f2-9e28-9be9ee9284ef-sg-core-conf-yaml\") pod \"82bf9c51-70a8-42f2-9e28-9be9ee9284ef\" (UID: \"82bf9c51-70a8-42f2-9e28-9be9ee9284ef\") " Feb 17 20:29:51 crc kubenswrapper[4793]: I0217 20:29:51.722829 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/82bf9c51-70a8-42f2-9e28-9be9ee9284ef-ceilometer-tls-certs\") pod \"82bf9c51-70a8-42f2-9e28-9be9ee9284ef\" (UID: \"82bf9c51-70a8-42f2-9e28-9be9ee9284ef\") " Feb 17 20:29:51 crc kubenswrapper[4793]: I0217 20:29:51.722869 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82bf9c51-70a8-42f2-9e28-9be9ee9284ef-scripts\") pod \"82bf9c51-70a8-42f2-9e28-9be9ee9284ef\" (UID: \"82bf9c51-70a8-42f2-9e28-9be9ee9284ef\") " Feb 17 20:29:51 crc kubenswrapper[4793]: I0217 20:29:51.722952 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82bf9c51-70a8-42f2-9e28-9be9ee9284ef-combined-ca-bundle\") pod \"82bf9c51-70a8-42f2-9e28-9be9ee9284ef\" (UID: \"82bf9c51-70a8-42f2-9e28-9be9ee9284ef\") " Feb 17 20:29:51 crc kubenswrapper[4793]: I0217 20:29:51.723010 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82bf9c51-70a8-42f2-9e28-9be9ee9284ef-config-data\") pod \"82bf9c51-70a8-42f2-9e28-9be9ee9284ef\" (UID: \"82bf9c51-70a8-42f2-9e28-9be9ee9284ef\") " Feb 17 20:29:51 crc kubenswrapper[4793]: I0217 20:29:51.723029 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/82bf9c51-70a8-42f2-9e28-9be9ee9284ef-log-httpd\") pod \"82bf9c51-70a8-42f2-9e28-9be9ee9284ef\" (UID: \"82bf9c51-70a8-42f2-9e28-9be9ee9284ef\") " Feb 17 20:29:51 crc kubenswrapper[4793]: I0217 20:29:51.723055 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2zg6\" (UniqueName: \"kubernetes.io/projected/82bf9c51-70a8-42f2-9e28-9be9ee9284ef-kube-api-access-w2zg6\") pod \"82bf9c51-70a8-42f2-9e28-9be9ee9284ef\" (UID: \"82bf9c51-70a8-42f2-9e28-9be9ee9284ef\") " Feb 17 20:29:51 crc kubenswrapper[4793]: I0217 20:29:51.723106 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/82bf9c51-70a8-42f2-9e28-9be9ee9284ef-run-httpd\") pod \"82bf9c51-70a8-42f2-9e28-9be9ee9284ef\" (UID: \"82bf9c51-70a8-42f2-9e28-9be9ee9284ef\") " Feb 17 20:29:51 crc kubenswrapper[4793]: I0217 20:29:51.724642 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82bf9c51-70a8-42f2-9e28-9be9ee9284ef-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "82bf9c51-70a8-42f2-9e28-9be9ee9284ef" (UID: "82bf9c51-70a8-42f2-9e28-9be9ee9284ef"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:29:51 crc kubenswrapper[4793]: I0217 20:29:51.726102 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82bf9c51-70a8-42f2-9e28-9be9ee9284ef-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "82bf9c51-70a8-42f2-9e28-9be9ee9284ef" (UID: "82bf9c51-70a8-42f2-9e28-9be9ee9284ef"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:29:51 crc kubenswrapper[4793]: I0217 20:29:51.729922 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82bf9c51-70a8-42f2-9e28-9be9ee9284ef-kube-api-access-w2zg6" (OuterVolumeSpecName: "kube-api-access-w2zg6") pod "82bf9c51-70a8-42f2-9e28-9be9ee9284ef" (UID: "82bf9c51-70a8-42f2-9e28-9be9ee9284ef"). InnerVolumeSpecName "kube-api-access-w2zg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:29:51 crc kubenswrapper[4793]: I0217 20:29:51.731344 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82bf9c51-70a8-42f2-9e28-9be9ee9284ef-scripts" (OuterVolumeSpecName: "scripts") pod "82bf9c51-70a8-42f2-9e28-9be9ee9284ef" (UID: "82bf9c51-70a8-42f2-9e28-9be9ee9284ef"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:29:51 crc kubenswrapper[4793]: I0217 20:29:51.765234 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82bf9c51-70a8-42f2-9e28-9be9ee9284ef-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "82bf9c51-70a8-42f2-9e28-9be9ee9284ef" (UID: "82bf9c51-70a8-42f2-9e28-9be9ee9284ef"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:29:51 crc kubenswrapper[4793]: I0217 20:29:51.785451 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82bf9c51-70a8-42f2-9e28-9be9ee9284ef-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "82bf9c51-70a8-42f2-9e28-9be9ee9284ef" (UID: "82bf9c51-70a8-42f2-9e28-9be9ee9284ef"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:29:51 crc kubenswrapper[4793]: I0217 20:29:51.818178 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82bf9c51-70a8-42f2-9e28-9be9ee9284ef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "82bf9c51-70a8-42f2-9e28-9be9ee9284ef" (UID: "82bf9c51-70a8-42f2-9e28-9be9ee9284ef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:29:51 crc kubenswrapper[4793]: I0217 20:29:51.827209 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82bf9c51-70a8-42f2-9e28-9be9ee9284ef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 20:29:51 crc kubenswrapper[4793]: I0217 20:29:51.827247 4793 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/82bf9c51-70a8-42f2-9e28-9be9ee9284ef-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 20:29:51 crc kubenswrapper[4793]: I0217 20:29:51.827257 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2zg6\" (UniqueName: \"kubernetes.io/projected/82bf9c51-70a8-42f2-9e28-9be9ee9284ef-kube-api-access-w2zg6\") on node \"crc\" DevicePath \"\"" Feb 17 20:29:51 crc kubenswrapper[4793]: I0217 20:29:51.827267 4793 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/82bf9c51-70a8-42f2-9e28-9be9ee9284ef-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 20:29:51 crc kubenswrapper[4793]: I0217 20:29:51.827277 4793 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/82bf9c51-70a8-42f2-9e28-9be9ee9284ef-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 20:29:51 crc kubenswrapper[4793]: I0217 20:29:51.827289 4793 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/82bf9c51-70a8-42f2-9e28-9be9ee9284ef-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 20:29:51 crc kubenswrapper[4793]: I0217 20:29:51.827296 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82bf9c51-70a8-42f2-9e28-9be9ee9284ef-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 20:29:51 crc kubenswrapper[4793]: I0217 20:29:51.834269 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82bf9c51-70a8-42f2-9e28-9be9ee9284ef-config-data" (OuterVolumeSpecName: "config-data") pod "82bf9c51-70a8-42f2-9e28-9be9ee9284ef" (UID: "82bf9c51-70a8-42f2-9e28-9be9ee9284ef"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:29:51 crc kubenswrapper[4793]: I0217 20:29:51.930102 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82bf9c51-70a8-42f2-9e28-9be9ee9284ef-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 20:29:51 crc kubenswrapper[4793]: I0217 20:29:51.996389 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"55f15754-c913-412a-a53c-ee7eebad65a0","Type":"ContainerStarted","Data":"cf5d5b335f4858db25148bd7606057ef6673290b02efd27a7176b323565da16e"} Feb 17 20:29:51 crc kubenswrapper[4793]: I0217 20:29:51.996427 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"55f15754-c913-412a-a53c-ee7eebad65a0","Type":"ContainerStarted","Data":"30e4436d39a5d3843f9affbfaf5b705552378c3339de51124826f5a6daa86061"} Feb 17 20:29:51 crc kubenswrapper[4793]: I0217 20:29:51.999835 4793 generic.go:334] "Generic (PLEG): container finished" podID="82bf9c51-70a8-42f2-9e28-9be9ee9284ef" containerID="adac381a5a534d66b4dd89d1965a4404837813734b7f43b9943424ffddb034ec" exitCode=0 Feb 17 20:29:51 crc kubenswrapper[4793]: I0217 20:29:51.999949 4793 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 20:29:52 crc kubenswrapper[4793]: I0217 20:29:51.999965 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"82bf9c51-70a8-42f2-9e28-9be9ee9284ef","Type":"ContainerDied","Data":"adac381a5a534d66b4dd89d1965a4404837813734b7f43b9943424ffddb034ec"} Feb 17 20:29:52 crc kubenswrapper[4793]: I0217 20:29:52.000094 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"82bf9c51-70a8-42f2-9e28-9be9ee9284ef","Type":"ContainerDied","Data":"892881a77f82f51f63632202cbf4bc9ffc591b458109225fbadf2882773e796f"} Feb 17 20:29:52 crc kubenswrapper[4793]: I0217 20:29:52.000157 4793 scope.go:117] "RemoveContainer" containerID="2323b790225bcae3e1da84ce5190df2e758fa1b0e7589f11ad43c3ac84b695c0" Feb 17 20:29:52 crc kubenswrapper[4793]: I0217 20:29:52.027994 4793 scope.go:117] "RemoveContainer" containerID="7e0306894ed86bff0b33e4605a356f1d08bf2b6f86d6d9c74cd0128939a434f1" Feb 17 20:29:52 crc kubenswrapper[4793]: I0217 20:29:52.032228 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.032205257 podStartE2EDuration="2.032205257s" podCreationTimestamp="2026-02-17 20:29:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:29:52.017670936 +0000 UTC m=+1267.309369247" watchObservedRunningTime="2026-02-17 20:29:52.032205257 +0000 UTC m=+1267.323903578" Feb 17 20:29:52 crc kubenswrapper[4793]: I0217 20:29:52.050530 4793 scope.go:117] "RemoveContainer" containerID="adac381a5a534d66b4dd89d1965a4404837813734b7f43b9943424ffddb034ec" Feb 17 20:29:52 crc kubenswrapper[4793]: I0217 20:29:52.053573 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 20:29:52 crc kubenswrapper[4793]: I0217 20:29:52.071477 4793 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 20:29:52 crc kubenswrapper[4793]: I0217 20:29:52.104991 4793 scope.go:117] "RemoveContainer" containerID="48d7fd4e28504ecda522289936fcf7493f90b8e9c995efa0b20bf74b72fd7fa1" Feb 17 20:29:52 crc kubenswrapper[4793]: I0217 20:29:52.144097 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 20:29:52 crc kubenswrapper[4793]: E0217 20:29:52.144504 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82bf9c51-70a8-42f2-9e28-9be9ee9284ef" containerName="ceilometer-notification-agent" Feb 17 20:29:52 crc kubenswrapper[4793]: I0217 20:29:52.144520 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="82bf9c51-70a8-42f2-9e28-9be9ee9284ef" containerName="ceilometer-notification-agent" Feb 17 20:29:52 crc kubenswrapper[4793]: E0217 20:29:52.144551 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82bf9c51-70a8-42f2-9e28-9be9ee9284ef" containerName="ceilometer-central-agent" Feb 17 20:29:52 crc kubenswrapper[4793]: I0217 20:29:52.144556 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="82bf9c51-70a8-42f2-9e28-9be9ee9284ef" containerName="ceilometer-central-agent" Feb 17 20:29:52 crc kubenswrapper[4793]: E0217 20:29:52.144566 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82bf9c51-70a8-42f2-9e28-9be9ee9284ef" containerName="proxy-httpd" Feb 17 20:29:52 crc kubenswrapper[4793]: I0217 20:29:52.144572 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="82bf9c51-70a8-42f2-9e28-9be9ee9284ef" containerName="proxy-httpd" Feb 17 20:29:52 crc kubenswrapper[4793]: E0217 20:29:52.144592 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82bf9c51-70a8-42f2-9e28-9be9ee9284ef" containerName="sg-core" Feb 17 20:29:52 crc kubenswrapper[4793]: I0217 20:29:52.144599 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="82bf9c51-70a8-42f2-9e28-9be9ee9284ef" 
containerName="sg-core" Feb 17 20:29:52 crc kubenswrapper[4793]: I0217 20:29:52.144800 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="82bf9c51-70a8-42f2-9e28-9be9ee9284ef" containerName="sg-core" Feb 17 20:29:52 crc kubenswrapper[4793]: I0217 20:29:52.144827 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="82bf9c51-70a8-42f2-9e28-9be9ee9284ef" containerName="ceilometer-notification-agent" Feb 17 20:29:52 crc kubenswrapper[4793]: I0217 20:29:52.144840 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="82bf9c51-70a8-42f2-9e28-9be9ee9284ef" containerName="proxy-httpd" Feb 17 20:29:52 crc kubenswrapper[4793]: I0217 20:29:52.144850 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="82bf9c51-70a8-42f2-9e28-9be9ee9284ef" containerName="ceilometer-central-agent" Feb 17 20:29:52 crc kubenswrapper[4793]: I0217 20:29:52.146866 4793 scope.go:117] "RemoveContainer" containerID="2323b790225bcae3e1da84ce5190df2e758fa1b0e7589f11ad43c3ac84b695c0" Feb 17 20:29:52 crc kubenswrapper[4793]: I0217 20:29:52.147018 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 20:29:52 crc kubenswrapper[4793]: I0217 20:29:52.153460 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 20:29:52 crc kubenswrapper[4793]: E0217 20:29:52.154482 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2323b790225bcae3e1da84ce5190df2e758fa1b0e7589f11ad43c3ac84b695c0\": container with ID starting with 2323b790225bcae3e1da84ce5190df2e758fa1b0e7589f11ad43c3ac84b695c0 not found: ID does not exist" containerID="2323b790225bcae3e1da84ce5190df2e758fa1b0e7589f11ad43c3ac84b695c0" Feb 17 20:29:52 crc kubenswrapper[4793]: I0217 20:29:52.154532 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2323b790225bcae3e1da84ce5190df2e758fa1b0e7589f11ad43c3ac84b695c0"} err="failed to get container status \"2323b790225bcae3e1da84ce5190df2e758fa1b0e7589f11ad43c3ac84b695c0\": rpc error: code = NotFound desc = could not find container \"2323b790225bcae3e1da84ce5190df2e758fa1b0e7589f11ad43c3ac84b695c0\": container with ID starting with 2323b790225bcae3e1da84ce5190df2e758fa1b0e7589f11ad43c3ac84b695c0 not found: ID does not exist" Feb 17 20:29:52 crc kubenswrapper[4793]: I0217 20:29:52.154567 4793 scope.go:117] "RemoveContainer" containerID="7e0306894ed86bff0b33e4605a356f1d08bf2b6f86d6d9c74cd0128939a434f1" Feb 17 20:29:52 crc kubenswrapper[4793]: I0217 20:29:52.154895 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 20:29:52 crc kubenswrapper[4793]: I0217 20:29:52.155447 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 20:29:52 crc kubenswrapper[4793]: E0217 20:29:52.155460 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"7e0306894ed86bff0b33e4605a356f1d08bf2b6f86d6d9c74cd0128939a434f1\": container with ID starting with 7e0306894ed86bff0b33e4605a356f1d08bf2b6f86d6d9c74cd0128939a434f1 not found: ID does not exist" containerID="7e0306894ed86bff0b33e4605a356f1d08bf2b6f86d6d9c74cd0128939a434f1" Feb 17 20:29:52 crc kubenswrapper[4793]: I0217 20:29:52.155657 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e0306894ed86bff0b33e4605a356f1d08bf2b6f86d6d9c74cd0128939a434f1"} err="failed to get container status \"7e0306894ed86bff0b33e4605a356f1d08bf2b6f86d6d9c74cd0128939a434f1\": rpc error: code = NotFound desc = could not find container \"7e0306894ed86bff0b33e4605a356f1d08bf2b6f86d6d9c74cd0128939a434f1\": container with ID starting with 7e0306894ed86bff0b33e4605a356f1d08bf2b6f86d6d9c74cd0128939a434f1 not found: ID does not exist" Feb 17 20:29:52 crc kubenswrapper[4793]: I0217 20:29:52.155678 4793 scope.go:117] "RemoveContainer" containerID="adac381a5a534d66b4dd89d1965a4404837813734b7f43b9943424ffddb034ec" Feb 17 20:29:52 crc kubenswrapper[4793]: I0217 20:29:52.157809 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 17 20:29:52 crc kubenswrapper[4793]: E0217 20:29:52.160035 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adac381a5a534d66b4dd89d1965a4404837813734b7f43b9943424ffddb034ec\": container with ID starting with adac381a5a534d66b4dd89d1965a4404837813734b7f43b9943424ffddb034ec not found: ID does not exist" containerID="adac381a5a534d66b4dd89d1965a4404837813734b7f43b9943424ffddb034ec" Feb 17 20:29:52 crc kubenswrapper[4793]: I0217 20:29:52.160143 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adac381a5a534d66b4dd89d1965a4404837813734b7f43b9943424ffddb034ec"} err="failed to get container status 
\"adac381a5a534d66b4dd89d1965a4404837813734b7f43b9943424ffddb034ec\": rpc error: code = NotFound desc = could not find container \"adac381a5a534d66b4dd89d1965a4404837813734b7f43b9943424ffddb034ec\": container with ID starting with adac381a5a534d66b4dd89d1965a4404837813734b7f43b9943424ffddb034ec not found: ID does not exist" Feb 17 20:29:52 crc kubenswrapper[4793]: I0217 20:29:52.160267 4793 scope.go:117] "RemoveContainer" containerID="48d7fd4e28504ecda522289936fcf7493f90b8e9c995efa0b20bf74b72fd7fa1" Feb 17 20:29:52 crc kubenswrapper[4793]: E0217 20:29:52.161555 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48d7fd4e28504ecda522289936fcf7493f90b8e9c995efa0b20bf74b72fd7fa1\": container with ID starting with 48d7fd4e28504ecda522289936fcf7493f90b8e9c995efa0b20bf74b72fd7fa1 not found: ID does not exist" containerID="48d7fd4e28504ecda522289936fcf7493f90b8e9c995efa0b20bf74b72fd7fa1" Feb 17 20:29:52 crc kubenswrapper[4793]: I0217 20:29:52.161580 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48d7fd4e28504ecda522289936fcf7493f90b8e9c995efa0b20bf74b72fd7fa1"} err="failed to get container status \"48d7fd4e28504ecda522289936fcf7493f90b8e9c995efa0b20bf74b72fd7fa1\": rpc error: code = NotFound desc = could not find container \"48d7fd4e28504ecda522289936fcf7493f90b8e9c995efa0b20bf74b72fd7fa1\": container with ID starting with 48d7fd4e28504ecda522289936fcf7493f90b8e9c995efa0b20bf74b72fd7fa1 not found: ID does not exist" Feb 17 20:29:52 crc kubenswrapper[4793]: I0217 20:29:52.342897 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/edb6897b-0bd5-46f3-bba0-fea881577a8f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"edb6897b-0bd5-46f3-bba0-fea881577a8f\") " pod="openstack/ceilometer-0" Feb 17 20:29:52 crc kubenswrapper[4793]: I0217 
20:29:52.342941 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/edb6897b-0bd5-46f3-bba0-fea881577a8f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"edb6897b-0bd5-46f3-bba0-fea881577a8f\") " pod="openstack/ceilometer-0" Feb 17 20:29:52 crc kubenswrapper[4793]: I0217 20:29:52.342976 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbsvv\" (UniqueName: \"kubernetes.io/projected/edb6897b-0bd5-46f3-bba0-fea881577a8f-kube-api-access-nbsvv\") pod \"ceilometer-0\" (UID: \"edb6897b-0bd5-46f3-bba0-fea881577a8f\") " pod="openstack/ceilometer-0" Feb 17 20:29:52 crc kubenswrapper[4793]: I0217 20:29:52.343022 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edb6897b-0bd5-46f3-bba0-fea881577a8f-config-data\") pod \"ceilometer-0\" (UID: \"edb6897b-0bd5-46f3-bba0-fea881577a8f\") " pod="openstack/ceilometer-0" Feb 17 20:29:52 crc kubenswrapper[4793]: I0217 20:29:52.343302 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/edb6897b-0bd5-46f3-bba0-fea881577a8f-run-httpd\") pod \"ceilometer-0\" (UID: \"edb6897b-0bd5-46f3-bba0-fea881577a8f\") " pod="openstack/ceilometer-0" Feb 17 20:29:52 crc kubenswrapper[4793]: I0217 20:29:52.343350 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/edb6897b-0bd5-46f3-bba0-fea881577a8f-log-httpd\") pod \"ceilometer-0\" (UID: \"edb6897b-0bd5-46f3-bba0-fea881577a8f\") " pod="openstack/ceilometer-0" Feb 17 20:29:52 crc kubenswrapper[4793]: I0217 20:29:52.343463 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/edb6897b-0bd5-46f3-bba0-fea881577a8f-scripts\") pod \"ceilometer-0\" (UID: \"edb6897b-0bd5-46f3-bba0-fea881577a8f\") " pod="openstack/ceilometer-0" Feb 17 20:29:52 crc kubenswrapper[4793]: I0217 20:29:52.343499 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edb6897b-0bd5-46f3-bba0-fea881577a8f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"edb6897b-0bd5-46f3-bba0-fea881577a8f\") " pod="openstack/ceilometer-0" Feb 17 20:29:52 crc kubenswrapper[4793]: I0217 20:29:52.445097 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edb6897b-0bd5-46f3-bba0-fea881577a8f-scripts\") pod \"ceilometer-0\" (UID: \"edb6897b-0bd5-46f3-bba0-fea881577a8f\") " pod="openstack/ceilometer-0" Feb 17 20:29:52 crc kubenswrapper[4793]: I0217 20:29:52.445465 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edb6897b-0bd5-46f3-bba0-fea881577a8f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"edb6897b-0bd5-46f3-bba0-fea881577a8f\") " pod="openstack/ceilometer-0" Feb 17 20:29:52 crc kubenswrapper[4793]: I0217 20:29:52.445523 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/edb6897b-0bd5-46f3-bba0-fea881577a8f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"edb6897b-0bd5-46f3-bba0-fea881577a8f\") " pod="openstack/ceilometer-0" Feb 17 20:29:52 crc kubenswrapper[4793]: I0217 20:29:52.445543 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/edb6897b-0bd5-46f3-bba0-fea881577a8f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"edb6897b-0bd5-46f3-bba0-fea881577a8f\") " pod="openstack/ceilometer-0" Feb 17 
20:29:52 crc kubenswrapper[4793]: I0217 20:29:52.445591 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbsvv\" (UniqueName: \"kubernetes.io/projected/edb6897b-0bd5-46f3-bba0-fea881577a8f-kube-api-access-nbsvv\") pod \"ceilometer-0\" (UID: \"edb6897b-0bd5-46f3-bba0-fea881577a8f\") " pod="openstack/ceilometer-0" Feb 17 20:29:52 crc kubenswrapper[4793]: I0217 20:29:52.445655 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edb6897b-0bd5-46f3-bba0-fea881577a8f-config-data\") pod \"ceilometer-0\" (UID: \"edb6897b-0bd5-46f3-bba0-fea881577a8f\") " pod="openstack/ceilometer-0" Feb 17 20:29:52 crc kubenswrapper[4793]: I0217 20:29:52.445787 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/edb6897b-0bd5-46f3-bba0-fea881577a8f-run-httpd\") pod \"ceilometer-0\" (UID: \"edb6897b-0bd5-46f3-bba0-fea881577a8f\") " pod="openstack/ceilometer-0" Feb 17 20:29:52 crc kubenswrapper[4793]: I0217 20:29:52.445817 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/edb6897b-0bd5-46f3-bba0-fea881577a8f-log-httpd\") pod \"ceilometer-0\" (UID: \"edb6897b-0bd5-46f3-bba0-fea881577a8f\") " pod="openstack/ceilometer-0" Feb 17 20:29:52 crc kubenswrapper[4793]: I0217 20:29:52.446553 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/edb6897b-0bd5-46f3-bba0-fea881577a8f-run-httpd\") pod \"ceilometer-0\" (UID: \"edb6897b-0bd5-46f3-bba0-fea881577a8f\") " pod="openstack/ceilometer-0" Feb 17 20:29:52 crc kubenswrapper[4793]: I0217 20:29:52.446613 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/edb6897b-0bd5-46f3-bba0-fea881577a8f-log-httpd\") pod 
\"ceilometer-0\" (UID: \"edb6897b-0bd5-46f3-bba0-fea881577a8f\") " pod="openstack/ceilometer-0" Feb 17 20:29:52 crc kubenswrapper[4793]: I0217 20:29:52.451225 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edb6897b-0bd5-46f3-bba0-fea881577a8f-scripts\") pod \"ceilometer-0\" (UID: \"edb6897b-0bd5-46f3-bba0-fea881577a8f\") " pod="openstack/ceilometer-0" Feb 17 20:29:52 crc kubenswrapper[4793]: I0217 20:29:52.451650 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/edb6897b-0bd5-46f3-bba0-fea881577a8f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"edb6897b-0bd5-46f3-bba0-fea881577a8f\") " pod="openstack/ceilometer-0" Feb 17 20:29:52 crc kubenswrapper[4793]: I0217 20:29:52.456997 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edb6897b-0bd5-46f3-bba0-fea881577a8f-config-data\") pod \"ceilometer-0\" (UID: \"edb6897b-0bd5-46f3-bba0-fea881577a8f\") " pod="openstack/ceilometer-0" Feb 17 20:29:52 crc kubenswrapper[4793]: I0217 20:29:52.457786 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/edb6897b-0bd5-46f3-bba0-fea881577a8f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"edb6897b-0bd5-46f3-bba0-fea881577a8f\") " pod="openstack/ceilometer-0" Feb 17 20:29:52 crc kubenswrapper[4793]: I0217 20:29:52.458474 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edb6897b-0bd5-46f3-bba0-fea881577a8f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"edb6897b-0bd5-46f3-bba0-fea881577a8f\") " pod="openstack/ceilometer-0" Feb 17 20:29:52 crc kubenswrapper[4793]: I0217 20:29:52.464372 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbsvv\" 
(UniqueName: \"kubernetes.io/projected/edb6897b-0bd5-46f3-bba0-fea881577a8f-kube-api-access-nbsvv\") pod \"ceilometer-0\" (UID: \"edb6897b-0bd5-46f3-bba0-fea881577a8f\") " pod="openstack/ceilometer-0" Feb 17 20:29:52 crc kubenswrapper[4793]: I0217 20:29:52.468118 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 20:29:52 crc kubenswrapper[4793]: W0217 20:29:52.925494 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podedb6897b_0bd5_46f3_bba0_fea881577a8f.slice/crio-5cdbf1405c61c05f87744d717854933a8593ff4a7fe79bc183d4a3827b49c6f3 WatchSource:0}: Error finding container 5cdbf1405c61c05f87744d717854933a8593ff4a7fe79bc183d4a3827b49c6f3: Status 404 returned error can't find the container with id 5cdbf1405c61c05f87744d717854933a8593ff4a7fe79bc183d4a3827b49c6f3 Feb 17 20:29:52 crc kubenswrapper[4793]: I0217 20:29:52.926458 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 20:29:53 crc kubenswrapper[4793]: I0217 20:29:53.014493 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"edb6897b-0bd5-46f3-bba0-fea881577a8f","Type":"ContainerStarted","Data":"5cdbf1405c61c05f87744d717854933a8593ff4a7fe79bc183d4a3827b49c6f3"} Feb 17 20:29:53 crc kubenswrapper[4793]: I0217 20:29:53.551333 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82bf9c51-70a8-42f2-9e28-9be9ee9284ef" path="/var/lib/kubelet/pods/82bf9c51-70a8-42f2-9e28-9be9ee9284ef/volumes" Feb 17 20:29:54 crc kubenswrapper[4793]: I0217 20:29:54.027381 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"edb6897b-0bd5-46f3-bba0-fea881577a8f","Type":"ContainerStarted","Data":"26ffe6375e12c4b5c767481f160cb858d1dfe1560a31a786c7f2825883f47d52"} Feb 17 20:29:54 crc kubenswrapper[4793]: I0217 20:29:54.027422 4793 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"edb6897b-0bd5-46f3-bba0-fea881577a8f","Type":"ContainerStarted","Data":"790255c88a1ab7865536a9b9b9d9279d6f466d54fe91b74bf1073fea36288b4b"} Feb 17 20:29:54 crc kubenswrapper[4793]: I0217 20:29:54.291872 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 17 20:29:54 crc kubenswrapper[4793]: I0217 20:29:54.308612 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 17 20:29:55 crc kubenswrapper[4793]: I0217 20:29:55.056810 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"edb6897b-0bd5-46f3-bba0-fea881577a8f","Type":"ContainerStarted","Data":"4b2329d98e5ad54468ca14e87299ac1bcb1107945cb83f904aa4e45953c7fa40"} Feb 17 20:29:55 crc kubenswrapper[4793]: I0217 20:29:55.077842 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 17 20:29:55 crc kubenswrapper[4793]: I0217 20:29:55.255473 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-tnp4t"] Feb 17 20:29:55 crc kubenswrapper[4793]: I0217 20:29:55.257185 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-tnp4t" Feb 17 20:29:55 crc kubenswrapper[4793]: I0217 20:29:55.263425 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 17 20:29:55 crc kubenswrapper[4793]: I0217 20:29:55.263643 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 17 20:29:55 crc kubenswrapper[4793]: I0217 20:29:55.271591 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-tnp4t"] Feb 17 20:29:55 crc kubenswrapper[4793]: I0217 20:29:55.402340 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c590626-10f6-4866-afb8-a765d8692f9f-config-data\") pod \"nova-cell1-cell-mapping-tnp4t\" (UID: \"7c590626-10f6-4866-afb8-a765d8692f9f\") " pod="openstack/nova-cell1-cell-mapping-tnp4t" Feb 17 20:29:55 crc kubenswrapper[4793]: I0217 20:29:55.402449 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c590626-10f6-4866-afb8-a765d8692f9f-scripts\") pod \"nova-cell1-cell-mapping-tnp4t\" (UID: \"7c590626-10f6-4866-afb8-a765d8692f9f\") " pod="openstack/nova-cell1-cell-mapping-tnp4t" Feb 17 20:29:55 crc kubenswrapper[4793]: I0217 20:29:55.402508 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc66f\" (UniqueName: \"kubernetes.io/projected/7c590626-10f6-4866-afb8-a765d8692f9f-kube-api-access-tc66f\") pod \"nova-cell1-cell-mapping-tnp4t\" (UID: \"7c590626-10f6-4866-afb8-a765d8692f9f\") " pod="openstack/nova-cell1-cell-mapping-tnp4t" Feb 17 20:29:55 crc kubenswrapper[4793]: I0217 20:29:55.402540 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7c590626-10f6-4866-afb8-a765d8692f9f-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-tnp4t\" (UID: \"7c590626-10f6-4866-afb8-a765d8692f9f\") " pod="openstack/nova-cell1-cell-mapping-tnp4t" Feb 17 20:29:55 crc kubenswrapper[4793]: I0217 20:29:55.433830 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6cb4f6457c-ffpdz" Feb 17 20:29:55 crc kubenswrapper[4793]: I0217 20:29:55.505302 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c590626-10f6-4866-afb8-a765d8692f9f-config-data\") pod \"nova-cell1-cell-mapping-tnp4t\" (UID: \"7c590626-10f6-4866-afb8-a765d8692f9f\") " pod="openstack/nova-cell1-cell-mapping-tnp4t" Feb 17 20:29:55 crc kubenswrapper[4793]: I0217 20:29:55.505405 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c590626-10f6-4866-afb8-a765d8692f9f-scripts\") pod \"nova-cell1-cell-mapping-tnp4t\" (UID: \"7c590626-10f6-4866-afb8-a765d8692f9f\") " pod="openstack/nova-cell1-cell-mapping-tnp4t" Feb 17 20:29:55 crc kubenswrapper[4793]: I0217 20:29:55.505457 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tc66f\" (UniqueName: \"kubernetes.io/projected/7c590626-10f6-4866-afb8-a765d8692f9f-kube-api-access-tc66f\") pod \"nova-cell1-cell-mapping-tnp4t\" (UID: \"7c590626-10f6-4866-afb8-a765d8692f9f\") " pod="openstack/nova-cell1-cell-mapping-tnp4t" Feb 17 20:29:55 crc kubenswrapper[4793]: I0217 20:29:55.529638 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c590626-10f6-4866-afb8-a765d8692f9f-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-tnp4t\" (UID: \"7c590626-10f6-4866-afb8-a765d8692f9f\") " pod="openstack/nova-cell1-cell-mapping-tnp4t" Feb 17 20:29:55 crc kubenswrapper[4793]: 
I0217 20:29:55.543786 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c590626-10f6-4866-afb8-a765d8692f9f-scripts\") pod \"nova-cell1-cell-mapping-tnp4t\" (UID: \"7c590626-10f6-4866-afb8-a765d8692f9f\") " pod="openstack/nova-cell1-cell-mapping-tnp4t" Feb 17 20:29:55 crc kubenswrapper[4793]: I0217 20:29:55.564979 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c590626-10f6-4866-afb8-a765d8692f9f-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-tnp4t\" (UID: \"7c590626-10f6-4866-afb8-a765d8692f9f\") " pod="openstack/nova-cell1-cell-mapping-tnp4t" Feb 17 20:29:55 crc kubenswrapper[4793]: I0217 20:29:55.569996 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c590626-10f6-4866-afb8-a765d8692f9f-config-data\") pod \"nova-cell1-cell-mapping-tnp4t\" (UID: \"7c590626-10f6-4866-afb8-a765d8692f9f\") " pod="openstack/nova-cell1-cell-mapping-tnp4t" Feb 17 20:29:55 crc kubenswrapper[4793]: I0217 20:29:55.584290 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tc66f\" (UniqueName: \"kubernetes.io/projected/7c590626-10f6-4866-afb8-a765d8692f9f-kube-api-access-tc66f\") pod \"nova-cell1-cell-mapping-tnp4t\" (UID: \"7c590626-10f6-4866-afb8-a765d8692f9f\") " pod="openstack/nova-cell1-cell-mapping-tnp4t" Feb 17 20:29:55 crc kubenswrapper[4793]: I0217 20:29:55.609518 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68fc58f487-92zf8"] Feb 17 20:29:55 crc kubenswrapper[4793]: I0217 20:29:55.609759 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-68fc58f487-92zf8" podUID="9ff06b45-da92-4c62-a974-7a51d30a16ed" containerName="dnsmasq-dns" containerID="cri-o://614feb6a377835fbafd17aef4eb9562fad0dea7793dced49c92e124b37cd0d7f" 
gracePeriod=10 Feb 17 20:29:55 crc kubenswrapper[4793]: I0217 20:29:55.782161 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-tnp4t" Feb 17 20:29:56 crc kubenswrapper[4793]: I0217 20:29:56.080984 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"edb6897b-0bd5-46f3-bba0-fea881577a8f","Type":"ContainerStarted","Data":"a1cb82887493d17dfa686d8b430a74406a0587c6cc22f6c393ba18c9970b36e1"} Feb 17 20:29:56 crc kubenswrapper[4793]: I0217 20:29:56.081387 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 20:29:56 crc kubenswrapper[4793]: I0217 20:29:56.086549 4793 generic.go:334] "Generic (PLEG): container finished" podID="9ff06b45-da92-4c62-a974-7a51d30a16ed" containerID="614feb6a377835fbafd17aef4eb9562fad0dea7793dced49c92e124b37cd0d7f" exitCode=0 Feb 17 20:29:56 crc kubenswrapper[4793]: I0217 20:29:56.087767 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68fc58f487-92zf8" event={"ID":"9ff06b45-da92-4c62-a974-7a51d30a16ed","Type":"ContainerDied","Data":"614feb6a377835fbafd17aef4eb9562fad0dea7793dced49c92e124b37cd0d7f"} Feb 17 20:29:56 crc kubenswrapper[4793]: I0217 20:29:56.103523 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68fc58f487-92zf8" Feb 17 20:29:56 crc kubenswrapper[4793]: I0217 20:29:56.104297 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.7510726509999999 podStartE2EDuration="4.104265608s" podCreationTimestamp="2026-02-17 20:29:52 +0000 UTC" firstStartedPulling="2026-02-17 20:29:52.927863892 +0000 UTC m=+1268.219562203" lastFinishedPulling="2026-02-17 20:29:55.281056849 +0000 UTC m=+1270.572755160" observedRunningTime="2026-02-17 20:29:56.100597937 +0000 UTC m=+1271.392296248" watchObservedRunningTime="2026-02-17 20:29:56.104265608 +0000 UTC m=+1271.395963919" Feb 17 20:29:56 crc kubenswrapper[4793]: I0217 20:29:56.252359 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ff06b45-da92-4c62-a974-7a51d30a16ed-dns-swift-storage-0\") pod \"9ff06b45-da92-4c62-a974-7a51d30a16ed\" (UID: \"9ff06b45-da92-4c62-a974-7a51d30a16ed\") " Feb 17 20:29:56 crc kubenswrapper[4793]: I0217 20:29:56.252434 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ff06b45-da92-4c62-a974-7a51d30a16ed-dns-svc\") pod \"9ff06b45-da92-4c62-a974-7a51d30a16ed\" (UID: \"9ff06b45-da92-4c62-a974-7a51d30a16ed\") " Feb 17 20:29:56 crc kubenswrapper[4793]: I0217 20:29:56.252507 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ff06b45-da92-4c62-a974-7a51d30a16ed-ovsdbserver-sb\") pod \"9ff06b45-da92-4c62-a974-7a51d30a16ed\" (UID: \"9ff06b45-da92-4c62-a974-7a51d30a16ed\") " Feb 17 20:29:56 crc kubenswrapper[4793]: I0217 20:29:56.252551 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sc2d8\" (UniqueName: 
\"kubernetes.io/projected/9ff06b45-da92-4c62-a974-7a51d30a16ed-kube-api-access-sc2d8\") pod \"9ff06b45-da92-4c62-a974-7a51d30a16ed\" (UID: \"9ff06b45-da92-4c62-a974-7a51d30a16ed\") " Feb 17 20:29:56 crc kubenswrapper[4793]: I0217 20:29:56.252594 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ff06b45-da92-4c62-a974-7a51d30a16ed-ovsdbserver-nb\") pod \"9ff06b45-da92-4c62-a974-7a51d30a16ed\" (UID: \"9ff06b45-da92-4c62-a974-7a51d30a16ed\") " Feb 17 20:29:56 crc kubenswrapper[4793]: I0217 20:29:56.252680 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ff06b45-da92-4c62-a974-7a51d30a16ed-config\") pod \"9ff06b45-da92-4c62-a974-7a51d30a16ed\" (UID: \"9ff06b45-da92-4c62-a974-7a51d30a16ed\") " Feb 17 20:29:56 crc kubenswrapper[4793]: I0217 20:29:56.259687 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ff06b45-da92-4c62-a974-7a51d30a16ed-kube-api-access-sc2d8" (OuterVolumeSpecName: "kube-api-access-sc2d8") pod "9ff06b45-da92-4c62-a974-7a51d30a16ed" (UID: "9ff06b45-da92-4c62-a974-7a51d30a16ed"). InnerVolumeSpecName "kube-api-access-sc2d8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:29:56 crc kubenswrapper[4793]: I0217 20:29:56.313559 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ff06b45-da92-4c62-a974-7a51d30a16ed-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9ff06b45-da92-4c62-a974-7a51d30a16ed" (UID: "9ff06b45-da92-4c62-a974-7a51d30a16ed"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:29:56 crc kubenswrapper[4793]: I0217 20:29:56.331363 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ff06b45-da92-4c62-a974-7a51d30a16ed-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9ff06b45-da92-4c62-a974-7a51d30a16ed" (UID: "9ff06b45-da92-4c62-a974-7a51d30a16ed"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:29:56 crc kubenswrapper[4793]: I0217 20:29:56.336289 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ff06b45-da92-4c62-a974-7a51d30a16ed-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9ff06b45-da92-4c62-a974-7a51d30a16ed" (UID: "9ff06b45-da92-4c62-a974-7a51d30a16ed"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:29:56 crc kubenswrapper[4793]: I0217 20:29:56.340268 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ff06b45-da92-4c62-a974-7a51d30a16ed-config" (OuterVolumeSpecName: "config") pod "9ff06b45-da92-4c62-a974-7a51d30a16ed" (UID: "9ff06b45-da92-4c62-a974-7a51d30a16ed"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:29:56 crc kubenswrapper[4793]: I0217 20:29:56.353841 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ff06b45-da92-4c62-a974-7a51d30a16ed-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9ff06b45-da92-4c62-a974-7a51d30a16ed" (UID: "9ff06b45-da92-4c62-a974-7a51d30a16ed"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:29:56 crc kubenswrapper[4793]: I0217 20:29:56.360805 4793 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ff06b45-da92-4c62-a974-7a51d30a16ed-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 20:29:56 crc kubenswrapper[4793]: I0217 20:29:56.360843 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ff06b45-da92-4c62-a974-7a51d30a16ed-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 20:29:56 crc kubenswrapper[4793]: I0217 20:29:56.360857 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sc2d8\" (UniqueName: \"kubernetes.io/projected/9ff06b45-da92-4c62-a974-7a51d30a16ed-kube-api-access-sc2d8\") on node \"crc\" DevicePath \"\"" Feb 17 20:29:56 crc kubenswrapper[4793]: I0217 20:29:56.360868 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ff06b45-da92-4c62-a974-7a51d30a16ed-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 20:29:56 crc kubenswrapper[4793]: I0217 20:29:56.360881 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ff06b45-da92-4c62-a974-7a51d30a16ed-config\") on node \"crc\" DevicePath \"\"" Feb 17 20:29:56 crc kubenswrapper[4793]: I0217 20:29:56.360891 4793 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ff06b45-da92-4c62-a974-7a51d30a16ed-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 20:29:56 crc kubenswrapper[4793]: I0217 20:29:56.377459 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-tnp4t"] Feb 17 20:29:57 crc kubenswrapper[4793]: I0217 20:29:57.105270 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68fc58f487-92zf8" 
event={"ID":"9ff06b45-da92-4c62-a974-7a51d30a16ed","Type":"ContainerDied","Data":"5f931c2d1791fedaea3ee1cdad57f8256b58023a418fbb529d74dd430540769f"} Feb 17 20:29:57 crc kubenswrapper[4793]: I0217 20:29:57.105683 4793 scope.go:117] "RemoveContainer" containerID="614feb6a377835fbafd17aef4eb9562fad0dea7793dced49c92e124b37cd0d7f" Feb 17 20:29:57 crc kubenswrapper[4793]: I0217 20:29:57.105548 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68fc58f487-92zf8" Feb 17 20:29:57 crc kubenswrapper[4793]: I0217 20:29:57.157875 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-tnp4t" event={"ID":"7c590626-10f6-4866-afb8-a765d8692f9f","Type":"ContainerStarted","Data":"88bf108f9fe48bf18b0439c8057371e6e7939a7a3766867d8fedf06734a0827b"} Feb 17 20:29:57 crc kubenswrapper[4793]: I0217 20:29:57.158085 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-tnp4t" event={"ID":"7c590626-10f6-4866-afb8-a765d8692f9f","Type":"ContainerStarted","Data":"d4cea8ab2228153d5fa896b8f26ad5e620a20289c490e1a6089ada8afe4ef0bd"} Feb 17 20:29:57 crc kubenswrapper[4793]: I0217 20:29:57.244919 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-tnp4t" podStartSLOduration=2.2448971699999998 podStartE2EDuration="2.24489717s" podCreationTimestamp="2026-02-17 20:29:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:29:57.202156449 +0000 UTC m=+1272.493854760" watchObservedRunningTime="2026-02-17 20:29:57.24489717 +0000 UTC m=+1272.536595481" Feb 17 20:29:57 crc kubenswrapper[4793]: I0217 20:29:57.262915 4793 scope.go:117] "RemoveContainer" containerID="716312813c7a3a9bad9f6b90a61ec138b603d8ae4ad234f9204ad2991d62c17f" Feb 17 20:29:57 crc kubenswrapper[4793]: I0217 20:29:57.277206 4793 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/dnsmasq-dns-68fc58f487-92zf8"] Feb 17 20:29:57 crc kubenswrapper[4793]: I0217 20:29:57.294213 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68fc58f487-92zf8"] Feb 17 20:29:57 crc kubenswrapper[4793]: I0217 20:29:57.549190 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ff06b45-da92-4c62-a974-7a51d30a16ed" path="/var/lib/kubelet/pods/9ff06b45-da92-4c62-a974-7a51d30a16ed/volumes" Feb 17 20:30:00 crc kubenswrapper[4793]: I0217 20:30:00.147522 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522670-qld8m"] Feb 17 20:30:00 crc kubenswrapper[4793]: E0217 20:30:00.148768 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ff06b45-da92-4c62-a974-7a51d30a16ed" containerName="init" Feb 17 20:30:00 crc kubenswrapper[4793]: I0217 20:30:00.148783 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ff06b45-da92-4c62-a974-7a51d30a16ed" containerName="init" Feb 17 20:30:00 crc kubenswrapper[4793]: E0217 20:30:00.148798 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ff06b45-da92-4c62-a974-7a51d30a16ed" containerName="dnsmasq-dns" Feb 17 20:30:00 crc kubenswrapper[4793]: I0217 20:30:00.148804 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ff06b45-da92-4c62-a974-7a51d30a16ed" containerName="dnsmasq-dns" Feb 17 20:30:00 crc kubenswrapper[4793]: I0217 20:30:00.148994 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ff06b45-da92-4c62-a974-7a51d30a16ed" containerName="dnsmasq-dns" Feb 17 20:30:00 crc kubenswrapper[4793]: I0217 20:30:00.149729 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522670-qld8m" Feb 17 20:30:00 crc kubenswrapper[4793]: I0217 20:30:00.151859 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 20:30:00 crc kubenswrapper[4793]: I0217 20:30:00.151856 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 20:30:00 crc kubenswrapper[4793]: I0217 20:30:00.160145 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522670-qld8m"] Feb 17 20:30:00 crc kubenswrapper[4793]: I0217 20:30:00.248476 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvfhf\" (UniqueName: \"kubernetes.io/projected/50b6d4ed-8f06-4ae1-aea1-772d90636d7e-kube-api-access-fvfhf\") pod \"collect-profiles-29522670-qld8m\" (UID: \"50b6d4ed-8f06-4ae1-aea1-772d90636d7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522670-qld8m" Feb 17 20:30:00 crc kubenswrapper[4793]: I0217 20:30:00.248618 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50b6d4ed-8f06-4ae1-aea1-772d90636d7e-config-volume\") pod \"collect-profiles-29522670-qld8m\" (UID: \"50b6d4ed-8f06-4ae1-aea1-772d90636d7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522670-qld8m" Feb 17 20:30:00 crc kubenswrapper[4793]: I0217 20:30:00.248647 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/50b6d4ed-8f06-4ae1-aea1-772d90636d7e-secret-volume\") pod \"collect-profiles-29522670-qld8m\" (UID: \"50b6d4ed-8f06-4ae1-aea1-772d90636d7e\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29522670-qld8m" Feb 17 20:30:00 crc kubenswrapper[4793]: I0217 20:30:00.351325 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50b6d4ed-8f06-4ae1-aea1-772d90636d7e-config-volume\") pod \"collect-profiles-29522670-qld8m\" (UID: \"50b6d4ed-8f06-4ae1-aea1-772d90636d7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522670-qld8m" Feb 17 20:30:00 crc kubenswrapper[4793]: I0217 20:30:00.351403 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/50b6d4ed-8f06-4ae1-aea1-772d90636d7e-secret-volume\") pod \"collect-profiles-29522670-qld8m\" (UID: \"50b6d4ed-8f06-4ae1-aea1-772d90636d7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522670-qld8m" Feb 17 20:30:00 crc kubenswrapper[4793]: I0217 20:30:00.351793 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvfhf\" (UniqueName: \"kubernetes.io/projected/50b6d4ed-8f06-4ae1-aea1-772d90636d7e-kube-api-access-fvfhf\") pod \"collect-profiles-29522670-qld8m\" (UID: \"50b6d4ed-8f06-4ae1-aea1-772d90636d7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522670-qld8m" Feb 17 20:30:00 crc kubenswrapper[4793]: I0217 20:30:00.353105 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50b6d4ed-8f06-4ae1-aea1-772d90636d7e-config-volume\") pod \"collect-profiles-29522670-qld8m\" (UID: \"50b6d4ed-8f06-4ae1-aea1-772d90636d7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522670-qld8m" Feb 17 20:30:00 crc kubenswrapper[4793]: I0217 20:30:00.369532 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/50b6d4ed-8f06-4ae1-aea1-772d90636d7e-secret-volume\") pod \"collect-profiles-29522670-qld8m\" (UID: \"50b6d4ed-8f06-4ae1-aea1-772d90636d7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522670-qld8m" Feb 17 20:30:00 crc kubenswrapper[4793]: I0217 20:30:00.375071 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvfhf\" (UniqueName: \"kubernetes.io/projected/50b6d4ed-8f06-4ae1-aea1-772d90636d7e-kube-api-access-fvfhf\") pod \"collect-profiles-29522670-qld8m\" (UID: \"50b6d4ed-8f06-4ae1-aea1-772d90636d7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522670-qld8m" Feb 17 20:30:00 crc kubenswrapper[4793]: I0217 20:30:00.375814 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 20:30:00 crc kubenswrapper[4793]: I0217 20:30:00.376613 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 20:30:00 crc kubenswrapper[4793]: I0217 20:30:00.473648 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522670-qld8m" Feb 17 20:30:00 crc kubenswrapper[4793]: I0217 20:30:00.927781 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522670-qld8m"] Feb 17 20:30:01 crc kubenswrapper[4793]: I0217 20:30:01.197522 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522670-qld8m" event={"ID":"50b6d4ed-8f06-4ae1-aea1-772d90636d7e","Type":"ContainerStarted","Data":"e8eca17f02b52d6d94c88aa39c4ea84b14fbb91595dc3be70ff5e16a1651f79c"} Feb 17 20:30:01 crc kubenswrapper[4793]: I0217 20:30:01.197579 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522670-qld8m" event={"ID":"50b6d4ed-8f06-4ae1-aea1-772d90636d7e","Type":"ContainerStarted","Data":"501ea23f0bd6a6c7c0965abcce489748f57bbe72682f7234f9842120cffe3b0e"} Feb 17 20:30:01 crc kubenswrapper[4793]: I0217 20:30:01.224269 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29522670-qld8m" podStartSLOduration=1.2242505399999999 podStartE2EDuration="1.22425054s" podCreationTimestamp="2026-02-17 20:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:30:01.222722932 +0000 UTC m=+1276.514421253" watchObservedRunningTime="2026-02-17 20:30:01.22425054 +0000 UTC m=+1276.515948851" Feb 17 20:30:01 crc kubenswrapper[4793]: I0217 20:30:01.406876 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="55f15754-c913-412a-a53c-ee7eebad65a0" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.226:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 20:30:01 crc kubenswrapper[4793]: I0217 
20:30:01.406889 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="55f15754-c913-412a-a53c-ee7eebad65a0" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.226:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 20:30:01 crc kubenswrapper[4793]: I0217 20:30:01.538977 4793 scope.go:117] "RemoveContainer" containerID="931e47d759f3adcee541e6fe68bc8ebef51e63ece70add63ea24cdb6c500de3e" Feb 17 20:30:01 crc kubenswrapper[4793]: E0217 20:30:01.539445 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:30:02 crc kubenswrapper[4793]: I0217 20:30:02.220473 4793 generic.go:334] "Generic (PLEG): container finished" podID="7c590626-10f6-4866-afb8-a765d8692f9f" containerID="88bf108f9fe48bf18b0439c8057371e6e7939a7a3766867d8fedf06734a0827b" exitCode=0 Feb 17 20:30:02 crc kubenswrapper[4793]: I0217 20:30:02.221140 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-tnp4t" event={"ID":"7c590626-10f6-4866-afb8-a765d8692f9f","Type":"ContainerDied","Data":"88bf108f9fe48bf18b0439c8057371e6e7939a7a3766867d8fedf06734a0827b"} Feb 17 20:30:02 crc kubenswrapper[4793]: I0217 20:30:02.224365 4793 generic.go:334] "Generic (PLEG): container finished" podID="50b6d4ed-8f06-4ae1-aea1-772d90636d7e" containerID="e8eca17f02b52d6d94c88aa39c4ea84b14fbb91595dc3be70ff5e16a1651f79c" exitCode=0 Feb 17 20:30:02 crc kubenswrapper[4793]: I0217 20:30:02.224412 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522670-qld8m" 
event={"ID":"50b6d4ed-8f06-4ae1-aea1-772d90636d7e","Type":"ContainerDied","Data":"e8eca17f02b52d6d94c88aa39c4ea84b14fbb91595dc3be70ff5e16a1651f79c"} Feb 17 20:30:03 crc kubenswrapper[4793]: I0217 20:30:03.686809 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-tnp4t" Feb 17 20:30:03 crc kubenswrapper[4793]: I0217 20:30:03.693067 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522670-qld8m" Feb 17 20:30:03 crc kubenswrapper[4793]: I0217 20:30:03.837626 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tc66f\" (UniqueName: \"kubernetes.io/projected/7c590626-10f6-4866-afb8-a765d8692f9f-kube-api-access-tc66f\") pod \"7c590626-10f6-4866-afb8-a765d8692f9f\" (UID: \"7c590626-10f6-4866-afb8-a765d8692f9f\") " Feb 17 20:30:03 crc kubenswrapper[4793]: I0217 20:30:03.837677 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c590626-10f6-4866-afb8-a765d8692f9f-scripts\") pod \"7c590626-10f6-4866-afb8-a765d8692f9f\" (UID: \"7c590626-10f6-4866-afb8-a765d8692f9f\") " Feb 17 20:30:03 crc kubenswrapper[4793]: I0217 20:30:03.837714 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c590626-10f6-4866-afb8-a765d8692f9f-combined-ca-bundle\") pod \"7c590626-10f6-4866-afb8-a765d8692f9f\" (UID: \"7c590626-10f6-4866-afb8-a765d8692f9f\") " Feb 17 20:30:03 crc kubenswrapper[4793]: I0217 20:30:03.837759 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/50b6d4ed-8f06-4ae1-aea1-772d90636d7e-secret-volume\") pod \"50b6d4ed-8f06-4ae1-aea1-772d90636d7e\" (UID: \"50b6d4ed-8f06-4ae1-aea1-772d90636d7e\") " Feb 17 20:30:03 crc 
kubenswrapper[4793]: I0217 20:30:03.837891 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c590626-10f6-4866-afb8-a765d8692f9f-config-data\") pod \"7c590626-10f6-4866-afb8-a765d8692f9f\" (UID: \"7c590626-10f6-4866-afb8-a765d8692f9f\") " Feb 17 20:30:03 crc kubenswrapper[4793]: I0217 20:30:03.837926 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50b6d4ed-8f06-4ae1-aea1-772d90636d7e-config-volume\") pod \"50b6d4ed-8f06-4ae1-aea1-772d90636d7e\" (UID: \"50b6d4ed-8f06-4ae1-aea1-772d90636d7e\") " Feb 17 20:30:03 crc kubenswrapper[4793]: I0217 20:30:03.838546 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50b6d4ed-8f06-4ae1-aea1-772d90636d7e-config-volume" (OuterVolumeSpecName: "config-volume") pod "50b6d4ed-8f06-4ae1-aea1-772d90636d7e" (UID: "50b6d4ed-8f06-4ae1-aea1-772d90636d7e"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 20:30:03 crc kubenswrapper[4793]: I0217 20:30:03.838051 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvfhf\" (UniqueName: \"kubernetes.io/projected/50b6d4ed-8f06-4ae1-aea1-772d90636d7e-kube-api-access-fvfhf\") pod \"50b6d4ed-8f06-4ae1-aea1-772d90636d7e\" (UID: \"50b6d4ed-8f06-4ae1-aea1-772d90636d7e\") " Feb 17 20:30:03 crc kubenswrapper[4793]: I0217 20:30:03.839165 4793 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50b6d4ed-8f06-4ae1-aea1-772d90636d7e-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 20:30:03 crc kubenswrapper[4793]: I0217 20:30:03.843735 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50b6d4ed-8f06-4ae1-aea1-772d90636d7e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "50b6d4ed-8f06-4ae1-aea1-772d90636d7e" (UID: "50b6d4ed-8f06-4ae1-aea1-772d90636d7e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:30:03 crc kubenswrapper[4793]: I0217 20:30:03.843799 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c590626-10f6-4866-afb8-a765d8692f9f-kube-api-access-tc66f" (OuterVolumeSpecName: "kube-api-access-tc66f") pod "7c590626-10f6-4866-afb8-a765d8692f9f" (UID: "7c590626-10f6-4866-afb8-a765d8692f9f"). InnerVolumeSpecName "kube-api-access-tc66f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:30:03 crc kubenswrapper[4793]: I0217 20:30:03.844218 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c590626-10f6-4866-afb8-a765d8692f9f-scripts" (OuterVolumeSpecName: "scripts") pod "7c590626-10f6-4866-afb8-a765d8692f9f" (UID: "7c590626-10f6-4866-afb8-a765d8692f9f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:30:03 crc kubenswrapper[4793]: I0217 20:30:03.848059 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50b6d4ed-8f06-4ae1-aea1-772d90636d7e-kube-api-access-fvfhf" (OuterVolumeSpecName: "kube-api-access-fvfhf") pod "50b6d4ed-8f06-4ae1-aea1-772d90636d7e" (UID: "50b6d4ed-8f06-4ae1-aea1-772d90636d7e"). InnerVolumeSpecName "kube-api-access-fvfhf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:30:03 crc kubenswrapper[4793]: I0217 20:30:03.868235 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c590626-10f6-4866-afb8-a765d8692f9f-config-data" (OuterVolumeSpecName: "config-data") pod "7c590626-10f6-4866-afb8-a765d8692f9f" (UID: "7c590626-10f6-4866-afb8-a765d8692f9f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:30:03 crc kubenswrapper[4793]: I0217 20:30:03.876471 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c590626-10f6-4866-afb8-a765d8692f9f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7c590626-10f6-4866-afb8-a765d8692f9f" (UID: "7c590626-10f6-4866-afb8-a765d8692f9f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:30:03 crc kubenswrapper[4793]: I0217 20:30:03.940727 4793 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/50b6d4ed-8f06-4ae1-aea1-772d90636d7e-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 20:30:03 crc kubenswrapper[4793]: I0217 20:30:03.940758 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c590626-10f6-4866-afb8-a765d8692f9f-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 20:30:03 crc kubenswrapper[4793]: I0217 20:30:03.940768 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvfhf\" (UniqueName: \"kubernetes.io/projected/50b6d4ed-8f06-4ae1-aea1-772d90636d7e-kube-api-access-fvfhf\") on node \"crc\" DevicePath \"\"" Feb 17 20:30:03 crc kubenswrapper[4793]: I0217 20:30:03.940777 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tc66f\" (UniqueName: \"kubernetes.io/projected/7c590626-10f6-4866-afb8-a765d8692f9f-kube-api-access-tc66f\") on node \"crc\" DevicePath \"\"" Feb 17 20:30:03 crc kubenswrapper[4793]: I0217 20:30:03.940786 4793 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c590626-10f6-4866-afb8-a765d8692f9f-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 20:30:03 crc kubenswrapper[4793]: I0217 20:30:03.940794 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c590626-10f6-4866-afb8-a765d8692f9f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 20:30:04 crc kubenswrapper[4793]: I0217 20:30:04.253009 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-tnp4t" event={"ID":"7c590626-10f6-4866-afb8-a765d8692f9f","Type":"ContainerDied","Data":"d4cea8ab2228153d5fa896b8f26ad5e620a20289c490e1a6089ada8afe4ef0bd"} Feb 17 20:30:04 crc 
kubenswrapper[4793]: I0217 20:30:04.253095 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4cea8ab2228153d5fa896b8f26ad5e620a20289c490e1a6089ada8afe4ef0bd" Feb 17 20:30:04 crc kubenswrapper[4793]: I0217 20:30:04.253446 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-tnp4t" Feb 17 20:30:04 crc kubenswrapper[4793]: I0217 20:30:04.256442 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522670-qld8m" event={"ID":"50b6d4ed-8f06-4ae1-aea1-772d90636d7e","Type":"ContainerDied","Data":"501ea23f0bd6a6c7c0965abcce489748f57bbe72682f7234f9842120cffe3b0e"} Feb 17 20:30:04 crc kubenswrapper[4793]: I0217 20:30:04.256489 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="501ea23f0bd6a6c7c0965abcce489748f57bbe72682f7234f9842120cffe3b0e" Feb 17 20:30:04 crc kubenswrapper[4793]: I0217 20:30:04.256561 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522670-qld8m" Feb 17 20:30:04 crc kubenswrapper[4793]: I0217 20:30:04.443955 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 17 20:30:04 crc kubenswrapper[4793]: I0217 20:30:04.444486 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="55f15754-c913-412a-a53c-ee7eebad65a0" containerName="nova-api-log" containerID="cri-o://30e4436d39a5d3843f9affbfaf5b705552378c3339de51124826f5a6daa86061" gracePeriod=30 Feb 17 20:30:04 crc kubenswrapper[4793]: I0217 20:30:04.444631 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="55f15754-c913-412a-a53c-ee7eebad65a0" containerName="nova-api-api" containerID="cri-o://cf5d5b335f4858db25148bd7606057ef6673290b02efd27a7176b323565da16e" gracePeriod=30 Feb 17 20:30:04 crc kubenswrapper[4793]: I0217 20:30:04.471447 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 20:30:04 crc kubenswrapper[4793]: I0217 20:30:04.471652 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="454345ac-3d2b-41e1-bcb3-03c6d1e83e16" containerName="nova-scheduler-scheduler" containerID="cri-o://42fef2ddac784bb0cdace652df28c8954be655bc1c847cedbe03929051e00dff" gracePeriod=30 Feb 17 20:30:04 crc kubenswrapper[4793]: I0217 20:30:04.524046 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 20:30:04 crc kubenswrapper[4793]: I0217 20:30:04.524291 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0e9f812f-0a83-48bb-9048-97706114af57" containerName="nova-metadata-log" containerID="cri-o://4ad8a8b35545febc34f2e63e9be137cd48171692dd02140f400d3f9d0feb1807" gracePeriod=30 Feb 17 20:30:04 crc kubenswrapper[4793]: I0217 
20:30:04.524459 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0e9f812f-0a83-48bb-9048-97706114af57" containerName="nova-metadata-metadata" containerID="cri-o://18f786dafceb9cb0d27161bb922172dc160918808b267228db38b2b181ffb78e" gracePeriod=30 Feb 17 20:30:05 crc kubenswrapper[4793]: E0217 20:30:05.157346 4793 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="42fef2ddac784bb0cdace652df28c8954be655bc1c847cedbe03929051e00dff" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 17 20:30:05 crc kubenswrapper[4793]: E0217 20:30:05.159189 4793 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="42fef2ddac784bb0cdace652df28c8954be655bc1c847cedbe03929051e00dff" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 17 20:30:05 crc kubenswrapper[4793]: E0217 20:30:05.167538 4793 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="42fef2ddac784bb0cdace652df28c8954be655bc1c847cedbe03929051e00dff" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 17 20:30:05 crc kubenswrapper[4793]: E0217 20:30:05.167589 4793 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="454345ac-3d2b-41e1-bcb3-03c6d1e83e16" containerName="nova-scheduler-scheduler" Feb 17 20:30:05 crc kubenswrapper[4793]: I0217 20:30:05.282048 4793 generic.go:334] "Generic (PLEG): container finished" 
podID="55f15754-c913-412a-a53c-ee7eebad65a0" containerID="30e4436d39a5d3843f9affbfaf5b705552378c3339de51124826f5a6daa86061" exitCode=143 Feb 17 20:30:05 crc kubenswrapper[4793]: I0217 20:30:05.282118 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"55f15754-c913-412a-a53c-ee7eebad65a0","Type":"ContainerDied","Data":"30e4436d39a5d3843f9affbfaf5b705552378c3339de51124826f5a6daa86061"} Feb 17 20:30:05 crc kubenswrapper[4793]: I0217 20:30:05.286953 4793 generic.go:334] "Generic (PLEG): container finished" podID="0e9f812f-0a83-48bb-9048-97706114af57" containerID="4ad8a8b35545febc34f2e63e9be137cd48171692dd02140f400d3f9d0feb1807" exitCode=143 Feb 17 20:30:05 crc kubenswrapper[4793]: I0217 20:30:05.286989 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0e9f812f-0a83-48bb-9048-97706114af57","Type":"ContainerDied","Data":"4ad8a8b35545febc34f2e63e9be137cd48171692dd02140f400d3f9d0feb1807"} Feb 17 20:30:05 crc kubenswrapper[4793]: I0217 20:30:05.848672 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 20:30:05 crc kubenswrapper[4793]: I0217 20:30:05.997866 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55f15754-c913-412a-a53c-ee7eebad65a0-logs\") pod \"55f15754-c913-412a-a53c-ee7eebad65a0\" (UID: \"55f15754-c913-412a-a53c-ee7eebad65a0\") " Feb 17 20:30:05 crc kubenswrapper[4793]: I0217 20:30:05.997941 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrq6m\" (UniqueName: \"kubernetes.io/projected/55f15754-c913-412a-a53c-ee7eebad65a0-kube-api-access-rrq6m\") pod \"55f15754-c913-412a-a53c-ee7eebad65a0\" (UID: \"55f15754-c913-412a-a53c-ee7eebad65a0\") " Feb 17 20:30:05 crc kubenswrapper[4793]: I0217 20:30:05.997977 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55f15754-c913-412a-a53c-ee7eebad65a0-config-data\") pod \"55f15754-c913-412a-a53c-ee7eebad65a0\" (UID: \"55f15754-c913-412a-a53c-ee7eebad65a0\") " Feb 17 20:30:05 crc kubenswrapper[4793]: I0217 20:30:05.997997 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55f15754-c913-412a-a53c-ee7eebad65a0-internal-tls-certs\") pod \"55f15754-c913-412a-a53c-ee7eebad65a0\" (UID: \"55f15754-c913-412a-a53c-ee7eebad65a0\") " Feb 17 20:30:05 crc kubenswrapper[4793]: I0217 20:30:05.998038 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55f15754-c913-412a-a53c-ee7eebad65a0-combined-ca-bundle\") pod \"55f15754-c913-412a-a53c-ee7eebad65a0\" (UID: \"55f15754-c913-412a-a53c-ee7eebad65a0\") " Feb 17 20:30:05 crc kubenswrapper[4793]: I0217 20:30:05.998055 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/55f15754-c913-412a-a53c-ee7eebad65a0-public-tls-certs\") pod \"55f15754-c913-412a-a53c-ee7eebad65a0\" (UID: \"55f15754-c913-412a-a53c-ee7eebad65a0\") " Feb 17 20:30:05 crc kubenswrapper[4793]: I0217 20:30:05.998340 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55f15754-c913-412a-a53c-ee7eebad65a0-logs" (OuterVolumeSpecName: "logs") pod "55f15754-c913-412a-a53c-ee7eebad65a0" (UID: "55f15754-c913-412a-a53c-ee7eebad65a0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:30:05 crc kubenswrapper[4793]: I0217 20:30:05.998851 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55f15754-c913-412a-a53c-ee7eebad65a0-logs\") on node \"crc\" DevicePath \"\"" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.004181 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55f15754-c913-412a-a53c-ee7eebad65a0-kube-api-access-rrq6m" (OuterVolumeSpecName: "kube-api-access-rrq6m") pod "55f15754-c913-412a-a53c-ee7eebad65a0" (UID: "55f15754-c913-412a-a53c-ee7eebad65a0"). InnerVolumeSpecName "kube-api-access-rrq6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.029142 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55f15754-c913-412a-a53c-ee7eebad65a0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "55f15754-c913-412a-a53c-ee7eebad65a0" (UID: "55f15754-c913-412a-a53c-ee7eebad65a0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.041953 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55f15754-c913-412a-a53c-ee7eebad65a0-config-data" (OuterVolumeSpecName: "config-data") pod "55f15754-c913-412a-a53c-ee7eebad65a0" (UID: "55f15754-c913-412a-a53c-ee7eebad65a0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.051136 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55f15754-c913-412a-a53c-ee7eebad65a0-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "55f15754-c913-412a-a53c-ee7eebad65a0" (UID: "55f15754-c913-412a-a53c-ee7eebad65a0"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.055199 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55f15754-c913-412a-a53c-ee7eebad65a0-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "55f15754-c913-412a-a53c-ee7eebad65a0" (UID: "55f15754-c913-412a-a53c-ee7eebad65a0"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.100515 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrq6m\" (UniqueName: \"kubernetes.io/projected/55f15754-c913-412a-a53c-ee7eebad65a0-kube-api-access-rrq6m\") on node \"crc\" DevicePath \"\"" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.100549 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55f15754-c913-412a-a53c-ee7eebad65a0-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.100559 4793 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55f15754-c913-412a-a53c-ee7eebad65a0-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.100567 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55f15754-c913-412a-a53c-ee7eebad65a0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.100577 4793 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55f15754-c913-412a-a53c-ee7eebad65a0-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.136944 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.301147 4793 generic.go:334] "Generic (PLEG): container finished" podID="55f15754-c913-412a-a53c-ee7eebad65a0" containerID="cf5d5b335f4858db25148bd7606057ef6673290b02efd27a7176b323565da16e" exitCode=0 Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.301332 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"55f15754-c913-412a-a53c-ee7eebad65a0","Type":"ContainerDied","Data":"cf5d5b335f4858db25148bd7606057ef6673290b02efd27a7176b323565da16e"} Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.301393 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"55f15754-c913-412a-a53c-ee7eebad65a0","Type":"ContainerDied","Data":"971c7cd82b6dd95f13fbe0011588fb20ab6a664ae97f2d516c9799728cc0d53d"} Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.301423 4793 scope.go:117] "RemoveContainer" containerID="cf5d5b335f4858db25148bd7606057ef6673290b02efd27a7176b323565da16e" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.301536 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.303560 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e9f812f-0a83-48bb-9048-97706114af57-logs\") pod \"0e9f812f-0a83-48bb-9048-97706114af57\" (UID: \"0e9f812f-0a83-48bb-9048-97706114af57\") " Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.303721 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e9f812f-0a83-48bb-9048-97706114af57-combined-ca-bundle\") pod \"0e9f812f-0a83-48bb-9048-97706114af57\" (UID: \"0e9f812f-0a83-48bb-9048-97706114af57\") " Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.303784 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bptcq\" (UniqueName: \"kubernetes.io/projected/0e9f812f-0a83-48bb-9048-97706114af57-kube-api-access-bptcq\") pod \"0e9f812f-0a83-48bb-9048-97706114af57\" (UID: \"0e9f812f-0a83-48bb-9048-97706114af57\") " Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.303860 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e9f812f-0a83-48bb-9048-97706114af57-nova-metadata-tls-certs\") pod \"0e9f812f-0a83-48bb-9048-97706114af57\" (UID: \"0e9f812f-0a83-48bb-9048-97706114af57\") " Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.303962 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e9f812f-0a83-48bb-9048-97706114af57-config-data\") pod \"0e9f812f-0a83-48bb-9048-97706114af57\" (UID: \"0e9f812f-0a83-48bb-9048-97706114af57\") " Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.304126 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/0e9f812f-0a83-48bb-9048-97706114af57-logs" (OuterVolumeSpecName: "logs") pod "0e9f812f-0a83-48bb-9048-97706114af57" (UID: "0e9f812f-0a83-48bb-9048-97706114af57"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.304585 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e9f812f-0a83-48bb-9048-97706114af57-logs\") on node \"crc\" DevicePath \"\"" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.308081 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e9f812f-0a83-48bb-9048-97706114af57-kube-api-access-bptcq" (OuterVolumeSpecName: "kube-api-access-bptcq") pod "0e9f812f-0a83-48bb-9048-97706114af57" (UID: "0e9f812f-0a83-48bb-9048-97706114af57"). InnerVolumeSpecName "kube-api-access-bptcq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.308631 4793 generic.go:334] "Generic (PLEG): container finished" podID="0e9f812f-0a83-48bb-9048-97706114af57" containerID="18f786dafceb9cb0d27161bb922172dc160918808b267228db38b2b181ffb78e" exitCode=0 Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.308674 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0e9f812f-0a83-48bb-9048-97706114af57","Type":"ContainerDied","Data":"18f786dafceb9cb0d27161bb922172dc160918808b267228db38b2b181ffb78e"} Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.308721 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0e9f812f-0a83-48bb-9048-97706114af57","Type":"ContainerDied","Data":"0bb71534e116159173514520818d8e1a3afb74d2928238ddbffbd3e0f20d62fa"} Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.308783 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.353827 4793 scope.go:117] "RemoveContainer" containerID="30e4436d39a5d3843f9affbfaf5b705552378c3339de51124826f5a6daa86061" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.354881 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.355046 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e9f812f-0a83-48bb-9048-97706114af57-config-data" (OuterVolumeSpecName: "config-data") pod "0e9f812f-0a83-48bb-9048-97706114af57" (UID: "0e9f812f-0a83-48bb-9048-97706114af57"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.367837 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.398775 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 17 20:30:06 crc kubenswrapper[4793]: E0217 20:30:06.399333 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55f15754-c913-412a-a53c-ee7eebad65a0" containerName="nova-api-api" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.399358 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="55f15754-c913-412a-a53c-ee7eebad65a0" containerName="nova-api-api" Feb 17 20:30:06 crc kubenswrapper[4793]: E0217 20:30:06.399384 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c590626-10f6-4866-afb8-a765d8692f9f" containerName="nova-manage" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.399398 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c590626-10f6-4866-afb8-a765d8692f9f" containerName="nova-manage" Feb 17 20:30:06 crc kubenswrapper[4793]: E0217 20:30:06.399429 4793 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="0e9f812f-0a83-48bb-9048-97706114af57" containerName="nova-metadata-log" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.399440 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e9f812f-0a83-48bb-9048-97706114af57" containerName="nova-metadata-log" Feb 17 20:30:06 crc kubenswrapper[4793]: E0217 20:30:06.399465 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55f15754-c913-412a-a53c-ee7eebad65a0" containerName="nova-api-log" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.399476 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="55f15754-c913-412a-a53c-ee7eebad65a0" containerName="nova-api-log" Feb 17 20:30:06 crc kubenswrapper[4793]: E0217 20:30:06.399498 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50b6d4ed-8f06-4ae1-aea1-772d90636d7e" containerName="collect-profiles" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.399510 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="50b6d4ed-8f06-4ae1-aea1-772d90636d7e" containerName="collect-profiles" Feb 17 20:30:06 crc kubenswrapper[4793]: E0217 20:30:06.399532 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e9f812f-0a83-48bb-9048-97706114af57" containerName="nova-metadata-metadata" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.399545 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e9f812f-0a83-48bb-9048-97706114af57" containerName="nova-metadata-metadata" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.399988 4793 scope.go:117] "RemoveContainer" containerID="cf5d5b335f4858db25148bd7606057ef6673290b02efd27a7176b323565da16e" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.400049 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="50b6d4ed-8f06-4ae1-aea1-772d90636d7e" containerName="collect-profiles" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.400072 4793 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="55f15754-c913-412a-a53c-ee7eebad65a0" containerName="nova-api-log" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.400102 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e9f812f-0a83-48bb-9048-97706114af57" containerName="nova-metadata-metadata" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.400129 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="55f15754-c913-412a-a53c-ee7eebad65a0" containerName="nova-api-api" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.400143 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c590626-10f6-4866-afb8-a765d8692f9f" containerName="nova-manage" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.400202 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e9f812f-0a83-48bb-9048-97706114af57" containerName="nova-metadata-log" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.400917 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e9f812f-0a83-48bb-9048-97706114af57-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0e9f812f-0a83-48bb-9048-97706114af57" (UID: "0e9f812f-0a83-48bb-9048-97706114af57"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:30:06 crc kubenswrapper[4793]: E0217 20:30:06.400761 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf5d5b335f4858db25148bd7606057ef6673290b02efd27a7176b323565da16e\": container with ID starting with cf5d5b335f4858db25148bd7606057ef6673290b02efd27a7176b323565da16e not found: ID does not exist" containerID="cf5d5b335f4858db25148bd7606057ef6673290b02efd27a7176b323565da16e" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.401198 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf5d5b335f4858db25148bd7606057ef6673290b02efd27a7176b323565da16e"} err="failed to get container status \"cf5d5b335f4858db25148bd7606057ef6673290b02efd27a7176b323565da16e\": rpc error: code = NotFound desc = could not find container \"cf5d5b335f4858db25148bd7606057ef6673290b02efd27a7176b323565da16e\": container with ID starting with cf5d5b335f4858db25148bd7606057ef6673290b02efd27a7176b323565da16e not found: ID does not exist" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.401241 4793 scope.go:117] "RemoveContainer" containerID="30e4436d39a5d3843f9affbfaf5b705552378c3339de51124826f5a6daa86061" Feb 17 20:30:06 crc kubenswrapper[4793]: E0217 20:30:06.403036 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30e4436d39a5d3843f9affbfaf5b705552378c3339de51124826f5a6daa86061\": container with ID starting with 30e4436d39a5d3843f9affbfaf5b705552378c3339de51124826f5a6daa86061 not found: ID does not exist" containerID="30e4436d39a5d3843f9affbfaf5b705552378c3339de51124826f5a6daa86061" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.403072 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30e4436d39a5d3843f9affbfaf5b705552378c3339de51124826f5a6daa86061"} err="failed 
to get container status \"30e4436d39a5d3843f9affbfaf5b705552378c3339de51124826f5a6daa86061\": rpc error: code = NotFound desc = could not find container \"30e4436d39a5d3843f9affbfaf5b705552378c3339de51124826f5a6daa86061\": container with ID starting with 30e4436d39a5d3843f9affbfaf5b705552378c3339de51124826f5a6daa86061 not found: ID does not exist" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.403101 4793 scope.go:117] "RemoveContainer" containerID="18f786dafceb9cb0d27161bb922172dc160918808b267228db38b2b181ffb78e" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.407059 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.407333 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e9f812f-0a83-48bb-9048-97706114af57-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.407388 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bptcq\" (UniqueName: \"kubernetes.io/projected/0e9f812f-0a83-48bb-9048-97706114af57-kube-api-access-bptcq\") on node \"crc\" DevicePath \"\"" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.407403 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e9f812f-0a83-48bb-9048-97706114af57-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.408092 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e9f812f-0a83-48bb-9048-97706114af57-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "0e9f812f-0a83-48bb-9048-97706114af57" (UID: "0e9f812f-0a83-48bb-9048-97706114af57"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.412362 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.412797 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.412944 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.417366 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.435059 4793 scope.go:117] "RemoveContainer" containerID="4ad8a8b35545febc34f2e63e9be137cd48171692dd02140f400d3f9d0feb1807" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.457136 4793 scope.go:117] "RemoveContainer" containerID="18f786dafceb9cb0d27161bb922172dc160918808b267228db38b2b181ffb78e" Feb 17 20:30:06 crc kubenswrapper[4793]: E0217 20:30:06.459362 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18f786dafceb9cb0d27161bb922172dc160918808b267228db38b2b181ffb78e\": container with ID starting with 18f786dafceb9cb0d27161bb922172dc160918808b267228db38b2b181ffb78e not found: ID does not exist" containerID="18f786dafceb9cb0d27161bb922172dc160918808b267228db38b2b181ffb78e" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.459417 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18f786dafceb9cb0d27161bb922172dc160918808b267228db38b2b181ffb78e"} err="failed to get container status \"18f786dafceb9cb0d27161bb922172dc160918808b267228db38b2b181ffb78e\": rpc error: code = NotFound desc = could not find container \"18f786dafceb9cb0d27161bb922172dc160918808b267228db38b2b181ffb78e\": container with 
ID starting with 18f786dafceb9cb0d27161bb922172dc160918808b267228db38b2b181ffb78e not found: ID does not exist" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.459461 4793 scope.go:117] "RemoveContainer" containerID="4ad8a8b35545febc34f2e63e9be137cd48171692dd02140f400d3f9d0feb1807" Feb 17 20:30:06 crc kubenswrapper[4793]: E0217 20:30:06.459827 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ad8a8b35545febc34f2e63e9be137cd48171692dd02140f400d3f9d0feb1807\": container with ID starting with 4ad8a8b35545febc34f2e63e9be137cd48171692dd02140f400d3f9d0feb1807 not found: ID does not exist" containerID="4ad8a8b35545febc34f2e63e9be137cd48171692dd02140f400d3f9d0feb1807" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.459849 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ad8a8b35545febc34f2e63e9be137cd48171692dd02140f400d3f9d0feb1807"} err="failed to get container status \"4ad8a8b35545febc34f2e63e9be137cd48171692dd02140f400d3f9d0feb1807\": rpc error: code = NotFound desc = could not find container \"4ad8a8b35545febc34f2e63e9be137cd48171692dd02140f400d3f9d0feb1807\": container with ID starting with 4ad8a8b35545febc34f2e63e9be137cd48171692dd02140f400d3f9d0feb1807 not found: ID does not exist" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.509074 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e39ddba3-31ef-4f6e-90d5-67dd54124ba0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e39ddba3-31ef-4f6e-90d5-67dd54124ba0\") " pod="openstack/nova-api-0" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.509148 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e39ddba3-31ef-4f6e-90d5-67dd54124ba0-logs\") pod 
\"nova-api-0\" (UID: \"e39ddba3-31ef-4f6e-90d5-67dd54124ba0\") " pod="openstack/nova-api-0" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.509171 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e39ddba3-31ef-4f6e-90d5-67dd54124ba0-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e39ddba3-31ef-4f6e-90d5-67dd54124ba0\") " pod="openstack/nova-api-0" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.509212 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9mj5\" (UniqueName: \"kubernetes.io/projected/e39ddba3-31ef-4f6e-90d5-67dd54124ba0-kube-api-access-w9mj5\") pod \"nova-api-0\" (UID: \"e39ddba3-31ef-4f6e-90d5-67dd54124ba0\") " pod="openstack/nova-api-0" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.509230 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e39ddba3-31ef-4f6e-90d5-67dd54124ba0-config-data\") pod \"nova-api-0\" (UID: \"e39ddba3-31ef-4f6e-90d5-67dd54124ba0\") " pod="openstack/nova-api-0" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.509350 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e39ddba3-31ef-4f6e-90d5-67dd54124ba0-public-tls-certs\") pod \"nova-api-0\" (UID: \"e39ddba3-31ef-4f6e-90d5-67dd54124ba0\") " pod="openstack/nova-api-0" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.509618 4793 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e9f812f-0a83-48bb-9048-97706114af57-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.610863 4793 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e39ddba3-31ef-4f6e-90d5-67dd54124ba0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e39ddba3-31ef-4f6e-90d5-67dd54124ba0\") " pod="openstack/nova-api-0" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.610959 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e39ddba3-31ef-4f6e-90d5-67dd54124ba0-logs\") pod \"nova-api-0\" (UID: \"e39ddba3-31ef-4f6e-90d5-67dd54124ba0\") " pod="openstack/nova-api-0" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.611003 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e39ddba3-31ef-4f6e-90d5-67dd54124ba0-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e39ddba3-31ef-4f6e-90d5-67dd54124ba0\") " pod="openstack/nova-api-0" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.611063 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9mj5\" (UniqueName: \"kubernetes.io/projected/e39ddba3-31ef-4f6e-90d5-67dd54124ba0-kube-api-access-w9mj5\") pod \"nova-api-0\" (UID: \"e39ddba3-31ef-4f6e-90d5-67dd54124ba0\") " pod="openstack/nova-api-0" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.611089 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e39ddba3-31ef-4f6e-90d5-67dd54124ba0-config-data\") pod \"nova-api-0\" (UID: \"e39ddba3-31ef-4f6e-90d5-67dd54124ba0\") " pod="openstack/nova-api-0" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.611114 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e39ddba3-31ef-4f6e-90d5-67dd54124ba0-public-tls-certs\") pod \"nova-api-0\" (UID: \"e39ddba3-31ef-4f6e-90d5-67dd54124ba0\") " pod="openstack/nova-api-0" Feb 17 
20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.613076 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e39ddba3-31ef-4f6e-90d5-67dd54124ba0-logs\") pod \"nova-api-0\" (UID: \"e39ddba3-31ef-4f6e-90d5-67dd54124ba0\") " pod="openstack/nova-api-0" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.615178 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e39ddba3-31ef-4f6e-90d5-67dd54124ba0-public-tls-certs\") pod \"nova-api-0\" (UID: \"e39ddba3-31ef-4f6e-90d5-67dd54124ba0\") " pod="openstack/nova-api-0" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.616119 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e39ddba3-31ef-4f6e-90d5-67dd54124ba0-config-data\") pod \"nova-api-0\" (UID: \"e39ddba3-31ef-4f6e-90d5-67dd54124ba0\") " pod="openstack/nova-api-0" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.619154 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e39ddba3-31ef-4f6e-90d5-67dd54124ba0-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e39ddba3-31ef-4f6e-90d5-67dd54124ba0\") " pod="openstack/nova-api-0" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.620893 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e39ddba3-31ef-4f6e-90d5-67dd54124ba0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e39ddba3-31ef-4f6e-90d5-67dd54124ba0\") " pod="openstack/nova-api-0" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.640591 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9mj5\" (UniqueName: \"kubernetes.io/projected/e39ddba3-31ef-4f6e-90d5-67dd54124ba0-kube-api-access-w9mj5\") pod \"nova-api-0\" (UID: 
\"e39ddba3-31ef-4f6e-90d5-67dd54124ba0\") " pod="openstack/nova-api-0" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.643858 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.652269 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.677230 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.679318 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.681213 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.682545 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.688641 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.731225 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.813922 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b7jx\" (UniqueName: \"kubernetes.io/projected/99e9323f-09b5-4ee0-a27d-0698f99071bc-kube-api-access-9b7jx\") pod \"nova-metadata-0\" (UID: \"99e9323f-09b5-4ee0-a27d-0698f99071bc\") " pod="openstack/nova-metadata-0" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.813991 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/99e9323f-09b5-4ee0-a27d-0698f99071bc-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"99e9323f-09b5-4ee0-a27d-0698f99071bc\") " pod="openstack/nova-metadata-0" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.814158 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99e9323f-09b5-4ee0-a27d-0698f99071bc-logs\") pod \"nova-metadata-0\" (UID: \"99e9323f-09b5-4ee0-a27d-0698f99071bc\") " pod="openstack/nova-metadata-0" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.814429 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99e9323f-09b5-4ee0-a27d-0698f99071bc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"99e9323f-09b5-4ee0-a27d-0698f99071bc\") " pod="openstack/nova-metadata-0" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.814465 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99e9323f-09b5-4ee0-a27d-0698f99071bc-config-data\") pod \"nova-metadata-0\" (UID: \"99e9323f-09b5-4ee0-a27d-0698f99071bc\") " pod="openstack/nova-metadata-0" Feb 17 20:30:06 crc 
kubenswrapper[4793]: I0217 20:30:06.916146 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99e9323f-09b5-4ee0-a27d-0698f99071bc-logs\") pod \"nova-metadata-0\" (UID: \"99e9323f-09b5-4ee0-a27d-0698f99071bc\") " pod="openstack/nova-metadata-0" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.917289 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99e9323f-09b5-4ee0-a27d-0698f99071bc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"99e9323f-09b5-4ee0-a27d-0698f99071bc\") " pod="openstack/nova-metadata-0" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.917334 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99e9323f-09b5-4ee0-a27d-0698f99071bc-config-data\") pod \"nova-metadata-0\" (UID: \"99e9323f-09b5-4ee0-a27d-0698f99071bc\") " pod="openstack/nova-metadata-0" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.917525 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b7jx\" (UniqueName: \"kubernetes.io/projected/99e9323f-09b5-4ee0-a27d-0698f99071bc-kube-api-access-9b7jx\") pod \"nova-metadata-0\" (UID: \"99e9323f-09b5-4ee0-a27d-0698f99071bc\") " pod="openstack/nova-metadata-0" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.917610 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/99e9323f-09b5-4ee0-a27d-0698f99071bc-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"99e9323f-09b5-4ee0-a27d-0698f99071bc\") " pod="openstack/nova-metadata-0" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.917663 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/99e9323f-09b5-4ee0-a27d-0698f99071bc-logs\") pod \"nova-metadata-0\" (UID: \"99e9323f-09b5-4ee0-a27d-0698f99071bc\") " pod="openstack/nova-metadata-0" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.923006 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/99e9323f-09b5-4ee0-a27d-0698f99071bc-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"99e9323f-09b5-4ee0-a27d-0698f99071bc\") " pod="openstack/nova-metadata-0" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.923425 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99e9323f-09b5-4ee0-a27d-0698f99071bc-config-data\") pod \"nova-metadata-0\" (UID: \"99e9323f-09b5-4ee0-a27d-0698f99071bc\") " pod="openstack/nova-metadata-0" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.924978 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99e9323f-09b5-4ee0-a27d-0698f99071bc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"99e9323f-09b5-4ee0-a27d-0698f99071bc\") " pod="openstack/nova-metadata-0" Feb 17 20:30:06 crc kubenswrapper[4793]: I0217 20:30:06.938956 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b7jx\" (UniqueName: \"kubernetes.io/projected/99e9323f-09b5-4ee0-a27d-0698f99071bc-kube-api-access-9b7jx\") pod \"nova-metadata-0\" (UID: \"99e9323f-09b5-4ee0-a27d-0698f99071bc\") " pod="openstack/nova-metadata-0" Feb 17 20:30:07 crc kubenswrapper[4793]: I0217 20:30:07.103557 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 20:30:07 crc kubenswrapper[4793]: I0217 20:30:07.199183 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 20:30:07 crc kubenswrapper[4793]: I0217 20:30:07.335362 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e39ddba3-31ef-4f6e-90d5-67dd54124ba0","Type":"ContainerStarted","Data":"9d422fd70c3d3260fdccb45df6fedcc7eb64f0da703be08890e920063f32483a"} Feb 17 20:30:07 crc kubenswrapper[4793]: I0217 20:30:07.552616 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e9f812f-0a83-48bb-9048-97706114af57" path="/var/lib/kubelet/pods/0e9f812f-0a83-48bb-9048-97706114af57/volumes" Feb 17 20:30:07 crc kubenswrapper[4793]: I0217 20:30:07.553292 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55f15754-c913-412a-a53c-ee7eebad65a0" path="/var/lib/kubelet/pods/55f15754-c913-412a-a53c-ee7eebad65a0/volumes" Feb 17 20:30:07 crc kubenswrapper[4793]: W0217 20:30:07.580021 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod99e9323f_09b5_4ee0_a27d_0698f99071bc.slice/crio-35d7157d47cd5b3c40ba567c526706a2bc9c14da1c9f71f82e685cd573f9529a WatchSource:0}: Error finding container 35d7157d47cd5b3c40ba567c526706a2bc9c14da1c9f71f82e685cd573f9529a: Status 404 returned error can't find the container with id 35d7157d47cd5b3c40ba567c526706a2bc9c14da1c9f71f82e685cd573f9529a Feb 17 20:30:07 crc kubenswrapper[4793]: I0217 20:30:07.583338 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 20:30:08 crc kubenswrapper[4793]: I0217 20:30:08.347668 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e39ddba3-31ef-4f6e-90d5-67dd54124ba0","Type":"ContainerStarted","Data":"0953ae6337f251d3c94c307a0e3f715cd530cad364d96e690edd5f51657a4b9e"} Feb 17 
20:30:08 crc kubenswrapper[4793]: I0217 20:30:08.348150 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e39ddba3-31ef-4f6e-90d5-67dd54124ba0","Type":"ContainerStarted","Data":"886a69b8b5e5eb2bd6c48430248953e36f63366893a2c2dffc101439085c8fd9"} Feb 17 20:30:08 crc kubenswrapper[4793]: I0217 20:30:08.349995 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"99e9323f-09b5-4ee0-a27d-0698f99071bc","Type":"ContainerStarted","Data":"ce2ffbda6971e8c51b8fa7c3790e525200ac4a8e9ee49d74ef4c5431b1feaa60"} Feb 17 20:30:08 crc kubenswrapper[4793]: I0217 20:30:08.350037 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"99e9323f-09b5-4ee0-a27d-0698f99071bc","Type":"ContainerStarted","Data":"91312df5a6861f36a8efdeba2e5debccd012bb881fe397f59fbf9cd64d775b8d"} Feb 17 20:30:08 crc kubenswrapper[4793]: I0217 20:30:08.350047 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"99e9323f-09b5-4ee0-a27d-0698f99071bc","Type":"ContainerStarted","Data":"35d7157d47cd5b3c40ba567c526706a2bc9c14da1c9f71f82e685cd573f9529a"} Feb 17 20:30:08 crc kubenswrapper[4793]: I0217 20:30:08.374464 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.374440649 podStartE2EDuration="2.374440649s" podCreationTimestamp="2026-02-17 20:30:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:30:08.369992899 +0000 UTC m=+1283.661691220" watchObservedRunningTime="2026-02-17 20:30:08.374440649 +0000 UTC m=+1283.666138980" Feb 17 20:30:08 crc kubenswrapper[4793]: I0217 20:30:08.402950 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.402928716 podStartE2EDuration="2.402928716s" 
podCreationTimestamp="2026-02-17 20:30:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:30:08.392524388 +0000 UTC m=+1283.684222709" watchObservedRunningTime="2026-02-17 20:30:08.402928716 +0000 UTC m=+1283.694627027" Feb 17 20:30:09 crc kubenswrapper[4793]: I0217 20:30:09.361985 4793 generic.go:334] "Generic (PLEG): container finished" podID="454345ac-3d2b-41e1-bcb3-03c6d1e83e16" containerID="42fef2ddac784bb0cdace652df28c8954be655bc1c847cedbe03929051e00dff" exitCode=0 Feb 17 20:30:09 crc kubenswrapper[4793]: I0217 20:30:09.362020 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"454345ac-3d2b-41e1-bcb3-03c6d1e83e16","Type":"ContainerDied","Data":"42fef2ddac784bb0cdace652df28c8954be655bc1c847cedbe03929051e00dff"} Feb 17 20:30:09 crc kubenswrapper[4793]: I0217 20:30:09.591956 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 20:30:09 crc kubenswrapper[4793]: I0217 20:30:09.682117 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pgl6\" (UniqueName: \"kubernetes.io/projected/454345ac-3d2b-41e1-bcb3-03c6d1e83e16-kube-api-access-4pgl6\") pod \"454345ac-3d2b-41e1-bcb3-03c6d1e83e16\" (UID: \"454345ac-3d2b-41e1-bcb3-03c6d1e83e16\") " Feb 17 20:30:09 crc kubenswrapper[4793]: I0217 20:30:09.682201 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/454345ac-3d2b-41e1-bcb3-03c6d1e83e16-config-data\") pod \"454345ac-3d2b-41e1-bcb3-03c6d1e83e16\" (UID: \"454345ac-3d2b-41e1-bcb3-03c6d1e83e16\") " Feb 17 20:30:09 crc kubenswrapper[4793]: I0217 20:30:09.682357 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/454345ac-3d2b-41e1-bcb3-03c6d1e83e16-combined-ca-bundle\") pod \"454345ac-3d2b-41e1-bcb3-03c6d1e83e16\" (UID: \"454345ac-3d2b-41e1-bcb3-03c6d1e83e16\") " Feb 17 20:30:09 crc kubenswrapper[4793]: I0217 20:30:09.688651 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/454345ac-3d2b-41e1-bcb3-03c6d1e83e16-kube-api-access-4pgl6" (OuterVolumeSpecName: "kube-api-access-4pgl6") pod "454345ac-3d2b-41e1-bcb3-03c6d1e83e16" (UID: "454345ac-3d2b-41e1-bcb3-03c6d1e83e16"). InnerVolumeSpecName "kube-api-access-4pgl6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:30:09 crc kubenswrapper[4793]: I0217 20:30:09.714735 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/454345ac-3d2b-41e1-bcb3-03c6d1e83e16-config-data" (OuterVolumeSpecName: "config-data") pod "454345ac-3d2b-41e1-bcb3-03c6d1e83e16" (UID: "454345ac-3d2b-41e1-bcb3-03c6d1e83e16"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:30:09 crc kubenswrapper[4793]: I0217 20:30:09.734329 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/454345ac-3d2b-41e1-bcb3-03c6d1e83e16-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "454345ac-3d2b-41e1-bcb3-03c6d1e83e16" (UID: "454345ac-3d2b-41e1-bcb3-03c6d1e83e16"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 20:30:09 crc kubenswrapper[4793]: I0217 20:30:09.784548 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/454345ac-3d2b-41e1-bcb3-03c6d1e83e16-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 20:30:09 crc kubenswrapper[4793]: I0217 20:30:09.784605 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pgl6\" (UniqueName: \"kubernetes.io/projected/454345ac-3d2b-41e1-bcb3-03c6d1e83e16-kube-api-access-4pgl6\") on node \"crc\" DevicePath \"\"" Feb 17 20:30:09 crc kubenswrapper[4793]: I0217 20:30:09.784623 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/454345ac-3d2b-41e1-bcb3-03c6d1e83e16-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 20:30:10 crc kubenswrapper[4793]: I0217 20:30:10.372131 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"454345ac-3d2b-41e1-bcb3-03c6d1e83e16","Type":"ContainerDied","Data":"8c1622b507b3be5a23c8a13ed0cb37de57dcdcfe9d83537c4fcdcf518864755f"} Feb 17 20:30:10 crc kubenswrapper[4793]: I0217 20:30:10.372193 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 20:30:10 crc kubenswrapper[4793]: I0217 20:30:10.372248 4793 scope.go:117] "RemoveContainer" containerID="42fef2ddac784bb0cdace652df28c8954be655bc1c847cedbe03929051e00dff" Feb 17 20:30:10 crc kubenswrapper[4793]: I0217 20:30:10.415182 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 20:30:10 crc kubenswrapper[4793]: I0217 20:30:10.431997 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 20:30:10 crc kubenswrapper[4793]: I0217 20:30:10.444349 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 20:30:10 crc kubenswrapper[4793]: E0217 20:30:10.444839 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="454345ac-3d2b-41e1-bcb3-03c6d1e83e16" containerName="nova-scheduler-scheduler" Feb 17 20:30:10 crc kubenswrapper[4793]: I0217 20:30:10.444865 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="454345ac-3d2b-41e1-bcb3-03c6d1e83e16" containerName="nova-scheduler-scheduler" Feb 17 20:30:10 crc kubenswrapper[4793]: I0217 20:30:10.445110 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="454345ac-3d2b-41e1-bcb3-03c6d1e83e16" containerName="nova-scheduler-scheduler" Feb 17 20:30:10 crc kubenswrapper[4793]: I0217 20:30:10.445928 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 20:30:10 crc kubenswrapper[4793]: I0217 20:30:10.448348 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 17 20:30:10 crc kubenswrapper[4793]: I0217 20:30:10.473094 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 20:30:10 crc kubenswrapper[4793]: I0217 20:30:10.599032 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwwfl\" (UniqueName: \"kubernetes.io/projected/45669327-0649-4fd2-a59f-0e31a4e3cf5a-kube-api-access-qwwfl\") pod \"nova-scheduler-0\" (UID: \"45669327-0649-4fd2-a59f-0e31a4e3cf5a\") " pod="openstack/nova-scheduler-0" Feb 17 20:30:10 crc kubenswrapper[4793]: I0217 20:30:10.599746 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45669327-0649-4fd2-a59f-0e31a4e3cf5a-config-data\") pod \"nova-scheduler-0\" (UID: \"45669327-0649-4fd2-a59f-0e31a4e3cf5a\") " pod="openstack/nova-scheduler-0" Feb 17 20:30:10 crc kubenswrapper[4793]: I0217 20:30:10.599824 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45669327-0649-4fd2-a59f-0e31a4e3cf5a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"45669327-0649-4fd2-a59f-0e31a4e3cf5a\") " pod="openstack/nova-scheduler-0" Feb 17 20:30:10 crc kubenswrapper[4793]: I0217 20:30:10.701067 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45669327-0649-4fd2-a59f-0e31a4e3cf5a-config-data\") pod \"nova-scheduler-0\" (UID: \"45669327-0649-4fd2-a59f-0e31a4e3cf5a\") " pod="openstack/nova-scheduler-0" Feb 17 20:30:10 crc kubenswrapper[4793]: I0217 20:30:10.701306 4793 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45669327-0649-4fd2-a59f-0e31a4e3cf5a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"45669327-0649-4fd2-a59f-0e31a4e3cf5a\") " pod="openstack/nova-scheduler-0" Feb 17 20:30:10 crc kubenswrapper[4793]: I0217 20:30:10.701349 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwwfl\" (UniqueName: \"kubernetes.io/projected/45669327-0649-4fd2-a59f-0e31a4e3cf5a-kube-api-access-qwwfl\") pod \"nova-scheduler-0\" (UID: \"45669327-0649-4fd2-a59f-0e31a4e3cf5a\") " pod="openstack/nova-scheduler-0" Feb 17 20:30:10 crc kubenswrapper[4793]: I0217 20:30:10.708375 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45669327-0649-4fd2-a59f-0e31a4e3cf5a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"45669327-0649-4fd2-a59f-0e31a4e3cf5a\") " pod="openstack/nova-scheduler-0" Feb 17 20:30:10 crc kubenswrapper[4793]: I0217 20:30:10.718825 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45669327-0649-4fd2-a59f-0e31a4e3cf5a-config-data\") pod \"nova-scheduler-0\" (UID: \"45669327-0649-4fd2-a59f-0e31a4e3cf5a\") " pod="openstack/nova-scheduler-0" Feb 17 20:30:10 crc kubenswrapper[4793]: I0217 20:30:10.718968 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwwfl\" (UniqueName: \"kubernetes.io/projected/45669327-0649-4fd2-a59f-0e31a4e3cf5a-kube-api-access-qwwfl\") pod \"nova-scheduler-0\" (UID: \"45669327-0649-4fd2-a59f-0e31a4e3cf5a\") " pod="openstack/nova-scheduler-0" Feb 17 20:30:10 crc kubenswrapper[4793]: I0217 20:30:10.780616 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 20:30:11 crc kubenswrapper[4793]: I0217 20:30:11.245170 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 20:30:11 crc kubenswrapper[4793]: I0217 20:30:11.384079 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"45669327-0649-4fd2-a59f-0e31a4e3cf5a","Type":"ContainerStarted","Data":"da353a0f163168630883b1bd6984ac57188474bddb48b58579536d67f5ccd610"} Feb 17 20:30:11 crc kubenswrapper[4793]: I0217 20:30:11.551888 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="454345ac-3d2b-41e1-bcb3-03c6d1e83e16" path="/var/lib/kubelet/pods/454345ac-3d2b-41e1-bcb3-03c6d1e83e16/volumes" Feb 17 20:30:12 crc kubenswrapper[4793]: I0217 20:30:12.104390 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 20:30:12 crc kubenswrapper[4793]: I0217 20:30:12.104468 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 20:30:12 crc kubenswrapper[4793]: I0217 20:30:12.401982 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"45669327-0649-4fd2-a59f-0e31a4e3cf5a","Type":"ContainerStarted","Data":"7b83ed00219c10198367a490ca94bb366e04da9a5acbdd54fb2136895e1cd6c0"} Feb 17 20:30:12 crc kubenswrapper[4793]: I0217 20:30:12.445318 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.4452826500000002 podStartE2EDuration="2.44528265s" podCreationTimestamp="2026-02-17 20:30:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 20:30:12.424176656 +0000 UTC m=+1287.715875027" watchObservedRunningTime="2026-02-17 20:30:12.44528265 +0000 UTC m=+1287.736981001" Feb 17 20:30:14 crc kubenswrapper[4793]: 
I0217 20:30:14.538571 4793 scope.go:117] "RemoveContainer" containerID="931e47d759f3adcee541e6fe68bc8ebef51e63ece70add63ea24cdb6c500de3e" Feb 17 20:30:14 crc kubenswrapper[4793]: E0217 20:30:14.539948 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:30:15 crc kubenswrapper[4793]: I0217 20:30:15.781367 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 17 20:30:16 crc kubenswrapper[4793]: I0217 20:30:16.732598 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 20:30:16 crc kubenswrapper[4793]: I0217 20:30:16.733028 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 20:30:17 crc kubenswrapper[4793]: I0217 20:30:17.104623 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 17 20:30:17 crc kubenswrapper[4793]: I0217 20:30:17.104667 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 17 20:30:17 crc kubenswrapper[4793]: I0217 20:30:17.750654 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e39ddba3-31ef-4f6e-90d5-67dd54124ba0" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.230:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 20:30:17 crc kubenswrapper[4793]: I0217 20:30:17.750983 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e39ddba3-31ef-4f6e-90d5-67dd54124ba0" containerName="nova-api-api" probeResult="failure" 
output="Get \"https://10.217.0.230:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 20:30:18 crc kubenswrapper[4793]: I0217 20:30:18.124922 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="99e9323f-09b5-4ee0-a27d-0698f99071bc" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.231:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 20:30:18 crc kubenswrapper[4793]: I0217 20:30:18.124979 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="99e9323f-09b5-4ee0-a27d-0698f99071bc" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.231:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 20:30:19 crc kubenswrapper[4793]: I0217 20:30:19.217968 4793 scope.go:117] "RemoveContainer" containerID="fc0800c50ca8794af0f0d8cfce473460a88bee4c5f9e66b0f64775dd31f87ef5" Feb 17 20:30:19 crc kubenswrapper[4793]: I0217 20:30:19.268624 4793 scope.go:117] "RemoveContainer" containerID="12fb1677e168173631f4cf9d4b1a1c0abbbd620c3150e5cef42b0078aff0c559" Feb 17 20:30:19 crc kubenswrapper[4793]: I0217 20:30:19.305008 4793 scope.go:117] "RemoveContainer" containerID="7b0a82c4ea6bf6b4e30293d6a5ec051ea6dcd81d63abb3b22a91ce0c09670c12" Feb 17 20:30:20 crc kubenswrapper[4793]: I0217 20:30:20.102009 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 20:30:20 crc kubenswrapper[4793]: I0217 20:30:20.102425 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 20:30:20 crc kubenswrapper[4793]: I0217 20:30:20.102487 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" Feb 17 20:30:20 crc kubenswrapper[4793]: I0217 20:30:20.103422 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0b27f6badb54e605b7fa30990b1315a8e69fe8085dab295339fcc3730365d0d4"} pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 20:30:20 crc kubenswrapper[4793]: I0217 20:30:20.103512 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" containerID="cri-o://0b27f6badb54e605b7fa30990b1315a8e69fe8085dab295339fcc3730365d0d4" gracePeriod=600 Feb 17 20:30:20 crc kubenswrapper[4793]: I0217 20:30:20.513266 4793 generic.go:334] "Generic (PLEG): container finished" podID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerID="0b27f6badb54e605b7fa30990b1315a8e69fe8085dab295339fcc3730365d0d4" exitCode=0 Feb 17 20:30:20 crc kubenswrapper[4793]: I0217 20:30:20.513335 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerDied","Data":"0b27f6badb54e605b7fa30990b1315a8e69fe8085dab295339fcc3730365d0d4"} Feb 17 20:30:20 crc kubenswrapper[4793]: I0217 20:30:20.513387 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" 
event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerStarted","Data":"e8740416c581b51430af3b22759513fb6d7b5e33952ed72a05afaa7e49bf8b6f"} Feb 17 20:30:20 crc kubenswrapper[4793]: I0217 20:30:20.513408 4793 scope.go:117] "RemoveContainer" containerID="cc36425746166cdc0b95e87a928d74d18e09bb391b29761173d2714eb0234de5" Feb 17 20:30:20 crc kubenswrapper[4793]: I0217 20:30:20.781209 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 17 20:30:20 crc kubenswrapper[4793]: I0217 20:30:20.811532 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 17 20:30:21 crc kubenswrapper[4793]: I0217 20:30:21.555435 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 17 20:30:22 crc kubenswrapper[4793]: I0217 20:30:22.483792 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 17 20:30:26 crc kubenswrapper[4793]: I0217 20:30:26.742610 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 17 20:30:26 crc kubenswrapper[4793]: I0217 20:30:26.743605 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 17 20:30:26 crc kubenswrapper[4793]: I0217 20:30:26.748737 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 17 20:30:26 crc kubenswrapper[4793]: I0217 20:30:26.757766 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 17 20:30:27 crc kubenswrapper[4793]: I0217 20:30:27.111322 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 17 20:30:27 crc kubenswrapper[4793]: I0217 20:30:27.112734 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" 
Feb 17 20:30:27 crc kubenswrapper[4793]: I0217 20:30:27.120879 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 17 20:30:27 crc kubenswrapper[4793]: I0217 20:30:27.539493 4793 scope.go:117] "RemoveContainer" containerID="931e47d759f3adcee541e6fe68bc8ebef51e63ece70add63ea24cdb6c500de3e" Feb 17 20:30:27 crc kubenswrapper[4793]: I0217 20:30:27.589163 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 17 20:30:27 crc kubenswrapper[4793]: I0217 20:30:27.596578 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 17 20:30:27 crc kubenswrapper[4793]: I0217 20:30:27.609951 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 17 20:30:28 crc kubenswrapper[4793]: I0217 20:30:28.602797 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"06ecbc8e-aa9f-4025-883d-65e4c000d986","Type":"ContainerStarted","Data":"a8008df1b9ea184b60bf74fc15ec4867ee07d5e09128cb2efe8c424e3088c584"} Feb 17 20:30:30 crc kubenswrapper[4793]: I0217 20:30:30.626592 4793 generic.go:334] "Generic (PLEG): container finished" podID="06ecbc8e-aa9f-4025-883d-65e4c000d986" containerID="a8008df1b9ea184b60bf74fc15ec4867ee07d5e09128cb2efe8c424e3088c584" exitCode=1 Feb 17 20:30:30 crc kubenswrapper[4793]: I0217 20:30:30.626676 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"06ecbc8e-aa9f-4025-883d-65e4c000d986","Type":"ContainerDied","Data":"a8008df1b9ea184b60bf74fc15ec4867ee07d5e09128cb2efe8c424e3088c584"} Feb 17 20:30:30 crc kubenswrapper[4793]: I0217 20:30:30.627133 4793 scope.go:117] "RemoveContainer" containerID="931e47d759f3adcee541e6fe68bc8ebef51e63ece70add63ea24cdb6c500de3e" Feb 17 20:30:30 crc kubenswrapper[4793]: I0217 20:30:30.627752 4793 scope.go:117] "RemoveContainer" 
containerID="a8008df1b9ea184b60bf74fc15ec4867ee07d5e09128cb2efe8c424e3088c584" Feb 17 20:30:30 crc kubenswrapper[4793]: E0217 20:30:30.628020 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:30:31 crc kubenswrapper[4793]: I0217 20:30:31.962735 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 20:30:31 crc kubenswrapper[4793]: I0217 20:30:31.963043 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 20:30:31 crc kubenswrapper[4793]: I0217 20:30:31.963057 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 17 20:30:31 crc kubenswrapper[4793]: I0217 20:30:31.963068 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 20:30:31 crc kubenswrapper[4793]: I0217 20:30:31.963539 4793 scope.go:117] "RemoveContainer" containerID="a8008df1b9ea184b60bf74fc15ec4867ee07d5e09128cb2efe8c424e3088c584" Feb 17 20:30:31 crc kubenswrapper[4793]: E0217 20:30:31.963922 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:30:46 crc kubenswrapper[4793]: I0217 20:30:46.539576 4793 scope.go:117] "RemoveContainer" containerID="a8008df1b9ea184b60bf74fc15ec4867ee07d5e09128cb2efe8c424e3088c584" Feb 17 20:30:46 crc kubenswrapper[4793]: E0217 
20:30:46.540620 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:31:01 crc kubenswrapper[4793]: I0217 20:31:01.538458 4793 scope.go:117] "RemoveContainer" containerID="a8008df1b9ea184b60bf74fc15ec4867ee07d5e09128cb2efe8c424e3088c584" Feb 17 20:31:01 crc kubenswrapper[4793]: E0217 20:31:01.539267 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:31:14 crc kubenswrapper[4793]: I0217 20:31:14.538964 4793 scope.go:117] "RemoveContainer" containerID="a8008df1b9ea184b60bf74fc15ec4867ee07d5e09128cb2efe8c424e3088c584" Feb 17 20:31:14 crc kubenswrapper[4793]: E0217 20:31:14.539753 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:31:28 crc kubenswrapper[4793]: I0217 20:31:28.539842 4793 scope.go:117] "RemoveContainer" containerID="a8008df1b9ea184b60bf74fc15ec4867ee07d5e09128cb2efe8c424e3088c584" Feb 17 20:31:28 crc kubenswrapper[4793]: E0217 20:31:28.540922 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-applier 
pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:31:40 crc kubenswrapper[4793]: I0217 20:31:40.538557 4793 scope.go:117] "RemoveContainer" containerID="a8008df1b9ea184b60bf74fc15ec4867ee07d5e09128cb2efe8c424e3088c584" Feb 17 20:31:40 crc kubenswrapper[4793]: E0217 20:31:40.539544 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:31:54 crc kubenswrapper[4793]: I0217 20:31:54.538768 4793 scope.go:117] "RemoveContainer" containerID="a8008df1b9ea184b60bf74fc15ec4867ee07d5e09128cb2efe8c424e3088c584" Feb 17 20:31:54 crc kubenswrapper[4793]: E0217 20:31:54.539521 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:32:06 crc kubenswrapper[4793]: I0217 20:32:06.538885 4793 scope.go:117] "RemoveContainer" containerID="a8008df1b9ea184b60bf74fc15ec4867ee07d5e09128cb2efe8c424e3088c584" Feb 17 20:32:06 crc kubenswrapper[4793]: E0217 20:32:06.539792 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:32:18 crc kubenswrapper[4793]: I0217 20:32:18.539766 4793 
scope.go:117] "RemoveContainer" containerID="a8008df1b9ea184b60bf74fc15ec4867ee07d5e09128cb2efe8c424e3088c584" Feb 17 20:32:18 crc kubenswrapper[4793]: E0217 20:32:18.540969 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:32:19 crc kubenswrapper[4793]: I0217 20:32:19.561090 4793 scope.go:117] "RemoveContainer" containerID="942d386456522ad7e41c6fb5683786d9090a3e85f41b3da169efc8ee8ba30c57" Feb 17 20:32:19 crc kubenswrapper[4793]: I0217 20:32:19.624359 4793 scope.go:117] "RemoveContainer" containerID="b88d0511f638204c93f87359a8ec83550d26331baa57ad7f6b3092e8d487d2b3" Feb 17 20:32:19 crc kubenswrapper[4793]: I0217 20:32:19.656950 4793 scope.go:117] "RemoveContainer" containerID="2335d7716b13f7b157210fe0281a7ffc7ec1484fb983b9f4cf3bab2645d5ae73" Feb 17 20:32:20 crc kubenswrapper[4793]: I0217 20:32:20.102400 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 20:32:20 crc kubenswrapper[4793]: I0217 20:32:20.102477 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 20:32:31 crc kubenswrapper[4793]: I0217 20:32:31.539635 4793 scope.go:117] "RemoveContainer" containerID="a8008df1b9ea184b60bf74fc15ec4867ee07d5e09128cb2efe8c424e3088c584" Feb 17 
20:32:31 crc kubenswrapper[4793]: E0217 20:32:31.540783 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:32:43 crc kubenswrapper[4793]: I0217 20:32:43.539979 4793 scope.go:117] "RemoveContainer" containerID="a8008df1b9ea184b60bf74fc15ec4867ee07d5e09128cb2efe8c424e3088c584" Feb 17 20:32:43 crc kubenswrapper[4793]: E0217 20:32:43.541138 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:32:50 crc kubenswrapper[4793]: I0217 20:32:50.101840 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 20:32:50 crc kubenswrapper[4793]: I0217 20:32:50.102560 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 20:32:56 crc kubenswrapper[4793]: I0217 20:32:56.538728 4793 scope.go:117] "RemoveContainer" containerID="a8008df1b9ea184b60bf74fc15ec4867ee07d5e09128cb2efe8c424e3088c584" Feb 17 20:32:56 crc kubenswrapper[4793]: E0217 20:32:56.539651 4793 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:33:11 crc kubenswrapper[4793]: I0217 20:33:11.538977 4793 scope.go:117] "RemoveContainer" containerID="a8008df1b9ea184b60bf74fc15ec4867ee07d5e09128cb2efe8c424e3088c584" Feb 17 20:33:12 crc kubenswrapper[4793]: I0217 20:33:12.486512 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"06ecbc8e-aa9f-4025-883d-65e4c000d986","Type":"ContainerStarted","Data":"486d2468c54694f45044fc5cf6d16102a606873e9fdb9f9c155191ee91950299"} Feb 17 20:33:14 crc kubenswrapper[4793]: I0217 20:33:14.511562 4793 generic.go:334] "Generic (PLEG): container finished" podID="06ecbc8e-aa9f-4025-883d-65e4c000d986" containerID="486d2468c54694f45044fc5cf6d16102a606873e9fdb9f9c155191ee91950299" exitCode=1 Feb 17 20:33:14 crc kubenswrapper[4793]: I0217 20:33:14.511622 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"06ecbc8e-aa9f-4025-883d-65e4c000d986","Type":"ContainerDied","Data":"486d2468c54694f45044fc5cf6d16102a606873e9fdb9f9c155191ee91950299"} Feb 17 20:33:14 crc kubenswrapper[4793]: I0217 20:33:14.511669 4793 scope.go:117] "RemoveContainer" containerID="a8008df1b9ea184b60bf74fc15ec4867ee07d5e09128cb2efe8c424e3088c584" Feb 17 20:33:14 crc kubenswrapper[4793]: I0217 20:33:14.512438 4793 scope.go:117] "RemoveContainer" containerID="486d2468c54694f45044fc5cf6d16102a606873e9fdb9f9c155191ee91950299" Feb 17 20:33:14 crc kubenswrapper[4793]: E0217 20:33:14.512885 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:33:16 crc kubenswrapper[4793]: I0217 20:33:16.963301 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 17 20:33:16 crc kubenswrapper[4793]: I0217 20:33:16.964342 4793 scope.go:117] "RemoveContainer" containerID="486d2468c54694f45044fc5cf6d16102a606873e9fdb9f9c155191ee91950299" Feb 17 20:33:16 crc kubenswrapper[4793]: E0217 20:33:16.964606 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:33:19 crc kubenswrapper[4793]: I0217 20:33:19.784615 4793 scope.go:117] "RemoveContainer" containerID="cbf9b0cb7ef08745f99cc490d56cd45134d57c061ca738f3e84c09bab9bb0a81" Feb 17 20:33:20 crc kubenswrapper[4793]: I0217 20:33:20.102676 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 20:33:20 crc kubenswrapper[4793]: I0217 20:33:20.102843 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 20:33:20 crc kubenswrapper[4793]: I0217 20:33:20.102965 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" Feb 17 20:33:20 crc kubenswrapper[4793]: I0217 20:33:20.105071 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e8740416c581b51430af3b22759513fb6d7b5e33952ed72a05afaa7e49bf8b6f"} pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 20:33:20 crc kubenswrapper[4793]: I0217 20:33:20.105221 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" containerID="cri-o://e8740416c581b51430af3b22759513fb6d7b5e33952ed72a05afaa7e49bf8b6f" gracePeriod=600 Feb 17 20:33:20 crc kubenswrapper[4793]: E0217 20:33:20.234209 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 20:33:20 crc kubenswrapper[4793]: I0217 20:33:20.585683 4793 generic.go:334] "Generic (PLEG): container finished" podID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerID="e8740416c581b51430af3b22759513fb6d7b5e33952ed72a05afaa7e49bf8b6f" exitCode=0 Feb 17 20:33:20 crc kubenswrapper[4793]: I0217 20:33:20.585727 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerDied","Data":"e8740416c581b51430af3b22759513fb6d7b5e33952ed72a05afaa7e49bf8b6f"} 
Feb 17 20:33:20 crc kubenswrapper[4793]: I0217 20:33:20.586248 4793 scope.go:117] "RemoveContainer" containerID="0b27f6badb54e605b7fa30990b1315a8e69fe8085dab295339fcc3730365d0d4" Feb 17 20:33:20 crc kubenswrapper[4793]: I0217 20:33:20.587235 4793 scope.go:117] "RemoveContainer" containerID="e8740416c581b51430af3b22759513fb6d7b5e33952ed72a05afaa7e49bf8b6f" Feb 17 20:33:20 crc kubenswrapper[4793]: E0217 20:33:20.587908 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 20:33:21 crc kubenswrapper[4793]: I0217 20:33:21.963074 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 20:33:21 crc kubenswrapper[4793]: I0217 20:33:21.963144 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 20:33:21 crc kubenswrapper[4793]: I0217 20:33:21.963163 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 20:33:21 crc kubenswrapper[4793]: I0217 20:33:21.964149 4793 scope.go:117] "RemoveContainer" containerID="486d2468c54694f45044fc5cf6d16102a606873e9fdb9f9c155191ee91950299" Feb 17 20:33:21 crc kubenswrapper[4793]: E0217 20:33:21.964621 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:33:33 crc 
kubenswrapper[4793]: I0217 20:33:33.539467 4793 scope.go:117] "RemoveContainer" containerID="e8740416c581b51430af3b22759513fb6d7b5e33952ed72a05afaa7e49bf8b6f" Feb 17 20:33:33 crc kubenswrapper[4793]: E0217 20:33:33.542142 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 20:33:34 crc kubenswrapper[4793]: I0217 20:33:34.539070 4793 scope.go:117] "RemoveContainer" containerID="486d2468c54694f45044fc5cf6d16102a606873e9fdb9f9c155191ee91950299" Feb 17 20:33:34 crc kubenswrapper[4793]: E0217 20:33:34.539940 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:33:47 crc kubenswrapper[4793]: I0217 20:33:47.539301 4793 scope.go:117] "RemoveContainer" containerID="e8740416c581b51430af3b22759513fb6d7b5e33952ed72a05afaa7e49bf8b6f" Feb 17 20:33:47 crc kubenswrapper[4793]: E0217 20:33:47.540310 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 20:33:48 crc kubenswrapper[4793]: I0217 20:33:48.538760 4793 scope.go:117] 
"RemoveContainer" containerID="486d2468c54694f45044fc5cf6d16102a606873e9fdb9f9c155191ee91950299" Feb 17 20:33:48 crc kubenswrapper[4793]: E0217 20:33:48.539288 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:33:59 crc kubenswrapper[4793]: I0217 20:33:59.538882 4793 scope.go:117] "RemoveContainer" containerID="e8740416c581b51430af3b22759513fb6d7b5e33952ed72a05afaa7e49bf8b6f" Feb 17 20:33:59 crc kubenswrapper[4793]: E0217 20:33:59.539718 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 20:34:01 crc kubenswrapper[4793]: I0217 20:34:01.538573 4793 scope.go:117] "RemoveContainer" containerID="486d2468c54694f45044fc5cf6d16102a606873e9fdb9f9c155191ee91950299" Feb 17 20:34:01 crc kubenswrapper[4793]: E0217 20:34:01.539218 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:34:08 crc kubenswrapper[4793]: I0217 20:34:08.774810 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xqsnt"] Feb 17 20:34:08 crc kubenswrapper[4793]: I0217 
20:34:08.778092 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xqsnt" Feb 17 20:34:08 crc kubenswrapper[4793]: I0217 20:34:08.808044 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xqsnt"] Feb 17 20:34:08 crc kubenswrapper[4793]: I0217 20:34:08.897102 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4dc44e3-7655-42be-9769-76016b25afb3-utilities\") pod \"redhat-operators-xqsnt\" (UID: \"d4dc44e3-7655-42be-9769-76016b25afb3\") " pod="openshift-marketplace/redhat-operators-xqsnt" Feb 17 20:34:08 crc kubenswrapper[4793]: I0217 20:34:08.897186 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9skq\" (UniqueName: \"kubernetes.io/projected/d4dc44e3-7655-42be-9769-76016b25afb3-kube-api-access-b9skq\") pod \"redhat-operators-xqsnt\" (UID: \"d4dc44e3-7655-42be-9769-76016b25afb3\") " pod="openshift-marketplace/redhat-operators-xqsnt" Feb 17 20:34:08 crc kubenswrapper[4793]: I0217 20:34:08.897448 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4dc44e3-7655-42be-9769-76016b25afb3-catalog-content\") pod \"redhat-operators-xqsnt\" (UID: \"d4dc44e3-7655-42be-9769-76016b25afb3\") " pod="openshift-marketplace/redhat-operators-xqsnt" Feb 17 20:34:08 crc kubenswrapper[4793]: I0217 20:34:08.999365 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4dc44e3-7655-42be-9769-76016b25afb3-utilities\") pod \"redhat-operators-xqsnt\" (UID: \"d4dc44e3-7655-42be-9769-76016b25afb3\") " pod="openshift-marketplace/redhat-operators-xqsnt" Feb 17 20:34:08 crc kubenswrapper[4793]: I0217 20:34:08.999428 4793 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9skq\" (UniqueName: \"kubernetes.io/projected/d4dc44e3-7655-42be-9769-76016b25afb3-kube-api-access-b9skq\") pod \"redhat-operators-xqsnt\" (UID: \"d4dc44e3-7655-42be-9769-76016b25afb3\") " pod="openshift-marketplace/redhat-operators-xqsnt" Feb 17 20:34:08 crc kubenswrapper[4793]: I0217 20:34:08.999484 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4dc44e3-7655-42be-9769-76016b25afb3-catalog-content\") pod \"redhat-operators-xqsnt\" (UID: \"d4dc44e3-7655-42be-9769-76016b25afb3\") " pod="openshift-marketplace/redhat-operators-xqsnt" Feb 17 20:34:09 crc kubenswrapper[4793]: I0217 20:34:09.000188 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4dc44e3-7655-42be-9769-76016b25afb3-catalog-content\") pod \"redhat-operators-xqsnt\" (UID: \"d4dc44e3-7655-42be-9769-76016b25afb3\") " pod="openshift-marketplace/redhat-operators-xqsnt" Feb 17 20:34:09 crc kubenswrapper[4793]: I0217 20:34:09.000895 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4dc44e3-7655-42be-9769-76016b25afb3-utilities\") pod \"redhat-operators-xqsnt\" (UID: \"d4dc44e3-7655-42be-9769-76016b25afb3\") " pod="openshift-marketplace/redhat-operators-xqsnt" Feb 17 20:34:09 crc kubenswrapper[4793]: I0217 20:34:09.020460 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9skq\" (UniqueName: \"kubernetes.io/projected/d4dc44e3-7655-42be-9769-76016b25afb3-kube-api-access-b9skq\") pod \"redhat-operators-xqsnt\" (UID: \"d4dc44e3-7655-42be-9769-76016b25afb3\") " pod="openshift-marketplace/redhat-operators-xqsnt" Feb 17 20:34:09 crc kubenswrapper[4793]: I0217 20:34:09.118370 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xqsnt" Feb 17 20:34:09 crc kubenswrapper[4793]: I0217 20:34:09.623435 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xqsnt"] Feb 17 20:34:09 crc kubenswrapper[4793]: W0217 20:34:09.629399 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4dc44e3_7655_42be_9769_76016b25afb3.slice/crio-4f0a3a7f9f291ca4023ddaff847b52efcdae131efee104fdd2292f28cf14408b WatchSource:0}: Error finding container 4f0a3a7f9f291ca4023ddaff847b52efcdae131efee104fdd2292f28cf14408b: Status 404 returned error can't find the container with id 4f0a3a7f9f291ca4023ddaff847b52efcdae131efee104fdd2292f28cf14408b Feb 17 20:34:10 crc kubenswrapper[4793]: I0217 20:34:10.163846 4793 generic.go:334] "Generic (PLEG): container finished" podID="d4dc44e3-7655-42be-9769-76016b25afb3" containerID="8780ac42942683e9a182ff0376fcc9bb46f4d041a34900e606bbc5c4e021b05b" exitCode=0 Feb 17 20:34:10 crc kubenswrapper[4793]: I0217 20:34:10.163935 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xqsnt" event={"ID":"d4dc44e3-7655-42be-9769-76016b25afb3","Type":"ContainerDied","Data":"8780ac42942683e9a182ff0376fcc9bb46f4d041a34900e606bbc5c4e021b05b"} Feb 17 20:34:10 crc kubenswrapper[4793]: I0217 20:34:10.164121 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xqsnt" event={"ID":"d4dc44e3-7655-42be-9769-76016b25afb3","Type":"ContainerStarted","Data":"4f0a3a7f9f291ca4023ddaff847b52efcdae131efee104fdd2292f28cf14408b"} Feb 17 20:34:10 crc kubenswrapper[4793]: I0217 20:34:10.166216 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 20:34:12 crc kubenswrapper[4793]: I0217 20:34:12.183779 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-xqsnt" event={"ID":"d4dc44e3-7655-42be-9769-76016b25afb3","Type":"ContainerStarted","Data":"5311dc2ad5c9c68a9066eaa04f6063597f53c4a1e7cdf9ab25b144d5ca5a7b51"} Feb 17 20:34:14 crc kubenswrapper[4793]: I0217 20:34:14.206441 4793 generic.go:334] "Generic (PLEG): container finished" podID="d4dc44e3-7655-42be-9769-76016b25afb3" containerID="5311dc2ad5c9c68a9066eaa04f6063597f53c4a1e7cdf9ab25b144d5ca5a7b51" exitCode=0 Feb 17 20:34:14 crc kubenswrapper[4793]: I0217 20:34:14.206640 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xqsnt" event={"ID":"d4dc44e3-7655-42be-9769-76016b25afb3","Type":"ContainerDied","Data":"5311dc2ad5c9c68a9066eaa04f6063597f53c4a1e7cdf9ab25b144d5ca5a7b51"} Feb 17 20:34:14 crc kubenswrapper[4793]: I0217 20:34:14.539824 4793 scope.go:117] "RemoveContainer" containerID="e8740416c581b51430af3b22759513fb6d7b5e33952ed72a05afaa7e49bf8b6f" Feb 17 20:34:14 crc kubenswrapper[4793]: E0217 20:34:14.540161 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 20:34:15 crc kubenswrapper[4793]: I0217 20:34:15.221948 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xqsnt" event={"ID":"d4dc44e3-7655-42be-9769-76016b25afb3","Type":"ContainerStarted","Data":"3fa54be1e30e1abc2db4b117fa6feff067d63fc9bd1d28b179f7da9b3a3524db"} Feb 17 20:34:15 crc kubenswrapper[4793]: I0217 20:34:15.245744 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xqsnt" podStartSLOduration=2.7530371259999997 
podStartE2EDuration="7.245729757s" podCreationTimestamp="2026-02-17 20:34:08 +0000 UTC" firstStartedPulling="2026-02-17 20:34:10.166013431 +0000 UTC m=+1525.457711742" lastFinishedPulling="2026-02-17 20:34:14.658706062 +0000 UTC m=+1529.950404373" observedRunningTime="2026-02-17 20:34:15.243040362 +0000 UTC m=+1530.534738663" watchObservedRunningTime="2026-02-17 20:34:15.245729757 +0000 UTC m=+1530.537428068" Feb 17 20:34:15 crc kubenswrapper[4793]: I0217 20:34:15.545833 4793 scope.go:117] "RemoveContainer" containerID="486d2468c54694f45044fc5cf6d16102a606873e9fdb9f9c155191ee91950299" Feb 17 20:34:15 crc kubenswrapper[4793]: E0217 20:34:15.546142 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:34:19 crc kubenswrapper[4793]: I0217 20:34:19.119446 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xqsnt" Feb 17 20:34:19 crc kubenswrapper[4793]: I0217 20:34:19.120513 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xqsnt" Feb 17 20:34:19 crc kubenswrapper[4793]: I0217 20:34:19.889785 4793 scope.go:117] "RemoveContainer" containerID="25d7ecb3ef45a366adaa06b2f009b18b477a7e84dbc6fa2a4add9307c6df357d" Feb 17 20:34:19 crc kubenswrapper[4793]: I0217 20:34:19.918123 4793 scope.go:117] "RemoveContainer" containerID="1cc98e8700a103d9d9aa819172db439742bc981497a7d053695603c9b86fcb9f" Feb 17 20:34:19 crc kubenswrapper[4793]: I0217 20:34:19.947229 4793 scope.go:117] "RemoveContainer" containerID="ff8fe10db732c8397a86719c045ee5a5f862a92648c34444e80104140d266fb6" Feb 17 20:34:19 crc kubenswrapper[4793]: I0217 20:34:19.971832 4793 scope.go:117] 
"RemoveContainer" containerID="871aa8f1f17090de9e41cda325e6139ae24c7f96acd211bd51fe5596f79a88b7" Feb 17 20:34:20 crc kubenswrapper[4793]: I0217 20:34:20.162490 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xqsnt" podUID="d4dc44e3-7655-42be-9769-76016b25afb3" containerName="registry-server" probeResult="failure" output=< Feb 17 20:34:20 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Feb 17 20:34:20 crc kubenswrapper[4793]: > Feb 17 20:34:27 crc kubenswrapper[4793]: I0217 20:34:27.538730 4793 scope.go:117] "RemoveContainer" containerID="486d2468c54694f45044fc5cf6d16102a606873e9fdb9f9c155191ee91950299" Feb 17 20:34:27 crc kubenswrapper[4793]: E0217 20:34:27.540716 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:34:29 crc kubenswrapper[4793]: I0217 20:34:29.539962 4793 scope.go:117] "RemoveContainer" containerID="e8740416c581b51430af3b22759513fb6d7b5e33952ed72a05afaa7e49bf8b6f" Feb 17 20:34:29 crc kubenswrapper[4793]: E0217 20:34:29.540671 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 20:34:30 crc kubenswrapper[4793]: I0217 20:34:30.163355 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xqsnt" podUID="d4dc44e3-7655-42be-9769-76016b25afb3" 
containerName="registry-server" probeResult="failure" output=< Feb 17 20:34:30 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Feb 17 20:34:30 crc kubenswrapper[4793]: > Feb 17 20:34:39 crc kubenswrapper[4793]: I0217 20:34:39.205564 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xqsnt" Feb 17 20:34:39 crc kubenswrapper[4793]: I0217 20:34:39.278460 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xqsnt" Feb 17 20:34:40 crc kubenswrapper[4793]: I0217 20:34:40.395193 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xqsnt"] Feb 17 20:34:40 crc kubenswrapper[4793]: I0217 20:34:40.475971 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xqsnt" podUID="d4dc44e3-7655-42be-9769-76016b25afb3" containerName="registry-server" containerID="cri-o://3fa54be1e30e1abc2db4b117fa6feff067d63fc9bd1d28b179f7da9b3a3524db" gracePeriod=2 Feb 17 20:34:40 crc kubenswrapper[4793]: I0217 20:34:40.539856 4793 scope.go:117] "RemoveContainer" containerID="486d2468c54694f45044fc5cf6d16102a606873e9fdb9f9c155191ee91950299" Feb 17 20:34:40 crc kubenswrapper[4793]: E0217 20:34:40.540090 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:34:40 crc kubenswrapper[4793]: I0217 20:34:40.542261 4793 scope.go:117] "RemoveContainer" containerID="e8740416c581b51430af3b22759513fb6d7b5e33952ed72a05afaa7e49bf8b6f" Feb 17 20:34:40 crc kubenswrapper[4793]: E0217 20:34:40.542588 4793 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 20:34:41 crc kubenswrapper[4793]: I0217 20:34:41.016861 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xqsnt" Feb 17 20:34:41 crc kubenswrapper[4793]: I0217 20:34:41.186204 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9skq\" (UniqueName: \"kubernetes.io/projected/d4dc44e3-7655-42be-9769-76016b25afb3-kube-api-access-b9skq\") pod \"d4dc44e3-7655-42be-9769-76016b25afb3\" (UID: \"d4dc44e3-7655-42be-9769-76016b25afb3\") " Feb 17 20:34:41 crc kubenswrapper[4793]: I0217 20:34:41.186452 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4dc44e3-7655-42be-9769-76016b25afb3-catalog-content\") pod \"d4dc44e3-7655-42be-9769-76016b25afb3\" (UID: \"d4dc44e3-7655-42be-9769-76016b25afb3\") " Feb 17 20:34:41 crc kubenswrapper[4793]: I0217 20:34:41.186511 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4dc44e3-7655-42be-9769-76016b25afb3-utilities\") pod \"d4dc44e3-7655-42be-9769-76016b25afb3\" (UID: \"d4dc44e3-7655-42be-9769-76016b25afb3\") " Feb 17 20:34:41 crc kubenswrapper[4793]: I0217 20:34:41.187447 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4dc44e3-7655-42be-9769-76016b25afb3-utilities" (OuterVolumeSpecName: "utilities") pod "d4dc44e3-7655-42be-9769-76016b25afb3" (UID: "d4dc44e3-7655-42be-9769-76016b25afb3"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:34:41 crc kubenswrapper[4793]: I0217 20:34:41.193354 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4dc44e3-7655-42be-9769-76016b25afb3-kube-api-access-b9skq" (OuterVolumeSpecName: "kube-api-access-b9skq") pod "d4dc44e3-7655-42be-9769-76016b25afb3" (UID: "d4dc44e3-7655-42be-9769-76016b25afb3"). InnerVolumeSpecName "kube-api-access-b9skq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:34:41 crc kubenswrapper[4793]: I0217 20:34:41.289026 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4dc44e3-7655-42be-9769-76016b25afb3-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 20:34:41 crc kubenswrapper[4793]: I0217 20:34:41.289065 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9skq\" (UniqueName: \"kubernetes.io/projected/d4dc44e3-7655-42be-9769-76016b25afb3-kube-api-access-b9skq\") on node \"crc\" DevicePath \"\"" Feb 17 20:34:41 crc kubenswrapper[4793]: I0217 20:34:41.323136 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4dc44e3-7655-42be-9769-76016b25afb3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d4dc44e3-7655-42be-9769-76016b25afb3" (UID: "d4dc44e3-7655-42be-9769-76016b25afb3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:34:41 crc kubenswrapper[4793]: I0217 20:34:41.391621 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4dc44e3-7655-42be-9769-76016b25afb3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 20:34:41 crc kubenswrapper[4793]: I0217 20:34:41.487716 4793 generic.go:334] "Generic (PLEG): container finished" podID="d4dc44e3-7655-42be-9769-76016b25afb3" containerID="3fa54be1e30e1abc2db4b117fa6feff067d63fc9bd1d28b179f7da9b3a3524db" exitCode=0 Feb 17 20:34:41 crc kubenswrapper[4793]: I0217 20:34:41.487768 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xqsnt" event={"ID":"d4dc44e3-7655-42be-9769-76016b25afb3","Type":"ContainerDied","Data":"3fa54be1e30e1abc2db4b117fa6feff067d63fc9bd1d28b179f7da9b3a3524db"} Feb 17 20:34:41 crc kubenswrapper[4793]: I0217 20:34:41.487800 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xqsnt" event={"ID":"d4dc44e3-7655-42be-9769-76016b25afb3","Type":"ContainerDied","Data":"4f0a3a7f9f291ca4023ddaff847b52efcdae131efee104fdd2292f28cf14408b"} Feb 17 20:34:41 crc kubenswrapper[4793]: I0217 20:34:41.487822 4793 scope.go:117] "RemoveContainer" containerID="3fa54be1e30e1abc2db4b117fa6feff067d63fc9bd1d28b179f7da9b3a3524db" Feb 17 20:34:41 crc kubenswrapper[4793]: I0217 20:34:41.487864 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xqsnt" Feb 17 20:34:41 crc kubenswrapper[4793]: I0217 20:34:41.523127 4793 scope.go:117] "RemoveContainer" containerID="5311dc2ad5c9c68a9066eaa04f6063597f53c4a1e7cdf9ab25b144d5ca5a7b51" Feb 17 20:34:41 crc kubenswrapper[4793]: I0217 20:34:41.532550 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xqsnt"] Feb 17 20:34:41 crc kubenswrapper[4793]: I0217 20:34:41.549984 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xqsnt"] Feb 17 20:34:41 crc kubenswrapper[4793]: I0217 20:34:41.581480 4793 scope.go:117] "RemoveContainer" containerID="8780ac42942683e9a182ff0376fcc9bb46f4d041a34900e606bbc5c4e021b05b" Feb 17 20:34:41 crc kubenswrapper[4793]: I0217 20:34:41.608993 4793 scope.go:117] "RemoveContainer" containerID="3fa54be1e30e1abc2db4b117fa6feff067d63fc9bd1d28b179f7da9b3a3524db" Feb 17 20:34:41 crc kubenswrapper[4793]: E0217 20:34:41.609535 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fa54be1e30e1abc2db4b117fa6feff067d63fc9bd1d28b179f7da9b3a3524db\": container with ID starting with 3fa54be1e30e1abc2db4b117fa6feff067d63fc9bd1d28b179f7da9b3a3524db not found: ID does not exist" containerID="3fa54be1e30e1abc2db4b117fa6feff067d63fc9bd1d28b179f7da9b3a3524db" Feb 17 20:34:41 crc kubenswrapper[4793]: I0217 20:34:41.609566 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fa54be1e30e1abc2db4b117fa6feff067d63fc9bd1d28b179f7da9b3a3524db"} err="failed to get container status \"3fa54be1e30e1abc2db4b117fa6feff067d63fc9bd1d28b179f7da9b3a3524db\": rpc error: code = NotFound desc = could not find container \"3fa54be1e30e1abc2db4b117fa6feff067d63fc9bd1d28b179f7da9b3a3524db\": container with ID starting with 3fa54be1e30e1abc2db4b117fa6feff067d63fc9bd1d28b179f7da9b3a3524db not found: ID does 
not exist" Feb 17 20:34:41 crc kubenswrapper[4793]: I0217 20:34:41.609585 4793 scope.go:117] "RemoveContainer" containerID="5311dc2ad5c9c68a9066eaa04f6063597f53c4a1e7cdf9ab25b144d5ca5a7b51" Feb 17 20:34:41 crc kubenswrapper[4793]: E0217 20:34:41.610761 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5311dc2ad5c9c68a9066eaa04f6063597f53c4a1e7cdf9ab25b144d5ca5a7b51\": container with ID starting with 5311dc2ad5c9c68a9066eaa04f6063597f53c4a1e7cdf9ab25b144d5ca5a7b51 not found: ID does not exist" containerID="5311dc2ad5c9c68a9066eaa04f6063597f53c4a1e7cdf9ab25b144d5ca5a7b51" Feb 17 20:34:41 crc kubenswrapper[4793]: I0217 20:34:41.610779 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5311dc2ad5c9c68a9066eaa04f6063597f53c4a1e7cdf9ab25b144d5ca5a7b51"} err="failed to get container status \"5311dc2ad5c9c68a9066eaa04f6063597f53c4a1e7cdf9ab25b144d5ca5a7b51\": rpc error: code = NotFound desc = could not find container \"5311dc2ad5c9c68a9066eaa04f6063597f53c4a1e7cdf9ab25b144d5ca5a7b51\": container with ID starting with 5311dc2ad5c9c68a9066eaa04f6063597f53c4a1e7cdf9ab25b144d5ca5a7b51 not found: ID does not exist" Feb 17 20:34:41 crc kubenswrapper[4793]: I0217 20:34:41.610791 4793 scope.go:117] "RemoveContainer" containerID="8780ac42942683e9a182ff0376fcc9bb46f4d041a34900e606bbc5c4e021b05b" Feb 17 20:34:41 crc kubenswrapper[4793]: E0217 20:34:41.613585 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8780ac42942683e9a182ff0376fcc9bb46f4d041a34900e606bbc5c4e021b05b\": container with ID starting with 8780ac42942683e9a182ff0376fcc9bb46f4d041a34900e606bbc5c4e021b05b not found: ID does not exist" containerID="8780ac42942683e9a182ff0376fcc9bb46f4d041a34900e606bbc5c4e021b05b" Feb 17 20:34:41 crc kubenswrapper[4793]: I0217 20:34:41.613601 4793 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8780ac42942683e9a182ff0376fcc9bb46f4d041a34900e606bbc5c4e021b05b"} err="failed to get container status \"8780ac42942683e9a182ff0376fcc9bb46f4d041a34900e606bbc5c4e021b05b\": rpc error: code = NotFound desc = could not find container \"8780ac42942683e9a182ff0376fcc9bb46f4d041a34900e606bbc5c4e021b05b\": container with ID starting with 8780ac42942683e9a182ff0376fcc9bb46f4d041a34900e606bbc5c4e021b05b not found: ID does not exist" Feb 17 20:34:41 crc kubenswrapper[4793]: I0217 20:34:41.769389 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xr8gd"] Feb 17 20:34:41 crc kubenswrapper[4793]: E0217 20:34:41.769970 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4dc44e3-7655-42be-9769-76016b25afb3" containerName="registry-server" Feb 17 20:34:41 crc kubenswrapper[4793]: I0217 20:34:41.769986 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4dc44e3-7655-42be-9769-76016b25afb3" containerName="registry-server" Feb 17 20:34:41 crc kubenswrapper[4793]: E0217 20:34:41.770004 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4dc44e3-7655-42be-9769-76016b25afb3" containerName="extract-utilities" Feb 17 20:34:41 crc kubenswrapper[4793]: I0217 20:34:41.770010 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4dc44e3-7655-42be-9769-76016b25afb3" containerName="extract-utilities" Feb 17 20:34:41 crc kubenswrapper[4793]: E0217 20:34:41.770021 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4dc44e3-7655-42be-9769-76016b25afb3" containerName="extract-content" Feb 17 20:34:41 crc kubenswrapper[4793]: I0217 20:34:41.770029 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4dc44e3-7655-42be-9769-76016b25afb3" containerName="extract-content" Feb 17 20:34:41 crc kubenswrapper[4793]: I0217 20:34:41.770218 4793 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d4dc44e3-7655-42be-9769-76016b25afb3" containerName="registry-server" Feb 17 20:34:41 crc kubenswrapper[4793]: I0217 20:34:41.771585 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xr8gd" Feb 17 20:34:41 crc kubenswrapper[4793]: I0217 20:34:41.784318 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xr8gd"] Feb 17 20:34:41 crc kubenswrapper[4793]: I0217 20:34:41.900005 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a-catalog-content\") pod \"community-operators-xr8gd\" (UID: \"7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a\") " pod="openshift-marketplace/community-operators-xr8gd" Feb 17 20:34:41 crc kubenswrapper[4793]: I0217 20:34:41.900199 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfd5c\" (UniqueName: \"kubernetes.io/projected/7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a-kube-api-access-jfd5c\") pod \"community-operators-xr8gd\" (UID: \"7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a\") " pod="openshift-marketplace/community-operators-xr8gd" Feb 17 20:34:41 crc kubenswrapper[4793]: I0217 20:34:41.900550 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a-utilities\") pod \"community-operators-xr8gd\" (UID: \"7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a\") " pod="openshift-marketplace/community-operators-xr8gd" Feb 17 20:34:42 crc kubenswrapper[4793]: I0217 20:34:42.002711 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a-utilities\") pod \"community-operators-xr8gd\" (UID: 
\"7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a\") " pod="openshift-marketplace/community-operators-xr8gd" Feb 17 20:34:42 crc kubenswrapper[4793]: I0217 20:34:42.003007 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a-catalog-content\") pod \"community-operators-xr8gd\" (UID: \"7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a\") " pod="openshift-marketplace/community-operators-xr8gd" Feb 17 20:34:42 crc kubenswrapper[4793]: I0217 20:34:42.003111 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfd5c\" (UniqueName: \"kubernetes.io/projected/7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a-kube-api-access-jfd5c\") pod \"community-operators-xr8gd\" (UID: \"7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a\") " pod="openshift-marketplace/community-operators-xr8gd" Feb 17 20:34:42 crc kubenswrapper[4793]: I0217 20:34:42.003280 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a-utilities\") pod \"community-operators-xr8gd\" (UID: \"7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a\") " pod="openshift-marketplace/community-operators-xr8gd" Feb 17 20:34:42 crc kubenswrapper[4793]: I0217 20:34:42.003303 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a-catalog-content\") pod \"community-operators-xr8gd\" (UID: \"7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a\") " pod="openshift-marketplace/community-operators-xr8gd" Feb 17 20:34:42 crc kubenswrapper[4793]: I0217 20:34:42.027537 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfd5c\" (UniqueName: \"kubernetes.io/projected/7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a-kube-api-access-jfd5c\") pod \"community-operators-xr8gd\" (UID: 
\"7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a\") " pod="openshift-marketplace/community-operators-xr8gd" Feb 17 20:34:42 crc kubenswrapper[4793]: I0217 20:34:42.087618 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xr8gd" Feb 17 20:34:42 crc kubenswrapper[4793]: I0217 20:34:42.584315 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xr8gd"] Feb 17 20:34:43 crc kubenswrapper[4793]: I0217 20:34:43.513624 4793 generic.go:334] "Generic (PLEG): container finished" podID="7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a" containerID="0a38587431cd828d970839f503a46e45d8472c6428a39cf2f0cfc0b906febb59" exitCode=0 Feb 17 20:34:43 crc kubenswrapper[4793]: I0217 20:34:43.514029 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xr8gd" event={"ID":"7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a","Type":"ContainerDied","Data":"0a38587431cd828d970839f503a46e45d8472c6428a39cf2f0cfc0b906febb59"} Feb 17 20:34:43 crc kubenswrapper[4793]: I0217 20:34:43.514079 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xr8gd" event={"ID":"7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a","Type":"ContainerStarted","Data":"3e376b606d2e0b7ac43250d5be7df3c28ce70c3a5d0584c008b03939c2d3c83a"} Feb 17 20:34:43 crc kubenswrapper[4793]: I0217 20:34:43.568225 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4dc44e3-7655-42be-9769-76016b25afb3" path="/var/lib/kubelet/pods/d4dc44e3-7655-42be-9769-76016b25afb3/volumes" Feb 17 20:34:44 crc kubenswrapper[4793]: I0217 20:34:44.523921 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xr8gd" event={"ID":"7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a","Type":"ContainerStarted","Data":"3cd6de01f7ed915be93bca06f2ea807744470325c280dccf88f19bed76e49d92"} Feb 17 20:34:45 crc kubenswrapper[4793]: I0217 
20:34:45.534782 4793 generic.go:334] "Generic (PLEG): container finished" podID="7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a" containerID="3cd6de01f7ed915be93bca06f2ea807744470325c280dccf88f19bed76e49d92" exitCode=0 Feb 17 20:34:45 crc kubenswrapper[4793]: I0217 20:34:45.534844 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xr8gd" event={"ID":"7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a","Type":"ContainerDied","Data":"3cd6de01f7ed915be93bca06f2ea807744470325c280dccf88f19bed76e49d92"} Feb 17 20:34:46 crc kubenswrapper[4793]: I0217 20:34:46.548861 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xr8gd" event={"ID":"7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a","Type":"ContainerStarted","Data":"e10e56187a927a2cbc6047d7f70d8fe5a22e6ab9adb70d520b955743efdaf099"} Feb 17 20:34:46 crc kubenswrapper[4793]: I0217 20:34:46.572459 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xr8gd" podStartSLOduration=3.06163316 podStartE2EDuration="5.572439107s" podCreationTimestamp="2026-02-17 20:34:41 +0000 UTC" firstStartedPulling="2026-02-17 20:34:43.516870773 +0000 UTC m=+1558.808569124" lastFinishedPulling="2026-02-17 20:34:46.02767676 +0000 UTC m=+1561.319375071" observedRunningTime="2026-02-17 20:34:46.563236555 +0000 UTC m=+1561.854934876" watchObservedRunningTime="2026-02-17 20:34:46.572439107 +0000 UTC m=+1561.864137418" Feb 17 20:34:52 crc kubenswrapper[4793]: I0217 20:34:52.087780 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xr8gd" Feb 17 20:34:52 crc kubenswrapper[4793]: I0217 20:34:52.088416 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xr8gd" Feb 17 20:34:52 crc kubenswrapper[4793]: I0217 20:34:52.151037 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/community-operators-xr8gd" Feb 17 20:34:52 crc kubenswrapper[4793]: I0217 20:34:52.758297 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xr8gd" Feb 17 20:34:52 crc kubenswrapper[4793]: I0217 20:34:52.816632 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xr8gd"] Feb 17 20:34:53 crc kubenswrapper[4793]: I0217 20:34:53.539039 4793 scope.go:117] "RemoveContainer" containerID="e8740416c581b51430af3b22759513fb6d7b5e33952ed72a05afaa7e49bf8b6f" Feb 17 20:34:53 crc kubenswrapper[4793]: E0217 20:34:53.539724 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 20:34:54 crc kubenswrapper[4793]: I0217 20:34:54.631576 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xr8gd" podUID="7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a" containerName="registry-server" containerID="cri-o://e10e56187a927a2cbc6047d7f70d8fe5a22e6ab9adb70d520b955743efdaf099" gracePeriod=2 Feb 17 20:34:55 crc kubenswrapper[4793]: I0217 20:34:55.547201 4793 scope.go:117] "RemoveContainer" containerID="486d2468c54694f45044fc5cf6d16102a606873e9fdb9f9c155191ee91950299" Feb 17 20:34:55 crc kubenswrapper[4793]: E0217 20:34:55.548350 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" 
pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:34:55 crc kubenswrapper[4793]: I0217 20:34:55.646144 4793 generic.go:334] "Generic (PLEG): container finished" podID="7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a" containerID="e10e56187a927a2cbc6047d7f70d8fe5a22e6ab9adb70d520b955743efdaf099" exitCode=0 Feb 17 20:34:55 crc kubenswrapper[4793]: I0217 20:34:55.646216 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xr8gd" event={"ID":"7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a","Type":"ContainerDied","Data":"e10e56187a927a2cbc6047d7f70d8fe5a22e6ab9adb70d520b955743efdaf099"} Feb 17 20:34:55 crc kubenswrapper[4793]: I0217 20:34:55.646272 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xr8gd" event={"ID":"7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a","Type":"ContainerDied","Data":"3e376b606d2e0b7ac43250d5be7df3c28ce70c3a5d0584c008b03939c2d3c83a"} Feb 17 20:34:55 crc kubenswrapper[4793]: I0217 20:34:55.646287 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e376b606d2e0b7ac43250d5be7df3c28ce70c3a5d0584c008b03939c2d3c83a" Feb 17 20:34:55 crc kubenswrapper[4793]: I0217 20:34:55.722851 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xr8gd" Feb 17 20:34:55 crc kubenswrapper[4793]: I0217 20:34:55.917948 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a-utilities\") pod \"7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a\" (UID: \"7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a\") " Feb 17 20:34:55 crc kubenswrapper[4793]: I0217 20:34:55.918069 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfd5c\" (UniqueName: \"kubernetes.io/projected/7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a-kube-api-access-jfd5c\") pod \"7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a\" (UID: \"7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a\") " Feb 17 20:34:55 crc kubenswrapper[4793]: I0217 20:34:55.918162 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a-catalog-content\") pod \"7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a\" (UID: \"7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a\") " Feb 17 20:34:55 crc kubenswrapper[4793]: I0217 20:34:55.918956 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a-utilities" (OuterVolumeSpecName: "utilities") pod "7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a" (UID: "7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:34:55 crc kubenswrapper[4793]: I0217 20:34:55.932821 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a-kube-api-access-jfd5c" (OuterVolumeSpecName: "kube-api-access-jfd5c") pod "7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a" (UID: "7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a"). InnerVolumeSpecName "kube-api-access-jfd5c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:34:55 crc kubenswrapper[4793]: I0217 20:34:55.969413 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a" (UID: "7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:34:56 crc kubenswrapper[4793]: I0217 20:34:56.020770 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 20:34:56 crc kubenswrapper[4793]: I0217 20:34:56.020825 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 20:34:56 crc kubenswrapper[4793]: I0217 20:34:56.020845 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfd5c\" (UniqueName: \"kubernetes.io/projected/7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a-kube-api-access-jfd5c\") on node \"crc\" DevicePath \"\"" Feb 17 20:34:56 crc kubenswrapper[4793]: I0217 20:34:56.663516 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xr8gd" Feb 17 20:34:56 crc kubenswrapper[4793]: I0217 20:34:56.728886 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xr8gd"] Feb 17 20:34:56 crc kubenswrapper[4793]: I0217 20:34:56.746799 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xr8gd"] Feb 17 20:34:57 crc kubenswrapper[4793]: I0217 20:34:57.555114 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a" path="/var/lib/kubelet/pods/7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a/volumes" Feb 17 20:35:04 crc kubenswrapper[4793]: I0217 20:35:04.539020 4793 scope.go:117] "RemoveContainer" containerID="e8740416c581b51430af3b22759513fb6d7b5e33952ed72a05afaa7e49bf8b6f" Feb 17 20:35:04 crc kubenswrapper[4793]: E0217 20:35:04.539679 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 20:35:10 crc kubenswrapper[4793]: I0217 20:35:10.539245 4793 scope.go:117] "RemoveContainer" containerID="486d2468c54694f45044fc5cf6d16102a606873e9fdb9f9c155191ee91950299" Feb 17 20:35:10 crc kubenswrapper[4793]: E0217 20:35:10.540388 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:35:19 crc kubenswrapper[4793]: I0217 
20:35:19.539671 4793 scope.go:117] "RemoveContainer" containerID="e8740416c581b51430af3b22759513fb6d7b5e33952ed72a05afaa7e49bf8b6f" Feb 17 20:35:19 crc kubenswrapper[4793]: E0217 20:35:19.541126 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 20:35:20 crc kubenswrapper[4793]: I0217 20:35:20.037907 4793 scope.go:117] "RemoveContainer" containerID="f9a4920bf60fcc401d551336b9fb3d17f20e49199377562f74992b26e95b66da" Feb 17 20:35:20 crc kubenswrapper[4793]: I0217 20:35:20.068756 4793 scope.go:117] "RemoveContainer" containerID="a48964a866b21be64204b1cc09b433db05bc8630b3c53114f564d2fb9f52011c" Feb 17 20:35:20 crc kubenswrapper[4793]: I0217 20:35:20.100985 4793 scope.go:117] "RemoveContainer" containerID="4e1f03a2b47c21712b26f32e71922d2ae0c87e536b4ee6a9db714eeb0c27049b" Feb 17 20:35:20 crc kubenswrapper[4793]: I0217 20:35:20.135673 4793 scope.go:117] "RemoveContainer" containerID="bbea6c983542a633f05b4da44d62ff2358fd2033212474a5abb090bb6f305449" Feb 17 20:35:20 crc kubenswrapper[4793]: I0217 20:35:20.166956 4793 scope.go:117] "RemoveContainer" containerID="2c7d7ae24923f07174e4419580d846cf81882a749515366cc9b4117200f25f95" Feb 17 20:35:21 crc kubenswrapper[4793]: I0217 20:35:21.538615 4793 scope.go:117] "RemoveContainer" containerID="486d2468c54694f45044fc5cf6d16102a606873e9fdb9f9c155191ee91950299" Feb 17 20:35:21 crc kubenswrapper[4793]: E0217 20:35:21.539165 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier 
pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:35:33 crc kubenswrapper[4793]: I0217 20:35:33.538628 4793 scope.go:117] "RemoveContainer" containerID="e8740416c581b51430af3b22759513fb6d7b5e33952ed72a05afaa7e49bf8b6f" Feb 17 20:35:33 crc kubenswrapper[4793]: E0217 20:35:33.539434 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 20:35:35 crc kubenswrapper[4793]: I0217 20:35:35.546801 4793 scope.go:117] "RemoveContainer" containerID="486d2468c54694f45044fc5cf6d16102a606873e9fdb9f9c155191ee91950299" Feb 17 20:35:35 crc kubenswrapper[4793]: E0217 20:35:35.547473 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:35:38 crc kubenswrapper[4793]: I0217 20:35:38.892232 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jjhd5"] Feb 17 20:35:38 crc kubenswrapper[4793]: E0217 20:35:38.893399 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a" containerName="extract-content" Feb 17 20:35:38 crc kubenswrapper[4793]: I0217 20:35:38.893431 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a" containerName="extract-content" Feb 17 20:35:38 
crc kubenswrapper[4793]: E0217 20:35:38.893530 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a" containerName="registry-server" Feb 17 20:35:38 crc kubenswrapper[4793]: I0217 20:35:38.893550 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a" containerName="registry-server" Feb 17 20:35:38 crc kubenswrapper[4793]: E0217 20:35:38.893606 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a" containerName="extract-utilities" Feb 17 20:35:38 crc kubenswrapper[4793]: I0217 20:35:38.893623 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a" containerName="extract-utilities" Feb 17 20:35:38 crc kubenswrapper[4793]: I0217 20:35:38.894164 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e64fe70-bc8c-4104-8ae8-6a42d3d8a67a" containerName="registry-server" Feb 17 20:35:38 crc kubenswrapper[4793]: I0217 20:35:38.897618 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jjhd5" Feb 17 20:35:38 crc kubenswrapper[4793]: I0217 20:35:38.908986 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jjhd5"] Feb 17 20:35:38 crc kubenswrapper[4793]: I0217 20:35:38.954774 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c648b90-a27b-44f1-a927-a44649e5d14f-catalog-content\") pod \"certified-operators-jjhd5\" (UID: \"4c648b90-a27b-44f1-a927-a44649e5d14f\") " pod="openshift-marketplace/certified-operators-jjhd5" Feb 17 20:35:38 crc kubenswrapper[4793]: I0217 20:35:38.955078 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c648b90-a27b-44f1-a927-a44649e5d14f-utilities\") pod \"certified-operators-jjhd5\" (UID: \"4c648b90-a27b-44f1-a927-a44649e5d14f\") " pod="openshift-marketplace/certified-operators-jjhd5" Feb 17 20:35:38 crc kubenswrapper[4793]: I0217 20:35:38.955121 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b9sw\" (UniqueName: \"kubernetes.io/projected/4c648b90-a27b-44f1-a927-a44649e5d14f-kube-api-access-6b9sw\") pod \"certified-operators-jjhd5\" (UID: \"4c648b90-a27b-44f1-a927-a44649e5d14f\") " pod="openshift-marketplace/certified-operators-jjhd5" Feb 17 20:35:39 crc kubenswrapper[4793]: I0217 20:35:39.056988 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c648b90-a27b-44f1-a927-a44649e5d14f-catalog-content\") pod \"certified-operators-jjhd5\" (UID: \"4c648b90-a27b-44f1-a927-a44649e5d14f\") " pod="openshift-marketplace/certified-operators-jjhd5" Feb 17 20:35:39 crc kubenswrapper[4793]: I0217 20:35:39.057054 4793 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c648b90-a27b-44f1-a927-a44649e5d14f-utilities\") pod \"certified-operators-jjhd5\" (UID: \"4c648b90-a27b-44f1-a927-a44649e5d14f\") " pod="openshift-marketplace/certified-operators-jjhd5" Feb 17 20:35:39 crc kubenswrapper[4793]: I0217 20:35:39.057087 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6b9sw\" (UniqueName: \"kubernetes.io/projected/4c648b90-a27b-44f1-a927-a44649e5d14f-kube-api-access-6b9sw\") pod \"certified-operators-jjhd5\" (UID: \"4c648b90-a27b-44f1-a927-a44649e5d14f\") " pod="openshift-marketplace/certified-operators-jjhd5" Feb 17 20:35:39 crc kubenswrapper[4793]: I0217 20:35:39.057673 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c648b90-a27b-44f1-a927-a44649e5d14f-catalog-content\") pod \"certified-operators-jjhd5\" (UID: \"4c648b90-a27b-44f1-a927-a44649e5d14f\") " pod="openshift-marketplace/certified-operators-jjhd5" Feb 17 20:35:39 crc kubenswrapper[4793]: I0217 20:35:39.057760 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c648b90-a27b-44f1-a927-a44649e5d14f-utilities\") pod \"certified-operators-jjhd5\" (UID: \"4c648b90-a27b-44f1-a927-a44649e5d14f\") " pod="openshift-marketplace/certified-operators-jjhd5" Feb 17 20:35:39 crc kubenswrapper[4793]: I0217 20:35:39.084542 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6b9sw\" (UniqueName: \"kubernetes.io/projected/4c648b90-a27b-44f1-a927-a44649e5d14f-kube-api-access-6b9sw\") pod \"certified-operators-jjhd5\" (UID: \"4c648b90-a27b-44f1-a927-a44649e5d14f\") " pod="openshift-marketplace/certified-operators-jjhd5" Feb 17 20:35:39 crc kubenswrapper[4793]: I0217 20:35:39.235764 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jjhd5" Feb 17 20:35:39 crc kubenswrapper[4793]: I0217 20:35:39.770617 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jjhd5"] Feb 17 20:35:40 crc kubenswrapper[4793]: I0217 20:35:40.191667 4793 generic.go:334] "Generic (PLEG): container finished" podID="4c648b90-a27b-44f1-a927-a44649e5d14f" containerID="3dbb411b1470f172ceae9cadc5bc3b8eeee409f3411e0d8cd48d9c1114380d3d" exitCode=0 Feb 17 20:35:40 crc kubenswrapper[4793]: I0217 20:35:40.192071 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jjhd5" event={"ID":"4c648b90-a27b-44f1-a927-a44649e5d14f","Type":"ContainerDied","Data":"3dbb411b1470f172ceae9cadc5bc3b8eeee409f3411e0d8cd48d9c1114380d3d"} Feb 17 20:35:40 crc kubenswrapper[4793]: I0217 20:35:40.192109 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jjhd5" event={"ID":"4c648b90-a27b-44f1-a927-a44649e5d14f","Type":"ContainerStarted","Data":"59570854758f4aa880c7e127473fd1178ad6944302e85c0a69808e03885b9591"} Feb 17 20:35:41 crc kubenswrapper[4793]: I0217 20:35:41.206404 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jjhd5" event={"ID":"4c648b90-a27b-44f1-a927-a44649e5d14f","Type":"ContainerStarted","Data":"7250a233f3c02c0d500c829d027e78dc9b48abc765c675aa6f4447c3782de317"} Feb 17 20:35:42 crc kubenswrapper[4793]: I0217 20:35:42.223321 4793 generic.go:334] "Generic (PLEG): container finished" podID="4c648b90-a27b-44f1-a927-a44649e5d14f" containerID="7250a233f3c02c0d500c829d027e78dc9b48abc765c675aa6f4447c3782de317" exitCode=0 Feb 17 20:35:42 crc kubenswrapper[4793]: I0217 20:35:42.223406 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jjhd5" 
event={"ID":"4c648b90-a27b-44f1-a927-a44649e5d14f","Type":"ContainerDied","Data":"7250a233f3c02c0d500c829d027e78dc9b48abc765c675aa6f4447c3782de317"} Feb 17 20:35:43 crc kubenswrapper[4793]: I0217 20:35:43.234086 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jjhd5" event={"ID":"4c648b90-a27b-44f1-a927-a44649e5d14f","Type":"ContainerStarted","Data":"46460fd76204dc4d5a470603aa456342bf4ad34d5227adcca322c0b0aaadb8b5"} Feb 17 20:35:43 crc kubenswrapper[4793]: I0217 20:35:43.253796 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jjhd5" podStartSLOduration=2.772664211 podStartE2EDuration="5.253775532s" podCreationTimestamp="2026-02-17 20:35:38 +0000 UTC" firstStartedPulling="2026-02-17 20:35:40.199097029 +0000 UTC m=+1615.490795370" lastFinishedPulling="2026-02-17 20:35:42.68020835 +0000 UTC m=+1617.971906691" observedRunningTime="2026-02-17 20:35:43.249029517 +0000 UTC m=+1618.540727828" watchObservedRunningTime="2026-02-17 20:35:43.253775532 +0000 UTC m=+1618.545473843" Feb 17 20:35:48 crc kubenswrapper[4793]: I0217 20:35:48.539900 4793 scope.go:117] "RemoveContainer" containerID="e8740416c581b51430af3b22759513fb6d7b5e33952ed72a05afaa7e49bf8b6f" Feb 17 20:35:48 crc kubenswrapper[4793]: E0217 20:35:48.541082 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 20:35:49 crc kubenswrapper[4793]: I0217 20:35:49.237726 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jjhd5" Feb 17 20:35:49 crc 
kubenswrapper[4793]: I0217 20:35:49.237801 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jjhd5" Feb 17 20:35:49 crc kubenswrapper[4793]: I0217 20:35:49.320851 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jjhd5" Feb 17 20:35:49 crc kubenswrapper[4793]: I0217 20:35:49.431827 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jjhd5" Feb 17 20:35:49 crc kubenswrapper[4793]: I0217 20:35:49.573220 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jjhd5"] Feb 17 20:35:50 crc kubenswrapper[4793]: I0217 20:35:50.539203 4793 scope.go:117] "RemoveContainer" containerID="486d2468c54694f45044fc5cf6d16102a606873e9fdb9f9c155191ee91950299" Feb 17 20:35:50 crc kubenswrapper[4793]: E0217 20:35:50.539835 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:35:51 crc kubenswrapper[4793]: I0217 20:35:51.050580 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-a574-account-create-update-vdv59"] Feb 17 20:35:51 crc kubenswrapper[4793]: I0217 20:35:51.102334 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-create-msx9t"] Feb 17 20:35:51 crc kubenswrapper[4793]: I0217 20:35:51.113626 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-jkj8g"] Feb 17 20:35:51 crc kubenswrapper[4793]: I0217 20:35:51.125136 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-pc6kf"] Feb 17 20:35:51 crc 
kubenswrapper[4793]: I0217 20:35:51.132220 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-0ff3-account-create-update-bnxqh"] Feb 17 20:35:51 crc kubenswrapper[4793]: I0217 20:35:51.140000 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-create-msx9t"] Feb 17 20:35:51 crc kubenswrapper[4793]: I0217 20:35:51.147547 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-eac5-account-create-update-gmxcc"] Feb 17 20:35:51 crc kubenswrapper[4793]: I0217 20:35:51.154961 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-a574-account-create-update-vdv59"] Feb 17 20:35:51 crc kubenswrapper[4793]: I0217 20:35:51.162588 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-jkj8g"] Feb 17 20:35:51 crc kubenswrapper[4793]: I0217 20:35:51.170400 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-pc6kf"] Feb 17 20:35:51 crc kubenswrapper[4793]: I0217 20:35:51.178050 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-0ff3-account-create-update-bnxqh"] Feb 17 20:35:51 crc kubenswrapper[4793]: I0217 20:35:51.186245 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-eac5-account-create-update-gmxcc"] Feb 17 20:35:51 crc kubenswrapper[4793]: I0217 20:35:51.370601 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jjhd5" podUID="4c648b90-a27b-44f1-a927-a44649e5d14f" containerName="registry-server" containerID="cri-o://46460fd76204dc4d5a470603aa456342bf4ad34d5227adcca322c0b0aaadb8b5" gracePeriod=2 Feb 17 20:35:51 crc kubenswrapper[4793]: I0217 20:35:51.564171 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21f7d935-51e8-4a7d-90fe-1248b48d8361" path="/var/lib/kubelet/pods/21f7d935-51e8-4a7d-90fe-1248b48d8361/volumes" Feb 17 20:35:51 crc 
kubenswrapper[4793]: I0217 20:35:51.567721 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47ca7f4a-8678-4fb0-ac5b-726a4cf6a674" path="/var/lib/kubelet/pods/47ca7f4a-8678-4fb0-ac5b-726a4cf6a674/volumes"
Feb 17 20:35:51 crc kubenswrapper[4793]: I0217 20:35:51.568860 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4adfee96-44d3-49a6-a577-222950b89117" path="/var/lib/kubelet/pods/4adfee96-44d3-49a6-a577-222950b89117/volumes"
Feb 17 20:35:51 crc kubenswrapper[4793]: I0217 20:35:51.571115 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c5c72e3-f686-46fa-ac5d-684106630bd6" path="/var/lib/kubelet/pods/4c5c72e3-f686-46fa-ac5d-684106630bd6/volumes"
Feb 17 20:35:51 crc kubenswrapper[4793]: I0217 20:35:51.572288 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9f641fd-f904-45da-9da3-6e0d1545fc8c" path="/var/lib/kubelet/pods/a9f641fd-f904-45da-9da3-6e0d1545fc8c/volumes"
Feb 17 20:35:51 crc kubenswrapper[4793]: I0217 20:35:51.574539 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1332e2d-624b-4d26-a53e-6b62e5bb3a86" path="/var/lib/kubelet/pods/b1332e2d-624b-4d26-a53e-6b62e5bb3a86/volumes"
Feb 17 20:35:51 crc kubenswrapper[4793]: I0217 20:35:51.922878 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jjhd5"
Feb 17 20:35:51 crc kubenswrapper[4793]: I0217 20:35:51.937109 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6b9sw\" (UniqueName: \"kubernetes.io/projected/4c648b90-a27b-44f1-a927-a44649e5d14f-kube-api-access-6b9sw\") pod \"4c648b90-a27b-44f1-a927-a44649e5d14f\" (UID: \"4c648b90-a27b-44f1-a927-a44649e5d14f\") "
Feb 17 20:35:51 crc kubenswrapper[4793]: I0217 20:35:51.937165 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c648b90-a27b-44f1-a927-a44649e5d14f-utilities\") pod \"4c648b90-a27b-44f1-a927-a44649e5d14f\" (UID: \"4c648b90-a27b-44f1-a927-a44649e5d14f\") "
Feb 17 20:35:51 crc kubenswrapper[4793]: I0217 20:35:51.937420 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c648b90-a27b-44f1-a927-a44649e5d14f-catalog-content\") pod \"4c648b90-a27b-44f1-a927-a44649e5d14f\" (UID: \"4c648b90-a27b-44f1-a927-a44649e5d14f\") "
Feb 17 20:35:51 crc kubenswrapper[4793]: I0217 20:35:51.938809 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c648b90-a27b-44f1-a927-a44649e5d14f-utilities" (OuterVolumeSpecName: "utilities") pod "4c648b90-a27b-44f1-a927-a44649e5d14f" (UID: "4c648b90-a27b-44f1-a927-a44649e5d14f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 20:35:51 crc kubenswrapper[4793]: I0217 20:35:51.950984 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c648b90-a27b-44f1-a927-a44649e5d14f-kube-api-access-6b9sw" (OuterVolumeSpecName: "kube-api-access-6b9sw") pod "4c648b90-a27b-44f1-a927-a44649e5d14f" (UID: "4c648b90-a27b-44f1-a927-a44649e5d14f"). InnerVolumeSpecName "kube-api-access-6b9sw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 20:35:51 crc kubenswrapper[4793]: I0217 20:35:51.993624 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c648b90-a27b-44f1-a927-a44649e5d14f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4c648b90-a27b-44f1-a927-a44649e5d14f" (UID: "4c648b90-a27b-44f1-a927-a44649e5d14f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 20:35:52 crc kubenswrapper[4793]: I0217 20:35:52.039537 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6b9sw\" (UniqueName: \"kubernetes.io/projected/4c648b90-a27b-44f1-a927-a44649e5d14f-kube-api-access-6b9sw\") on node \"crc\" DevicePath \"\""
Feb 17 20:35:52 crc kubenswrapper[4793]: I0217 20:35:52.039580 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c648b90-a27b-44f1-a927-a44649e5d14f-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 20:35:52 crc kubenswrapper[4793]: I0217 20:35:52.039589 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c648b90-a27b-44f1-a927-a44649e5d14f-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 20:35:52 crc kubenswrapper[4793]: I0217 20:35:52.387641 4793 generic.go:334] "Generic (PLEG): container finished" podID="4c648b90-a27b-44f1-a927-a44649e5d14f" containerID="46460fd76204dc4d5a470603aa456342bf4ad34d5227adcca322c0b0aaadb8b5" exitCode=0
Feb 17 20:35:52 crc kubenswrapper[4793]: I0217 20:35:52.387680 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jjhd5" event={"ID":"4c648b90-a27b-44f1-a927-a44649e5d14f","Type":"ContainerDied","Data":"46460fd76204dc4d5a470603aa456342bf4ad34d5227adcca322c0b0aaadb8b5"}
Feb 17 20:35:52 crc kubenswrapper[4793]: I0217 20:35:52.387723 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jjhd5" event={"ID":"4c648b90-a27b-44f1-a927-a44649e5d14f","Type":"ContainerDied","Data":"59570854758f4aa880c7e127473fd1178ad6944302e85c0a69808e03885b9591"}
Feb 17 20:35:52 crc kubenswrapper[4793]: I0217 20:35:52.387740 4793 scope.go:117] "RemoveContainer" containerID="46460fd76204dc4d5a470603aa456342bf4ad34d5227adcca322c0b0aaadb8b5"
Feb 17 20:35:52 crc kubenswrapper[4793]: I0217 20:35:52.387772 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jjhd5"
Feb 17 20:35:52 crc kubenswrapper[4793]: I0217 20:35:52.435003 4793 scope.go:117] "RemoveContainer" containerID="7250a233f3c02c0d500c829d027e78dc9b48abc765c675aa6f4447c3782de317"
Feb 17 20:35:52 crc kubenswrapper[4793]: I0217 20:35:52.439475 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jjhd5"]
Feb 17 20:35:52 crc kubenswrapper[4793]: I0217 20:35:52.450714 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jjhd5"]
Feb 17 20:35:52 crc kubenswrapper[4793]: I0217 20:35:52.462772 4793 scope.go:117] "RemoveContainer" containerID="3dbb411b1470f172ceae9cadc5bc3b8eeee409f3411e0d8cd48d9c1114380d3d"
Feb 17 20:35:52 crc kubenswrapper[4793]: I0217 20:35:52.510727 4793 scope.go:117] "RemoveContainer" containerID="46460fd76204dc4d5a470603aa456342bf4ad34d5227adcca322c0b0aaadb8b5"
Feb 17 20:35:52 crc kubenswrapper[4793]: E0217 20:35:52.512754 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46460fd76204dc4d5a470603aa456342bf4ad34d5227adcca322c0b0aaadb8b5\": container with ID starting with 46460fd76204dc4d5a470603aa456342bf4ad34d5227adcca322c0b0aaadb8b5 not found: ID does not exist" containerID="46460fd76204dc4d5a470603aa456342bf4ad34d5227adcca322c0b0aaadb8b5"
Feb 17 20:35:52 crc kubenswrapper[4793]: I0217 20:35:52.512793 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46460fd76204dc4d5a470603aa456342bf4ad34d5227adcca322c0b0aaadb8b5"} err="failed to get container status \"46460fd76204dc4d5a470603aa456342bf4ad34d5227adcca322c0b0aaadb8b5\": rpc error: code = NotFound desc = could not find container \"46460fd76204dc4d5a470603aa456342bf4ad34d5227adcca322c0b0aaadb8b5\": container with ID starting with 46460fd76204dc4d5a470603aa456342bf4ad34d5227adcca322c0b0aaadb8b5 not found: ID does not exist"
Feb 17 20:35:52 crc kubenswrapper[4793]: I0217 20:35:52.512819 4793 scope.go:117] "RemoveContainer" containerID="7250a233f3c02c0d500c829d027e78dc9b48abc765c675aa6f4447c3782de317"
Feb 17 20:35:52 crc kubenswrapper[4793]: E0217 20:35:52.513176 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7250a233f3c02c0d500c829d027e78dc9b48abc765c675aa6f4447c3782de317\": container with ID starting with 7250a233f3c02c0d500c829d027e78dc9b48abc765c675aa6f4447c3782de317 not found: ID does not exist" containerID="7250a233f3c02c0d500c829d027e78dc9b48abc765c675aa6f4447c3782de317"
Feb 17 20:35:52 crc kubenswrapper[4793]: I0217 20:35:52.513200 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7250a233f3c02c0d500c829d027e78dc9b48abc765c675aa6f4447c3782de317"} err="failed to get container status \"7250a233f3c02c0d500c829d027e78dc9b48abc765c675aa6f4447c3782de317\": rpc error: code = NotFound desc = could not find container \"7250a233f3c02c0d500c829d027e78dc9b48abc765c675aa6f4447c3782de317\": container with ID starting with 7250a233f3c02c0d500c829d027e78dc9b48abc765c675aa6f4447c3782de317 not found: ID does not exist"
Feb 17 20:35:52 crc kubenswrapper[4793]: I0217 20:35:52.513215 4793 scope.go:117] "RemoveContainer" containerID="3dbb411b1470f172ceae9cadc5bc3b8eeee409f3411e0d8cd48d9c1114380d3d"
Feb 17 20:35:52 crc kubenswrapper[4793]: E0217 20:35:52.513582 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dbb411b1470f172ceae9cadc5bc3b8eeee409f3411e0d8cd48d9c1114380d3d\": container with ID starting with 3dbb411b1470f172ceae9cadc5bc3b8eeee409f3411e0d8cd48d9c1114380d3d not found: ID does not exist" containerID="3dbb411b1470f172ceae9cadc5bc3b8eeee409f3411e0d8cd48d9c1114380d3d"
Feb 17 20:35:52 crc kubenswrapper[4793]: I0217 20:35:52.513614 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dbb411b1470f172ceae9cadc5bc3b8eeee409f3411e0d8cd48d9c1114380d3d"} err="failed to get container status \"3dbb411b1470f172ceae9cadc5bc3b8eeee409f3411e0d8cd48d9c1114380d3d\": rpc error: code = NotFound desc = could not find container \"3dbb411b1470f172ceae9cadc5bc3b8eeee409f3411e0d8cd48d9c1114380d3d\": container with ID starting with 3dbb411b1470f172ceae9cadc5bc3b8eeee409f3411e0d8cd48d9c1114380d3d not found: ID does not exist"
Feb 17 20:35:53 crc kubenswrapper[4793]: I0217 20:35:53.162105 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-b7fqg"]
Feb 17 20:35:53 crc kubenswrapper[4793]: I0217 20:35:53.172484 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-2080-account-create-update-f57dq"]
Feb 17 20:35:53 crc kubenswrapper[4793]: I0217 20:35:53.185749 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-b7fqg"]
Feb 17 20:35:53 crc kubenswrapper[4793]: I0217 20:35:53.198070 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-2080-account-create-update-f57dq"]
Feb 17 20:35:53 crc kubenswrapper[4793]: I0217 20:35:53.550796 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2814a987-de0c-4a65-a799-2fe73e19f35a" path="/var/lib/kubelet/pods/2814a987-de0c-4a65-a799-2fe73e19f35a/volumes"
Feb 17 20:35:53 crc kubenswrapper[4793]: I0217 20:35:53.552157 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c648b90-a27b-44f1-a927-a44649e5d14f" path="/var/lib/kubelet/pods/4c648b90-a27b-44f1-a927-a44649e5d14f/volumes"
Feb 17 20:35:53 crc kubenswrapper[4793]: I0217 20:35:53.553570 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3cff25a-643c-4a27-9959-9e0b8602ea29" path="/var/lib/kubelet/pods/b3cff25a-643c-4a27-9959-9e0b8602ea29/volumes"
Feb 17 20:36:02 crc kubenswrapper[4793]: I0217 20:36:02.539433 4793 scope.go:117] "RemoveContainer" containerID="e8740416c581b51430af3b22759513fb6d7b5e33952ed72a05afaa7e49bf8b6f"
Feb 17 20:36:02 crc kubenswrapper[4793]: I0217 20:36:02.540387 4793 scope.go:117] "RemoveContainer" containerID="486d2468c54694f45044fc5cf6d16102a606873e9fdb9f9c155191ee91950299"
Feb 17 20:36:02 crc kubenswrapper[4793]: E0217 20:36:02.540835 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 20:36:02 crc kubenswrapper[4793]: E0217 20:36:02.540978 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 20:36:11 crc kubenswrapper[4793]: I0217 20:36:11.045313 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-p8wvm"]
Feb 17 20:36:11 crc kubenswrapper[4793]: I0217 20:36:11.062545 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-p8wvm"]
Feb 17 20:36:11 crc kubenswrapper[4793]: I0217 20:36:11.557390 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3f15a23-297d-442b-93aa-108559ddfa0d" path="/var/lib/kubelet/pods/a3f15a23-297d-442b-93aa-108559ddfa0d/volumes"
Feb 17 20:36:14 crc kubenswrapper[4793]: I0217 20:36:14.539469 4793 scope.go:117] "RemoveContainer" containerID="486d2468c54694f45044fc5cf6d16102a606873e9fdb9f9c155191ee91950299"
Feb 17 20:36:14 crc kubenswrapper[4793]: E0217 20:36:14.540165 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 20:36:16 crc kubenswrapper[4793]: I0217 20:36:16.539962 4793 scope.go:117] "RemoveContainer" containerID="e8740416c581b51430af3b22759513fb6d7b5e33952ed72a05afaa7e49bf8b6f"
Feb 17 20:36:16 crc kubenswrapper[4793]: E0217 20:36:16.540617 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 20:36:19 crc kubenswrapper[4793]: I0217 20:36:19.047486 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-jsvrv"]
Feb 17 20:36:19 crc kubenswrapper[4793]: I0217 20:36:19.060031 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-jsvrv"]
Feb 17 20:36:19 crc kubenswrapper[4793]: I0217 20:36:19.549248 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63c91f0e-72e7-49ec-b9ee-c68845dd7cf9" path="/var/lib/kubelet/pods/63c91f0e-72e7-49ec-b9ee-c68845dd7cf9/volumes"
Feb 17 20:36:20 crc kubenswrapper[4793]: I0217 20:36:20.295278 4793 scope.go:117] "RemoveContainer" containerID="e9d085884c8427a8fb435f34196daf568e48df87c54b36466f2362bad177edfc"
Feb 17 20:36:20 crc kubenswrapper[4793]: I0217 20:36:20.342608 4793 scope.go:117] "RemoveContainer" containerID="5c1bf1303fb79e4b4261e4bcbde55dcd26ca7da82fa1c7d7eccb55bc6f0acdd0"
Feb 17 20:36:20 crc kubenswrapper[4793]: I0217 20:36:20.430124 4793 scope.go:117] "RemoveContainer" containerID="c751b029ba524ea0fc5142dcb387c0ba1947526d2dc532adc9d449c609af33ec"
Feb 17 20:36:20 crc kubenswrapper[4793]: I0217 20:36:20.485125 4793 scope.go:117] "RemoveContainer" containerID="187f81c72c174f486c5fba46522fd0de1b70c24afcae9b38970e55eb1f4c2593"
Feb 17 20:36:20 crc kubenswrapper[4793]: I0217 20:36:20.555185 4793 scope.go:117] "RemoveContainer" containerID="25ab5d7f290182373d631df51b57c68c7fcf8d834539b10055f1d9c7d2d91048"
Feb 17 20:36:20 crc kubenswrapper[4793]: I0217 20:36:20.584216 4793 scope.go:117] "RemoveContainer" containerID="7a47c7deeb58787c35d1a82b7c2e5c9e90e800212db7abff66d92911f77b088a"
Feb 17 20:36:20 crc kubenswrapper[4793]: I0217 20:36:20.626779 4793 scope.go:117] "RemoveContainer" containerID="386862e9b36019e5d1d19f28c86a138dac428b0c379b31d351267b58510492ce"
Feb 17 20:36:20 crc kubenswrapper[4793]: I0217 20:36:20.654904 4793 scope.go:117] "RemoveContainer" containerID="ad8a20613ce09836981a9d986bb395758f70c978f32ff17b9e81739c2a49e247"
Feb 17 20:36:20 crc kubenswrapper[4793]: I0217 20:36:20.687159 4793 scope.go:117] "RemoveContainer" containerID="cb2fd5a33d659fe5cebb14ee4f364176346655f9310daaba79d67ff9d031eabb"
Feb 17 20:36:20 crc kubenswrapper[4793]: I0217 20:36:20.726041 4793 scope.go:117] "RemoveContainer" containerID="4d70da5e59c8508cfad293da6b5646c8fb31eb7c3b57da54eb5e5465e4c130d5"
Feb 17 20:36:25 crc kubenswrapper[4793]: I0217 20:36:25.051149 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-c9fsh"]
Feb 17 20:36:25 crc kubenswrapper[4793]: I0217 20:36:25.063360 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-c9fsh"]
Feb 17 20:36:25 crc kubenswrapper[4793]: I0217 20:36:25.546362 4793 scope.go:117] "RemoveContainer" containerID="486d2468c54694f45044fc5cf6d16102a606873e9fdb9f9c155191ee91950299"
Feb 17 20:36:25 crc kubenswrapper[4793]: E0217 20:36:25.546570 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 20:36:25 crc kubenswrapper[4793]: I0217 20:36:25.548974 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dc87fa8-cc3f-4e13-8449-ad8338311cf5" path="/var/lib/kubelet/pods/1dc87fa8-cc3f-4e13-8449-ad8338311cf5/volumes"
Feb 17 20:36:27 crc kubenswrapper[4793]: I0217 20:36:27.538680 4793 scope.go:117] "RemoveContainer" containerID="e8740416c581b51430af3b22759513fb6d7b5e33952ed72a05afaa7e49bf8b6f"
Feb 17 20:36:27 crc kubenswrapper[4793]: E0217 20:36:27.539494 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 20:36:30 crc kubenswrapper[4793]: I0217 20:36:30.041501 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-054f-account-create-update-fh4mx"]
Feb 17 20:36:30 crc kubenswrapper[4793]: I0217 20:36:30.055369 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-2df3-account-create-update-jksfn"]
Feb 17 20:36:30 crc kubenswrapper[4793]: I0217 20:36:30.086620 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-0bda-account-create-update-x2lw8"]
Feb 17 20:36:30 crc kubenswrapper[4793]: I0217 20:36:30.093942 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-2df3-account-create-update-jksfn"]
Feb 17 20:36:30 crc kubenswrapper[4793]: I0217 20:36:30.104958 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-sh9tg"]
Feb 17 20:36:30 crc kubenswrapper[4793]: I0217 20:36:30.113508 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-054f-account-create-update-fh4mx"]
Feb 17 20:36:30 crc kubenswrapper[4793]: I0217 20:36:30.121426 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-0bda-account-create-update-x2lw8"]
Feb 17 20:36:30 crc kubenswrapper[4793]: I0217 20:36:30.129197 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-sh9tg"]
Feb 17 20:36:31 crc kubenswrapper[4793]: I0217 20:36:31.559305 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b543bb0-a68e-4940-a4dd-ebfd8736d2fe" path="/var/lib/kubelet/pods/0b543bb0-a68e-4940-a4dd-ebfd8736d2fe/volumes"
Feb 17 20:36:31 crc kubenswrapper[4793]: I0217 20:36:31.560736 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5156dde4-196e-492f-a7a0-5c35b403b79c" path="/var/lib/kubelet/pods/5156dde4-196e-492f-a7a0-5c35b403b79c/volumes"
Feb 17 20:36:31 crc kubenswrapper[4793]: I0217 20:36:31.561910 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce31ed9b-5e96-435c-bda8-ab78e42c647f" path="/var/lib/kubelet/pods/ce31ed9b-5e96-435c-bda8-ab78e42c647f/volumes"
Feb 17 20:36:31 crc kubenswrapper[4793]: I0217 20:36:31.563140 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4beafcc-5a98-4860-8527-7c85e85b6eb5" path="/var/lib/kubelet/pods/e4beafcc-5a98-4860-8527-7c85e85b6eb5/volumes"
Feb 17 20:36:33 crc kubenswrapper[4793]: I0217 20:36:33.044475 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-jjc56"]
Feb 17 20:36:33 crc kubenswrapper[4793]: I0217 20:36:33.055119 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-jjc56"]
Feb 17 20:36:33 crc kubenswrapper[4793]: I0217 20:36:33.584822 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="888c0279-526c-49b5-a292-bb66ff8be459" path="/var/lib/kubelet/pods/888c0279-526c-49b5-a292-bb66ff8be459/volumes"
Feb 17 20:36:37 crc kubenswrapper[4793]: I0217 20:36:37.538814 4793 scope.go:117] "RemoveContainer" containerID="486d2468c54694f45044fc5cf6d16102a606873e9fdb9f9c155191ee91950299"
Feb 17 20:36:37 crc kubenswrapper[4793]: E0217 20:36:37.539201 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 20:36:40 crc kubenswrapper[4793]: I0217 20:36:40.055781 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-sync-crnqr"]
Feb 17 20:36:40 crc kubenswrapper[4793]: I0217 20:36:40.066753 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-sync-crnqr"]
Feb 17 20:36:41 crc kubenswrapper[4793]: I0217 20:36:41.044833 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-srrw9"]
Feb 17 20:36:41 crc kubenswrapper[4793]: I0217 20:36:41.055625 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-srrw9"]
Feb 17 20:36:41 crc kubenswrapper[4793]: I0217 20:36:41.562130 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01675bbf-5d1b-4461-917e-65af0112b569" path="/var/lib/kubelet/pods/01675bbf-5d1b-4461-917e-65af0112b569/volumes"
Feb 17 20:36:41 crc kubenswrapper[4793]: I0217 20:36:41.563203 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="685042b7-f2e2-4163-9ee8-7e0dc67d9ec2" path="/var/lib/kubelet/pods/685042b7-f2e2-4163-9ee8-7e0dc67d9ec2/volumes"
Feb 17 20:36:42 crc kubenswrapper[4793]: I0217 20:36:42.539834 4793 scope.go:117] "RemoveContainer" containerID="e8740416c581b51430af3b22759513fb6d7b5e33952ed72a05afaa7e49bf8b6f"
Feb 17 20:36:42 crc kubenswrapper[4793]: E0217 20:36:42.540861 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 20:36:51 crc kubenswrapper[4793]: I0217 20:36:51.539160 4793 scope.go:117] "RemoveContainer" containerID="486d2468c54694f45044fc5cf6d16102a606873e9fdb9f9c155191ee91950299"
Feb 17 20:36:51 crc kubenswrapper[4793]: E0217 20:36:51.539850 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 20:36:57 crc kubenswrapper[4793]: I0217 20:36:57.538855 4793 scope.go:117] "RemoveContainer" containerID="e8740416c581b51430af3b22759513fb6d7b5e33952ed72a05afaa7e49bf8b6f"
Feb 17 20:36:57 crc kubenswrapper[4793]: E0217 20:36:57.539929 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 20:37:04 crc kubenswrapper[4793]: I0217 20:37:04.538665 4793 scope.go:117] "RemoveContainer" containerID="486d2468c54694f45044fc5cf6d16102a606873e9fdb9f9c155191ee91950299"
Feb 17 20:37:04 crc kubenswrapper[4793]: E0217 20:37:04.539681 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 20:37:12 crc kubenswrapper[4793]: I0217 20:37:12.538921 4793 scope.go:117] "RemoveContainer" containerID="e8740416c581b51430af3b22759513fb6d7b5e33952ed72a05afaa7e49bf8b6f"
Feb 17 20:37:12 crc kubenswrapper[4793]: E0217 20:37:12.539622 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 20:37:15 crc kubenswrapper[4793]: I0217 20:37:15.059682 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-v2gvx"]
Feb 17 20:37:15 crc kubenswrapper[4793]: I0217 20:37:15.078030 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-v2gvx"]
Feb 17 20:37:15 crc kubenswrapper[4793]: I0217 20:37:15.549499 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0af0e3fd-140e-472e-8438-03bfd116c17f" path="/var/lib/kubelet/pods/0af0e3fd-140e-472e-8438-03bfd116c17f/volumes"
Feb 17 20:37:18 crc kubenswrapper[4793]: I0217 20:37:18.539153 4793 scope.go:117] "RemoveContainer" containerID="486d2468c54694f45044fc5cf6d16102a606873e9fdb9f9c155191ee91950299"
Feb 17 20:37:18 crc kubenswrapper[4793]: E0217 20:37:18.539619 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 20:37:20 crc kubenswrapper[4793]: I0217 20:37:20.978337 4793 scope.go:117] "RemoveContainer" containerID="20ada7fb8a8b5356545c3e756df75b3748efbb914add04318e2cdcde250fd82e"
Feb 17 20:37:21 crc kubenswrapper[4793]: I0217 20:37:21.057456 4793 scope.go:117] "RemoveContainer" containerID="a049ade8c134dfeda3750b794896000259c0ff61abe8dec2610e7ec2bf1fe0c9"
Feb 17 20:37:21 crc kubenswrapper[4793]: I0217 20:37:21.096004 4793 scope.go:117] "RemoveContainer" containerID="196d9afcf3b6d8aa7f632b287c0b087c79252793b083c97bb5c4862d3c549e9d"
Feb 17 20:37:21 crc kubenswrapper[4793]: I0217 20:37:21.151387 4793 scope.go:117] "RemoveContainer" containerID="6d83704960d64a7d3ccb7d477f200a421c42f331ea2e66af0b58a42b944ada70"
Feb 17 20:37:21 crc kubenswrapper[4793]: I0217 20:37:21.203846 4793 scope.go:117] "RemoveContainer" containerID="4e6cf63000667a6a635cc3fdbcef536b5596f36e0ba6ef8eb2d21e81a544b629"
Feb 17 20:37:21 crc kubenswrapper[4793]: I0217 20:37:21.262821 4793 scope.go:117] "RemoveContainer" containerID="01d848bb2e204e7dac2fe16a6cd7599a5442134574c7cbe89247eecea662ef9d"
Feb 17 20:37:21 crc kubenswrapper[4793]: I0217 20:37:21.324035 4793 scope.go:117] "RemoveContainer" containerID="67f7f8cc1245d022eb9d60d08432a9805fcd4fd8f34e1a98516f8b6f5e3b4f63"
Feb 17 20:37:21 crc kubenswrapper[4793]: I0217 20:37:21.346451 4793 scope.go:117] "RemoveContainer" containerID="add472df2c083efb00cce6d30514c33b695ed0807db4f72429a61ecc15ff6ae8"
Feb 17 20:37:21 crc kubenswrapper[4793]: I0217 20:37:21.369851 4793 scope.go:117] "RemoveContainer" containerID="7251798803b3ed112c5600803202520e70a3272aff8dd4ebd6a365d6e3ffbf53"
Feb 17 20:37:27 crc kubenswrapper[4793]: I0217 20:37:27.538940 4793 scope.go:117] "RemoveContainer" containerID="e8740416c581b51430af3b22759513fb6d7b5e33952ed72a05afaa7e49bf8b6f"
Feb 17 20:37:27 crc kubenswrapper[4793]: E0217 20:37:27.540025 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 20:37:30 crc kubenswrapper[4793]: I0217 20:37:30.046525 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-v8sgz"]
Feb 17 20:37:30 crc kubenswrapper[4793]: I0217 20:37:30.063586 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-v8sgz"]
Feb 17 20:37:30 crc kubenswrapper[4793]: I0217 20:37:30.540272 4793 scope.go:117] "RemoveContainer" containerID="486d2468c54694f45044fc5cf6d16102a606873e9fdb9f9c155191ee91950299"
Feb 17 20:37:30 crc kubenswrapper[4793]: E0217 20:37:30.541226 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 20:37:31 crc kubenswrapper[4793]: I0217 20:37:31.065808 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-t8llz"]
Feb 17 20:37:31 crc kubenswrapper[4793]: I0217 20:37:31.085073 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-pwpbc"]
Feb 17 20:37:31 crc kubenswrapper[4793]: I0217 20:37:31.096192 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-t8llz"]
Feb 17 20:37:31 crc kubenswrapper[4793]: I0217 20:37:31.105858 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-pwpbc"]
Feb 17 20:37:31 crc kubenswrapper[4793]: I0217 20:37:31.562489 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0003e927-ea9c-49fb-83e9-5bfc8cd90f46" path="/var/lib/kubelet/pods/0003e927-ea9c-49fb-83e9-5bfc8cd90f46/volumes"
Feb 17 20:37:31 crc kubenswrapper[4793]: I0217 20:37:31.563944 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39911c8f-ceae-41a0-a891-fc8677d87ec3" path="/var/lib/kubelet/pods/39911c8f-ceae-41a0-a891-fc8677d87ec3/volumes"
Feb 17 20:37:31 crc kubenswrapper[4793]: I0217 20:37:31.565117 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a587d11-ed16-44ac-a36e-dfd7a7ed3f6a" path="/var/lib/kubelet/pods/6a587d11-ed16-44ac-a36e-dfd7a7ed3f6a/volumes"
Feb 17 20:37:39 crc kubenswrapper[4793]: I0217 20:37:39.539061 4793 scope.go:117] "RemoveContainer" containerID="e8740416c581b51430af3b22759513fb6d7b5e33952ed72a05afaa7e49bf8b6f"
Feb 17 20:37:39 crc kubenswrapper[4793]: E0217 20:37:39.540134 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 20:37:44 crc kubenswrapper[4793]: I0217 20:37:44.031068 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-t9mdr"]
Feb 17 20:37:44 crc kubenswrapper[4793]: I0217 20:37:44.038310 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-t9mdr"]
Feb 17 20:37:45 crc kubenswrapper[4793]: I0217 20:37:45.565281 4793 scope.go:117] "RemoveContainer" containerID="486d2468c54694f45044fc5cf6d16102a606873e9fdb9f9c155191ee91950299"
Feb 17 20:37:45 crc kubenswrapper[4793]: E0217 20:37:45.565935 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 20:37:45 crc kubenswrapper[4793]: I0217 20:37:45.575329 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95da6bd5-17d8-4402-bb8a-87b0c03feebf" path="/var/lib/kubelet/pods/95da6bd5-17d8-4402-bb8a-87b0c03feebf/volumes"
Feb 17 20:37:53 crc kubenswrapper[4793]: I0217 20:37:53.539585 4793 scope.go:117] "RemoveContainer" containerID="e8740416c581b51430af3b22759513fb6d7b5e33952ed72a05afaa7e49bf8b6f"
Feb 17 20:37:53 crc kubenswrapper[4793]: E0217 20:37:53.541609 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 20:37:56 crc kubenswrapper[4793]: I0217 20:37:56.539384 4793 scope.go:117] "RemoveContainer" containerID="486d2468c54694f45044fc5cf6d16102a606873e9fdb9f9c155191ee91950299"
Feb 17 20:37:56 crc kubenswrapper[4793]: E0217 20:37:56.540411 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 20:38:07 crc kubenswrapper[4793]: I0217 20:38:07.538988 4793 scope.go:117] "RemoveContainer" containerID="e8740416c581b51430af3b22759513fb6d7b5e33952ed72a05afaa7e49bf8b6f"
Feb 17 20:38:07 crc kubenswrapper[4793]: E0217 20:38:07.540302 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 20:38:10 crc kubenswrapper[4793]: I0217 20:38:10.539252 4793 scope.go:117] "RemoveContainer" containerID="486d2468c54694f45044fc5cf6d16102a606873e9fdb9f9c155191ee91950299"
Feb 17 20:38:10 crc kubenswrapper[4793]: E0217 20:38:10.540515 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 20:38:21 crc kubenswrapper[4793]: I0217 20:38:21.064173 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-6a73-account-create-update-n7s7r"]
Feb 17 20:38:21 crc kubenswrapper[4793]: I0217 20:38:21.074801 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-782tr"]
Feb 17 20:38:21 crc kubenswrapper[4793]: I0217 20:38:21.084522 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-782tr"]
Feb 17 20:38:21 crc kubenswrapper[4793]: I0217 20:38:21.109906 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-6a73-account-create-update-n7s7r"]
Feb 17 20:38:21 crc kubenswrapper[4793]: I0217 20:38:21.545625 4793 scope.go:117] "RemoveContainer" containerID="58b0cd3fe7288ff4fdf482c0fc5ff538635fd3d320a9e30c90486d2fdd8cd090"
Feb 17 20:38:21 crc kubenswrapper[4793]: I0217 20:38:21.552209 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4740a37c-5ca4-4230-a803-5622eedf745e" path="/var/lib/kubelet/pods/4740a37c-5ca4-4230-a803-5622eedf745e/volumes"
Feb 17 20:38:21 crc kubenswrapper[4793]: I0217 20:38:21.553461 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad590bda-099d-49c9-9eb3-a21c732af87c" path="/var/lib/kubelet/pods/ad590bda-099d-49c9-9eb3-a21c732af87c/volumes"
Feb 17 20:38:21 crc kubenswrapper[4793]: I0217 20:38:21.610713 4793 scope.go:117] "RemoveContainer" containerID="3e64ce5727213b76b7b03f4c76bbb1eeb249f2558a1211d86bfe179f8c2a46ff"
Feb 17 20:38:21 crc kubenswrapper[4793]: I0217 20:38:21.677462 4793 scope.go:117] "RemoveContainer" containerID="dd965f0e963a528411b5ef5e6c7f2d77924a9bcb0dead9167736a1c9314e01df"
Feb 17 20:38:21 crc kubenswrapper[4793]: I0217 20:38:21.749927 4793 scope.go:117] "RemoveContainer" containerID="b704a648e4275076bd6ca3cb60d2914d8d744c17b710e7a0cdb2d01191a2fcb7"
Feb 17 20:38:21 crc kubenswrapper[4793]: I0217 20:38:21.832331 4793 scope.go:117] "RemoveContainer" containerID="73e5afb6aa24b29336bd47851d87446bfb319209439940e0861f142e2d1d4837"
Feb 17 20:38:21 crc kubenswrapper[4793]: I0217 20:38:21.863632 4793 scope.go:117] "RemoveContainer" containerID="53bfef70fd083bfacea3340a381d133e63ce4bf88d9fbf8744476a1e69af45ea"
Feb 17 20:38:22 crc kubenswrapper[4793]: I0217 20:38:22.038539 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-fflrc"]
Feb 17 20:38:22 crc kubenswrapper[4793]: I0217 20:38:22.063868 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-2742-account-create-update-2qvj2"]
Feb 17 20:38:22 crc kubenswrapper[4793]: I0217 20:38:22.075256 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-fflrc"]
Feb 17 20:38:22 crc kubenswrapper[4793]: I0217 20:38:22.084418 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-2742-account-create-update-2qvj2"]
Feb 17 20:38:22 crc kubenswrapper[4793]: I0217 20:38:22.539164 4793 scope.go:117] "RemoveContainer" containerID="e8740416c581b51430af3b22759513fb6d7b5e33952ed72a05afaa7e49bf8b6f"
Feb 17 20:38:23 crc kubenswrapper[4793]: I0217 20:38:23.043303 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-ebb4-account-create-update-qzc2g"]
Feb 17 20:38:23 crc kubenswrapper[4793]: I0217 20:38:23.055541 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-m7spg"]
Feb 17 20:38:23 crc kubenswrapper[4793]: I0217 20:38:23.064883 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-ebb4-account-create-update-qzc2g"]
Feb 17 20:38:23 crc kubenswrapper[4793]: I0217 20:38:23.074088 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-m7spg"]
Feb 17 20:38:23 crc kubenswrapper[4793]: I0217 20:38:23.244898 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerStarted","Data":"f2a323d75a407629219fa5b2905fa5ac7ed60a5eba2fc9d51937b68309d06918"}
Feb 17 20:38:23 crc kubenswrapper[4793]: 
I0217 20:38:23.539191 4793 scope.go:117] "RemoveContainer" containerID="486d2468c54694f45044fc5cf6d16102a606873e9fdb9f9c155191ee91950299" Feb 17 20:38:23 crc kubenswrapper[4793]: I0217 20:38:23.552912 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c0ba79e-3947-4453-8d93-c7b4ea51aa86" path="/var/lib/kubelet/pods/3c0ba79e-3947-4453-8d93-c7b4ea51aa86/volumes" Feb 17 20:38:23 crc kubenswrapper[4793]: I0217 20:38:23.554374 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="868c46d3-c502-4bf7-8553-d7ce9b2a466d" path="/var/lib/kubelet/pods/868c46d3-c502-4bf7-8553-d7ce9b2a466d/volumes" Feb 17 20:38:23 crc kubenswrapper[4793]: I0217 20:38:23.555118 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c908ae8c-adac-443d-9073-64b72a90660a" path="/var/lib/kubelet/pods/c908ae8c-adac-443d-9073-64b72a90660a/volumes" Feb 17 20:38:23 crc kubenswrapper[4793]: I0217 20:38:23.556168 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9514926-d5cf-41ca-aeb2-41444d597e1a" path="/var/lib/kubelet/pods/e9514926-d5cf-41ca-aeb2-41444d597e1a/volumes" Feb 17 20:38:24 crc kubenswrapper[4793]: I0217 20:38:24.278582 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"06ecbc8e-aa9f-4025-883d-65e4c000d986","Type":"ContainerStarted","Data":"78f9926446b14d2e3fb8bcb4b55afbe8f9e91047ce38c73cd4450435a5e7406a"} Feb 17 20:38:26 crc kubenswrapper[4793]: I0217 20:38:26.302447 4793 generic.go:334] "Generic (PLEG): container finished" podID="06ecbc8e-aa9f-4025-883d-65e4c000d986" containerID="78f9926446b14d2e3fb8bcb4b55afbe8f9e91047ce38c73cd4450435a5e7406a" exitCode=1 Feb 17 20:38:26 crc kubenswrapper[4793]: I0217 20:38:26.302537 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"06ecbc8e-aa9f-4025-883d-65e4c000d986","Type":"ContainerDied","Data":"78f9926446b14d2e3fb8bcb4b55afbe8f9e91047ce38c73cd4450435a5e7406a"} 
Feb 17 20:38:26 crc kubenswrapper[4793]: I0217 20:38:26.303203 4793 scope.go:117] "RemoveContainer" containerID="486d2468c54694f45044fc5cf6d16102a606873e9fdb9f9c155191ee91950299" Feb 17 20:38:26 crc kubenswrapper[4793]: I0217 20:38:26.304316 4793 scope.go:117] "RemoveContainer" containerID="78f9926446b14d2e3fb8bcb4b55afbe8f9e91047ce38c73cd4450435a5e7406a" Feb 17 20:38:26 crc kubenswrapper[4793]: E0217 20:38:26.304948 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:38:26 crc kubenswrapper[4793]: I0217 20:38:26.962412 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 17 20:38:27 crc kubenswrapper[4793]: I0217 20:38:27.314853 4793 scope.go:117] "RemoveContainer" containerID="78f9926446b14d2e3fb8bcb4b55afbe8f9e91047ce38c73cd4450435a5e7406a" Feb 17 20:38:27 crc kubenswrapper[4793]: E0217 20:38:27.315282 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:38:31 crc kubenswrapper[4793]: I0217 20:38:31.962297 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 20:38:31 crc kubenswrapper[4793]: I0217 20:38:31.963267 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 20:38:31 crc kubenswrapper[4793]: I0217 20:38:31.963297 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 20:38:31 crc kubenswrapper[4793]: I0217 20:38:31.964610 4793 scope.go:117] "RemoveContainer" containerID="78f9926446b14d2e3fb8bcb4b55afbe8f9e91047ce38c73cd4450435a5e7406a" Feb 17 20:38:31 crc kubenswrapper[4793]: E0217 20:38:31.965172 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:38:45 crc kubenswrapper[4793]: I0217 20:38:45.552505 4793 scope.go:117] "RemoveContainer" containerID="78f9926446b14d2e3fb8bcb4b55afbe8f9e91047ce38c73cd4450435a5e7406a" Feb 17 20:38:45 crc kubenswrapper[4793]: E0217 20:38:45.553361 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:38:55 crc kubenswrapper[4793]: I0217 20:38:55.059433 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-j65pr"] Feb 17 20:38:55 crc kubenswrapper[4793]: I0217 20:38:55.072641 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-j65pr"] Feb 17 20:38:55 crc kubenswrapper[4793]: I0217 20:38:55.556623 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7625c8d-9409-461d-98e0-2a9507baa803" path="/var/lib/kubelet/pods/c7625c8d-9409-461d-98e0-2a9507baa803/volumes" Feb 17 20:39:00 crc kubenswrapper[4793]: I0217 20:39:00.538443 4793 scope.go:117] "RemoveContainer" 
containerID="78f9926446b14d2e3fb8bcb4b55afbe8f9e91047ce38c73cd4450435a5e7406a" Feb 17 20:39:00 crc kubenswrapper[4793]: E0217 20:39:00.539329 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:39:13 crc kubenswrapper[4793]: I0217 20:39:13.539104 4793 scope.go:117] "RemoveContainer" containerID="78f9926446b14d2e3fb8bcb4b55afbe8f9e91047ce38c73cd4450435a5e7406a" Feb 17 20:39:13 crc kubenswrapper[4793]: E0217 20:39:13.540208 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:39:19 crc kubenswrapper[4793]: I0217 20:39:19.061240 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-mz7ss"] Feb 17 20:39:19 crc kubenswrapper[4793]: I0217 20:39:19.072167 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-mz7ss"] Feb 17 20:39:19 crc kubenswrapper[4793]: I0217 20:39:19.555243 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f3ef9a1-5f91-4493-8070-1b6125e6f1b7" path="/var/lib/kubelet/pods/6f3ef9a1-5f91-4493-8070-1b6125e6f1b7/volumes" Feb 17 20:39:22 crc kubenswrapper[4793]: I0217 20:39:22.074418 4793 scope.go:117] "RemoveContainer" containerID="cc59c70bbc9c1e26e1511ed56b5293d1116d33d1d4f8b5355a0d6797e2d9905e" Feb 17 20:39:22 crc kubenswrapper[4793]: I0217 20:39:22.106780 4793 scope.go:117] "RemoveContainer" 
containerID="668d841ab9833fb36f0ef65dec516b0c70e08667e3daeb7d648c893349262a95" Feb 17 20:39:22 crc kubenswrapper[4793]: I0217 20:39:22.185480 4793 scope.go:117] "RemoveContainer" containerID="31516eea28ddc1c5dc4a9f0a6269c69472e2a0e31595921e743d213004f1a261" Feb 17 20:39:22 crc kubenswrapper[4793]: I0217 20:39:22.227221 4793 scope.go:117] "RemoveContainer" containerID="3b5d0d7b807126fec419fe119b026a07c8076d0243752fd2da322bf17f0a87e6" Feb 17 20:39:22 crc kubenswrapper[4793]: I0217 20:39:22.277363 4793 scope.go:117] "RemoveContainer" containerID="3a94ea801a53a9cfb660a1aaa81a38ffb8f11fcb10ace5284d2f5d2d72eeb2ab" Feb 17 20:39:22 crc kubenswrapper[4793]: I0217 20:39:22.343129 4793 scope.go:117] "RemoveContainer" containerID="f0e98817664600a87bf052a7a181e0a01e8bb3ff6b51578e14672719efedc416" Feb 17 20:39:24 crc kubenswrapper[4793]: I0217 20:39:24.047663 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7wxh2"] Feb 17 20:39:24 crc kubenswrapper[4793]: I0217 20:39:24.061396 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7wxh2"] Feb 17 20:39:25 crc kubenswrapper[4793]: I0217 20:39:25.555421 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f26f2ccf-e459-4a2b-9cc8-c06d7165c94b" path="/var/lib/kubelet/pods/f26f2ccf-e459-4a2b-9cc8-c06d7165c94b/volumes" Feb 17 20:39:27 crc kubenswrapper[4793]: I0217 20:39:27.538380 4793 scope.go:117] "RemoveContainer" containerID="78f9926446b14d2e3fb8bcb4b55afbe8f9e91047ce38c73cd4450435a5e7406a" Feb 17 20:39:27 crc kubenswrapper[4793]: E0217 20:39:27.539776 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:39:41 crc 
kubenswrapper[4793]: I0217 20:39:41.540509 4793 scope.go:117] "RemoveContainer" containerID="78f9926446b14d2e3fb8bcb4b55afbe8f9e91047ce38c73cd4450435a5e7406a" Feb 17 20:39:41 crc kubenswrapper[4793]: E0217 20:39:41.541270 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:39:56 crc kubenswrapper[4793]: I0217 20:39:56.538557 4793 scope.go:117] "RemoveContainer" containerID="78f9926446b14d2e3fb8bcb4b55afbe8f9e91047ce38c73cd4450435a5e7406a" Feb 17 20:39:56 crc kubenswrapper[4793]: E0217 20:39:56.540207 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:40:04 crc kubenswrapper[4793]: I0217 20:40:04.062322 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-tnp4t"] Feb 17 20:40:04 crc kubenswrapper[4793]: I0217 20:40:04.071535 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-tnp4t"] Feb 17 20:40:05 crc kubenswrapper[4793]: I0217 20:40:05.550750 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c590626-10f6-4866-afb8-a765d8692f9f" path="/var/lib/kubelet/pods/7c590626-10f6-4866-afb8-a765d8692f9f/volumes" Feb 17 20:40:10 crc kubenswrapper[4793]: I0217 20:40:10.538771 4793 scope.go:117] "RemoveContainer" containerID="78f9926446b14d2e3fb8bcb4b55afbe8f9e91047ce38c73cd4450435a5e7406a" Feb 17 20:40:10 crc kubenswrapper[4793]: E0217 20:40:10.539417 4793 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:40:22 crc kubenswrapper[4793]: I0217 20:40:22.502035 4793 scope.go:117] "RemoveContainer" containerID="88bf108f9fe48bf18b0439c8057371e6e7939a7a3766867d8fedf06734a0827b" Feb 17 20:40:22 crc kubenswrapper[4793]: I0217 20:40:22.538850 4793 scope.go:117] "RemoveContainer" containerID="78f9926446b14d2e3fb8bcb4b55afbe8f9e91047ce38c73cd4450435a5e7406a" Feb 17 20:40:22 crc kubenswrapper[4793]: E0217 20:40:22.539426 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:40:22 crc kubenswrapper[4793]: I0217 20:40:22.550067 4793 scope.go:117] "RemoveContainer" containerID="a87fbadb1f85d2f9d8404a012dbbaee7330bcf614fcd8bc0839bd9e112aab960" Feb 17 20:40:34 crc kubenswrapper[4793]: I0217 20:40:34.538864 4793 scope.go:117] "RemoveContainer" containerID="78f9926446b14d2e3fb8bcb4b55afbe8f9e91047ce38c73cd4450435a5e7406a" Feb 17 20:40:34 crc kubenswrapper[4793]: E0217 20:40:34.539597 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:40:46 crc kubenswrapper[4793]: I0217 20:40:46.538729 4793 scope.go:117] "RemoveContainer" 
containerID="78f9926446b14d2e3fb8bcb4b55afbe8f9e91047ce38c73cd4450435a5e7406a" Feb 17 20:40:46 crc kubenswrapper[4793]: E0217 20:40:46.539530 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:40:50 crc kubenswrapper[4793]: I0217 20:40:50.101722 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 20:40:50 crc kubenswrapper[4793]: I0217 20:40:50.102431 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 20:40:57 crc kubenswrapper[4793]: I0217 20:40:57.538962 4793 scope.go:117] "RemoveContainer" containerID="78f9926446b14d2e3fb8bcb4b55afbe8f9e91047ce38c73cd4450435a5e7406a" Feb 17 20:40:57 crc kubenswrapper[4793]: E0217 20:40:57.539944 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:41:10 crc kubenswrapper[4793]: I0217 20:41:10.539405 4793 scope.go:117] "RemoveContainer" 
containerID="78f9926446b14d2e3fb8bcb4b55afbe8f9e91047ce38c73cd4450435a5e7406a" Feb 17 20:41:10 crc kubenswrapper[4793]: E0217 20:41:10.540246 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:41:20 crc kubenswrapper[4793]: I0217 20:41:20.101786 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 20:41:20 crc kubenswrapper[4793]: I0217 20:41:20.103133 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 20:41:21 crc kubenswrapper[4793]: I0217 20:41:21.539463 4793 scope.go:117] "RemoveContainer" containerID="78f9926446b14d2e3fb8bcb4b55afbe8f9e91047ce38c73cd4450435a5e7406a" Feb 17 20:41:21 crc kubenswrapper[4793]: E0217 20:41:21.540038 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:41:22 crc kubenswrapper[4793]: I0217 20:41:22.647004 4793 scope.go:117] "RemoveContainer" 
containerID="e10e56187a927a2cbc6047d7f70d8fe5a22e6ab9adb70d520b955743efdaf099" Feb 17 20:41:22 crc kubenswrapper[4793]: I0217 20:41:22.685159 4793 scope.go:117] "RemoveContainer" containerID="0a38587431cd828d970839f503a46e45d8472c6428a39cf2f0cfc0b906febb59" Feb 17 20:41:22 crc kubenswrapper[4793]: I0217 20:41:22.727371 4793 scope.go:117] "RemoveContainer" containerID="3cd6de01f7ed915be93bca06f2ea807744470325c280dccf88f19bed76e49d92" Feb 17 20:41:35 crc kubenswrapper[4793]: I0217 20:41:35.551474 4793 scope.go:117] "RemoveContainer" containerID="78f9926446b14d2e3fb8bcb4b55afbe8f9e91047ce38c73cd4450435a5e7406a" Feb 17 20:41:35 crc kubenswrapper[4793]: E0217 20:41:35.554466 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:41:49 crc kubenswrapper[4793]: I0217 20:41:49.538360 4793 scope.go:117] "RemoveContainer" containerID="78f9926446b14d2e3fb8bcb4b55afbe8f9e91047ce38c73cd4450435a5e7406a" Feb 17 20:41:49 crc kubenswrapper[4793]: E0217 20:41:49.539729 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:41:50 crc kubenswrapper[4793]: I0217 20:41:50.101964 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 20:41:50 crc 
kubenswrapper[4793]: I0217 20:41:50.102024 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 20:41:50 crc kubenswrapper[4793]: I0217 20:41:50.102066 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" Feb 17 20:41:50 crc kubenswrapper[4793]: I0217 20:41:50.102540 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f2a323d75a407629219fa5b2905fa5ac7ed60a5eba2fc9d51937b68309d06918"} pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 20:41:50 crc kubenswrapper[4793]: I0217 20:41:50.102599 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" containerID="cri-o://f2a323d75a407629219fa5b2905fa5ac7ed60a5eba2fc9d51937b68309d06918" gracePeriod=600 Feb 17 20:41:50 crc kubenswrapper[4793]: I0217 20:41:50.342967 4793 generic.go:334] "Generic (PLEG): container finished" podID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerID="f2a323d75a407629219fa5b2905fa5ac7ed60a5eba2fc9d51937b68309d06918" exitCode=0 Feb 17 20:41:50 crc kubenswrapper[4793]: I0217 20:41:50.343050 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerDied","Data":"f2a323d75a407629219fa5b2905fa5ac7ed60a5eba2fc9d51937b68309d06918"} 
Feb 17 20:41:50 crc kubenswrapper[4793]: I0217 20:41:50.343310 4793 scope.go:117] "RemoveContainer" containerID="e8740416c581b51430af3b22759513fb6d7b5e33952ed72a05afaa7e49bf8b6f" Feb 17 20:41:51 crc kubenswrapper[4793]: I0217 20:41:51.355574 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerStarted","Data":"76425a1257bd4fabf1697c0767c8ac50d36876ae6ef7c15ed6a41dc4566da242"} Feb 17 20:42:03 crc kubenswrapper[4793]: I0217 20:42:03.539767 4793 scope.go:117] "RemoveContainer" containerID="78f9926446b14d2e3fb8bcb4b55afbe8f9e91047ce38c73cd4450435a5e7406a" Feb 17 20:42:03 crc kubenswrapper[4793]: E0217 20:42:03.540904 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:42:15 crc kubenswrapper[4793]: I0217 20:42:15.547040 4793 scope.go:117] "RemoveContainer" containerID="78f9926446b14d2e3fb8bcb4b55afbe8f9e91047ce38c73cd4450435a5e7406a" Feb 17 20:42:15 crc kubenswrapper[4793]: E0217 20:42:15.547899 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:42:28 crc kubenswrapper[4793]: I0217 20:42:28.540592 4793 scope.go:117] "RemoveContainer" containerID="78f9926446b14d2e3fb8bcb4b55afbe8f9e91047ce38c73cd4450435a5e7406a" Feb 17 20:42:28 crc kubenswrapper[4793]: E0217 20:42:28.543719 4793 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:42:28 crc kubenswrapper[4793]: I0217 20:42:28.931487 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q7c4q"] Feb 17 20:42:28 crc kubenswrapper[4793]: E0217 20:42:28.932183 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c648b90-a27b-44f1-a927-a44649e5d14f" containerName="registry-server" Feb 17 20:42:28 crc kubenswrapper[4793]: I0217 20:42:28.932217 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c648b90-a27b-44f1-a927-a44649e5d14f" containerName="registry-server" Feb 17 20:42:28 crc kubenswrapper[4793]: E0217 20:42:28.932254 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c648b90-a27b-44f1-a927-a44649e5d14f" containerName="extract-utilities" Feb 17 20:42:28 crc kubenswrapper[4793]: I0217 20:42:28.932272 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c648b90-a27b-44f1-a927-a44649e5d14f" containerName="extract-utilities" Feb 17 20:42:28 crc kubenswrapper[4793]: E0217 20:42:28.932334 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c648b90-a27b-44f1-a927-a44649e5d14f" containerName="extract-content" Feb 17 20:42:28 crc kubenswrapper[4793]: I0217 20:42:28.932353 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c648b90-a27b-44f1-a927-a44649e5d14f" containerName="extract-content" Feb 17 20:42:28 crc kubenswrapper[4793]: I0217 20:42:28.932832 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c648b90-a27b-44f1-a927-a44649e5d14f" containerName="registry-server" Feb 17 20:42:28 crc kubenswrapper[4793]: I0217 20:42:28.935744 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q7c4q" Feb 17 20:42:28 crc kubenswrapper[4793]: I0217 20:42:28.952760 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q7c4q"] Feb 17 20:42:28 crc kubenswrapper[4793]: I0217 20:42:28.995451 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6lzd\" (UniqueName: \"kubernetes.io/projected/03df7849-ae17-4967-9179-efb26666f13f-kube-api-access-j6lzd\") pod \"redhat-marketplace-q7c4q\" (UID: \"03df7849-ae17-4967-9179-efb26666f13f\") " pod="openshift-marketplace/redhat-marketplace-q7c4q" Feb 17 20:42:28 crc kubenswrapper[4793]: I0217 20:42:28.995584 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03df7849-ae17-4967-9179-efb26666f13f-utilities\") pod \"redhat-marketplace-q7c4q\" (UID: \"03df7849-ae17-4967-9179-efb26666f13f\") " pod="openshift-marketplace/redhat-marketplace-q7c4q" Feb 17 20:42:28 crc kubenswrapper[4793]: I0217 20:42:28.996104 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03df7849-ae17-4967-9179-efb26666f13f-catalog-content\") pod \"redhat-marketplace-q7c4q\" (UID: \"03df7849-ae17-4967-9179-efb26666f13f\") " pod="openshift-marketplace/redhat-marketplace-q7c4q" Feb 17 20:42:29 crc kubenswrapper[4793]: I0217 20:42:29.097678 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6lzd\" (UniqueName: \"kubernetes.io/projected/03df7849-ae17-4967-9179-efb26666f13f-kube-api-access-j6lzd\") pod \"redhat-marketplace-q7c4q\" (UID: \"03df7849-ae17-4967-9179-efb26666f13f\") " pod="openshift-marketplace/redhat-marketplace-q7c4q" Feb 17 20:42:29 crc kubenswrapper[4793]: I0217 20:42:29.097799 4793 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03df7849-ae17-4967-9179-efb26666f13f-utilities\") pod \"redhat-marketplace-q7c4q\" (UID: \"03df7849-ae17-4967-9179-efb26666f13f\") " pod="openshift-marketplace/redhat-marketplace-q7c4q" Feb 17 20:42:29 crc kubenswrapper[4793]: I0217 20:42:29.097951 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03df7849-ae17-4967-9179-efb26666f13f-catalog-content\") pod \"redhat-marketplace-q7c4q\" (UID: \"03df7849-ae17-4967-9179-efb26666f13f\") " pod="openshift-marketplace/redhat-marketplace-q7c4q" Feb 17 20:42:29 crc kubenswrapper[4793]: I0217 20:42:29.098512 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03df7849-ae17-4967-9179-efb26666f13f-catalog-content\") pod \"redhat-marketplace-q7c4q\" (UID: \"03df7849-ae17-4967-9179-efb26666f13f\") " pod="openshift-marketplace/redhat-marketplace-q7c4q" Feb 17 20:42:29 crc kubenswrapper[4793]: I0217 20:42:29.098510 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03df7849-ae17-4967-9179-efb26666f13f-utilities\") pod \"redhat-marketplace-q7c4q\" (UID: \"03df7849-ae17-4967-9179-efb26666f13f\") " pod="openshift-marketplace/redhat-marketplace-q7c4q" Feb 17 20:42:29 crc kubenswrapper[4793]: I0217 20:42:29.116453 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6lzd\" (UniqueName: \"kubernetes.io/projected/03df7849-ae17-4967-9179-efb26666f13f-kube-api-access-j6lzd\") pod \"redhat-marketplace-q7c4q\" (UID: \"03df7849-ae17-4967-9179-efb26666f13f\") " pod="openshift-marketplace/redhat-marketplace-q7c4q" Feb 17 20:42:29 crc kubenswrapper[4793]: I0217 20:42:29.281656 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q7c4q" Feb 17 20:42:29 crc kubenswrapper[4793]: I0217 20:42:29.747438 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q7c4q"] Feb 17 20:42:29 crc kubenswrapper[4793]: W0217 20:42:29.748915 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03df7849_ae17_4967_9179_efb26666f13f.slice/crio-f98a00efc835f4121fc2692b2f92c9f95c6d615e7c475c7626d83214865ed0b2 WatchSource:0}: Error finding container f98a00efc835f4121fc2692b2f92c9f95c6d615e7c475c7626d83214865ed0b2: Status 404 returned error can't find the container with id f98a00efc835f4121fc2692b2f92c9f95c6d615e7c475c7626d83214865ed0b2 Feb 17 20:42:29 crc kubenswrapper[4793]: I0217 20:42:29.784368 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7c4q" event={"ID":"03df7849-ae17-4967-9179-efb26666f13f","Type":"ContainerStarted","Data":"f98a00efc835f4121fc2692b2f92c9f95c6d615e7c475c7626d83214865ed0b2"} Feb 17 20:42:30 crc kubenswrapper[4793]: I0217 20:42:30.800170 4793 generic.go:334] "Generic (PLEG): container finished" podID="03df7849-ae17-4967-9179-efb26666f13f" containerID="e00e40f48ad1b6099a3bf2e30a83788189f1f3da8f636b20f3c4163392e0758a" exitCode=0 Feb 17 20:42:30 crc kubenswrapper[4793]: I0217 20:42:30.800547 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7c4q" event={"ID":"03df7849-ae17-4967-9179-efb26666f13f","Type":"ContainerDied","Data":"e00e40f48ad1b6099a3bf2e30a83788189f1f3da8f636b20f3c4163392e0758a"} Feb 17 20:42:30 crc kubenswrapper[4793]: I0217 20:42:30.804867 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 20:42:32 crc kubenswrapper[4793]: I0217 20:42:32.828667 4793 generic.go:334] "Generic (PLEG): container finished" 
podID="03df7849-ae17-4967-9179-efb26666f13f" containerID="67eb052cd7a85bee8a96ef150ed0a1ae4a7ce2bc265b31aaad2e67a154d6d1ae" exitCode=0 Feb 17 20:42:32 crc kubenswrapper[4793]: I0217 20:42:32.828769 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7c4q" event={"ID":"03df7849-ae17-4967-9179-efb26666f13f","Type":"ContainerDied","Data":"67eb052cd7a85bee8a96ef150ed0a1ae4a7ce2bc265b31aaad2e67a154d6d1ae"} Feb 17 20:42:33 crc kubenswrapper[4793]: I0217 20:42:33.843738 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7c4q" event={"ID":"03df7849-ae17-4967-9179-efb26666f13f","Type":"ContainerStarted","Data":"67f3ee753cac35ba42d6836ff39f53ab54737a0121df32a052a04df381dadede"} Feb 17 20:42:33 crc kubenswrapper[4793]: I0217 20:42:33.878323 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q7c4q" podStartSLOduration=3.417669886 podStartE2EDuration="5.878302366s" podCreationTimestamp="2026-02-17 20:42:28 +0000 UTC" firstStartedPulling="2026-02-17 20:42:30.804223155 +0000 UTC m=+2026.095921486" lastFinishedPulling="2026-02-17 20:42:33.264855645 +0000 UTC m=+2028.556553966" observedRunningTime="2026-02-17 20:42:33.871880817 +0000 UTC m=+2029.163579138" watchObservedRunningTime="2026-02-17 20:42:33.878302366 +0000 UTC m=+2029.170000687" Feb 17 20:42:39 crc kubenswrapper[4793]: I0217 20:42:39.282303 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-q7c4q" Feb 17 20:42:39 crc kubenswrapper[4793]: I0217 20:42:39.282997 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q7c4q" Feb 17 20:42:39 crc kubenswrapper[4793]: I0217 20:42:39.356972 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q7c4q" Feb 17 20:42:39 crc 
kubenswrapper[4793]: I0217 20:42:39.969125 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q7c4q" Feb 17 20:42:40 crc kubenswrapper[4793]: I0217 20:42:40.034113 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q7c4q"] Feb 17 20:42:41 crc kubenswrapper[4793]: I0217 20:42:41.930984 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-q7c4q" podUID="03df7849-ae17-4967-9179-efb26666f13f" containerName="registry-server" containerID="cri-o://67f3ee753cac35ba42d6836ff39f53ab54737a0121df32a052a04df381dadede" gracePeriod=2 Feb 17 20:42:42 crc kubenswrapper[4793]: I0217 20:42:42.448092 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q7c4q" Feb 17 20:42:42 crc kubenswrapper[4793]: I0217 20:42:42.506439 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03df7849-ae17-4967-9179-efb26666f13f-catalog-content\") pod \"03df7849-ae17-4967-9179-efb26666f13f\" (UID: \"03df7849-ae17-4967-9179-efb26666f13f\") " Feb 17 20:42:42 crc kubenswrapper[4793]: I0217 20:42:42.506509 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6lzd\" (UniqueName: \"kubernetes.io/projected/03df7849-ae17-4967-9179-efb26666f13f-kube-api-access-j6lzd\") pod \"03df7849-ae17-4967-9179-efb26666f13f\" (UID: \"03df7849-ae17-4967-9179-efb26666f13f\") " Feb 17 20:42:42 crc kubenswrapper[4793]: I0217 20:42:42.506676 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03df7849-ae17-4967-9179-efb26666f13f-utilities\") pod \"03df7849-ae17-4967-9179-efb26666f13f\" (UID: \"03df7849-ae17-4967-9179-efb26666f13f\") " Feb 17 20:42:42 crc 
kubenswrapper[4793]: I0217 20:42:42.507629 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03df7849-ae17-4967-9179-efb26666f13f-utilities" (OuterVolumeSpecName: "utilities") pod "03df7849-ae17-4967-9179-efb26666f13f" (UID: "03df7849-ae17-4967-9179-efb26666f13f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:42:42 crc kubenswrapper[4793]: I0217 20:42:42.516594 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03df7849-ae17-4967-9179-efb26666f13f-kube-api-access-j6lzd" (OuterVolumeSpecName: "kube-api-access-j6lzd") pod "03df7849-ae17-4967-9179-efb26666f13f" (UID: "03df7849-ae17-4967-9179-efb26666f13f"). InnerVolumeSpecName "kube-api-access-j6lzd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:42:42 crc kubenswrapper[4793]: I0217 20:42:42.531298 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03df7849-ae17-4967-9179-efb26666f13f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "03df7849-ae17-4967-9179-efb26666f13f" (UID: "03df7849-ae17-4967-9179-efb26666f13f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:42:42 crc kubenswrapper[4793]: I0217 20:42:42.609145 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03df7849-ae17-4967-9179-efb26666f13f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 20:42:42 crc kubenswrapper[4793]: I0217 20:42:42.609181 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6lzd\" (UniqueName: \"kubernetes.io/projected/03df7849-ae17-4967-9179-efb26666f13f-kube-api-access-j6lzd\") on node \"crc\" DevicePath \"\"" Feb 17 20:42:42 crc kubenswrapper[4793]: I0217 20:42:42.609192 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03df7849-ae17-4967-9179-efb26666f13f-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 20:42:42 crc kubenswrapper[4793]: I0217 20:42:42.942023 4793 generic.go:334] "Generic (PLEG): container finished" podID="03df7849-ae17-4967-9179-efb26666f13f" containerID="67f3ee753cac35ba42d6836ff39f53ab54737a0121df32a052a04df381dadede" exitCode=0 Feb 17 20:42:42 crc kubenswrapper[4793]: I0217 20:42:42.942106 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q7c4q" Feb 17 20:42:42 crc kubenswrapper[4793]: I0217 20:42:42.942129 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7c4q" event={"ID":"03df7849-ae17-4967-9179-efb26666f13f","Type":"ContainerDied","Data":"67f3ee753cac35ba42d6836ff39f53ab54737a0121df32a052a04df381dadede"} Feb 17 20:42:42 crc kubenswrapper[4793]: I0217 20:42:42.942571 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7c4q" event={"ID":"03df7849-ae17-4967-9179-efb26666f13f","Type":"ContainerDied","Data":"f98a00efc835f4121fc2692b2f92c9f95c6d615e7c475c7626d83214865ed0b2"} Feb 17 20:42:42 crc kubenswrapper[4793]: I0217 20:42:42.942590 4793 scope.go:117] "RemoveContainer" containerID="67f3ee753cac35ba42d6836ff39f53ab54737a0121df32a052a04df381dadede" Feb 17 20:42:42 crc kubenswrapper[4793]: I0217 20:42:42.973944 4793 scope.go:117] "RemoveContainer" containerID="67eb052cd7a85bee8a96ef150ed0a1ae4a7ce2bc265b31aaad2e67a154d6d1ae" Feb 17 20:42:42 crc kubenswrapper[4793]: I0217 20:42:42.987735 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q7c4q"] Feb 17 20:42:42 crc kubenswrapper[4793]: I0217 20:42:42.997063 4793 scope.go:117] "RemoveContainer" containerID="e00e40f48ad1b6099a3bf2e30a83788189f1f3da8f636b20f3c4163392e0758a" Feb 17 20:42:43 crc kubenswrapper[4793]: I0217 20:42:43.002254 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-q7c4q"] Feb 17 20:42:43 crc kubenswrapper[4793]: I0217 20:42:43.109296 4793 scope.go:117] "RemoveContainer" containerID="67f3ee753cac35ba42d6836ff39f53ab54737a0121df32a052a04df381dadede" Feb 17 20:42:43 crc kubenswrapper[4793]: E0217 20:42:43.109619 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"67f3ee753cac35ba42d6836ff39f53ab54737a0121df32a052a04df381dadede\": container with ID starting with 67f3ee753cac35ba42d6836ff39f53ab54737a0121df32a052a04df381dadede not found: ID does not exist" containerID="67f3ee753cac35ba42d6836ff39f53ab54737a0121df32a052a04df381dadede" Feb 17 20:42:43 crc kubenswrapper[4793]: I0217 20:42:43.109648 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67f3ee753cac35ba42d6836ff39f53ab54737a0121df32a052a04df381dadede"} err="failed to get container status \"67f3ee753cac35ba42d6836ff39f53ab54737a0121df32a052a04df381dadede\": rpc error: code = NotFound desc = could not find container \"67f3ee753cac35ba42d6836ff39f53ab54737a0121df32a052a04df381dadede\": container with ID starting with 67f3ee753cac35ba42d6836ff39f53ab54737a0121df32a052a04df381dadede not found: ID does not exist" Feb 17 20:42:43 crc kubenswrapper[4793]: I0217 20:42:43.109669 4793 scope.go:117] "RemoveContainer" containerID="67eb052cd7a85bee8a96ef150ed0a1ae4a7ce2bc265b31aaad2e67a154d6d1ae" Feb 17 20:42:43 crc kubenswrapper[4793]: E0217 20:42:43.110048 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67eb052cd7a85bee8a96ef150ed0a1ae4a7ce2bc265b31aaad2e67a154d6d1ae\": container with ID starting with 67eb052cd7a85bee8a96ef150ed0a1ae4a7ce2bc265b31aaad2e67a154d6d1ae not found: ID does not exist" containerID="67eb052cd7a85bee8a96ef150ed0a1ae4a7ce2bc265b31aaad2e67a154d6d1ae" Feb 17 20:42:43 crc kubenswrapper[4793]: I0217 20:42:43.110068 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67eb052cd7a85bee8a96ef150ed0a1ae4a7ce2bc265b31aaad2e67a154d6d1ae"} err="failed to get container status \"67eb052cd7a85bee8a96ef150ed0a1ae4a7ce2bc265b31aaad2e67a154d6d1ae\": rpc error: code = NotFound desc = could not find container \"67eb052cd7a85bee8a96ef150ed0a1ae4a7ce2bc265b31aaad2e67a154d6d1ae\": container with ID 
starting with 67eb052cd7a85bee8a96ef150ed0a1ae4a7ce2bc265b31aaad2e67a154d6d1ae not found: ID does not exist" Feb 17 20:42:43 crc kubenswrapper[4793]: I0217 20:42:43.110082 4793 scope.go:117] "RemoveContainer" containerID="e00e40f48ad1b6099a3bf2e30a83788189f1f3da8f636b20f3c4163392e0758a" Feb 17 20:42:43 crc kubenswrapper[4793]: E0217 20:42:43.111319 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e00e40f48ad1b6099a3bf2e30a83788189f1f3da8f636b20f3c4163392e0758a\": container with ID starting with e00e40f48ad1b6099a3bf2e30a83788189f1f3da8f636b20f3c4163392e0758a not found: ID does not exist" containerID="e00e40f48ad1b6099a3bf2e30a83788189f1f3da8f636b20f3c4163392e0758a" Feb 17 20:42:43 crc kubenswrapper[4793]: I0217 20:42:43.111337 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e00e40f48ad1b6099a3bf2e30a83788189f1f3da8f636b20f3c4163392e0758a"} err="failed to get container status \"e00e40f48ad1b6099a3bf2e30a83788189f1f3da8f636b20f3c4163392e0758a\": rpc error: code = NotFound desc = could not find container \"e00e40f48ad1b6099a3bf2e30a83788189f1f3da8f636b20f3c4163392e0758a\": container with ID starting with e00e40f48ad1b6099a3bf2e30a83788189f1f3da8f636b20f3c4163392e0758a not found: ID does not exist" Feb 17 20:42:43 crc kubenswrapper[4793]: I0217 20:42:43.538715 4793 scope.go:117] "RemoveContainer" containerID="78f9926446b14d2e3fb8bcb4b55afbe8f9e91047ce38c73cd4450435a5e7406a" Feb 17 20:42:43 crc kubenswrapper[4793]: E0217 20:42:43.538995 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:42:43 crc kubenswrapper[4793]: I0217 
20:42:43.550545 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03df7849-ae17-4967-9179-efb26666f13f" path="/var/lib/kubelet/pods/03df7849-ae17-4967-9179-efb26666f13f/volumes" Feb 17 20:42:58 crc kubenswrapper[4793]: I0217 20:42:58.539258 4793 scope.go:117] "RemoveContainer" containerID="78f9926446b14d2e3fb8bcb4b55afbe8f9e91047ce38c73cd4450435a5e7406a" Feb 17 20:42:58 crc kubenswrapper[4793]: E0217 20:42:58.541283 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:43:12 crc kubenswrapper[4793]: I0217 20:43:12.538856 4793 scope.go:117] "RemoveContainer" containerID="78f9926446b14d2e3fb8bcb4b55afbe8f9e91047ce38c73cd4450435a5e7406a" Feb 17 20:43:12 crc kubenswrapper[4793]: E0217 20:43:12.539838 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:43:23 crc kubenswrapper[4793]: I0217 20:43:23.538783 4793 scope.go:117] "RemoveContainer" containerID="78f9926446b14d2e3fb8bcb4b55afbe8f9e91047ce38c73cd4450435a5e7406a" Feb 17 20:43:23 crc kubenswrapper[4793]: E0217 20:43:23.539487 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:43:35 crc 
kubenswrapper[4793]: I0217 20:43:35.548288 4793 scope.go:117] "RemoveContainer" containerID="78f9926446b14d2e3fb8bcb4b55afbe8f9e91047ce38c73cd4450435a5e7406a" Feb 17 20:43:36 crc kubenswrapper[4793]: I0217 20:43:36.781720 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"06ecbc8e-aa9f-4025-883d-65e4c000d986","Type":"ContainerStarted","Data":"84fbb96dfa2addc9152e102e30b92a10f641912eb20ef4d12a86e69cc4010b21"} Feb 17 20:43:36 crc kubenswrapper[4793]: I0217 20:43:36.963090 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 17 20:43:38 crc kubenswrapper[4793]: I0217 20:43:38.805253 4793 generic.go:334] "Generic (PLEG): container finished" podID="06ecbc8e-aa9f-4025-883d-65e4c000d986" containerID="84fbb96dfa2addc9152e102e30b92a10f641912eb20ef4d12a86e69cc4010b21" exitCode=1 Feb 17 20:43:38 crc kubenswrapper[4793]: I0217 20:43:38.805328 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"06ecbc8e-aa9f-4025-883d-65e4c000d986","Type":"ContainerDied","Data":"84fbb96dfa2addc9152e102e30b92a10f641912eb20ef4d12a86e69cc4010b21"} Feb 17 20:43:38 crc kubenswrapper[4793]: I0217 20:43:38.805395 4793 scope.go:117] "RemoveContainer" containerID="78f9926446b14d2e3fb8bcb4b55afbe8f9e91047ce38c73cd4450435a5e7406a" Feb 17 20:43:38 crc kubenswrapper[4793]: I0217 20:43:38.806385 4793 scope.go:117] "RemoveContainer" containerID="84fbb96dfa2addc9152e102e30b92a10f641912eb20ef4d12a86e69cc4010b21" Feb 17 20:43:38 crc kubenswrapper[4793]: E0217 20:43:38.806935 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:43:41 crc kubenswrapper[4793]: 
I0217 20:43:41.962562 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 20:43:41 crc kubenswrapper[4793]: I0217 20:43:41.963658 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 20:43:41 crc kubenswrapper[4793]: I0217 20:43:41.963724 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 20:43:41 crc kubenswrapper[4793]: I0217 20:43:41.965059 4793 scope.go:117] "RemoveContainer" containerID="84fbb96dfa2addc9152e102e30b92a10f641912eb20ef4d12a86e69cc4010b21" Feb 17 20:43:41 crc kubenswrapper[4793]: E0217 20:43:41.965595 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:43:50 crc kubenswrapper[4793]: I0217 20:43:50.102328 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 20:43:50 crc kubenswrapper[4793]: I0217 20:43:50.103127 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 20:43:53 crc kubenswrapper[4793]: I0217 20:43:53.538774 4793 scope.go:117] "RemoveContainer" containerID="84fbb96dfa2addc9152e102e30b92a10f641912eb20ef4d12a86e69cc4010b21" 
Feb 17 20:43:53 crc kubenswrapper[4793]: E0217 20:43:53.539987 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:44:08 crc kubenswrapper[4793]: I0217 20:44:08.538509 4793 scope.go:117] "RemoveContainer" containerID="84fbb96dfa2addc9152e102e30b92a10f641912eb20ef4d12a86e69cc4010b21" Feb 17 20:44:08 crc kubenswrapper[4793]: E0217 20:44:08.539424 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:44:20 crc kubenswrapper[4793]: I0217 20:44:20.102484 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 20:44:20 crc kubenswrapper[4793]: I0217 20:44:20.103270 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 20:44:20 crc kubenswrapper[4793]: I0217 20:44:20.539037 4793 scope.go:117] "RemoveContainer" containerID="84fbb96dfa2addc9152e102e30b92a10f641912eb20ef4d12a86e69cc4010b21" Feb 17 20:44:20 crc kubenswrapper[4793]: E0217 20:44:20.539571 4793 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:44:32 crc kubenswrapper[4793]: I0217 20:44:32.539686 4793 scope.go:117] "RemoveContainer" containerID="84fbb96dfa2addc9152e102e30b92a10f641912eb20ef4d12a86e69cc4010b21" Feb 17 20:44:32 crc kubenswrapper[4793]: E0217 20:44:32.541060 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:44:46 crc kubenswrapper[4793]: I0217 20:44:46.538644 4793 scope.go:117] "RemoveContainer" containerID="84fbb96dfa2addc9152e102e30b92a10f641912eb20ef4d12a86e69cc4010b21" Feb 17 20:44:46 crc kubenswrapper[4793]: E0217 20:44:46.539332 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:44:50 crc kubenswrapper[4793]: I0217 20:44:50.101705 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 20:44:50 crc kubenswrapper[4793]: I0217 20:44:50.102034 4793 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 20:44:50 crc kubenswrapper[4793]: I0217 20:44:50.102075 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" Feb 17 20:44:50 crc kubenswrapper[4793]: I0217 20:44:50.102863 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"76425a1257bd4fabf1697c0767c8ac50d36876ae6ef7c15ed6a41dc4566da242"} pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 20:44:50 crc kubenswrapper[4793]: I0217 20:44:50.102922 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" containerID="cri-o://76425a1257bd4fabf1697c0767c8ac50d36876ae6ef7c15ed6a41dc4566da242" gracePeriod=600 Feb 17 20:44:50 crc kubenswrapper[4793]: E0217 20:44:50.246189 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 20:44:50 crc kubenswrapper[4793]: I0217 20:44:50.870104 4793 generic.go:334] "Generic (PLEG): container finished" podID="7a786034-a3c6-4693-965a-3bd39bce6caa" 
containerID="76425a1257bd4fabf1697c0767c8ac50d36876ae6ef7c15ed6a41dc4566da242" exitCode=0 Feb 17 20:44:50 crc kubenswrapper[4793]: I0217 20:44:50.870167 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerDied","Data":"76425a1257bd4fabf1697c0767c8ac50d36876ae6ef7c15ed6a41dc4566da242"} Feb 17 20:44:50 crc kubenswrapper[4793]: I0217 20:44:50.870230 4793 scope.go:117] "RemoveContainer" containerID="f2a323d75a407629219fa5b2905fa5ac7ed60a5eba2fc9d51937b68309d06918" Feb 17 20:44:50 crc kubenswrapper[4793]: I0217 20:44:50.870940 4793 scope.go:117] "RemoveContainer" containerID="76425a1257bd4fabf1697c0767c8ac50d36876ae6ef7c15ed6a41dc4566da242" Feb 17 20:44:50 crc kubenswrapper[4793]: E0217 20:44:50.871287 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 20:44:57 crc kubenswrapper[4793]: I0217 20:44:57.547200 4793 scope.go:117] "RemoveContainer" containerID="84fbb96dfa2addc9152e102e30b92a10f641912eb20ef4d12a86e69cc4010b21" Feb 17 20:44:57 crc kubenswrapper[4793]: E0217 20:44:57.552604 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:45:00 crc kubenswrapper[4793]: I0217 20:45:00.145021 4793 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29522685-vlsbr"] Feb 17 20:45:00 crc kubenswrapper[4793]: E0217 20:45:00.145779 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03df7849-ae17-4967-9179-efb26666f13f" containerName="extract-utilities" Feb 17 20:45:00 crc kubenswrapper[4793]: I0217 20:45:00.145795 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="03df7849-ae17-4967-9179-efb26666f13f" containerName="extract-utilities" Feb 17 20:45:00 crc kubenswrapper[4793]: E0217 20:45:00.145839 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03df7849-ae17-4967-9179-efb26666f13f" containerName="extract-content" Feb 17 20:45:00 crc kubenswrapper[4793]: I0217 20:45:00.145847 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="03df7849-ae17-4967-9179-efb26666f13f" containerName="extract-content" Feb 17 20:45:00 crc kubenswrapper[4793]: E0217 20:45:00.145859 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03df7849-ae17-4967-9179-efb26666f13f" containerName="registry-server" Feb 17 20:45:00 crc kubenswrapper[4793]: I0217 20:45:00.145866 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="03df7849-ae17-4967-9179-efb26666f13f" containerName="registry-server" Feb 17 20:45:00 crc kubenswrapper[4793]: I0217 20:45:00.146100 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="03df7849-ae17-4967-9179-efb26666f13f" containerName="registry-server" Feb 17 20:45:00 crc kubenswrapper[4793]: I0217 20:45:00.146856 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522685-vlsbr"
Feb 17 20:45:00 crc kubenswrapper[4793]: I0217 20:45:00.148889 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 17 20:45:00 crc kubenswrapper[4793]: I0217 20:45:00.150110 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 17 20:45:00 crc kubenswrapper[4793]: I0217 20:45:00.157022 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522685-vlsbr"]
Feb 17 20:45:00 crc kubenswrapper[4793]: I0217 20:45:00.321576 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zkmx\" (UniqueName: \"kubernetes.io/projected/834a07b0-aa02-4f0a-a22a-45e8e70b3c61-kube-api-access-9zkmx\") pod \"collect-profiles-29522685-vlsbr\" (UID: \"834a07b0-aa02-4f0a-a22a-45e8e70b3c61\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522685-vlsbr"
Feb 17 20:45:00 crc kubenswrapper[4793]: I0217 20:45:00.321900 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/834a07b0-aa02-4f0a-a22a-45e8e70b3c61-secret-volume\") pod \"collect-profiles-29522685-vlsbr\" (UID: \"834a07b0-aa02-4f0a-a22a-45e8e70b3c61\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522685-vlsbr"
Feb 17 20:45:00 crc kubenswrapper[4793]: I0217 20:45:00.322103 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/834a07b0-aa02-4f0a-a22a-45e8e70b3c61-config-volume\") pod \"collect-profiles-29522685-vlsbr\" (UID: \"834a07b0-aa02-4f0a-a22a-45e8e70b3c61\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522685-vlsbr"
Feb 17 20:45:00 crc kubenswrapper[4793]: I0217 20:45:00.423799 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zkmx\" (UniqueName: \"kubernetes.io/projected/834a07b0-aa02-4f0a-a22a-45e8e70b3c61-kube-api-access-9zkmx\") pod \"collect-profiles-29522685-vlsbr\" (UID: \"834a07b0-aa02-4f0a-a22a-45e8e70b3c61\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522685-vlsbr"
Feb 17 20:45:00 crc kubenswrapper[4793]: I0217 20:45:00.424295 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/834a07b0-aa02-4f0a-a22a-45e8e70b3c61-secret-volume\") pod \"collect-profiles-29522685-vlsbr\" (UID: \"834a07b0-aa02-4f0a-a22a-45e8e70b3c61\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522685-vlsbr"
Feb 17 20:45:00 crc kubenswrapper[4793]: I0217 20:45:00.424418 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/834a07b0-aa02-4f0a-a22a-45e8e70b3c61-config-volume\") pod \"collect-profiles-29522685-vlsbr\" (UID: \"834a07b0-aa02-4f0a-a22a-45e8e70b3c61\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522685-vlsbr"
Feb 17 20:45:00 crc kubenswrapper[4793]: I0217 20:45:00.425455 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/834a07b0-aa02-4f0a-a22a-45e8e70b3c61-config-volume\") pod \"collect-profiles-29522685-vlsbr\" (UID: \"834a07b0-aa02-4f0a-a22a-45e8e70b3c61\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522685-vlsbr"
Feb 17 20:45:00 crc kubenswrapper[4793]: I0217 20:45:00.434659 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/834a07b0-aa02-4f0a-a22a-45e8e70b3c61-secret-volume\") pod \"collect-profiles-29522685-vlsbr\" (UID: \"834a07b0-aa02-4f0a-a22a-45e8e70b3c61\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522685-vlsbr"
Feb 17 20:45:00 crc kubenswrapper[4793]: I0217 20:45:00.443422 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zkmx\" (UniqueName: \"kubernetes.io/projected/834a07b0-aa02-4f0a-a22a-45e8e70b3c61-kube-api-access-9zkmx\") pod \"collect-profiles-29522685-vlsbr\" (UID: \"834a07b0-aa02-4f0a-a22a-45e8e70b3c61\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522685-vlsbr"
Feb 17 20:45:00 crc kubenswrapper[4793]: I0217 20:45:00.472061 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522685-vlsbr"
Feb 17 20:45:00 crc kubenswrapper[4793]: I0217 20:45:00.991214 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522685-vlsbr"]
Feb 17 20:45:02 crc kubenswrapper[4793]: I0217 20:45:02.000460 4793 generic.go:334] "Generic (PLEG): container finished" podID="834a07b0-aa02-4f0a-a22a-45e8e70b3c61" containerID="262313c8d9ebefd12e57e663586dc3f404e418b30974760bdbbc2272a6570d13" exitCode=0
Feb 17 20:45:02 crc kubenswrapper[4793]: I0217 20:45:02.000520 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522685-vlsbr" event={"ID":"834a07b0-aa02-4f0a-a22a-45e8e70b3c61","Type":"ContainerDied","Data":"262313c8d9ebefd12e57e663586dc3f404e418b30974760bdbbc2272a6570d13"}
Feb 17 20:45:02 crc kubenswrapper[4793]: I0217 20:45:02.000812 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522685-vlsbr" event={"ID":"834a07b0-aa02-4f0a-a22a-45e8e70b3c61","Type":"ContainerStarted","Data":"59bc7dc8969a52b3bae927f3846f4aa5f043b41fcf96f6ca6a58e1d46433e2a6"}
Feb 17 20:45:02 crc kubenswrapper[4793]: I0217 20:45:02.538967 4793 scope.go:117] "RemoveContainer" containerID="76425a1257bd4fabf1697c0767c8ac50d36876ae6ef7c15ed6a41dc4566da242"
Feb 17 20:45:02 crc kubenswrapper[4793]: E0217 20:45:02.539750 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 20:45:03 crc kubenswrapper[4793]: I0217 20:45:03.424359 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522685-vlsbr"
Feb 17 20:45:03 crc kubenswrapper[4793]: I0217 20:45:03.595324 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/834a07b0-aa02-4f0a-a22a-45e8e70b3c61-config-volume\") pod \"834a07b0-aa02-4f0a-a22a-45e8e70b3c61\" (UID: \"834a07b0-aa02-4f0a-a22a-45e8e70b3c61\") "
Feb 17 20:45:03 crc kubenswrapper[4793]: I0217 20:45:03.596377 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zkmx\" (UniqueName: \"kubernetes.io/projected/834a07b0-aa02-4f0a-a22a-45e8e70b3c61-kube-api-access-9zkmx\") pod \"834a07b0-aa02-4f0a-a22a-45e8e70b3c61\" (UID: \"834a07b0-aa02-4f0a-a22a-45e8e70b3c61\") "
Feb 17 20:45:03 crc kubenswrapper[4793]: I0217 20:45:03.596487 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/834a07b0-aa02-4f0a-a22a-45e8e70b3c61-config-volume" (OuterVolumeSpecName: "config-volume") pod "834a07b0-aa02-4f0a-a22a-45e8e70b3c61" (UID: "834a07b0-aa02-4f0a-a22a-45e8e70b3c61"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 20:45:03 crc kubenswrapper[4793]: I0217 20:45:03.596848 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/834a07b0-aa02-4f0a-a22a-45e8e70b3c61-secret-volume\") pod \"834a07b0-aa02-4f0a-a22a-45e8e70b3c61\" (UID: \"834a07b0-aa02-4f0a-a22a-45e8e70b3c61\") "
Feb 17 20:45:03 crc kubenswrapper[4793]: I0217 20:45:03.597783 4793 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/834a07b0-aa02-4f0a-a22a-45e8e70b3c61-config-volume\") on node \"crc\" DevicePath \"\""
Feb 17 20:45:03 crc kubenswrapper[4793]: I0217 20:45:03.602953 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/834a07b0-aa02-4f0a-a22a-45e8e70b3c61-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "834a07b0-aa02-4f0a-a22a-45e8e70b3c61" (UID: "834a07b0-aa02-4f0a-a22a-45e8e70b3c61"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 20:45:03 crc kubenswrapper[4793]: I0217 20:45:03.603470 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/834a07b0-aa02-4f0a-a22a-45e8e70b3c61-kube-api-access-9zkmx" (OuterVolumeSpecName: "kube-api-access-9zkmx") pod "834a07b0-aa02-4f0a-a22a-45e8e70b3c61" (UID: "834a07b0-aa02-4f0a-a22a-45e8e70b3c61"). InnerVolumeSpecName "kube-api-access-9zkmx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 20:45:03 crc kubenswrapper[4793]: I0217 20:45:03.699846 4793 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/834a07b0-aa02-4f0a-a22a-45e8e70b3c61-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 17 20:45:03 crc kubenswrapper[4793]: I0217 20:45:03.699895 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zkmx\" (UniqueName: \"kubernetes.io/projected/834a07b0-aa02-4f0a-a22a-45e8e70b3c61-kube-api-access-9zkmx\") on node \"crc\" DevicePath \"\""
Feb 17 20:45:04 crc kubenswrapper[4793]: I0217 20:45:04.032053 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522685-vlsbr" event={"ID":"834a07b0-aa02-4f0a-a22a-45e8e70b3c61","Type":"ContainerDied","Data":"59bc7dc8969a52b3bae927f3846f4aa5f043b41fcf96f6ca6a58e1d46433e2a6"}
Feb 17 20:45:04 crc kubenswrapper[4793]: I0217 20:45:04.032102 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59bc7dc8969a52b3bae927f3846f4aa5f043b41fcf96f6ca6a58e1d46433e2a6"
Feb 17 20:45:04 crc kubenswrapper[4793]: I0217 20:45:04.032173 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522685-vlsbr"
Feb 17 20:45:04 crc kubenswrapper[4793]: I0217 20:45:04.517528 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522640-pzkfn"]
Feb 17 20:45:04 crc kubenswrapper[4793]: I0217 20:45:04.528598 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522640-pzkfn"]
Feb 17 20:45:05 crc kubenswrapper[4793]: I0217 20:45:05.636400 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82552c6f-59af-4d20-97ca-82384997434e" path="/var/lib/kubelet/pods/82552c6f-59af-4d20-97ca-82384997434e/volumes"
Feb 17 20:45:12 crc kubenswrapper[4793]: I0217 20:45:12.541197 4793 scope.go:117] "RemoveContainer" containerID="84fbb96dfa2addc9152e102e30b92a10f641912eb20ef4d12a86e69cc4010b21"
Feb 17 20:45:12 crc kubenswrapper[4793]: E0217 20:45:12.542485 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 20:45:16 crc kubenswrapper[4793]: I0217 20:45:16.539315 4793 scope.go:117] "RemoveContainer" containerID="76425a1257bd4fabf1697c0767c8ac50d36876ae6ef7c15ed6a41dc4566da242"
Feb 17 20:45:16 crc kubenswrapper[4793]: E0217 20:45:16.540613 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 20:45:22 crc kubenswrapper[4793]: I0217 20:45:22.918173 4793 scope.go:117] "RemoveContainer" containerID="2d5c8abc676df6c99749c8d4d08bb36673b9ee955a6d1ad588580b8068f05b80"
Feb 17 20:45:23 crc kubenswrapper[4793]: I0217 20:45:23.539182 4793 scope.go:117] "RemoveContainer" containerID="84fbb96dfa2addc9152e102e30b92a10f641912eb20ef4d12a86e69cc4010b21"
Feb 17 20:45:23 crc kubenswrapper[4793]: E0217 20:45:23.540290 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 20:45:31 crc kubenswrapper[4793]: I0217 20:45:31.540026 4793 scope.go:117] "RemoveContainer" containerID="76425a1257bd4fabf1697c0767c8ac50d36876ae6ef7c15ed6a41dc4566da242"
Feb 17 20:45:31 crc kubenswrapper[4793]: E0217 20:45:31.541296 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 20:45:35 crc kubenswrapper[4793]: I0217 20:45:35.547002 4793 scope.go:117] "RemoveContainer" containerID="84fbb96dfa2addc9152e102e30b92a10f641912eb20ef4d12a86e69cc4010b21"
Feb 17 20:45:35 crc kubenswrapper[4793]: E0217 20:45:35.547752 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 20:45:42 crc kubenswrapper[4793]: I0217 20:45:42.540078 4793 scope.go:117] "RemoveContainer" containerID="76425a1257bd4fabf1697c0767c8ac50d36876ae6ef7c15ed6a41dc4566da242"
Feb 17 20:45:42 crc kubenswrapper[4793]: E0217 20:45:42.540954 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 20:45:48 crc kubenswrapper[4793]: I0217 20:45:48.539673 4793 scope.go:117] "RemoveContainer" containerID="84fbb96dfa2addc9152e102e30b92a10f641912eb20ef4d12a86e69cc4010b21"
Feb 17 20:45:48 crc kubenswrapper[4793]: E0217 20:45:48.540724 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 20:45:56 crc kubenswrapper[4793]: I0217 20:45:56.539041 4793 scope.go:117] "RemoveContainer" containerID="76425a1257bd4fabf1697c0767c8ac50d36876ae6ef7c15ed6a41dc4566da242"
Feb 17 20:45:56 crc kubenswrapper[4793]: E0217 20:45:56.540100 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 20:46:02 crc kubenswrapper[4793]: I0217 20:46:02.389625 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wdwlh"]
Feb 17 20:46:02 crc kubenswrapper[4793]: E0217 20:46:02.391340 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="834a07b0-aa02-4f0a-a22a-45e8e70b3c61" containerName="collect-profiles"
Feb 17 20:46:02 crc kubenswrapper[4793]: I0217 20:46:02.391378 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="834a07b0-aa02-4f0a-a22a-45e8e70b3c61" containerName="collect-profiles"
Feb 17 20:46:02 crc kubenswrapper[4793]: I0217 20:46:02.392001 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="834a07b0-aa02-4f0a-a22a-45e8e70b3c61" containerName="collect-profiles"
Feb 17 20:46:02 crc kubenswrapper[4793]: I0217 20:46:02.395765 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wdwlh"
Feb 17 20:46:02 crc kubenswrapper[4793]: I0217 20:46:02.456431 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wdwlh"]
Feb 17 20:46:02 crc kubenswrapper[4793]: I0217 20:46:02.575405 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2thn4\" (UniqueName: \"kubernetes.io/projected/90cf15dc-73ab-478f-8eb8-9401067aeb1b-kube-api-access-2thn4\") pod \"redhat-operators-wdwlh\" (UID: \"90cf15dc-73ab-478f-8eb8-9401067aeb1b\") " pod="openshift-marketplace/redhat-operators-wdwlh"
Feb 17 20:46:02 crc kubenswrapper[4793]: I0217 20:46:02.575738 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90cf15dc-73ab-478f-8eb8-9401067aeb1b-catalog-content\") pod \"redhat-operators-wdwlh\" (UID: \"90cf15dc-73ab-478f-8eb8-9401067aeb1b\") " pod="openshift-marketplace/redhat-operators-wdwlh"
Feb 17 20:46:02 crc kubenswrapper[4793]: I0217 20:46:02.575859 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90cf15dc-73ab-478f-8eb8-9401067aeb1b-utilities\") pod \"redhat-operators-wdwlh\" (UID: \"90cf15dc-73ab-478f-8eb8-9401067aeb1b\") " pod="openshift-marketplace/redhat-operators-wdwlh"
Feb 17 20:46:02 crc kubenswrapper[4793]: I0217 20:46:02.678183 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2thn4\" (UniqueName: \"kubernetes.io/projected/90cf15dc-73ab-478f-8eb8-9401067aeb1b-kube-api-access-2thn4\") pod \"redhat-operators-wdwlh\" (UID: \"90cf15dc-73ab-478f-8eb8-9401067aeb1b\") " pod="openshift-marketplace/redhat-operators-wdwlh"
Feb 17 20:46:02 crc kubenswrapper[4793]: I0217 20:46:02.678283 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90cf15dc-73ab-478f-8eb8-9401067aeb1b-catalog-content\") pod \"redhat-operators-wdwlh\" (UID: \"90cf15dc-73ab-478f-8eb8-9401067aeb1b\") " pod="openshift-marketplace/redhat-operators-wdwlh"
Feb 17 20:46:02 crc kubenswrapper[4793]: I0217 20:46:02.678322 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90cf15dc-73ab-478f-8eb8-9401067aeb1b-utilities\") pod \"redhat-operators-wdwlh\" (UID: \"90cf15dc-73ab-478f-8eb8-9401067aeb1b\") " pod="openshift-marketplace/redhat-operators-wdwlh"
Feb 17 20:46:02 crc kubenswrapper[4793]: I0217 20:46:02.678823 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90cf15dc-73ab-478f-8eb8-9401067aeb1b-catalog-content\") pod \"redhat-operators-wdwlh\" (UID: \"90cf15dc-73ab-478f-8eb8-9401067aeb1b\") " pod="openshift-marketplace/redhat-operators-wdwlh"
Feb 17 20:46:02 crc kubenswrapper[4793]: I0217 20:46:02.678931 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90cf15dc-73ab-478f-8eb8-9401067aeb1b-utilities\") pod \"redhat-operators-wdwlh\" (UID: \"90cf15dc-73ab-478f-8eb8-9401067aeb1b\") " pod="openshift-marketplace/redhat-operators-wdwlh"
Feb 17 20:46:02 crc kubenswrapper[4793]: I0217 20:46:02.701429 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2thn4\" (UniqueName: \"kubernetes.io/projected/90cf15dc-73ab-478f-8eb8-9401067aeb1b-kube-api-access-2thn4\") pod \"redhat-operators-wdwlh\" (UID: \"90cf15dc-73ab-478f-8eb8-9401067aeb1b\") " pod="openshift-marketplace/redhat-operators-wdwlh"
Feb 17 20:46:02 crc kubenswrapper[4793]: I0217 20:46:02.772103 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wdwlh"
Feb 17 20:46:03 crc kubenswrapper[4793]: W0217 20:46:03.252834 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90cf15dc_73ab_478f_8eb8_9401067aeb1b.slice/crio-636cd76c24dc04e4483ecff029505db0601d32aff7e9c38fb234f0d968c44415 WatchSource:0}: Error finding container 636cd76c24dc04e4483ecff029505db0601d32aff7e9c38fb234f0d968c44415: Status 404 returned error can't find the container with id 636cd76c24dc04e4483ecff029505db0601d32aff7e9c38fb234f0d968c44415
Feb 17 20:46:03 crc kubenswrapper[4793]: I0217 20:46:03.260954 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wdwlh"]
Feb 17 20:46:03 crc kubenswrapper[4793]: I0217 20:46:03.539371 4793 scope.go:117] "RemoveContainer" containerID="84fbb96dfa2addc9152e102e30b92a10f641912eb20ef4d12a86e69cc4010b21"
Feb 17 20:46:03 crc kubenswrapper[4793]: E0217 20:46:03.539923 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 20:46:04 crc kubenswrapper[4793]: I0217 20:46:04.109815 4793 generic.go:334] "Generic (PLEG): container finished" podID="90cf15dc-73ab-478f-8eb8-9401067aeb1b" containerID="1d946951f15bdcba396a163a8256c7711593af97b39398d1f757dca2f7d9b759" exitCode=0
Feb 17 20:46:04 crc kubenswrapper[4793]: I0217 20:46:04.109885 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wdwlh" event={"ID":"90cf15dc-73ab-478f-8eb8-9401067aeb1b","Type":"ContainerDied","Data":"1d946951f15bdcba396a163a8256c7711593af97b39398d1f757dca2f7d9b759"}
Feb 17 20:46:04 crc kubenswrapper[4793]: I0217 20:46:04.109934 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wdwlh" event={"ID":"90cf15dc-73ab-478f-8eb8-9401067aeb1b","Type":"ContainerStarted","Data":"636cd76c24dc04e4483ecff029505db0601d32aff7e9c38fb234f0d968c44415"}
Feb 17 20:46:05 crc kubenswrapper[4793]: I0217 20:46:05.137204 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wdwlh" event={"ID":"90cf15dc-73ab-478f-8eb8-9401067aeb1b","Type":"ContainerStarted","Data":"8431141e52c9e43bd065d36c053a79c36002304a5adae2930f1a4452d9411c1b"}
Feb 17 20:46:07 crc kubenswrapper[4793]: I0217 20:46:07.157826 4793 generic.go:334] "Generic (PLEG): container finished" podID="90cf15dc-73ab-478f-8eb8-9401067aeb1b" containerID="8431141e52c9e43bd065d36c053a79c36002304a5adae2930f1a4452d9411c1b" exitCode=0
Feb 17 20:46:07 crc kubenswrapper[4793]: I0217 20:46:07.157894 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wdwlh" event={"ID":"90cf15dc-73ab-478f-8eb8-9401067aeb1b","Type":"ContainerDied","Data":"8431141e52c9e43bd065d36c053a79c36002304a5adae2930f1a4452d9411c1b"}
Feb 17 20:46:08 crc kubenswrapper[4793]: I0217 20:46:08.553811 4793 scope.go:117] "RemoveContainer" containerID="76425a1257bd4fabf1697c0767c8ac50d36876ae6ef7c15ed6a41dc4566da242"
Feb 17 20:46:08 crc kubenswrapper[4793]: E0217 20:46:08.555762 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 20:46:09 crc kubenswrapper[4793]: I0217 20:46:09.180367 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wdwlh" event={"ID":"90cf15dc-73ab-478f-8eb8-9401067aeb1b","Type":"ContainerStarted","Data":"f5331f7e8645614fec4df9141d6bf8043f25c9b3d34650cf7f35d54fe495d1a7"}
Feb 17 20:46:09 crc kubenswrapper[4793]: I0217 20:46:09.213491 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wdwlh" podStartSLOduration=3.379577575 podStartE2EDuration="7.213468056s" podCreationTimestamp="2026-02-17 20:46:02 +0000 UTC" firstStartedPulling="2026-02-17 20:46:04.112028231 +0000 UTC m=+2239.403726552" lastFinishedPulling="2026-02-17 20:46:07.945918712 +0000 UTC m=+2243.237617033" observedRunningTime="2026-02-17 20:46:09.205183482 +0000 UTC m=+2244.496881803" watchObservedRunningTime="2026-02-17 20:46:09.213468056 +0000 UTC m=+2244.505166377"
Feb 17 20:46:12 crc kubenswrapper[4793]: I0217 20:46:12.772524 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wdwlh"
Feb 17 20:46:12 crc kubenswrapper[4793]: I0217 20:46:12.773215 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wdwlh"
Feb 17 20:46:13 crc kubenswrapper[4793]: I0217 20:46:13.832257 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wdwlh" podUID="90cf15dc-73ab-478f-8eb8-9401067aeb1b" containerName="registry-server" probeResult="failure" output=<
Feb 17 20:46:13 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s
Feb 17 20:46:13 crc kubenswrapper[4793]: >
Feb 17 20:46:15 crc kubenswrapper[4793]: I0217 20:46:15.545152 4793 scope.go:117] "RemoveContainer" containerID="84fbb96dfa2addc9152e102e30b92a10f641912eb20ef4d12a86e69cc4010b21"
Feb 17 20:46:15 crc kubenswrapper[4793]: E0217 20:46:15.545862 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 20:46:21 crc kubenswrapper[4793]: I0217 20:46:21.539905 4793 scope.go:117] "RemoveContainer" containerID="76425a1257bd4fabf1697c0767c8ac50d36876ae6ef7c15ed6a41dc4566da242"
Feb 17 20:46:21 crc kubenswrapper[4793]: E0217 20:46:21.541064 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 20:46:22 crc kubenswrapper[4793]: I0217 20:46:22.836115 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wdwlh"
Feb 17 20:46:22 crc kubenswrapper[4793]: I0217 20:46:22.900529 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wdwlh"
Feb 17 20:46:23 crc kubenswrapper[4793]: I0217 20:46:23.077143 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wdwlh"]
Feb 17 20:46:24 crc kubenswrapper[4793]: I0217 20:46:24.323870 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wdwlh" podUID="90cf15dc-73ab-478f-8eb8-9401067aeb1b" containerName="registry-server" containerID="cri-o://f5331f7e8645614fec4df9141d6bf8043f25c9b3d34650cf7f35d54fe495d1a7" gracePeriod=2
Feb 17 20:46:24 crc kubenswrapper[4793]: I0217 20:46:24.784521 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wdwlh"
Feb 17 20:46:24 crc kubenswrapper[4793]: I0217 20:46:24.856414 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2thn4\" (UniqueName: \"kubernetes.io/projected/90cf15dc-73ab-478f-8eb8-9401067aeb1b-kube-api-access-2thn4\") pod \"90cf15dc-73ab-478f-8eb8-9401067aeb1b\" (UID: \"90cf15dc-73ab-478f-8eb8-9401067aeb1b\") "
Feb 17 20:46:24 crc kubenswrapper[4793]: I0217 20:46:24.856479 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90cf15dc-73ab-478f-8eb8-9401067aeb1b-utilities\") pod \"90cf15dc-73ab-478f-8eb8-9401067aeb1b\" (UID: \"90cf15dc-73ab-478f-8eb8-9401067aeb1b\") "
Feb 17 20:46:24 crc kubenswrapper[4793]: I0217 20:46:24.856601 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90cf15dc-73ab-478f-8eb8-9401067aeb1b-catalog-content\") pod \"90cf15dc-73ab-478f-8eb8-9401067aeb1b\" (UID: \"90cf15dc-73ab-478f-8eb8-9401067aeb1b\") "
Feb 17 20:46:24 crc kubenswrapper[4793]: I0217 20:46:24.858584 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90cf15dc-73ab-478f-8eb8-9401067aeb1b-utilities" (OuterVolumeSpecName: "utilities") pod "90cf15dc-73ab-478f-8eb8-9401067aeb1b" (UID: "90cf15dc-73ab-478f-8eb8-9401067aeb1b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 20:46:24 crc kubenswrapper[4793]: I0217 20:46:24.863901 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90cf15dc-73ab-478f-8eb8-9401067aeb1b-kube-api-access-2thn4" (OuterVolumeSpecName: "kube-api-access-2thn4") pod "90cf15dc-73ab-478f-8eb8-9401067aeb1b" (UID: "90cf15dc-73ab-478f-8eb8-9401067aeb1b"). InnerVolumeSpecName "kube-api-access-2thn4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 20:46:24 crc kubenswrapper[4793]: I0217 20:46:24.958427 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2thn4\" (UniqueName: \"kubernetes.io/projected/90cf15dc-73ab-478f-8eb8-9401067aeb1b-kube-api-access-2thn4\") on node \"crc\" DevicePath \"\""
Feb 17 20:46:24 crc kubenswrapper[4793]: I0217 20:46:24.958458 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90cf15dc-73ab-478f-8eb8-9401067aeb1b-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 20:46:24 crc kubenswrapper[4793]: I0217 20:46:24.973958 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90cf15dc-73ab-478f-8eb8-9401067aeb1b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "90cf15dc-73ab-478f-8eb8-9401067aeb1b" (UID: "90cf15dc-73ab-478f-8eb8-9401067aeb1b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 20:46:25 crc kubenswrapper[4793]: I0217 20:46:25.060756 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90cf15dc-73ab-478f-8eb8-9401067aeb1b-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 20:46:25 crc kubenswrapper[4793]: I0217 20:46:25.339938 4793 generic.go:334] "Generic (PLEG): container finished" podID="90cf15dc-73ab-478f-8eb8-9401067aeb1b" containerID="f5331f7e8645614fec4df9141d6bf8043f25c9b3d34650cf7f35d54fe495d1a7" exitCode=0
Feb 17 20:46:25 crc kubenswrapper[4793]: I0217 20:46:25.340022 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wdwlh" event={"ID":"90cf15dc-73ab-478f-8eb8-9401067aeb1b","Type":"ContainerDied","Data":"f5331f7e8645614fec4df9141d6bf8043f25c9b3d34650cf7f35d54fe495d1a7"}
Feb 17 20:46:25 crc kubenswrapper[4793]: I0217 20:46:25.340063 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wdwlh"
Feb 17 20:46:25 crc kubenswrapper[4793]: I0217 20:46:25.340112 4793 scope.go:117] "RemoveContainer" containerID="f5331f7e8645614fec4df9141d6bf8043f25c9b3d34650cf7f35d54fe495d1a7"
Feb 17 20:46:25 crc kubenswrapper[4793]: I0217 20:46:25.340089 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wdwlh" event={"ID":"90cf15dc-73ab-478f-8eb8-9401067aeb1b","Type":"ContainerDied","Data":"636cd76c24dc04e4483ecff029505db0601d32aff7e9c38fb234f0d968c44415"}
Feb 17 20:46:25 crc kubenswrapper[4793]: I0217 20:46:25.389043 4793 scope.go:117] "RemoveContainer" containerID="8431141e52c9e43bd065d36c053a79c36002304a5adae2930f1a4452d9411c1b"
Feb 17 20:46:25 crc kubenswrapper[4793]: I0217 20:46:25.410506 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wdwlh"]
Feb 17 20:46:25 crc kubenswrapper[4793]: I0217 20:46:25.431155 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wdwlh"]
Feb 17 20:46:25 crc kubenswrapper[4793]: I0217 20:46:25.431391 4793 scope.go:117] "RemoveContainer" containerID="1d946951f15bdcba396a163a8256c7711593af97b39398d1f757dca2f7d9b759"
Feb 17 20:46:25 crc kubenswrapper[4793]: I0217 20:46:25.491614 4793 scope.go:117] "RemoveContainer" containerID="f5331f7e8645614fec4df9141d6bf8043f25c9b3d34650cf7f35d54fe495d1a7"
Feb 17 20:46:25 crc kubenswrapper[4793]: E0217 20:46:25.492179 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5331f7e8645614fec4df9141d6bf8043f25c9b3d34650cf7f35d54fe495d1a7\": container with ID starting with f5331f7e8645614fec4df9141d6bf8043f25c9b3d34650cf7f35d54fe495d1a7 not found: ID does not exist" containerID="f5331f7e8645614fec4df9141d6bf8043f25c9b3d34650cf7f35d54fe495d1a7"
Feb 17 20:46:25 crc kubenswrapper[4793]: I0217 20:46:25.492233 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5331f7e8645614fec4df9141d6bf8043f25c9b3d34650cf7f35d54fe495d1a7"} err="failed to get container status \"f5331f7e8645614fec4df9141d6bf8043f25c9b3d34650cf7f35d54fe495d1a7\": rpc error: code = NotFound desc = could not find container \"f5331f7e8645614fec4df9141d6bf8043f25c9b3d34650cf7f35d54fe495d1a7\": container with ID starting with f5331f7e8645614fec4df9141d6bf8043f25c9b3d34650cf7f35d54fe495d1a7 not found: ID does not exist"
Feb 17 20:46:25 crc kubenswrapper[4793]: I0217 20:46:25.492267 4793 scope.go:117] "RemoveContainer" containerID="8431141e52c9e43bd065d36c053a79c36002304a5adae2930f1a4452d9411c1b"
Feb 17 20:46:25 crc kubenswrapper[4793]: E0217 20:46:25.492599 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8431141e52c9e43bd065d36c053a79c36002304a5adae2930f1a4452d9411c1b\": container with ID starting with 8431141e52c9e43bd065d36c053a79c36002304a5adae2930f1a4452d9411c1b not found: ID does not exist" containerID="8431141e52c9e43bd065d36c053a79c36002304a5adae2930f1a4452d9411c1b"
Feb 17 20:46:25 crc kubenswrapper[4793]: I0217 20:46:25.492639 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8431141e52c9e43bd065d36c053a79c36002304a5adae2930f1a4452d9411c1b"} err="failed to get container status \"8431141e52c9e43bd065d36c053a79c36002304a5adae2930f1a4452d9411c1b\": rpc error: code = NotFound desc = could not find container \"8431141e52c9e43bd065d36c053a79c36002304a5adae2930f1a4452d9411c1b\": container with ID starting with 8431141e52c9e43bd065d36c053a79c36002304a5adae2930f1a4452d9411c1b not found: ID does not exist"
Feb 17 20:46:25 crc kubenswrapper[4793]: I0217 20:46:25.492671 4793 scope.go:117] "RemoveContainer" containerID="1d946951f15bdcba396a163a8256c7711593af97b39398d1f757dca2f7d9b759"
Feb 17 20:46:25 crc kubenswrapper[4793]: E0217 20:46:25.493051 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d946951f15bdcba396a163a8256c7711593af97b39398d1f757dca2f7d9b759\": container with ID starting with 1d946951f15bdcba396a163a8256c7711593af97b39398d1f757dca2f7d9b759 not found: ID does not exist" containerID="1d946951f15bdcba396a163a8256c7711593af97b39398d1f757dca2f7d9b759"
Feb 17 20:46:25 crc kubenswrapper[4793]: I0217 20:46:25.493085 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d946951f15bdcba396a163a8256c7711593af97b39398d1f757dca2f7d9b759"} err="failed to get container status \"1d946951f15bdcba396a163a8256c7711593af97b39398d1f757dca2f7d9b759\": rpc error: code = NotFound desc = could not find container \"1d946951f15bdcba396a163a8256c7711593af97b39398d1f757dca2f7d9b759\": container with ID starting with 1d946951f15bdcba396a163a8256c7711593af97b39398d1f757dca2f7d9b759 not found: ID does not exist"
Feb 17 20:46:25 crc kubenswrapper[4793]: I0217 20:46:25.552049 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90cf15dc-73ab-478f-8eb8-9401067aeb1b" path="/var/lib/kubelet/pods/90cf15dc-73ab-478f-8eb8-9401067aeb1b/volumes"
Feb 17 20:46:26 crc kubenswrapper[4793]: I0217 20:46:26.085229 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8w8zr"]
Feb 17 20:46:26 crc kubenswrapper[4793]: E0217 20:46:26.085757 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90cf15dc-73ab-478f-8eb8-9401067aeb1b" containerName="extract-content"
Feb 17 20:46:26 crc kubenswrapper[4793]: I0217 20:46:26.085821 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="90cf15dc-73ab-478f-8eb8-9401067aeb1b" containerName="extract-content"
Feb 17 20:46:26 crc kubenswrapper[4793]: E0217 20:46:26.085894 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90cf15dc-73ab-478f-8eb8-9401067aeb1b" containerName="registry-server"
Feb 17 20:46:26 crc kubenswrapper[4793]: I0217 20:46:26.085942 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="90cf15dc-73ab-478f-8eb8-9401067aeb1b" containerName="registry-server"
Feb 17 20:46:26 crc kubenswrapper[4793]: E0217 20:46:26.086022 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90cf15dc-73ab-478f-8eb8-9401067aeb1b" containerName="extract-utilities"
Feb 17 20:46:26 crc kubenswrapper[4793]: I0217 20:46:26.086077 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="90cf15dc-73ab-478f-8eb8-9401067aeb1b" containerName="extract-utilities"
Feb 17 20:46:26 crc kubenswrapper[4793]: I0217 20:46:26.086302 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="90cf15dc-73ab-478f-8eb8-9401067aeb1b" containerName="registry-server"
Feb 17 20:46:26 crc kubenswrapper[4793]: I0217 20:46:26.087631 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8w8zr"
Feb 17 20:46:26 crc kubenswrapper[4793]: I0217 20:46:26.105251 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8w8zr"]
Feb 17 20:46:26 crc kubenswrapper[4793]: I0217 20:46:26.187852 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd88ef10-9c55-455d-80a9-1d578498f602-utilities\") pod \"certified-operators-8w8zr\" (UID: \"bd88ef10-9c55-455d-80a9-1d578498f602\") " pod="openshift-marketplace/certified-operators-8w8zr"
Feb 17 20:46:26 crc kubenswrapper[4793]: I0217 20:46:26.188095 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd88ef10-9c55-455d-80a9-1d578498f602-catalog-content\") pod \"certified-operators-8w8zr\" (UID:
\"bd88ef10-9c55-455d-80a9-1d578498f602\") " pod="openshift-marketplace/certified-operators-8w8zr" Feb 17 20:46:26 crc kubenswrapper[4793]: I0217 20:46:26.188391 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp9lf\" (UniqueName: \"kubernetes.io/projected/bd88ef10-9c55-455d-80a9-1d578498f602-kube-api-access-lp9lf\") pod \"certified-operators-8w8zr\" (UID: \"bd88ef10-9c55-455d-80a9-1d578498f602\") " pod="openshift-marketplace/certified-operators-8w8zr" Feb 17 20:46:26 crc kubenswrapper[4793]: I0217 20:46:26.290828 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd88ef10-9c55-455d-80a9-1d578498f602-utilities\") pod \"certified-operators-8w8zr\" (UID: \"bd88ef10-9c55-455d-80a9-1d578498f602\") " pod="openshift-marketplace/certified-operators-8w8zr" Feb 17 20:46:26 crc kubenswrapper[4793]: I0217 20:46:26.290892 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd88ef10-9c55-455d-80a9-1d578498f602-catalog-content\") pod \"certified-operators-8w8zr\" (UID: \"bd88ef10-9c55-455d-80a9-1d578498f602\") " pod="openshift-marketplace/certified-operators-8w8zr" Feb 17 20:46:26 crc kubenswrapper[4793]: I0217 20:46:26.290973 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp9lf\" (UniqueName: \"kubernetes.io/projected/bd88ef10-9c55-455d-80a9-1d578498f602-kube-api-access-lp9lf\") pod \"certified-operators-8w8zr\" (UID: \"bd88ef10-9c55-455d-80a9-1d578498f602\") " pod="openshift-marketplace/certified-operators-8w8zr" Feb 17 20:46:26 crc kubenswrapper[4793]: I0217 20:46:26.291645 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd88ef10-9c55-455d-80a9-1d578498f602-utilities\") pod \"certified-operators-8w8zr\" (UID: 
\"bd88ef10-9c55-455d-80a9-1d578498f602\") " pod="openshift-marketplace/certified-operators-8w8zr" Feb 17 20:46:26 crc kubenswrapper[4793]: I0217 20:46:26.291996 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd88ef10-9c55-455d-80a9-1d578498f602-catalog-content\") pod \"certified-operators-8w8zr\" (UID: \"bd88ef10-9c55-455d-80a9-1d578498f602\") " pod="openshift-marketplace/certified-operators-8w8zr" Feb 17 20:46:26 crc kubenswrapper[4793]: I0217 20:46:26.312196 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp9lf\" (UniqueName: \"kubernetes.io/projected/bd88ef10-9c55-455d-80a9-1d578498f602-kube-api-access-lp9lf\") pod \"certified-operators-8w8zr\" (UID: \"bd88ef10-9c55-455d-80a9-1d578498f602\") " pod="openshift-marketplace/certified-operators-8w8zr" Feb 17 20:46:26 crc kubenswrapper[4793]: I0217 20:46:26.419147 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8w8zr" Feb 17 20:46:26 crc kubenswrapper[4793]: I0217 20:46:26.544420 4793 scope.go:117] "RemoveContainer" containerID="84fbb96dfa2addc9152e102e30b92a10f641912eb20ef4d12a86e69cc4010b21" Feb 17 20:46:26 crc kubenswrapper[4793]: E0217 20:46:26.544889 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:46:26 crc kubenswrapper[4793]: I0217 20:46:26.883959 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8w8zr"] Feb 17 20:46:27 crc kubenswrapper[4793]: I0217 20:46:27.364409 4793 generic.go:334] "Generic (PLEG): container finished" podID="bd88ef10-9c55-455d-80a9-1d578498f602" containerID="230f571b08523220cb850f5eeecc796b50e0bf0571eedf32d4e84f9723038e6e" exitCode=0 Feb 17 20:46:27 crc kubenswrapper[4793]: I0217 20:46:27.364485 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8w8zr" event={"ID":"bd88ef10-9c55-455d-80a9-1d578498f602","Type":"ContainerDied","Data":"230f571b08523220cb850f5eeecc796b50e0bf0571eedf32d4e84f9723038e6e"} Feb 17 20:46:27 crc kubenswrapper[4793]: I0217 20:46:27.364527 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8w8zr" event={"ID":"bd88ef10-9c55-455d-80a9-1d578498f602","Type":"ContainerStarted","Data":"1311cf0cd24de19d27743363a34122fc5afaa1825f461758278c2f4568b79933"} Feb 17 20:46:32 crc kubenswrapper[4793]: I0217 20:46:32.414850 4793 generic.go:334] "Generic (PLEG): container finished" podID="bd88ef10-9c55-455d-80a9-1d578498f602" containerID="bcaa34595893ccba8832e223a2459a9d97bb746210b339db06debd162f4c7fbc" 
exitCode=0 Feb 17 20:46:32 crc kubenswrapper[4793]: I0217 20:46:32.417091 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8w8zr" event={"ID":"bd88ef10-9c55-455d-80a9-1d578498f602","Type":"ContainerDied","Data":"bcaa34595893ccba8832e223a2459a9d97bb746210b339db06debd162f4c7fbc"} Feb 17 20:46:33 crc kubenswrapper[4793]: I0217 20:46:33.429856 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8w8zr" event={"ID":"bd88ef10-9c55-455d-80a9-1d578498f602","Type":"ContainerStarted","Data":"2d2f6bddccd22289062d880ca26966eade8916f4369944d20267b90170b5935b"} Feb 17 20:46:33 crc kubenswrapper[4793]: I0217 20:46:33.470169 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8w8zr" podStartSLOduration=1.81275958 podStartE2EDuration="7.470129766s" podCreationTimestamp="2026-02-17 20:46:26 +0000 UTC" firstStartedPulling="2026-02-17 20:46:27.367009136 +0000 UTC m=+2262.658707487" lastFinishedPulling="2026-02-17 20:46:33.024379322 +0000 UTC m=+2268.316077673" observedRunningTime="2026-02-17 20:46:33.450111652 +0000 UTC m=+2268.741810023" watchObservedRunningTime="2026-02-17 20:46:33.470129766 +0000 UTC m=+2268.761828117" Feb 17 20:46:33 crc kubenswrapper[4793]: I0217 20:46:33.539487 4793 scope.go:117] "RemoveContainer" containerID="76425a1257bd4fabf1697c0767c8ac50d36876ae6ef7c15ed6a41dc4566da242" Feb 17 20:46:33 crc kubenswrapper[4793]: E0217 20:46:33.539794 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 20:46:36 crc 
kubenswrapper[4793]: I0217 20:46:36.420241 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8w8zr" Feb 17 20:46:36 crc kubenswrapper[4793]: I0217 20:46:36.421039 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8w8zr" Feb 17 20:46:36 crc kubenswrapper[4793]: I0217 20:46:36.503361 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8w8zr" Feb 17 20:46:41 crc kubenswrapper[4793]: I0217 20:46:41.540360 4793 scope.go:117] "RemoveContainer" containerID="84fbb96dfa2addc9152e102e30b92a10f641912eb20ef4d12a86e69cc4010b21" Feb 17 20:46:41 crc kubenswrapper[4793]: E0217 20:46:41.541437 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:46:45 crc kubenswrapper[4793]: I0217 20:46:45.544982 4793 scope.go:117] "RemoveContainer" containerID="76425a1257bd4fabf1697c0767c8ac50d36876ae6ef7c15ed6a41dc4566da242" Feb 17 20:46:45 crc kubenswrapper[4793]: E0217 20:46:45.546125 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 20:46:46 crc kubenswrapper[4793]: I0217 20:46:46.502026 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-8w8zr" Feb 17 20:46:46 crc kubenswrapper[4793]: I0217 20:46:46.626972 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8w8zr"] Feb 17 20:46:46 crc kubenswrapper[4793]: I0217 20:46:46.684451 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8qhhj"] Feb 17 20:46:46 crc kubenswrapper[4793]: I0217 20:46:46.685079 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8qhhj" podUID="d4103f9c-b66f-4042-aa7f-044dc702eb2b" containerName="registry-server" containerID="cri-o://622345fa7bba9e6c190ee06caa30280ba24ec9a6757a1dd69612037c4907df75" gracePeriod=2 Feb 17 20:46:47 crc kubenswrapper[4793]: I0217 20:46:47.170090 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8qhhj" Feb 17 20:46:47 crc kubenswrapper[4793]: I0217 20:46:47.265378 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4103f9c-b66f-4042-aa7f-044dc702eb2b-catalog-content\") pod \"d4103f9c-b66f-4042-aa7f-044dc702eb2b\" (UID: \"d4103f9c-b66f-4042-aa7f-044dc702eb2b\") " Feb 17 20:46:47 crc kubenswrapper[4793]: I0217 20:46:47.265671 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26n4c\" (UniqueName: \"kubernetes.io/projected/d4103f9c-b66f-4042-aa7f-044dc702eb2b-kube-api-access-26n4c\") pod \"d4103f9c-b66f-4042-aa7f-044dc702eb2b\" (UID: \"d4103f9c-b66f-4042-aa7f-044dc702eb2b\") " Feb 17 20:46:47 crc kubenswrapper[4793]: I0217 20:46:47.265862 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4103f9c-b66f-4042-aa7f-044dc702eb2b-utilities\") pod \"d4103f9c-b66f-4042-aa7f-044dc702eb2b\" (UID: 
\"d4103f9c-b66f-4042-aa7f-044dc702eb2b\") " Feb 17 20:46:47 crc kubenswrapper[4793]: I0217 20:46:47.268844 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4103f9c-b66f-4042-aa7f-044dc702eb2b-utilities" (OuterVolumeSpecName: "utilities") pod "d4103f9c-b66f-4042-aa7f-044dc702eb2b" (UID: "d4103f9c-b66f-4042-aa7f-044dc702eb2b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:46:47 crc kubenswrapper[4793]: I0217 20:46:47.280457 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4103f9c-b66f-4042-aa7f-044dc702eb2b-kube-api-access-26n4c" (OuterVolumeSpecName: "kube-api-access-26n4c") pod "d4103f9c-b66f-4042-aa7f-044dc702eb2b" (UID: "d4103f9c-b66f-4042-aa7f-044dc702eb2b"). InnerVolumeSpecName "kube-api-access-26n4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:46:47 crc kubenswrapper[4793]: I0217 20:46:47.342452 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4103f9c-b66f-4042-aa7f-044dc702eb2b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d4103f9c-b66f-4042-aa7f-044dc702eb2b" (UID: "d4103f9c-b66f-4042-aa7f-044dc702eb2b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:46:47 crc kubenswrapper[4793]: I0217 20:46:47.367553 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26n4c\" (UniqueName: \"kubernetes.io/projected/d4103f9c-b66f-4042-aa7f-044dc702eb2b-kube-api-access-26n4c\") on node \"crc\" DevicePath \"\"" Feb 17 20:46:47 crc kubenswrapper[4793]: I0217 20:46:47.367583 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4103f9c-b66f-4042-aa7f-044dc702eb2b-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 20:46:47 crc kubenswrapper[4793]: I0217 20:46:47.367592 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4103f9c-b66f-4042-aa7f-044dc702eb2b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 20:46:47 crc kubenswrapper[4793]: I0217 20:46:47.628946 4793 generic.go:334] "Generic (PLEG): container finished" podID="d4103f9c-b66f-4042-aa7f-044dc702eb2b" containerID="622345fa7bba9e6c190ee06caa30280ba24ec9a6757a1dd69612037c4907df75" exitCode=0 Feb 17 20:46:47 crc kubenswrapper[4793]: I0217 20:46:47.628987 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8qhhj" event={"ID":"d4103f9c-b66f-4042-aa7f-044dc702eb2b","Type":"ContainerDied","Data":"622345fa7bba9e6c190ee06caa30280ba24ec9a6757a1dd69612037c4907df75"} Feb 17 20:46:47 crc kubenswrapper[4793]: I0217 20:46:47.629016 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8qhhj" event={"ID":"d4103f9c-b66f-4042-aa7f-044dc702eb2b","Type":"ContainerDied","Data":"bad4971bcef7a907156fb6d9018cb7e2721ef04f8250e806473c3e942d54e285"} Feb 17 20:46:47 crc kubenswrapper[4793]: I0217 20:46:47.629032 4793 scope.go:117] "RemoveContainer" containerID="622345fa7bba9e6c190ee06caa30280ba24ec9a6757a1dd69612037c4907df75" Feb 17 20:46:47 crc kubenswrapper[4793]: I0217 
20:46:47.629053 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8qhhj" Feb 17 20:46:47 crc kubenswrapper[4793]: I0217 20:46:47.649902 4793 scope.go:117] "RemoveContainer" containerID="74bb6f313828153e3ceb5269d92fa0bb3a998e9d53ad787a3a1d82c2a0da62f9" Feb 17 20:46:47 crc kubenswrapper[4793]: I0217 20:46:47.653375 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8qhhj"] Feb 17 20:46:47 crc kubenswrapper[4793]: I0217 20:46:47.661030 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8qhhj"] Feb 17 20:46:47 crc kubenswrapper[4793]: I0217 20:46:47.676805 4793 scope.go:117] "RemoveContainer" containerID="51a11e5442f80b0187ec85136fd48d1c9aec77a5c99ff285f112b324b01395d3" Feb 17 20:46:47 crc kubenswrapper[4793]: I0217 20:46:47.717795 4793 scope.go:117] "RemoveContainer" containerID="622345fa7bba9e6c190ee06caa30280ba24ec9a6757a1dd69612037c4907df75" Feb 17 20:46:47 crc kubenswrapper[4793]: E0217 20:46:47.718308 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"622345fa7bba9e6c190ee06caa30280ba24ec9a6757a1dd69612037c4907df75\": container with ID starting with 622345fa7bba9e6c190ee06caa30280ba24ec9a6757a1dd69612037c4907df75 not found: ID does not exist" containerID="622345fa7bba9e6c190ee06caa30280ba24ec9a6757a1dd69612037c4907df75" Feb 17 20:46:47 crc kubenswrapper[4793]: I0217 20:46:47.718352 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"622345fa7bba9e6c190ee06caa30280ba24ec9a6757a1dd69612037c4907df75"} err="failed to get container status \"622345fa7bba9e6c190ee06caa30280ba24ec9a6757a1dd69612037c4907df75\": rpc error: code = NotFound desc = could not find container \"622345fa7bba9e6c190ee06caa30280ba24ec9a6757a1dd69612037c4907df75\": container with ID starting with 
622345fa7bba9e6c190ee06caa30280ba24ec9a6757a1dd69612037c4907df75 not found: ID does not exist" Feb 17 20:46:47 crc kubenswrapper[4793]: I0217 20:46:47.718384 4793 scope.go:117] "RemoveContainer" containerID="74bb6f313828153e3ceb5269d92fa0bb3a998e9d53ad787a3a1d82c2a0da62f9" Feb 17 20:46:47 crc kubenswrapper[4793]: E0217 20:46:47.718813 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74bb6f313828153e3ceb5269d92fa0bb3a998e9d53ad787a3a1d82c2a0da62f9\": container with ID starting with 74bb6f313828153e3ceb5269d92fa0bb3a998e9d53ad787a3a1d82c2a0da62f9 not found: ID does not exist" containerID="74bb6f313828153e3ceb5269d92fa0bb3a998e9d53ad787a3a1d82c2a0da62f9" Feb 17 20:46:47 crc kubenswrapper[4793]: I0217 20:46:47.718845 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74bb6f313828153e3ceb5269d92fa0bb3a998e9d53ad787a3a1d82c2a0da62f9"} err="failed to get container status \"74bb6f313828153e3ceb5269d92fa0bb3a998e9d53ad787a3a1d82c2a0da62f9\": rpc error: code = NotFound desc = could not find container \"74bb6f313828153e3ceb5269d92fa0bb3a998e9d53ad787a3a1d82c2a0da62f9\": container with ID starting with 74bb6f313828153e3ceb5269d92fa0bb3a998e9d53ad787a3a1d82c2a0da62f9 not found: ID does not exist" Feb 17 20:46:47 crc kubenswrapper[4793]: I0217 20:46:47.718872 4793 scope.go:117] "RemoveContainer" containerID="51a11e5442f80b0187ec85136fd48d1c9aec77a5c99ff285f112b324b01395d3" Feb 17 20:46:47 crc kubenswrapper[4793]: E0217 20:46:47.719135 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51a11e5442f80b0187ec85136fd48d1c9aec77a5c99ff285f112b324b01395d3\": container with ID starting with 51a11e5442f80b0187ec85136fd48d1c9aec77a5c99ff285f112b324b01395d3 not found: ID does not exist" containerID="51a11e5442f80b0187ec85136fd48d1c9aec77a5c99ff285f112b324b01395d3" Feb 17 20:46:47 crc 
kubenswrapper[4793]: I0217 20:46:47.719179 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51a11e5442f80b0187ec85136fd48d1c9aec77a5c99ff285f112b324b01395d3"} err="failed to get container status \"51a11e5442f80b0187ec85136fd48d1c9aec77a5c99ff285f112b324b01395d3\": rpc error: code = NotFound desc = could not find container \"51a11e5442f80b0187ec85136fd48d1c9aec77a5c99ff285f112b324b01395d3\": container with ID starting with 51a11e5442f80b0187ec85136fd48d1c9aec77a5c99ff285f112b324b01395d3 not found: ID does not exist" Feb 17 20:46:49 crc kubenswrapper[4793]: I0217 20:46:49.553220 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4103f9c-b66f-4042-aa7f-044dc702eb2b" path="/var/lib/kubelet/pods/d4103f9c-b66f-4042-aa7f-044dc702eb2b/volumes" Feb 17 20:46:52 crc kubenswrapper[4793]: I0217 20:46:52.579147 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wxhdg"] Feb 17 20:46:52 crc kubenswrapper[4793]: E0217 20:46:52.580105 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4103f9c-b66f-4042-aa7f-044dc702eb2b" containerName="extract-content" Feb 17 20:46:52 crc kubenswrapper[4793]: I0217 20:46:52.580124 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4103f9c-b66f-4042-aa7f-044dc702eb2b" containerName="extract-content" Feb 17 20:46:52 crc kubenswrapper[4793]: E0217 20:46:52.580152 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4103f9c-b66f-4042-aa7f-044dc702eb2b" containerName="registry-server" Feb 17 20:46:52 crc kubenswrapper[4793]: I0217 20:46:52.580164 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4103f9c-b66f-4042-aa7f-044dc702eb2b" containerName="registry-server" Feb 17 20:46:52 crc kubenswrapper[4793]: E0217 20:46:52.580189 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4103f9c-b66f-4042-aa7f-044dc702eb2b" containerName="extract-utilities" Feb 
17 20:46:52 crc kubenswrapper[4793]: I0217 20:46:52.580200 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4103f9c-b66f-4042-aa7f-044dc702eb2b" containerName="extract-utilities" Feb 17 20:46:52 crc kubenswrapper[4793]: I0217 20:46:52.580552 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4103f9c-b66f-4042-aa7f-044dc702eb2b" containerName="registry-server" Feb 17 20:46:52 crc kubenswrapper[4793]: I0217 20:46:52.582907 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wxhdg" Feb 17 20:46:52 crc kubenswrapper[4793]: I0217 20:46:52.595328 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wxhdg"] Feb 17 20:46:52 crc kubenswrapper[4793]: I0217 20:46:52.682822 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50b12c31-2da4-4c53-80b6-a366820c1b15-utilities\") pod \"community-operators-wxhdg\" (UID: \"50b12c31-2da4-4c53-80b6-a366820c1b15\") " pod="openshift-marketplace/community-operators-wxhdg" Feb 17 20:46:52 crc kubenswrapper[4793]: I0217 20:46:52.682893 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc7ft\" (UniqueName: \"kubernetes.io/projected/50b12c31-2da4-4c53-80b6-a366820c1b15-kube-api-access-qc7ft\") pod \"community-operators-wxhdg\" (UID: \"50b12c31-2da4-4c53-80b6-a366820c1b15\") " pod="openshift-marketplace/community-operators-wxhdg" Feb 17 20:46:52 crc kubenswrapper[4793]: I0217 20:46:52.683028 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50b12c31-2da4-4c53-80b6-a366820c1b15-catalog-content\") pod \"community-operators-wxhdg\" (UID: \"50b12c31-2da4-4c53-80b6-a366820c1b15\") " 
pod="openshift-marketplace/community-operators-wxhdg" Feb 17 20:46:52 crc kubenswrapper[4793]: I0217 20:46:52.785427 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50b12c31-2da4-4c53-80b6-a366820c1b15-utilities\") pod \"community-operators-wxhdg\" (UID: \"50b12c31-2da4-4c53-80b6-a366820c1b15\") " pod="openshift-marketplace/community-operators-wxhdg" Feb 17 20:46:52 crc kubenswrapper[4793]: I0217 20:46:52.785496 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc7ft\" (UniqueName: \"kubernetes.io/projected/50b12c31-2da4-4c53-80b6-a366820c1b15-kube-api-access-qc7ft\") pod \"community-operators-wxhdg\" (UID: \"50b12c31-2da4-4c53-80b6-a366820c1b15\") " pod="openshift-marketplace/community-operators-wxhdg" Feb 17 20:46:52 crc kubenswrapper[4793]: I0217 20:46:52.785592 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50b12c31-2da4-4c53-80b6-a366820c1b15-catalog-content\") pod \"community-operators-wxhdg\" (UID: \"50b12c31-2da4-4c53-80b6-a366820c1b15\") " pod="openshift-marketplace/community-operators-wxhdg" Feb 17 20:46:52 crc kubenswrapper[4793]: I0217 20:46:52.786179 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50b12c31-2da4-4c53-80b6-a366820c1b15-utilities\") pod \"community-operators-wxhdg\" (UID: \"50b12c31-2da4-4c53-80b6-a366820c1b15\") " pod="openshift-marketplace/community-operators-wxhdg" Feb 17 20:46:52 crc kubenswrapper[4793]: I0217 20:46:52.786217 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50b12c31-2da4-4c53-80b6-a366820c1b15-catalog-content\") pod \"community-operators-wxhdg\" (UID: \"50b12c31-2da4-4c53-80b6-a366820c1b15\") " 
pod="openshift-marketplace/community-operators-wxhdg" Feb 17 20:46:52 crc kubenswrapper[4793]: I0217 20:46:52.814745 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qc7ft\" (UniqueName: \"kubernetes.io/projected/50b12c31-2da4-4c53-80b6-a366820c1b15-kube-api-access-qc7ft\") pod \"community-operators-wxhdg\" (UID: \"50b12c31-2da4-4c53-80b6-a366820c1b15\") " pod="openshift-marketplace/community-operators-wxhdg" Feb 17 20:46:52 crc kubenswrapper[4793]: I0217 20:46:52.912148 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wxhdg" Feb 17 20:46:53 crc kubenswrapper[4793]: I0217 20:46:53.421790 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wxhdg"] Feb 17 20:46:53 crc kubenswrapper[4793]: I0217 20:46:53.686541 4793 generic.go:334] "Generic (PLEG): container finished" podID="50b12c31-2da4-4c53-80b6-a366820c1b15" containerID="9e230362a5d487094fdde277416301373ca9060c5c65fb2a9177057e276bc149" exitCode=0 Feb 17 20:46:53 crc kubenswrapper[4793]: I0217 20:46:53.686648 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxhdg" event={"ID":"50b12c31-2da4-4c53-80b6-a366820c1b15","Type":"ContainerDied","Data":"9e230362a5d487094fdde277416301373ca9060c5c65fb2a9177057e276bc149"} Feb 17 20:46:53 crc kubenswrapper[4793]: I0217 20:46:53.686826 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxhdg" event={"ID":"50b12c31-2da4-4c53-80b6-a366820c1b15","Type":"ContainerStarted","Data":"ff63ce9079d1ccf768fb8fea77315a23c4e3b1dc3dadea8339cbc6a6b9a63677"} Feb 17 20:46:54 crc kubenswrapper[4793]: I0217 20:46:54.539155 4793 scope.go:117] "RemoveContainer" containerID="84fbb96dfa2addc9152e102e30b92a10f641912eb20ef4d12a86e69cc4010b21" Feb 17 20:46:54 crc kubenswrapper[4793]: E0217 20:46:54.539646 4793 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:46:55 crc kubenswrapper[4793]: I0217 20:46:55.738596 4793 generic.go:334] "Generic (PLEG): container finished" podID="50b12c31-2da4-4c53-80b6-a366820c1b15" containerID="260af62bcfe07e08ec338ddcb70d02601153711eb3e8f7832b780b3f427c8651" exitCode=0 Feb 17 20:46:55 crc kubenswrapper[4793]: I0217 20:46:55.738714 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxhdg" event={"ID":"50b12c31-2da4-4c53-80b6-a366820c1b15","Type":"ContainerDied","Data":"260af62bcfe07e08ec338ddcb70d02601153711eb3e8f7832b780b3f427c8651"} Feb 17 20:46:56 crc kubenswrapper[4793]: I0217 20:46:56.752415 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxhdg" event={"ID":"50b12c31-2da4-4c53-80b6-a366820c1b15","Type":"ContainerStarted","Data":"4fac309207cfe9abc3999c2fe0602ea87c8136677d3f4a8f91b12906dbc682fc"} Feb 17 20:46:56 crc kubenswrapper[4793]: I0217 20:46:56.783366 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wxhdg" podStartSLOduration=2.314867603 podStartE2EDuration="4.783343146s" podCreationTimestamp="2026-02-17 20:46:52 +0000 UTC" firstStartedPulling="2026-02-17 20:46:53.687994631 +0000 UTC m=+2288.979692942" lastFinishedPulling="2026-02-17 20:46:56.156470144 +0000 UTC m=+2291.448168485" observedRunningTime="2026-02-17 20:46:56.77742603 +0000 UTC m=+2292.069124361" watchObservedRunningTime="2026-02-17 20:46:56.783343146 +0000 UTC m=+2292.075041477" Feb 17 20:46:57 crc kubenswrapper[4793]: I0217 20:46:57.538808 4793 scope.go:117] "RemoveContainer" 
containerID="76425a1257bd4fabf1697c0767c8ac50d36876ae6ef7c15ed6a41dc4566da242" Feb 17 20:46:57 crc kubenswrapper[4793]: E0217 20:46:57.539393 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 20:47:02 crc kubenswrapper[4793]: I0217 20:47:02.913342 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wxhdg" Feb 17 20:47:02 crc kubenswrapper[4793]: I0217 20:47:02.913742 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wxhdg" Feb 17 20:47:02 crc kubenswrapper[4793]: I0217 20:47:02.983267 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wxhdg" Feb 17 20:47:03 crc kubenswrapper[4793]: I0217 20:47:03.864811 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wxhdg" Feb 17 20:47:03 crc kubenswrapper[4793]: I0217 20:47:03.916166 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wxhdg"] Feb 17 20:47:05 crc kubenswrapper[4793]: I0217 20:47:05.842987 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wxhdg" podUID="50b12c31-2da4-4c53-80b6-a366820c1b15" containerName="registry-server" containerID="cri-o://4fac309207cfe9abc3999c2fe0602ea87c8136677d3f4a8f91b12906dbc682fc" gracePeriod=2 Feb 17 20:47:06 crc kubenswrapper[4793]: I0217 20:47:06.348400 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wxhdg" Feb 17 20:47:06 crc kubenswrapper[4793]: I0217 20:47:06.464757 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50b12c31-2da4-4c53-80b6-a366820c1b15-catalog-content\") pod \"50b12c31-2da4-4c53-80b6-a366820c1b15\" (UID: \"50b12c31-2da4-4c53-80b6-a366820c1b15\") " Feb 17 20:47:06 crc kubenswrapper[4793]: I0217 20:47:06.464908 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50b12c31-2da4-4c53-80b6-a366820c1b15-utilities\") pod \"50b12c31-2da4-4c53-80b6-a366820c1b15\" (UID: \"50b12c31-2da4-4c53-80b6-a366820c1b15\") " Feb 17 20:47:06 crc kubenswrapper[4793]: I0217 20:47:06.464992 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qc7ft\" (UniqueName: \"kubernetes.io/projected/50b12c31-2da4-4c53-80b6-a366820c1b15-kube-api-access-qc7ft\") pod \"50b12c31-2da4-4c53-80b6-a366820c1b15\" (UID: \"50b12c31-2da4-4c53-80b6-a366820c1b15\") " Feb 17 20:47:06 crc kubenswrapper[4793]: I0217 20:47:06.465935 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50b12c31-2da4-4c53-80b6-a366820c1b15-utilities" (OuterVolumeSpecName: "utilities") pod "50b12c31-2da4-4c53-80b6-a366820c1b15" (UID: "50b12c31-2da4-4c53-80b6-a366820c1b15"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:47:06 crc kubenswrapper[4793]: I0217 20:47:06.476482 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50b12c31-2da4-4c53-80b6-a366820c1b15-kube-api-access-qc7ft" (OuterVolumeSpecName: "kube-api-access-qc7ft") pod "50b12c31-2da4-4c53-80b6-a366820c1b15" (UID: "50b12c31-2da4-4c53-80b6-a366820c1b15"). InnerVolumeSpecName "kube-api-access-qc7ft". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:47:06 crc kubenswrapper[4793]: I0217 20:47:06.525696 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50b12c31-2da4-4c53-80b6-a366820c1b15-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "50b12c31-2da4-4c53-80b6-a366820c1b15" (UID: "50b12c31-2da4-4c53-80b6-a366820c1b15"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:47:06 crc kubenswrapper[4793]: I0217 20:47:06.567863 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50b12c31-2da4-4c53-80b6-a366820c1b15-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 20:47:06 crc kubenswrapper[4793]: I0217 20:47:06.567940 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50b12c31-2da4-4c53-80b6-a366820c1b15-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 20:47:06 crc kubenswrapper[4793]: I0217 20:47:06.567959 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qc7ft\" (UniqueName: \"kubernetes.io/projected/50b12c31-2da4-4c53-80b6-a366820c1b15-kube-api-access-qc7ft\") on node \"crc\" DevicePath \"\"" Feb 17 20:47:06 crc kubenswrapper[4793]: I0217 20:47:06.857447 4793 generic.go:334] "Generic (PLEG): container finished" podID="50b12c31-2da4-4c53-80b6-a366820c1b15" containerID="4fac309207cfe9abc3999c2fe0602ea87c8136677d3f4a8f91b12906dbc682fc" exitCode=0 Feb 17 20:47:06 crc kubenswrapper[4793]: I0217 20:47:06.857510 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wxhdg" Feb 17 20:47:06 crc kubenswrapper[4793]: I0217 20:47:06.857526 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxhdg" event={"ID":"50b12c31-2da4-4c53-80b6-a366820c1b15","Type":"ContainerDied","Data":"4fac309207cfe9abc3999c2fe0602ea87c8136677d3f4a8f91b12906dbc682fc"} Feb 17 20:47:06 crc kubenswrapper[4793]: I0217 20:47:06.857588 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wxhdg" event={"ID":"50b12c31-2da4-4c53-80b6-a366820c1b15","Type":"ContainerDied","Data":"ff63ce9079d1ccf768fb8fea77315a23c4e3b1dc3dadea8339cbc6a6b9a63677"} Feb 17 20:47:06 crc kubenswrapper[4793]: I0217 20:47:06.857723 4793 scope.go:117] "RemoveContainer" containerID="4fac309207cfe9abc3999c2fe0602ea87c8136677d3f4a8f91b12906dbc682fc" Feb 17 20:47:06 crc kubenswrapper[4793]: I0217 20:47:06.881064 4793 scope.go:117] "RemoveContainer" containerID="260af62bcfe07e08ec338ddcb70d02601153711eb3e8f7832b780b3f427c8651" Feb 17 20:47:06 crc kubenswrapper[4793]: I0217 20:47:06.907278 4793 scope.go:117] "RemoveContainer" containerID="9e230362a5d487094fdde277416301373ca9060c5c65fb2a9177057e276bc149" Feb 17 20:47:06 crc kubenswrapper[4793]: I0217 20:47:06.911526 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wxhdg"] Feb 17 20:47:06 crc kubenswrapper[4793]: I0217 20:47:06.925278 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wxhdg"] Feb 17 20:47:06 crc kubenswrapper[4793]: I0217 20:47:06.966580 4793 scope.go:117] "RemoveContainer" containerID="4fac309207cfe9abc3999c2fe0602ea87c8136677d3f4a8f91b12906dbc682fc" Feb 17 20:47:06 crc kubenswrapper[4793]: E0217 20:47:06.967093 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"4fac309207cfe9abc3999c2fe0602ea87c8136677d3f4a8f91b12906dbc682fc\": container with ID starting with 4fac309207cfe9abc3999c2fe0602ea87c8136677d3f4a8f91b12906dbc682fc not found: ID does not exist" containerID="4fac309207cfe9abc3999c2fe0602ea87c8136677d3f4a8f91b12906dbc682fc" Feb 17 20:47:06 crc kubenswrapper[4793]: I0217 20:47:06.967121 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fac309207cfe9abc3999c2fe0602ea87c8136677d3f4a8f91b12906dbc682fc"} err="failed to get container status \"4fac309207cfe9abc3999c2fe0602ea87c8136677d3f4a8f91b12906dbc682fc\": rpc error: code = NotFound desc = could not find container \"4fac309207cfe9abc3999c2fe0602ea87c8136677d3f4a8f91b12906dbc682fc\": container with ID starting with 4fac309207cfe9abc3999c2fe0602ea87c8136677d3f4a8f91b12906dbc682fc not found: ID does not exist" Feb 17 20:47:06 crc kubenswrapper[4793]: I0217 20:47:06.967142 4793 scope.go:117] "RemoveContainer" containerID="260af62bcfe07e08ec338ddcb70d02601153711eb3e8f7832b780b3f427c8651" Feb 17 20:47:06 crc kubenswrapper[4793]: E0217 20:47:06.967435 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"260af62bcfe07e08ec338ddcb70d02601153711eb3e8f7832b780b3f427c8651\": container with ID starting with 260af62bcfe07e08ec338ddcb70d02601153711eb3e8f7832b780b3f427c8651 not found: ID does not exist" containerID="260af62bcfe07e08ec338ddcb70d02601153711eb3e8f7832b780b3f427c8651" Feb 17 20:47:06 crc kubenswrapper[4793]: I0217 20:47:06.967455 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"260af62bcfe07e08ec338ddcb70d02601153711eb3e8f7832b780b3f427c8651"} err="failed to get container status \"260af62bcfe07e08ec338ddcb70d02601153711eb3e8f7832b780b3f427c8651\": rpc error: code = NotFound desc = could not find container \"260af62bcfe07e08ec338ddcb70d02601153711eb3e8f7832b780b3f427c8651\": container with ID 
starting with 260af62bcfe07e08ec338ddcb70d02601153711eb3e8f7832b780b3f427c8651 not found: ID does not exist" Feb 17 20:47:06 crc kubenswrapper[4793]: I0217 20:47:06.967469 4793 scope.go:117] "RemoveContainer" containerID="9e230362a5d487094fdde277416301373ca9060c5c65fb2a9177057e276bc149" Feb 17 20:47:06 crc kubenswrapper[4793]: E0217 20:47:06.968017 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e230362a5d487094fdde277416301373ca9060c5c65fb2a9177057e276bc149\": container with ID starting with 9e230362a5d487094fdde277416301373ca9060c5c65fb2a9177057e276bc149 not found: ID does not exist" containerID="9e230362a5d487094fdde277416301373ca9060c5c65fb2a9177057e276bc149" Feb 17 20:47:06 crc kubenswrapper[4793]: I0217 20:47:06.968076 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e230362a5d487094fdde277416301373ca9060c5c65fb2a9177057e276bc149"} err="failed to get container status \"9e230362a5d487094fdde277416301373ca9060c5c65fb2a9177057e276bc149\": rpc error: code = NotFound desc = could not find container \"9e230362a5d487094fdde277416301373ca9060c5c65fb2a9177057e276bc149\": container with ID starting with 9e230362a5d487094fdde277416301373ca9060c5c65fb2a9177057e276bc149 not found: ID does not exist" Feb 17 20:47:07 crc kubenswrapper[4793]: I0217 20:47:07.555024 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50b12c31-2da4-4c53-80b6-a366820c1b15" path="/var/lib/kubelet/pods/50b12c31-2da4-4c53-80b6-a366820c1b15/volumes" Feb 17 20:47:08 crc kubenswrapper[4793]: I0217 20:47:08.539059 4793 scope.go:117] "RemoveContainer" containerID="84fbb96dfa2addc9152e102e30b92a10f641912eb20ef4d12a86e69cc4010b21" Feb 17 20:47:08 crc kubenswrapper[4793]: E0217 20:47:08.541363 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:47:09 crc kubenswrapper[4793]: I0217 20:47:09.539999 4793 scope.go:117] "RemoveContainer" containerID="76425a1257bd4fabf1697c0767c8ac50d36876ae6ef7c15ed6a41dc4566da242" Feb 17 20:47:09 crc kubenswrapper[4793]: E0217 20:47:09.540582 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 20:47:20 crc kubenswrapper[4793]: I0217 20:47:20.539365 4793 scope.go:117] "RemoveContainer" containerID="76425a1257bd4fabf1697c0767c8ac50d36876ae6ef7c15ed6a41dc4566da242" Feb 17 20:47:20 crc kubenswrapper[4793]: E0217 20:47:20.540727 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 20:47:22 crc kubenswrapper[4793]: I0217 20:47:22.538792 4793 scope.go:117] "RemoveContainer" containerID="84fbb96dfa2addc9152e102e30b92a10f641912eb20ef4d12a86e69cc4010b21" Feb 17 20:47:22 crc kubenswrapper[4793]: E0217 20:47:22.540167 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier 
pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:47:31 crc kubenswrapper[4793]: I0217 20:47:31.539064 4793 scope.go:117] "RemoveContainer" containerID="76425a1257bd4fabf1697c0767c8ac50d36876ae6ef7c15ed6a41dc4566da242" Feb 17 20:47:31 crc kubenswrapper[4793]: E0217 20:47:31.540238 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 20:47:35 crc kubenswrapper[4793]: I0217 20:47:35.546440 4793 scope.go:117] "RemoveContainer" containerID="84fbb96dfa2addc9152e102e30b92a10f641912eb20ef4d12a86e69cc4010b21" Feb 17 20:47:35 crc kubenswrapper[4793]: E0217 20:47:35.547036 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:47:45 crc kubenswrapper[4793]: I0217 20:47:45.545503 4793 scope.go:117] "RemoveContainer" containerID="76425a1257bd4fabf1697c0767c8ac50d36876ae6ef7c15ed6a41dc4566da242" Feb 17 20:47:45 crc kubenswrapper[4793]: E0217 20:47:45.548059 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 20:47:49 crc kubenswrapper[4793]: I0217 20:47:49.539124 4793 scope.go:117] "RemoveContainer" containerID="84fbb96dfa2addc9152e102e30b92a10f641912eb20ef4d12a86e69cc4010b21" Feb 17 20:47:49 crc kubenswrapper[4793]: E0217 20:47:49.540163 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:47:57 crc kubenswrapper[4793]: I0217 20:47:57.538505 4793 scope.go:117] "RemoveContainer" containerID="76425a1257bd4fabf1697c0767c8ac50d36876ae6ef7c15ed6a41dc4566da242" Feb 17 20:47:57 crc kubenswrapper[4793]: E0217 20:47:57.539178 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 20:48:04 crc kubenswrapper[4793]: I0217 20:48:04.539269 4793 scope.go:117] "RemoveContainer" containerID="84fbb96dfa2addc9152e102e30b92a10f641912eb20ef4d12a86e69cc4010b21" Feb 17 20:48:04 crc kubenswrapper[4793]: E0217 20:48:04.540202 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:48:12 crc 
kubenswrapper[4793]: I0217 20:48:12.539239 4793 scope.go:117] "RemoveContainer" containerID="76425a1257bd4fabf1697c0767c8ac50d36876ae6ef7c15ed6a41dc4566da242" Feb 17 20:48:12 crc kubenswrapper[4793]: E0217 20:48:12.540403 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 20:48:19 crc kubenswrapper[4793]: I0217 20:48:19.539242 4793 scope.go:117] "RemoveContainer" containerID="84fbb96dfa2addc9152e102e30b92a10f641912eb20ef4d12a86e69cc4010b21" Feb 17 20:48:19 crc kubenswrapper[4793]: E0217 20:48:19.541370 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:48:24 crc kubenswrapper[4793]: I0217 20:48:24.539160 4793 scope.go:117] "RemoveContainer" containerID="76425a1257bd4fabf1697c0767c8ac50d36876ae6ef7c15ed6a41dc4566da242" Feb 17 20:48:24 crc kubenswrapper[4793]: E0217 20:48:24.540060 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 20:48:33 crc kubenswrapper[4793]: I0217 20:48:33.540210 4793 scope.go:117] 
"RemoveContainer" containerID="84fbb96dfa2addc9152e102e30b92a10f641912eb20ef4d12a86e69cc4010b21" Feb 17 20:48:33 crc kubenswrapper[4793]: E0217 20:48:33.541166 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:48:38 crc kubenswrapper[4793]: I0217 20:48:38.539021 4793 scope.go:117] "RemoveContainer" containerID="76425a1257bd4fabf1697c0767c8ac50d36876ae6ef7c15ed6a41dc4566da242" Feb 17 20:48:38 crc kubenswrapper[4793]: E0217 20:48:38.539963 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 20:48:44 crc kubenswrapper[4793]: I0217 20:48:44.538801 4793 scope.go:117] "RemoveContainer" containerID="84fbb96dfa2addc9152e102e30b92a10f641912eb20ef4d12a86e69cc4010b21" Feb 17 20:48:45 crc kubenswrapper[4793]: I0217 20:48:45.003349 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"06ecbc8e-aa9f-4025-883d-65e4c000d986","Type":"ContainerStarted","Data":"6f99d76a42b8e90ac19bbab5ff2df207abf870bd4fbcdc76cb494de8123014fc"} Feb 17 20:48:46 crc kubenswrapper[4793]: I0217 20:48:46.963159 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 17 20:48:47 crc kubenswrapper[4793]: I0217 20:48:47.027728 4793 generic.go:334] "Generic (PLEG): container finished" podID="06ecbc8e-aa9f-4025-883d-65e4c000d986" 
containerID="6f99d76a42b8e90ac19bbab5ff2df207abf870bd4fbcdc76cb494de8123014fc" exitCode=1 Feb 17 20:48:47 crc kubenswrapper[4793]: I0217 20:48:47.027794 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"06ecbc8e-aa9f-4025-883d-65e4c000d986","Type":"ContainerDied","Data":"6f99d76a42b8e90ac19bbab5ff2df207abf870bd4fbcdc76cb494de8123014fc"} Feb 17 20:48:47 crc kubenswrapper[4793]: I0217 20:48:47.027832 4793 scope.go:117] "RemoveContainer" containerID="84fbb96dfa2addc9152e102e30b92a10f641912eb20ef4d12a86e69cc4010b21" Feb 17 20:48:47 crc kubenswrapper[4793]: I0217 20:48:47.030212 4793 scope.go:117] "RemoveContainer" containerID="6f99d76a42b8e90ac19bbab5ff2df207abf870bd4fbcdc76cb494de8123014fc" Feb 17 20:48:47 crc kubenswrapper[4793]: E0217 20:48:47.031402 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:48:51 crc kubenswrapper[4793]: I0217 20:48:51.962484 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 20:48:51 crc kubenswrapper[4793]: I0217 20:48:51.963017 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 20:48:51 crc kubenswrapper[4793]: I0217 20:48:51.963028 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 20:48:51 crc kubenswrapper[4793]: I0217 20:48:51.963779 4793 scope.go:117] "RemoveContainer" containerID="6f99d76a42b8e90ac19bbab5ff2df207abf870bd4fbcdc76cb494de8123014fc" Feb 17 20:48:51 crc kubenswrapper[4793]: E0217 20:48:51.964013 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:48:53 crc kubenswrapper[4793]: I0217 20:48:53.539772 4793 scope.go:117] "RemoveContainer" containerID="76425a1257bd4fabf1697c0767c8ac50d36876ae6ef7c15ed6a41dc4566da242" Feb 17 20:48:53 crc kubenswrapper[4793]: E0217 20:48:53.541024 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 20:49:05 crc kubenswrapper[4793]: I0217 20:49:05.551518 4793 scope.go:117] "RemoveContainer" containerID="76425a1257bd4fabf1697c0767c8ac50d36876ae6ef7c15ed6a41dc4566da242" Feb 17 20:49:05 crc kubenswrapper[4793]: E0217 20:49:05.552996 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 20:49:06 crc kubenswrapper[4793]: I0217 20:49:06.540246 4793 scope.go:117] "RemoveContainer" containerID="6f99d76a42b8e90ac19bbab5ff2df207abf870bd4fbcdc76cb494de8123014fc" Feb 17 20:49:06 crc kubenswrapper[4793]: E0217 20:49:06.540843 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:49:18 crc kubenswrapper[4793]: I0217 20:49:18.539251 4793 scope.go:117] "RemoveContainer" containerID="76425a1257bd4fabf1697c0767c8ac50d36876ae6ef7c15ed6a41dc4566da242" Feb 17 20:49:18 crc kubenswrapper[4793]: I0217 20:49:18.540039 4793 scope.go:117] "RemoveContainer" containerID="6f99d76a42b8e90ac19bbab5ff2df207abf870bd4fbcdc76cb494de8123014fc" Feb 17 20:49:18 crc kubenswrapper[4793]: E0217 20:49:18.540232 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 20:49:18 crc kubenswrapper[4793]: E0217 20:49:18.540409 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:49:31 crc kubenswrapper[4793]: I0217 20:49:31.539154 4793 scope.go:117] "RemoveContainer" containerID="76425a1257bd4fabf1697c0767c8ac50d36876ae6ef7c15ed6a41dc4566da242" Feb 17 20:49:31 crc kubenswrapper[4793]: E0217 20:49:31.540533 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 20:49:33 crc kubenswrapper[4793]: I0217 20:49:33.539720 4793 scope.go:117] "RemoveContainer" containerID="6f99d76a42b8e90ac19bbab5ff2df207abf870bd4fbcdc76cb494de8123014fc" Feb 17 20:49:33 crc kubenswrapper[4793]: E0217 20:49:33.540302 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:49:43 crc kubenswrapper[4793]: I0217 20:49:43.539228 4793 scope.go:117] "RemoveContainer" containerID="76425a1257bd4fabf1697c0767c8ac50d36876ae6ef7c15ed6a41dc4566da242" Feb 17 20:49:43 crc kubenswrapper[4793]: E0217 20:49:43.544514 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 20:49:48 crc kubenswrapper[4793]: I0217 20:49:48.538349 4793 scope.go:117] "RemoveContainer" containerID="6f99d76a42b8e90ac19bbab5ff2df207abf870bd4fbcdc76cb494de8123014fc" Feb 17 20:49:48 crc kubenswrapper[4793]: E0217 20:49:48.539276 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" 
pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 20:49:56 crc kubenswrapper[4793]: I0217 20:49:56.539316 4793 scope.go:117] "RemoveContainer" containerID="76425a1257bd4fabf1697c0767c8ac50d36876ae6ef7c15ed6a41dc4566da242"
Feb 17 20:49:57 crc kubenswrapper[4793]: I0217 20:49:57.761310 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerStarted","Data":"48133be2347bc1cf331fc0563b46c0e45b4e2135a15478a64b9be4152198a949"}
Feb 17 20:49:59 crc kubenswrapper[4793]: I0217 20:49:59.553152 4793 scope.go:117] "RemoveContainer" containerID="6f99d76a42b8e90ac19bbab5ff2df207abf870bd4fbcdc76cb494de8123014fc"
Feb 17 20:49:59 crc kubenswrapper[4793]: E0217 20:49:59.554025 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 20:50:12 crc kubenswrapper[4793]: I0217 20:50:12.539057 4793 scope.go:117] "RemoveContainer" containerID="6f99d76a42b8e90ac19bbab5ff2df207abf870bd4fbcdc76cb494de8123014fc"
Feb 17 20:50:12 crc kubenswrapper[4793]: E0217 20:50:12.539684 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 20:50:25 crc kubenswrapper[4793]: I0217 20:50:25.556756 4793 scope.go:117] "RemoveContainer" containerID="6f99d76a42b8e90ac19bbab5ff2df207abf870bd4fbcdc76cb494de8123014fc"
Feb 17 20:50:25 crc kubenswrapper[4793]: E0217 20:50:25.560154 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 20:50:37 crc kubenswrapper[4793]: I0217 20:50:37.538561 4793 scope.go:117] "RemoveContainer" containerID="6f99d76a42b8e90ac19bbab5ff2df207abf870bd4fbcdc76cb494de8123014fc"
Feb 17 20:50:37 crc kubenswrapper[4793]: E0217 20:50:37.539434 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 20:50:51 crc kubenswrapper[4793]: I0217 20:50:51.539230 4793 scope.go:117] "RemoveContainer" containerID="6f99d76a42b8e90ac19bbab5ff2df207abf870bd4fbcdc76cb494de8123014fc"
Feb 17 20:50:51 crc kubenswrapper[4793]: E0217 20:50:51.539999 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 20:51:03 crc kubenswrapper[4793]: I0217 20:51:03.538903 4793 scope.go:117] "RemoveContainer" containerID="6f99d76a42b8e90ac19bbab5ff2df207abf870bd4fbcdc76cb494de8123014fc"
Feb 17 20:51:03 crc kubenswrapper[4793]: E0217 20:51:03.539635 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 20:51:15 crc kubenswrapper[4793]: I0217 20:51:15.549565 4793 scope.go:117] "RemoveContainer" containerID="6f99d76a42b8e90ac19bbab5ff2df207abf870bd4fbcdc76cb494de8123014fc"
Feb 17 20:51:15 crc kubenswrapper[4793]: E0217 20:51:15.550503 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 20:51:28 crc kubenswrapper[4793]: I0217 20:51:28.539852 4793 scope.go:117] "RemoveContainer" containerID="6f99d76a42b8e90ac19bbab5ff2df207abf870bd4fbcdc76cb494de8123014fc"
Feb 17 20:51:28 crc kubenswrapper[4793]: E0217 20:51:28.540887 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 20:51:40 crc kubenswrapper[4793]: I0217 20:51:40.539842 4793 scope.go:117] "RemoveContainer" containerID="6f99d76a42b8e90ac19bbab5ff2df207abf870bd4fbcdc76cb494de8123014fc"
Feb 17 20:51:40 crc kubenswrapper[4793]: E0217 20:51:40.542202 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 20:51:53 crc kubenswrapper[4793]: I0217 20:51:53.541496 4793 scope.go:117] "RemoveContainer" containerID="6f99d76a42b8e90ac19bbab5ff2df207abf870bd4fbcdc76cb494de8123014fc"
Feb 17 20:51:53 crc kubenswrapper[4793]: E0217 20:51:53.542854 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 20:52:08 crc kubenswrapper[4793]: I0217 20:52:08.539271 4793 scope.go:117] "RemoveContainer" containerID="6f99d76a42b8e90ac19bbab5ff2df207abf870bd4fbcdc76cb494de8123014fc"
Feb 17 20:52:08 crc kubenswrapper[4793]: E0217 20:52:08.540394 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 20:52:19 crc kubenswrapper[4793]: I0217 20:52:19.539121 4793 scope.go:117] "RemoveContainer" containerID="6f99d76a42b8e90ac19bbab5ff2df207abf870bd4fbcdc76cb494de8123014fc"
Feb 17 20:52:19 crc kubenswrapper[4793]: E0217 20:52:19.540102 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 20:52:20 crc kubenswrapper[4793]: I0217 20:52:20.102080 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 20:52:20 crc kubenswrapper[4793]: I0217 20:52:20.102152 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 20:52:34 crc kubenswrapper[4793]: I0217 20:52:34.538638 4793 scope.go:117] "RemoveContainer" containerID="6f99d76a42b8e90ac19bbab5ff2df207abf870bd4fbcdc76cb494de8123014fc"
Feb 17 20:52:34 crc kubenswrapper[4793]: E0217 20:52:34.539460 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 20:52:40 crc kubenswrapper[4793]: I0217 20:52:40.821602 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sk5r9"]
Feb 17 20:52:40 crc kubenswrapper[4793]: E0217 20:52:40.822525 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50b12c31-2da4-4c53-80b6-a366820c1b15" containerName="extract-utilities"
Feb 17 20:52:40 crc kubenswrapper[4793]: I0217 20:52:40.822541 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="50b12c31-2da4-4c53-80b6-a366820c1b15" containerName="extract-utilities"
Feb 17 20:52:40 crc kubenswrapper[4793]: E0217 20:52:40.822581 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50b12c31-2da4-4c53-80b6-a366820c1b15" containerName="registry-server"
Feb 17 20:52:40 crc kubenswrapper[4793]: I0217 20:52:40.822590 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="50b12c31-2da4-4c53-80b6-a366820c1b15" containerName="registry-server"
Feb 17 20:52:40 crc kubenswrapper[4793]: E0217 20:52:40.822606 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50b12c31-2da4-4c53-80b6-a366820c1b15" containerName="extract-content"
Feb 17 20:52:40 crc kubenswrapper[4793]: I0217 20:52:40.822615 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="50b12c31-2da4-4c53-80b6-a366820c1b15" containerName="extract-content"
Feb 17 20:52:40 crc kubenswrapper[4793]: I0217 20:52:40.822949 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="50b12c31-2da4-4c53-80b6-a366820c1b15" containerName="registry-server"
Feb 17 20:52:40 crc kubenswrapper[4793]: I0217 20:52:40.825236 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sk5r9"
Feb 17 20:52:40 crc kubenswrapper[4793]: I0217 20:52:40.849282 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sk5r9"]
Feb 17 20:52:40 crc kubenswrapper[4793]: I0217 20:52:40.881486 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27skg\" (UniqueName: \"kubernetes.io/projected/9ab68c4c-d1cf-41bd-a950-286aa8f6b952-kube-api-access-27skg\") pod \"redhat-marketplace-sk5r9\" (UID: \"9ab68c4c-d1cf-41bd-a950-286aa8f6b952\") " pod="openshift-marketplace/redhat-marketplace-sk5r9"
Feb 17 20:52:40 crc kubenswrapper[4793]: I0217 20:52:40.881952 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ab68c4c-d1cf-41bd-a950-286aa8f6b952-utilities\") pod \"redhat-marketplace-sk5r9\" (UID: \"9ab68c4c-d1cf-41bd-a950-286aa8f6b952\") " pod="openshift-marketplace/redhat-marketplace-sk5r9"
Feb 17 20:52:40 crc kubenswrapper[4793]: I0217 20:52:40.882204 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ab68c4c-d1cf-41bd-a950-286aa8f6b952-catalog-content\") pod \"redhat-marketplace-sk5r9\" (UID: \"9ab68c4c-d1cf-41bd-a950-286aa8f6b952\") " pod="openshift-marketplace/redhat-marketplace-sk5r9"
Feb 17 20:52:40 crc kubenswrapper[4793]: I0217 20:52:40.983338 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27skg\" (UniqueName: \"kubernetes.io/projected/9ab68c4c-d1cf-41bd-a950-286aa8f6b952-kube-api-access-27skg\") pod \"redhat-marketplace-sk5r9\" (UID: \"9ab68c4c-d1cf-41bd-a950-286aa8f6b952\") " pod="openshift-marketplace/redhat-marketplace-sk5r9"
Feb 17 20:52:40 crc kubenswrapper[4793]: I0217 20:52:40.983427 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ab68c4c-d1cf-41bd-a950-286aa8f6b952-utilities\") pod \"redhat-marketplace-sk5r9\" (UID: \"9ab68c4c-d1cf-41bd-a950-286aa8f6b952\") " pod="openshift-marketplace/redhat-marketplace-sk5r9"
Feb 17 20:52:40 crc kubenswrapper[4793]: I0217 20:52:40.983586 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ab68c4c-d1cf-41bd-a950-286aa8f6b952-catalog-content\") pod \"redhat-marketplace-sk5r9\" (UID: \"9ab68c4c-d1cf-41bd-a950-286aa8f6b952\") " pod="openshift-marketplace/redhat-marketplace-sk5r9"
Feb 17 20:52:40 crc kubenswrapper[4793]: I0217 20:52:40.984211 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ab68c4c-d1cf-41bd-a950-286aa8f6b952-catalog-content\") pod \"redhat-marketplace-sk5r9\" (UID: \"9ab68c4c-d1cf-41bd-a950-286aa8f6b952\") " pod="openshift-marketplace/redhat-marketplace-sk5r9"
Feb 17 20:52:40 crc kubenswrapper[4793]: I0217 20:52:40.984559 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ab68c4c-d1cf-41bd-a950-286aa8f6b952-utilities\") pod \"redhat-marketplace-sk5r9\" (UID: \"9ab68c4c-d1cf-41bd-a950-286aa8f6b952\") " pod="openshift-marketplace/redhat-marketplace-sk5r9"
Feb 17 20:52:41 crc kubenswrapper[4793]: I0217 20:52:41.009103 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27skg\" (UniqueName: \"kubernetes.io/projected/9ab68c4c-d1cf-41bd-a950-286aa8f6b952-kube-api-access-27skg\") pod \"redhat-marketplace-sk5r9\" (UID: \"9ab68c4c-d1cf-41bd-a950-286aa8f6b952\") " pod="openshift-marketplace/redhat-marketplace-sk5r9"
Feb 17 20:52:41 crc kubenswrapper[4793]: I0217 20:52:41.171138 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sk5r9"
Feb 17 20:52:41 crc kubenswrapper[4793]: W0217 20:52:41.683039 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ab68c4c_d1cf_41bd_a950_286aa8f6b952.slice/crio-b76a4a5d03422e44a3916e8de182019221e5ffedd1d5d5c62bd6503f9789b8a3 WatchSource:0}: Error finding container b76a4a5d03422e44a3916e8de182019221e5ffedd1d5d5c62bd6503f9789b8a3: Status 404 returned error can't find the container with id b76a4a5d03422e44a3916e8de182019221e5ffedd1d5d5c62bd6503f9789b8a3
Feb 17 20:52:41 crc kubenswrapper[4793]: I0217 20:52:41.689401 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sk5r9"]
Feb 17 20:52:42 crc kubenswrapper[4793]: I0217 20:52:42.628334 4793 generic.go:334] "Generic (PLEG): container finished" podID="9ab68c4c-d1cf-41bd-a950-286aa8f6b952" containerID="6a3f10e885d3a2a6d751fcad7b88b91e48dcf0fa3915af317608c1f28353eb1a" exitCode=0
Feb 17 20:52:42 crc kubenswrapper[4793]: I0217 20:52:42.628394 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sk5r9" event={"ID":"9ab68c4c-d1cf-41bd-a950-286aa8f6b952","Type":"ContainerDied","Data":"6a3f10e885d3a2a6d751fcad7b88b91e48dcf0fa3915af317608c1f28353eb1a"}
Feb 17 20:52:42 crc kubenswrapper[4793]: I0217 20:52:42.628719 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sk5r9" event={"ID":"9ab68c4c-d1cf-41bd-a950-286aa8f6b952","Type":"ContainerStarted","Data":"b76a4a5d03422e44a3916e8de182019221e5ffedd1d5d5c62bd6503f9789b8a3"}
Feb 17 20:52:42 crc kubenswrapper[4793]: I0217 20:52:42.630815 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 17 20:52:43 crc kubenswrapper[4793]: I0217 20:52:43.642353 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sk5r9" event={"ID":"9ab68c4c-d1cf-41bd-a950-286aa8f6b952","Type":"ContainerStarted","Data":"20e48b5233549a1a888485ee9e6c0f7cf630b9438f1e71ff560453e41cd6c77b"}
Feb 17 20:52:44 crc kubenswrapper[4793]: I0217 20:52:44.653803 4793 generic.go:334] "Generic (PLEG): container finished" podID="9ab68c4c-d1cf-41bd-a950-286aa8f6b952" containerID="20e48b5233549a1a888485ee9e6c0f7cf630b9438f1e71ff560453e41cd6c77b" exitCode=0
Feb 17 20:52:44 crc kubenswrapper[4793]: I0217 20:52:44.653873 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sk5r9" event={"ID":"9ab68c4c-d1cf-41bd-a950-286aa8f6b952","Type":"ContainerDied","Data":"20e48b5233549a1a888485ee9e6c0f7cf630b9438f1e71ff560453e41cd6c77b"}
Feb 17 20:52:45 crc kubenswrapper[4793]: I0217 20:52:45.667268 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sk5r9" event={"ID":"9ab68c4c-d1cf-41bd-a950-286aa8f6b952","Type":"ContainerStarted","Data":"8645c2cc07cdcdbd2ec21ddc024599a8609a350fd66fe4a7d7395c74888e1e05"}
Feb 17 20:52:45 crc kubenswrapper[4793]: I0217 20:52:45.702714 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sk5r9" podStartSLOduration=3.279913599 podStartE2EDuration="5.702680301s" podCreationTimestamp="2026-02-17 20:52:40 +0000 UTC" firstStartedPulling="2026-02-17 20:52:42.630414469 +0000 UTC m=+2637.922112820" lastFinishedPulling="2026-02-17 20:52:45.053181181 +0000 UTC m=+2640.344879522" observedRunningTime="2026-02-17 20:52:45.691755052 +0000 UTC m=+2640.983453363" watchObservedRunningTime="2026-02-17 20:52:45.702680301 +0000 UTC m=+2640.994378612"
Feb 17 20:52:46 crc kubenswrapper[4793]: I0217 20:52:46.538878 4793 scope.go:117] "RemoveContainer" containerID="6f99d76a42b8e90ac19bbab5ff2df207abf870bd4fbcdc76cb494de8123014fc"
Feb 17 20:52:46 crc kubenswrapper[4793]: E0217 20:52:46.539310 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 20:52:50 crc kubenswrapper[4793]: I0217 20:52:50.102334 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 20:52:50 crc kubenswrapper[4793]: I0217 20:52:50.103014 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 20:52:51 crc kubenswrapper[4793]: I0217 20:52:51.172224 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sk5r9"
Feb 17 20:52:51 crc kubenswrapper[4793]: I0217 20:52:51.172671 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sk5r9"
Feb 17 20:52:51 crc kubenswrapper[4793]: I0217 20:52:51.265496 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sk5r9"
Feb 17 20:52:51 crc kubenswrapper[4793]: I0217 20:52:51.818481 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sk5r9"
Feb 17 20:52:51 crc kubenswrapper[4793]: I0217 20:52:51.881103 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sk5r9"]
Feb 17 20:52:53 crc kubenswrapper[4793]: I0217 20:52:53.764906 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sk5r9" podUID="9ab68c4c-d1cf-41bd-a950-286aa8f6b952" containerName="registry-server" containerID="cri-o://8645c2cc07cdcdbd2ec21ddc024599a8609a350fd66fe4a7d7395c74888e1e05" gracePeriod=2
Feb 17 20:52:54 crc kubenswrapper[4793]: I0217 20:52:54.325267 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sk5r9"
Feb 17 20:52:54 crc kubenswrapper[4793]: I0217 20:52:54.486920 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ab68c4c-d1cf-41bd-a950-286aa8f6b952-utilities\") pod \"9ab68c4c-d1cf-41bd-a950-286aa8f6b952\" (UID: \"9ab68c4c-d1cf-41bd-a950-286aa8f6b952\") "
Feb 17 20:52:54 crc kubenswrapper[4793]: I0217 20:52:54.486974 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27skg\" (UniqueName: \"kubernetes.io/projected/9ab68c4c-d1cf-41bd-a950-286aa8f6b952-kube-api-access-27skg\") pod \"9ab68c4c-d1cf-41bd-a950-286aa8f6b952\" (UID: \"9ab68c4c-d1cf-41bd-a950-286aa8f6b952\") "
Feb 17 20:52:54 crc kubenswrapper[4793]: I0217 20:52:54.487019 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ab68c4c-d1cf-41bd-a950-286aa8f6b952-catalog-content\") pod \"9ab68c4c-d1cf-41bd-a950-286aa8f6b952\" (UID: \"9ab68c4c-d1cf-41bd-a950-286aa8f6b952\") "
Feb 17 20:52:54 crc kubenswrapper[4793]: I0217 20:52:54.487865 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ab68c4c-d1cf-41bd-a950-286aa8f6b952-utilities" (OuterVolumeSpecName: "utilities") pod "9ab68c4c-d1cf-41bd-a950-286aa8f6b952" (UID: "9ab68c4c-d1cf-41bd-a950-286aa8f6b952"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 20:52:54 crc kubenswrapper[4793]: I0217 20:52:54.493137 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ab68c4c-d1cf-41bd-a950-286aa8f6b952-kube-api-access-27skg" (OuterVolumeSpecName: "kube-api-access-27skg") pod "9ab68c4c-d1cf-41bd-a950-286aa8f6b952" (UID: "9ab68c4c-d1cf-41bd-a950-286aa8f6b952"). InnerVolumeSpecName "kube-api-access-27skg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 20:52:54 crc kubenswrapper[4793]: I0217 20:52:54.510186 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ab68c4c-d1cf-41bd-a950-286aa8f6b952-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9ab68c4c-d1cf-41bd-a950-286aa8f6b952" (UID: "9ab68c4c-d1cf-41bd-a950-286aa8f6b952"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 20:52:54 crc kubenswrapper[4793]: I0217 20:52:54.589491 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ab68c4c-d1cf-41bd-a950-286aa8f6b952-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 20:52:54 crc kubenswrapper[4793]: I0217 20:52:54.589541 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27skg\" (UniqueName: \"kubernetes.io/projected/9ab68c4c-d1cf-41bd-a950-286aa8f6b952-kube-api-access-27skg\") on node \"crc\" DevicePath \"\""
Feb 17 20:52:54 crc kubenswrapper[4793]: I0217 20:52:54.589564 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ab68c4c-d1cf-41bd-a950-286aa8f6b952-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 20:52:54 crc kubenswrapper[4793]: I0217 20:52:54.777175 4793 generic.go:334] "Generic (PLEG): container finished" podID="9ab68c4c-d1cf-41bd-a950-286aa8f6b952" containerID="8645c2cc07cdcdbd2ec21ddc024599a8609a350fd66fe4a7d7395c74888e1e05" exitCode=0
Feb 17 20:52:54 crc kubenswrapper[4793]: I0217 20:52:54.777264 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sk5r9"
Feb 17 20:52:54 crc kubenswrapper[4793]: I0217 20:52:54.777293 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sk5r9" event={"ID":"9ab68c4c-d1cf-41bd-a950-286aa8f6b952","Type":"ContainerDied","Data":"8645c2cc07cdcdbd2ec21ddc024599a8609a350fd66fe4a7d7395c74888e1e05"}
Feb 17 20:52:54 crc kubenswrapper[4793]: I0217 20:52:54.779954 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sk5r9" event={"ID":"9ab68c4c-d1cf-41bd-a950-286aa8f6b952","Type":"ContainerDied","Data":"b76a4a5d03422e44a3916e8de182019221e5ffedd1d5d5c62bd6503f9789b8a3"}
Feb 17 20:52:54 crc kubenswrapper[4793]: I0217 20:52:54.779997 4793 scope.go:117] "RemoveContainer" containerID="8645c2cc07cdcdbd2ec21ddc024599a8609a350fd66fe4a7d7395c74888e1e05"
Feb 17 20:52:54 crc kubenswrapper[4793]: I0217 20:52:54.819376 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sk5r9"]
Feb 17 20:52:54 crc kubenswrapper[4793]: I0217 20:52:54.822231 4793 scope.go:117] "RemoveContainer" containerID="20e48b5233549a1a888485ee9e6c0f7cf630b9438f1e71ff560453e41cd6c77b"
Feb 17 20:52:54 crc kubenswrapper[4793]: I0217 20:52:54.832101 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sk5r9"]
Feb 17 20:52:54 crc kubenswrapper[4793]: I0217 20:52:54.854326 4793 scope.go:117] "RemoveContainer" containerID="6a3f10e885d3a2a6d751fcad7b88b91e48dcf0fa3915af317608c1f28353eb1a"
Feb 17 20:52:54 crc kubenswrapper[4793]: I0217 20:52:54.909940 4793 scope.go:117] "RemoveContainer" containerID="8645c2cc07cdcdbd2ec21ddc024599a8609a350fd66fe4a7d7395c74888e1e05"
Feb 17 20:52:54 crc kubenswrapper[4793]: E0217 20:52:54.910512 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8645c2cc07cdcdbd2ec21ddc024599a8609a350fd66fe4a7d7395c74888e1e05\": container with ID starting with 8645c2cc07cdcdbd2ec21ddc024599a8609a350fd66fe4a7d7395c74888e1e05 not found: ID does not exist" containerID="8645c2cc07cdcdbd2ec21ddc024599a8609a350fd66fe4a7d7395c74888e1e05"
Feb 17 20:52:54 crc kubenswrapper[4793]: I0217 20:52:54.910605 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8645c2cc07cdcdbd2ec21ddc024599a8609a350fd66fe4a7d7395c74888e1e05"} err="failed to get container status \"8645c2cc07cdcdbd2ec21ddc024599a8609a350fd66fe4a7d7395c74888e1e05\": rpc error: code = NotFound desc = could not find container \"8645c2cc07cdcdbd2ec21ddc024599a8609a350fd66fe4a7d7395c74888e1e05\": container with ID starting with 8645c2cc07cdcdbd2ec21ddc024599a8609a350fd66fe4a7d7395c74888e1e05 not found: ID does not exist"
Feb 17 20:52:54 crc kubenswrapper[4793]: I0217 20:52:54.910667 4793 scope.go:117] "RemoveContainer" containerID="20e48b5233549a1a888485ee9e6c0f7cf630b9438f1e71ff560453e41cd6c77b"
Feb 17 20:52:54 crc kubenswrapper[4793]: E0217 20:52:54.911469 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20e48b5233549a1a888485ee9e6c0f7cf630b9438f1e71ff560453e41cd6c77b\": container with ID starting with 20e48b5233549a1a888485ee9e6c0f7cf630b9438f1e71ff560453e41cd6c77b not found: ID does not exist" containerID="20e48b5233549a1a888485ee9e6c0f7cf630b9438f1e71ff560453e41cd6c77b"
Feb 17 20:52:54 crc kubenswrapper[4793]: I0217 20:52:54.911509 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20e48b5233549a1a888485ee9e6c0f7cf630b9438f1e71ff560453e41cd6c77b"} err="failed to get container status \"20e48b5233549a1a888485ee9e6c0f7cf630b9438f1e71ff560453e41cd6c77b\": rpc error: code = NotFound desc = could not find container \"20e48b5233549a1a888485ee9e6c0f7cf630b9438f1e71ff560453e41cd6c77b\": container with ID starting with 20e48b5233549a1a888485ee9e6c0f7cf630b9438f1e71ff560453e41cd6c77b not found: ID does not exist"
Feb 17 20:52:54 crc kubenswrapper[4793]: I0217 20:52:54.911532 4793 scope.go:117] "RemoveContainer" containerID="6a3f10e885d3a2a6d751fcad7b88b91e48dcf0fa3915af317608c1f28353eb1a"
Feb 17 20:52:54 crc kubenswrapper[4793]: E0217 20:52:54.911927 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a3f10e885d3a2a6d751fcad7b88b91e48dcf0fa3915af317608c1f28353eb1a\": container with ID starting with 6a3f10e885d3a2a6d751fcad7b88b91e48dcf0fa3915af317608c1f28353eb1a not found: ID does not exist" containerID="6a3f10e885d3a2a6d751fcad7b88b91e48dcf0fa3915af317608c1f28353eb1a"
Feb 17 20:52:54 crc kubenswrapper[4793]: I0217 20:52:54.911978 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a3f10e885d3a2a6d751fcad7b88b91e48dcf0fa3915af317608c1f28353eb1a"} err="failed to get container status \"6a3f10e885d3a2a6d751fcad7b88b91e48dcf0fa3915af317608c1f28353eb1a\": rpc error: code = NotFound desc = could not find container \"6a3f10e885d3a2a6d751fcad7b88b91e48dcf0fa3915af317608c1f28353eb1a\": container with ID starting with 6a3f10e885d3a2a6d751fcad7b88b91e48dcf0fa3915af317608c1f28353eb1a not found: ID does not exist"
Feb 17 20:52:55 crc kubenswrapper[4793]: I0217 20:52:55.556674 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ab68c4c-d1cf-41bd-a950-286aa8f6b952" path="/var/lib/kubelet/pods/9ab68c4c-d1cf-41bd-a950-286aa8f6b952/volumes"
Feb 17 20:52:58 crc kubenswrapper[4793]: I0217 20:52:58.538670 4793 scope.go:117] "RemoveContainer" containerID="6f99d76a42b8e90ac19bbab5ff2df207abf870bd4fbcdc76cb494de8123014fc"
Feb 17 20:52:58 crc kubenswrapper[4793]: E0217 20:52:58.539567 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 20:53:09 crc kubenswrapper[4793]: I0217 20:53:09.540253 4793 scope.go:117] "RemoveContainer" containerID="6f99d76a42b8e90ac19bbab5ff2df207abf870bd4fbcdc76cb494de8123014fc"
Feb 17 20:53:09 crc kubenswrapper[4793]: E0217 20:53:09.541497 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 20:53:20 crc kubenswrapper[4793]: I0217 20:53:20.102236 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 20:53:20 crc kubenswrapper[4793]: I0217 20:53:20.102866 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 20:53:20 crc kubenswrapper[4793]: I0217 20:53:20.102920 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf"
Feb 17 20:53:20 crc kubenswrapper[4793]: I0217 20:53:20.103884 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"48133be2347bc1cf331fc0563b46c0e45b4e2135a15478a64b9be4152198a949"} pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 17 20:53:20 crc kubenswrapper[4793]: I0217 20:53:20.103955 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" containerID="cri-o://48133be2347bc1cf331fc0563b46c0e45b4e2135a15478a64b9be4152198a949" gracePeriod=600
Feb 17 20:53:21 crc kubenswrapper[4793]: I0217 20:53:21.097679 4793 generic.go:334] "Generic (PLEG): container finished" podID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerID="48133be2347bc1cf331fc0563b46c0e45b4e2135a15478a64b9be4152198a949" exitCode=0
Feb 17 20:53:21 crc kubenswrapper[4793]: I0217 20:53:21.097750 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerDied","Data":"48133be2347bc1cf331fc0563b46c0e45b4e2135a15478a64b9be4152198a949"}
Feb 17 20:53:21 crc kubenswrapper[4793]: I0217 20:53:21.098403 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerStarted","Data":"112ebd7a8b03931981411cafe86346aa245df9facfa9f0e7e930cbc0e71b7747"}
Feb 17 20:53:21 crc kubenswrapper[4793]: I0217 20:53:21.098446 4793 scope.go:117] "RemoveContainer" containerID="76425a1257bd4fabf1697c0767c8ac50d36876ae6ef7c15ed6a41dc4566da242"
Feb 17 20:53:23 crc kubenswrapper[4793]: I0217 20:53:23.539660 4793 scope.go:117] "RemoveContainer" containerID="6f99d76a42b8e90ac19bbab5ff2df207abf870bd4fbcdc76cb494de8123014fc"
Feb 17 20:53:23 crc kubenswrapper[4793]: E0217 20:53:23.540670 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 20:53:37 crc kubenswrapper[4793]: I0217 20:53:37.539095 4793 scope.go:117] "RemoveContainer" containerID="6f99d76a42b8e90ac19bbab5ff2df207abf870bd4fbcdc76cb494de8123014fc"
Feb 17 20:53:37 crc kubenswrapper[4793]: E0217 20:53:37.540443 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 20:53:52 crc kubenswrapper[4793]: I0217 20:53:52.539979 4793 scope.go:117] "RemoveContainer" containerID="6f99d76a42b8e90ac19bbab5ff2df207abf870bd4fbcdc76cb494de8123014fc"
Feb 17 20:53:53 crc kubenswrapper[4793]: I0217 20:53:53.559772 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"06ecbc8e-aa9f-4025-883d-65e4c000d986","Type":"ContainerStarted","Data":"95b948c18605db461183c2c00273f31744bce4f3b032bf8cd00379007b79b89b"}
Feb 17 20:53:55 crc kubenswrapper[4793]: I0217 20:53:55.570583 4793 generic.go:334] "Generic (PLEG): container finished" podID="06ecbc8e-aa9f-4025-883d-65e4c000d986" containerID="95b948c18605db461183c2c00273f31744bce4f3b032bf8cd00379007b79b89b" exitCode=1
Feb 17 20:53:55 crc kubenswrapper[4793]: I0217 20:53:55.570683 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"06ecbc8e-aa9f-4025-883d-65e4c000d986","Type":"ContainerDied","Data":"95b948c18605db461183c2c00273f31744bce4f3b032bf8cd00379007b79b89b"}
Feb 17 20:53:55 crc kubenswrapper[4793]: I0217 20:53:55.570957 4793 scope.go:117] "RemoveContainer" containerID="6f99d76a42b8e90ac19bbab5ff2df207abf870bd4fbcdc76cb494de8123014fc"
Feb 17 20:53:55 crc kubenswrapper[4793]: I0217 20:53:55.571831 4793 scope.go:117] "RemoveContainer" containerID="95b948c18605db461183c2c00273f31744bce4f3b032bf8cd00379007b79b89b"
Feb 17 20:53:55 crc kubenswrapper[4793]: E0217 20:53:55.572215 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 20:53:56 crc kubenswrapper[4793]: I0217 20:53:56.963254 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0"
Feb 17 20:53:56 crc kubenswrapper[4793]: I0217 20:53:56.964316 4793 scope.go:117] "RemoveContainer" containerID="95b948c18605db461183c2c00273f31744bce4f3b032bf8cd00379007b79b89b"
Feb 17 20:53:56 crc kubenswrapper[4793]: E0217 20:53:56.964595 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 20:54:01 crc kubenswrapper[4793]: I0217 20:54:01.962884 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-applier-0"
Feb 17 20:54:01 crc kubenswrapper[4793]: I0217 20:54:01.964568 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0"
Feb 17 20:54:01 crc kubenswrapper[4793]: I0217 20:54:01.964726 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0"
Feb 17 20:54:01 crc kubenswrapper[4793]: I0217 20:54:01.965571 4793 scope.go:117] "RemoveContainer" containerID="95b948c18605db461183c2c00273f31744bce4f3b032bf8cd00379007b79b89b"
Feb 17 20:54:01 crc kubenswrapper[4793]: E0217 20:54:01.965982 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 20:54:02 crc kubenswrapper[4793]: I0217 20:54:02.641404 4793 scope.go:117] "RemoveContainer" containerID="95b948c18605db461183c2c00273f31744bce4f3b032bf8cd00379007b79b89b"
Feb 17 20:54:02 crc kubenswrapper[4793]: E0217 20:54:02.641814 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 20:54:14 crc kubenswrapper[4793]: I0217 20:54:14.539050 4793 scope.go:117] "RemoveContainer" containerID="95b948c18605db461183c2c00273f31744bce4f3b032bf8cd00379007b79b89b"
Feb 17 20:54:14 crc kubenswrapper[4793]: E0217 20:54:14.540348 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 20:54:25 crc kubenswrapper[4793]: I0217 20:54:25.547344 4793 scope.go:117] "RemoveContainer"
containerID="95b948c18605db461183c2c00273f31744bce4f3b032bf8cd00379007b79b89b" Feb 17 20:54:25 crc kubenswrapper[4793]: E0217 20:54:25.549041 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:54:39 crc kubenswrapper[4793]: I0217 20:54:39.538268 4793 scope.go:117] "RemoveContainer" containerID="95b948c18605db461183c2c00273f31744bce4f3b032bf8cd00379007b79b89b" Feb 17 20:54:39 crc kubenswrapper[4793]: E0217 20:54:39.539051 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:54:54 crc kubenswrapper[4793]: I0217 20:54:54.538225 4793 scope.go:117] "RemoveContainer" containerID="95b948c18605db461183c2c00273f31744bce4f3b032bf8cd00379007b79b89b" Feb 17 20:54:54 crc kubenswrapper[4793]: E0217 20:54:54.538862 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:55:07 crc kubenswrapper[4793]: I0217 20:55:07.539773 4793 scope.go:117] "RemoveContainer" containerID="95b948c18605db461183c2c00273f31744bce4f3b032bf8cd00379007b79b89b" Feb 17 20:55:07 crc kubenswrapper[4793]: E0217 20:55:07.540769 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:55:18 crc kubenswrapper[4793]: I0217 20:55:18.539105 4793 scope.go:117] "RemoveContainer" containerID="95b948c18605db461183c2c00273f31744bce4f3b032bf8cd00379007b79b89b" Feb 17 20:55:18 crc kubenswrapper[4793]: E0217 20:55:18.540666 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:55:20 crc kubenswrapper[4793]: I0217 20:55:20.101731 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 20:55:20 crc kubenswrapper[4793]: I0217 20:55:20.102135 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 20:55:29 crc kubenswrapper[4793]: I0217 20:55:29.539798 4793 scope.go:117] "RemoveContainer" containerID="95b948c18605db461183c2c00273f31744bce4f3b032bf8cd00379007b79b89b" Feb 17 20:55:29 crc kubenswrapper[4793]: E0217 20:55:29.540913 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:55:40 crc kubenswrapper[4793]: I0217 20:55:40.538418 4793 scope.go:117] "RemoveContainer" containerID="95b948c18605db461183c2c00273f31744bce4f3b032bf8cd00379007b79b89b" Feb 17 20:55:40 crc kubenswrapper[4793]: E0217 20:55:40.539192 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:55:50 crc kubenswrapper[4793]: I0217 20:55:50.104332 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 20:55:50 crc kubenswrapper[4793]: I0217 20:55:50.105184 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 20:55:52 crc kubenswrapper[4793]: I0217 20:55:52.539442 4793 scope.go:117] "RemoveContainer" containerID="95b948c18605db461183c2c00273f31744bce4f3b032bf8cd00379007b79b89b" Feb 17 20:55:52 crc kubenswrapper[4793]: E0217 20:55:52.540547 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier 
pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:56:04 crc kubenswrapper[4793]: I0217 20:56:04.165930 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nn44c"] Feb 17 20:56:04 crc kubenswrapper[4793]: E0217 20:56:04.167386 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ab68c4c-d1cf-41bd-a950-286aa8f6b952" containerName="extract-utilities" Feb 17 20:56:04 crc kubenswrapper[4793]: I0217 20:56:04.167586 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ab68c4c-d1cf-41bd-a950-286aa8f6b952" containerName="extract-utilities" Feb 17 20:56:04 crc kubenswrapper[4793]: E0217 20:56:04.167615 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ab68c4c-d1cf-41bd-a950-286aa8f6b952" containerName="extract-content" Feb 17 20:56:04 crc kubenswrapper[4793]: I0217 20:56:04.167629 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ab68c4c-d1cf-41bd-a950-286aa8f6b952" containerName="extract-content" Feb 17 20:56:04 crc kubenswrapper[4793]: E0217 20:56:04.167674 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ab68c4c-d1cf-41bd-a950-286aa8f6b952" containerName="registry-server" Feb 17 20:56:04 crc kubenswrapper[4793]: I0217 20:56:04.167717 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ab68c4c-d1cf-41bd-a950-286aa8f6b952" containerName="registry-server" Feb 17 20:56:04 crc kubenswrapper[4793]: I0217 20:56:04.168083 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ab68c4c-d1cf-41bd-a950-286aa8f6b952" containerName="registry-server" Feb 17 20:56:04 crc kubenswrapper[4793]: I0217 20:56:04.170255 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nn44c" Feb 17 20:56:04 crc kubenswrapper[4793]: I0217 20:56:04.183429 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nn44c"] Feb 17 20:56:04 crc kubenswrapper[4793]: I0217 20:56:04.285718 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65133409-092b-419b-8ff5-d690bc140300-utilities\") pod \"redhat-operators-nn44c\" (UID: \"65133409-092b-419b-8ff5-d690bc140300\") " pod="openshift-marketplace/redhat-operators-nn44c" Feb 17 20:56:04 crc kubenswrapper[4793]: I0217 20:56:04.285847 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsjmp\" (UniqueName: \"kubernetes.io/projected/65133409-092b-419b-8ff5-d690bc140300-kube-api-access-lsjmp\") pod \"redhat-operators-nn44c\" (UID: \"65133409-092b-419b-8ff5-d690bc140300\") " pod="openshift-marketplace/redhat-operators-nn44c" Feb 17 20:56:04 crc kubenswrapper[4793]: I0217 20:56:04.286040 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65133409-092b-419b-8ff5-d690bc140300-catalog-content\") pod \"redhat-operators-nn44c\" (UID: \"65133409-092b-419b-8ff5-d690bc140300\") " pod="openshift-marketplace/redhat-operators-nn44c" Feb 17 20:56:04 crc kubenswrapper[4793]: I0217 20:56:04.387805 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65133409-092b-419b-8ff5-d690bc140300-utilities\") pod \"redhat-operators-nn44c\" (UID: \"65133409-092b-419b-8ff5-d690bc140300\") " pod="openshift-marketplace/redhat-operators-nn44c" Feb 17 20:56:04 crc kubenswrapper[4793]: I0217 20:56:04.387880 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-lsjmp\" (UniqueName: \"kubernetes.io/projected/65133409-092b-419b-8ff5-d690bc140300-kube-api-access-lsjmp\") pod \"redhat-operators-nn44c\" (UID: \"65133409-092b-419b-8ff5-d690bc140300\") " pod="openshift-marketplace/redhat-operators-nn44c" Feb 17 20:56:04 crc kubenswrapper[4793]: I0217 20:56:04.387962 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65133409-092b-419b-8ff5-d690bc140300-catalog-content\") pod \"redhat-operators-nn44c\" (UID: \"65133409-092b-419b-8ff5-d690bc140300\") " pod="openshift-marketplace/redhat-operators-nn44c" Feb 17 20:56:04 crc kubenswrapper[4793]: I0217 20:56:04.388659 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65133409-092b-419b-8ff5-d690bc140300-catalog-content\") pod \"redhat-operators-nn44c\" (UID: \"65133409-092b-419b-8ff5-d690bc140300\") " pod="openshift-marketplace/redhat-operators-nn44c" Feb 17 20:56:04 crc kubenswrapper[4793]: I0217 20:56:04.388788 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65133409-092b-419b-8ff5-d690bc140300-utilities\") pod \"redhat-operators-nn44c\" (UID: \"65133409-092b-419b-8ff5-d690bc140300\") " pod="openshift-marketplace/redhat-operators-nn44c" Feb 17 20:56:04 crc kubenswrapper[4793]: I0217 20:56:04.410905 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsjmp\" (UniqueName: \"kubernetes.io/projected/65133409-092b-419b-8ff5-d690bc140300-kube-api-access-lsjmp\") pod \"redhat-operators-nn44c\" (UID: \"65133409-092b-419b-8ff5-d690bc140300\") " pod="openshift-marketplace/redhat-operators-nn44c" Feb 17 20:56:04 crc kubenswrapper[4793]: I0217 20:56:04.516712 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nn44c" Feb 17 20:56:05 crc kubenswrapper[4793]: I0217 20:56:05.038957 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nn44c"] Feb 17 20:56:05 crc kubenswrapper[4793]: I0217 20:56:05.983876 4793 generic.go:334] "Generic (PLEG): container finished" podID="65133409-092b-419b-8ff5-d690bc140300" containerID="be672ccadf67c350bf3ed2243ee3c579639f146b003bd9f2ac5b18c97f5aaaa5" exitCode=0 Feb 17 20:56:05 crc kubenswrapper[4793]: I0217 20:56:05.983986 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nn44c" event={"ID":"65133409-092b-419b-8ff5-d690bc140300","Type":"ContainerDied","Data":"be672ccadf67c350bf3ed2243ee3c579639f146b003bd9f2ac5b18c97f5aaaa5"} Feb 17 20:56:05 crc kubenswrapper[4793]: I0217 20:56:05.984193 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nn44c" event={"ID":"65133409-092b-419b-8ff5-d690bc140300","Type":"ContainerStarted","Data":"6d6bb8265c585eef73830e9992d1346a934c007a8b64936ade74534169f8c3e1"} Feb 17 20:56:06 crc kubenswrapper[4793]: I0217 20:56:06.539546 4793 scope.go:117] "RemoveContainer" containerID="95b948c18605db461183c2c00273f31744bce4f3b032bf8cd00379007b79b89b" Feb 17 20:56:06 crc kubenswrapper[4793]: E0217 20:56:06.540308 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:56:06 crc kubenswrapper[4793]: I0217 20:56:06.997594 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nn44c" 
event={"ID":"65133409-092b-419b-8ff5-d690bc140300","Type":"ContainerStarted","Data":"aeda7051930fabe28473a50559df974f5f888ef7dde2bc4aa7b0f27a1a4b17ad"} Feb 17 20:56:11 crc kubenswrapper[4793]: I0217 20:56:11.045763 4793 generic.go:334] "Generic (PLEG): container finished" podID="65133409-092b-419b-8ff5-d690bc140300" containerID="aeda7051930fabe28473a50559df974f5f888ef7dde2bc4aa7b0f27a1a4b17ad" exitCode=0 Feb 17 20:56:11 crc kubenswrapper[4793]: I0217 20:56:11.045879 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nn44c" event={"ID":"65133409-092b-419b-8ff5-d690bc140300","Type":"ContainerDied","Data":"aeda7051930fabe28473a50559df974f5f888ef7dde2bc4aa7b0f27a1a4b17ad"} Feb 17 20:56:12 crc kubenswrapper[4793]: I0217 20:56:12.063083 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nn44c" event={"ID":"65133409-092b-419b-8ff5-d690bc140300","Type":"ContainerStarted","Data":"0490c45e0cf98744a7df8fcc825a673670a0b4bdd0996df12217222c1f4171a2"} Feb 17 20:56:12 crc kubenswrapper[4793]: I0217 20:56:12.087462 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nn44c" podStartSLOduration=2.581880547 podStartE2EDuration="8.087441082s" podCreationTimestamp="2026-02-17 20:56:04 +0000 UTC" firstStartedPulling="2026-02-17 20:56:05.985483185 +0000 UTC m=+2841.277181496" lastFinishedPulling="2026-02-17 20:56:11.49104369 +0000 UTC m=+2846.782742031" observedRunningTime="2026-02-17 20:56:12.08329169 +0000 UTC m=+2847.374990021" watchObservedRunningTime="2026-02-17 20:56:12.087441082 +0000 UTC m=+2847.379139413" Feb 17 20:56:14 crc kubenswrapper[4793]: I0217 20:56:14.517114 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nn44c" Feb 17 20:56:14 crc kubenswrapper[4793]: I0217 20:56:14.517802 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-nn44c" Feb 17 20:56:15 crc kubenswrapper[4793]: I0217 20:56:15.574439 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nn44c" podUID="65133409-092b-419b-8ff5-d690bc140300" containerName="registry-server" probeResult="failure" output=< Feb 17 20:56:15 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Feb 17 20:56:15 crc kubenswrapper[4793]: > Feb 17 20:56:20 crc kubenswrapper[4793]: I0217 20:56:20.101844 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 20:56:20 crc kubenswrapper[4793]: I0217 20:56:20.102741 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 20:56:20 crc kubenswrapper[4793]: I0217 20:56:20.102824 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" Feb 17 20:56:20 crc kubenswrapper[4793]: I0217 20:56:20.104084 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"112ebd7a8b03931981411cafe86346aa245df9facfa9f0e7e930cbc0e71b7747"} pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 20:56:20 crc kubenswrapper[4793]: I0217 20:56:20.104195 4793 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" containerID="cri-o://112ebd7a8b03931981411cafe86346aa245df9facfa9f0e7e930cbc0e71b7747" gracePeriod=600 Feb 17 20:56:20 crc kubenswrapper[4793]: E0217 20:56:20.235997 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 20:56:20 crc kubenswrapper[4793]: I0217 20:56:20.539196 4793 scope.go:117] "RemoveContainer" containerID="95b948c18605db461183c2c00273f31744bce4f3b032bf8cd00379007b79b89b" Feb 17 20:56:20 crc kubenswrapper[4793]: E0217 20:56:20.539747 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:56:21 crc kubenswrapper[4793]: I0217 20:56:21.164256 4793 generic.go:334] "Generic (PLEG): container finished" podID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerID="112ebd7a8b03931981411cafe86346aa245df9facfa9f0e7e930cbc0e71b7747" exitCode=0 Feb 17 20:56:21 crc kubenswrapper[4793]: I0217 20:56:21.164348 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerDied","Data":"112ebd7a8b03931981411cafe86346aa245df9facfa9f0e7e930cbc0e71b7747"} Feb 17 20:56:21 crc kubenswrapper[4793]: I0217 20:56:21.164789 4793 
scope.go:117] "RemoveContainer" containerID="48133be2347bc1cf331fc0563b46c0e45b4e2135a15478a64b9be4152198a949" Feb 17 20:56:21 crc kubenswrapper[4793]: I0217 20:56:21.166279 4793 scope.go:117] "RemoveContainer" containerID="112ebd7a8b03931981411cafe86346aa245df9facfa9f0e7e930cbc0e71b7747" Feb 17 20:56:21 crc kubenswrapper[4793]: E0217 20:56:21.166963 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 20:56:25 crc kubenswrapper[4793]: I0217 20:56:25.567677 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nn44c" podUID="65133409-092b-419b-8ff5-d690bc140300" containerName="registry-server" probeResult="failure" output=< Feb 17 20:56:25 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Feb 17 20:56:25 crc kubenswrapper[4793]: > Feb 17 20:56:32 crc kubenswrapper[4793]: I0217 20:56:32.539201 4793 scope.go:117] "RemoveContainer" containerID="112ebd7a8b03931981411cafe86346aa245df9facfa9f0e7e930cbc0e71b7747" Feb 17 20:56:32 crc kubenswrapper[4793]: E0217 20:56:32.540424 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 20:56:34 crc kubenswrapper[4793]: I0217 20:56:34.538425 4793 scope.go:117] "RemoveContainer" 
containerID="95b948c18605db461183c2c00273f31744bce4f3b032bf8cd00379007b79b89b" Feb 17 20:56:34 crc kubenswrapper[4793]: E0217 20:56:34.539456 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:56:34 crc kubenswrapper[4793]: I0217 20:56:34.576293 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nn44c" Feb 17 20:56:34 crc kubenswrapper[4793]: I0217 20:56:34.638187 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nn44c" Feb 17 20:56:35 crc kubenswrapper[4793]: I0217 20:56:35.355229 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nn44c"] Feb 17 20:56:36 crc kubenswrapper[4793]: I0217 20:56:36.329665 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nn44c" podUID="65133409-092b-419b-8ff5-d690bc140300" containerName="registry-server" containerID="cri-o://0490c45e0cf98744a7df8fcc825a673670a0b4bdd0996df12217222c1f4171a2" gracePeriod=2 Feb 17 20:56:36 crc kubenswrapper[4793]: I0217 20:56:36.856245 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nn44c" Feb 17 20:56:36 crc kubenswrapper[4793]: I0217 20:56:36.869419 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lsjmp\" (UniqueName: \"kubernetes.io/projected/65133409-092b-419b-8ff5-d690bc140300-kube-api-access-lsjmp\") pod \"65133409-092b-419b-8ff5-d690bc140300\" (UID: \"65133409-092b-419b-8ff5-d690bc140300\") " Feb 17 20:56:36 crc kubenswrapper[4793]: I0217 20:56:36.869479 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65133409-092b-419b-8ff5-d690bc140300-catalog-content\") pod \"65133409-092b-419b-8ff5-d690bc140300\" (UID: \"65133409-092b-419b-8ff5-d690bc140300\") " Feb 17 20:56:36 crc kubenswrapper[4793]: I0217 20:56:36.869717 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65133409-092b-419b-8ff5-d690bc140300-utilities\") pod \"65133409-092b-419b-8ff5-d690bc140300\" (UID: \"65133409-092b-419b-8ff5-d690bc140300\") " Feb 17 20:56:36 crc kubenswrapper[4793]: I0217 20:56:36.870481 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65133409-092b-419b-8ff5-d690bc140300-utilities" (OuterVolumeSpecName: "utilities") pod "65133409-092b-419b-8ff5-d690bc140300" (UID: "65133409-092b-419b-8ff5-d690bc140300"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:56:36 crc kubenswrapper[4793]: I0217 20:56:36.881665 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65133409-092b-419b-8ff5-d690bc140300-kube-api-access-lsjmp" (OuterVolumeSpecName: "kube-api-access-lsjmp") pod "65133409-092b-419b-8ff5-d690bc140300" (UID: "65133409-092b-419b-8ff5-d690bc140300"). InnerVolumeSpecName "kube-api-access-lsjmp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:56:36 crc kubenswrapper[4793]: I0217 20:56:36.972351 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lsjmp\" (UniqueName: \"kubernetes.io/projected/65133409-092b-419b-8ff5-d690bc140300-kube-api-access-lsjmp\") on node \"crc\" DevicePath \"\"" Feb 17 20:56:36 crc kubenswrapper[4793]: I0217 20:56:36.972659 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65133409-092b-419b-8ff5-d690bc140300-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 20:56:37 crc kubenswrapper[4793]: I0217 20:56:37.003633 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65133409-092b-419b-8ff5-d690bc140300-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "65133409-092b-419b-8ff5-d690bc140300" (UID: "65133409-092b-419b-8ff5-d690bc140300"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:56:37 crc kubenswrapper[4793]: I0217 20:56:37.074087 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65133409-092b-419b-8ff5-d690bc140300-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 20:56:37 crc kubenswrapper[4793]: I0217 20:56:37.344442 4793 generic.go:334] "Generic (PLEG): container finished" podID="65133409-092b-419b-8ff5-d690bc140300" containerID="0490c45e0cf98744a7df8fcc825a673670a0b4bdd0996df12217222c1f4171a2" exitCode=0 Feb 17 20:56:37 crc kubenswrapper[4793]: I0217 20:56:37.344503 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nn44c" event={"ID":"65133409-092b-419b-8ff5-d690bc140300","Type":"ContainerDied","Data":"0490c45e0cf98744a7df8fcc825a673670a0b4bdd0996df12217222c1f4171a2"} Feb 17 20:56:37 crc kubenswrapper[4793]: I0217 20:56:37.344547 4793 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nn44c" Feb 17 20:56:37 crc kubenswrapper[4793]: I0217 20:56:37.344575 4793 scope.go:117] "RemoveContainer" containerID="0490c45e0cf98744a7df8fcc825a673670a0b4bdd0996df12217222c1f4171a2" Feb 17 20:56:37 crc kubenswrapper[4793]: I0217 20:56:37.344555 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nn44c" event={"ID":"65133409-092b-419b-8ff5-d690bc140300","Type":"ContainerDied","Data":"6d6bb8265c585eef73830e9992d1346a934c007a8b64936ade74534169f8c3e1"} Feb 17 20:56:37 crc kubenswrapper[4793]: I0217 20:56:37.378971 4793 scope.go:117] "RemoveContainer" containerID="aeda7051930fabe28473a50559df974f5f888ef7dde2bc4aa7b0f27a1a4b17ad" Feb 17 20:56:37 crc kubenswrapper[4793]: I0217 20:56:37.406252 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nn44c"] Feb 17 20:56:37 crc kubenswrapper[4793]: I0217 20:56:37.413835 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nn44c"] Feb 17 20:56:37 crc kubenswrapper[4793]: I0217 20:56:37.443211 4793 scope.go:117] "RemoveContainer" containerID="be672ccadf67c350bf3ed2243ee3c579639f146b003bd9f2ac5b18c97f5aaaa5" Feb 17 20:56:37 crc kubenswrapper[4793]: I0217 20:56:37.479888 4793 scope.go:117] "RemoveContainer" containerID="0490c45e0cf98744a7df8fcc825a673670a0b4bdd0996df12217222c1f4171a2" Feb 17 20:56:37 crc kubenswrapper[4793]: E0217 20:56:37.480422 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0490c45e0cf98744a7df8fcc825a673670a0b4bdd0996df12217222c1f4171a2\": container with ID starting with 0490c45e0cf98744a7df8fcc825a673670a0b4bdd0996df12217222c1f4171a2 not found: ID does not exist" containerID="0490c45e0cf98744a7df8fcc825a673670a0b4bdd0996df12217222c1f4171a2" Feb 17 20:56:37 crc kubenswrapper[4793]: I0217 20:56:37.480464 4793 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0490c45e0cf98744a7df8fcc825a673670a0b4bdd0996df12217222c1f4171a2"} err="failed to get container status \"0490c45e0cf98744a7df8fcc825a673670a0b4bdd0996df12217222c1f4171a2\": rpc error: code = NotFound desc = could not find container \"0490c45e0cf98744a7df8fcc825a673670a0b4bdd0996df12217222c1f4171a2\": container with ID starting with 0490c45e0cf98744a7df8fcc825a673670a0b4bdd0996df12217222c1f4171a2 not found: ID does not exist" Feb 17 20:56:37 crc kubenswrapper[4793]: I0217 20:56:37.480493 4793 scope.go:117] "RemoveContainer" containerID="aeda7051930fabe28473a50559df974f5f888ef7dde2bc4aa7b0f27a1a4b17ad" Feb 17 20:56:37 crc kubenswrapper[4793]: E0217 20:56:37.480866 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aeda7051930fabe28473a50559df974f5f888ef7dde2bc4aa7b0f27a1a4b17ad\": container with ID starting with aeda7051930fabe28473a50559df974f5f888ef7dde2bc4aa7b0f27a1a4b17ad not found: ID does not exist" containerID="aeda7051930fabe28473a50559df974f5f888ef7dde2bc4aa7b0f27a1a4b17ad" Feb 17 20:56:37 crc kubenswrapper[4793]: I0217 20:56:37.480901 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aeda7051930fabe28473a50559df974f5f888ef7dde2bc4aa7b0f27a1a4b17ad"} err="failed to get container status \"aeda7051930fabe28473a50559df974f5f888ef7dde2bc4aa7b0f27a1a4b17ad\": rpc error: code = NotFound desc = could not find container \"aeda7051930fabe28473a50559df974f5f888ef7dde2bc4aa7b0f27a1a4b17ad\": container with ID starting with aeda7051930fabe28473a50559df974f5f888ef7dde2bc4aa7b0f27a1a4b17ad not found: ID does not exist" Feb 17 20:56:37 crc kubenswrapper[4793]: I0217 20:56:37.480924 4793 scope.go:117] "RemoveContainer" containerID="be672ccadf67c350bf3ed2243ee3c579639f146b003bd9f2ac5b18c97f5aaaa5" Feb 17 20:56:37 crc kubenswrapper[4793]: E0217 
20:56:37.481401 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be672ccadf67c350bf3ed2243ee3c579639f146b003bd9f2ac5b18c97f5aaaa5\": container with ID starting with be672ccadf67c350bf3ed2243ee3c579639f146b003bd9f2ac5b18c97f5aaaa5 not found: ID does not exist" containerID="be672ccadf67c350bf3ed2243ee3c579639f146b003bd9f2ac5b18c97f5aaaa5" Feb 17 20:56:37 crc kubenswrapper[4793]: I0217 20:56:37.481473 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be672ccadf67c350bf3ed2243ee3c579639f146b003bd9f2ac5b18c97f5aaaa5"} err="failed to get container status \"be672ccadf67c350bf3ed2243ee3c579639f146b003bd9f2ac5b18c97f5aaaa5\": rpc error: code = NotFound desc = could not find container \"be672ccadf67c350bf3ed2243ee3c579639f146b003bd9f2ac5b18c97f5aaaa5\": container with ID starting with be672ccadf67c350bf3ed2243ee3c579639f146b003bd9f2ac5b18c97f5aaaa5 not found: ID does not exist" Feb 17 20:56:37 crc kubenswrapper[4793]: I0217 20:56:37.562828 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65133409-092b-419b-8ff5-d690bc140300" path="/var/lib/kubelet/pods/65133409-092b-419b-8ff5-d690bc140300/volumes" Feb 17 20:56:47 crc kubenswrapper[4793]: I0217 20:56:47.538816 4793 scope.go:117] "RemoveContainer" containerID="95b948c18605db461183c2c00273f31744bce4f3b032bf8cd00379007b79b89b" Feb 17 20:56:47 crc kubenswrapper[4793]: I0217 20:56:47.540863 4793 scope.go:117] "RemoveContainer" containerID="112ebd7a8b03931981411cafe86346aa245df9facfa9f0e7e930cbc0e71b7747" Feb 17 20:56:47 crc kubenswrapper[4793]: E0217 20:56:47.541058 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" 
podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:56:47 crc kubenswrapper[4793]: E0217 20:56:47.541445 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 20:57:00 crc kubenswrapper[4793]: I0217 20:57:00.538885 4793 scope.go:117] "RemoveContainer" containerID="95b948c18605db461183c2c00273f31744bce4f3b032bf8cd00379007b79b89b" Feb 17 20:57:00 crc kubenswrapper[4793]: E0217 20:57:00.540021 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:57:01 crc kubenswrapper[4793]: I0217 20:57:01.539058 4793 scope.go:117] "RemoveContainer" containerID="112ebd7a8b03931981411cafe86346aa245df9facfa9f0e7e930cbc0e71b7747" Feb 17 20:57:01 crc kubenswrapper[4793]: E0217 20:57:01.539571 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 20:57:05 crc kubenswrapper[4793]: I0217 20:57:05.602650 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-w64c2"] Feb 17 20:57:05 crc 
kubenswrapper[4793]: E0217 20:57:05.604164 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65133409-092b-419b-8ff5-d690bc140300" containerName="extract-content" Feb 17 20:57:05 crc kubenswrapper[4793]: I0217 20:57:05.604192 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="65133409-092b-419b-8ff5-d690bc140300" containerName="extract-content" Feb 17 20:57:05 crc kubenswrapper[4793]: E0217 20:57:05.604215 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65133409-092b-419b-8ff5-d690bc140300" containerName="registry-server" Feb 17 20:57:05 crc kubenswrapper[4793]: I0217 20:57:05.604229 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="65133409-092b-419b-8ff5-d690bc140300" containerName="registry-server" Feb 17 20:57:05 crc kubenswrapper[4793]: E0217 20:57:05.604308 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65133409-092b-419b-8ff5-d690bc140300" containerName="extract-utilities" Feb 17 20:57:05 crc kubenswrapper[4793]: I0217 20:57:05.604321 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="65133409-092b-419b-8ff5-d690bc140300" containerName="extract-utilities" Feb 17 20:57:05 crc kubenswrapper[4793]: I0217 20:57:05.604731 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="65133409-092b-419b-8ff5-d690bc140300" containerName="registry-server" Feb 17 20:57:05 crc kubenswrapper[4793]: I0217 20:57:05.607796 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w64c2" Feb 17 20:57:05 crc kubenswrapper[4793]: I0217 20:57:05.613142 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w64c2"] Feb 17 20:57:05 crc kubenswrapper[4793]: I0217 20:57:05.690032 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f4c8a03-1614-4b6a-93b4-75bfe25b9de0-utilities\") pod \"certified-operators-w64c2\" (UID: \"5f4c8a03-1614-4b6a-93b4-75bfe25b9de0\") " pod="openshift-marketplace/certified-operators-w64c2" Feb 17 20:57:05 crc kubenswrapper[4793]: I0217 20:57:05.690115 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f4c8a03-1614-4b6a-93b4-75bfe25b9de0-catalog-content\") pod \"certified-operators-w64c2\" (UID: \"5f4c8a03-1614-4b6a-93b4-75bfe25b9de0\") " pod="openshift-marketplace/certified-operators-w64c2" Feb 17 20:57:05 crc kubenswrapper[4793]: I0217 20:57:05.690330 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kcxn\" (UniqueName: \"kubernetes.io/projected/5f4c8a03-1614-4b6a-93b4-75bfe25b9de0-kube-api-access-7kcxn\") pod \"certified-operators-w64c2\" (UID: \"5f4c8a03-1614-4b6a-93b4-75bfe25b9de0\") " pod="openshift-marketplace/certified-operators-w64c2" Feb 17 20:57:05 crc kubenswrapper[4793]: I0217 20:57:05.792750 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f4c8a03-1614-4b6a-93b4-75bfe25b9de0-utilities\") pod \"certified-operators-w64c2\" (UID: \"5f4c8a03-1614-4b6a-93b4-75bfe25b9de0\") " pod="openshift-marketplace/certified-operators-w64c2" Feb 17 20:57:05 crc kubenswrapper[4793]: I0217 20:57:05.793583 4793 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f4c8a03-1614-4b6a-93b4-75bfe25b9de0-catalog-content\") pod \"certified-operators-w64c2\" (UID: \"5f4c8a03-1614-4b6a-93b4-75bfe25b9de0\") " pod="openshift-marketplace/certified-operators-w64c2" Feb 17 20:57:05 crc kubenswrapper[4793]: I0217 20:57:05.793280 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f4c8a03-1614-4b6a-93b4-75bfe25b9de0-utilities\") pod \"certified-operators-w64c2\" (UID: \"5f4c8a03-1614-4b6a-93b4-75bfe25b9de0\") " pod="openshift-marketplace/certified-operators-w64c2" Feb 17 20:57:05 crc kubenswrapper[4793]: I0217 20:57:05.793892 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f4c8a03-1614-4b6a-93b4-75bfe25b9de0-catalog-content\") pod \"certified-operators-w64c2\" (UID: \"5f4c8a03-1614-4b6a-93b4-75bfe25b9de0\") " pod="openshift-marketplace/certified-operators-w64c2" Feb 17 20:57:05 crc kubenswrapper[4793]: I0217 20:57:05.794436 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kcxn\" (UniqueName: \"kubernetes.io/projected/5f4c8a03-1614-4b6a-93b4-75bfe25b9de0-kube-api-access-7kcxn\") pod \"certified-operators-w64c2\" (UID: \"5f4c8a03-1614-4b6a-93b4-75bfe25b9de0\") " pod="openshift-marketplace/certified-operators-w64c2" Feb 17 20:57:05 crc kubenswrapper[4793]: I0217 20:57:05.837810 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kcxn\" (UniqueName: \"kubernetes.io/projected/5f4c8a03-1614-4b6a-93b4-75bfe25b9de0-kube-api-access-7kcxn\") pod \"certified-operators-w64c2\" (UID: \"5f4c8a03-1614-4b6a-93b4-75bfe25b9de0\") " pod="openshift-marketplace/certified-operators-w64c2" Feb 17 20:57:05 crc kubenswrapper[4793]: I0217 20:57:05.975070 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w64c2" Feb 17 20:57:06 crc kubenswrapper[4793]: I0217 20:57:06.455045 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w64c2"] Feb 17 20:57:06 crc kubenswrapper[4793]: I0217 20:57:06.699510 4793 generic.go:334] "Generic (PLEG): container finished" podID="5f4c8a03-1614-4b6a-93b4-75bfe25b9de0" containerID="5a229c817cee85af408be469d61501338663980222e3e609ed9b0e5d93c9eea4" exitCode=0 Feb 17 20:57:06 crc kubenswrapper[4793]: I0217 20:57:06.699554 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w64c2" event={"ID":"5f4c8a03-1614-4b6a-93b4-75bfe25b9de0","Type":"ContainerDied","Data":"5a229c817cee85af408be469d61501338663980222e3e609ed9b0e5d93c9eea4"} Feb 17 20:57:06 crc kubenswrapper[4793]: I0217 20:57:06.699583 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w64c2" event={"ID":"5f4c8a03-1614-4b6a-93b4-75bfe25b9de0","Type":"ContainerStarted","Data":"0a216a02349f61545de7665a9173577d0d6cd633a10edc39c2f4ff4d991b5b30"} Feb 17 20:57:07 crc kubenswrapper[4793]: I0217 20:57:07.715588 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w64c2" event={"ID":"5f4c8a03-1614-4b6a-93b4-75bfe25b9de0","Type":"ContainerStarted","Data":"3fbc6e2ec56f467ed42cec364540bd9c9c5e3cdf1aeca2e7a4a81fa8abcf9d00"} Feb 17 20:57:08 crc kubenswrapper[4793]: I0217 20:57:08.733515 4793 generic.go:334] "Generic (PLEG): container finished" podID="5f4c8a03-1614-4b6a-93b4-75bfe25b9de0" containerID="3fbc6e2ec56f467ed42cec364540bd9c9c5e3cdf1aeca2e7a4a81fa8abcf9d00" exitCode=0 Feb 17 20:57:08 crc kubenswrapper[4793]: I0217 20:57:08.733602 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w64c2" 
event={"ID":"5f4c8a03-1614-4b6a-93b4-75bfe25b9de0","Type":"ContainerDied","Data":"3fbc6e2ec56f467ed42cec364540bd9c9c5e3cdf1aeca2e7a4a81fa8abcf9d00"} Feb 17 20:57:09 crc kubenswrapper[4793]: I0217 20:57:09.745312 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w64c2" event={"ID":"5f4c8a03-1614-4b6a-93b4-75bfe25b9de0","Type":"ContainerStarted","Data":"a59ba6c2547fed9c3040e511351610de0e95ee7987300bf1925e6ea99b8cee91"} Feb 17 20:57:09 crc kubenswrapper[4793]: I0217 20:57:09.777752 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-w64c2" podStartSLOduration=2.301299959 podStartE2EDuration="4.777729663s" podCreationTimestamp="2026-02-17 20:57:05 +0000 UTC" firstStartedPulling="2026-02-17 20:57:06.702433436 +0000 UTC m=+2901.994131747" lastFinishedPulling="2026-02-17 20:57:09.17886313 +0000 UTC m=+2904.470561451" observedRunningTime="2026-02-17 20:57:09.771325825 +0000 UTC m=+2905.063024186" watchObservedRunningTime="2026-02-17 20:57:09.777729663 +0000 UTC m=+2905.069427984" Feb 17 20:57:11 crc kubenswrapper[4793]: I0217 20:57:11.540320 4793 scope.go:117] "RemoveContainer" containerID="95b948c18605db461183c2c00273f31744bce4f3b032bf8cd00379007b79b89b" Feb 17 20:57:11 crc kubenswrapper[4793]: E0217 20:57:11.541228 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:57:15 crc kubenswrapper[4793]: I0217 20:57:15.976749 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-w64c2" Feb 17 20:57:15 crc kubenswrapper[4793]: I0217 20:57:15.979663 4793 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/certified-operators-w64c2" Feb 17 20:57:16 crc kubenswrapper[4793]: I0217 20:57:16.047830 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-w64c2" Feb 17 20:57:16 crc kubenswrapper[4793]: I0217 20:57:16.539514 4793 scope.go:117] "RemoveContainer" containerID="112ebd7a8b03931981411cafe86346aa245df9facfa9f0e7e930cbc0e71b7747" Feb 17 20:57:16 crc kubenswrapper[4793]: E0217 20:57:16.540148 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 20:57:16 crc kubenswrapper[4793]: I0217 20:57:16.924372 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-w64c2" Feb 17 20:57:16 crc kubenswrapper[4793]: I0217 20:57:16.999148 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w64c2"] Feb 17 20:57:18 crc kubenswrapper[4793]: I0217 20:57:18.855333 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-w64c2" podUID="5f4c8a03-1614-4b6a-93b4-75bfe25b9de0" containerName="registry-server" containerID="cri-o://a59ba6c2547fed9c3040e511351610de0e95ee7987300bf1925e6ea99b8cee91" gracePeriod=2 Feb 17 20:57:19 crc kubenswrapper[4793]: I0217 20:57:19.362589 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w64c2" Feb 17 20:57:19 crc kubenswrapper[4793]: I0217 20:57:19.409650 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f4c8a03-1614-4b6a-93b4-75bfe25b9de0-utilities\") pod \"5f4c8a03-1614-4b6a-93b4-75bfe25b9de0\" (UID: \"5f4c8a03-1614-4b6a-93b4-75bfe25b9de0\") " Feb 17 20:57:19 crc kubenswrapper[4793]: I0217 20:57:19.409721 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f4c8a03-1614-4b6a-93b4-75bfe25b9de0-catalog-content\") pod \"5f4c8a03-1614-4b6a-93b4-75bfe25b9de0\" (UID: \"5f4c8a03-1614-4b6a-93b4-75bfe25b9de0\") " Feb 17 20:57:19 crc kubenswrapper[4793]: I0217 20:57:19.409778 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kcxn\" (UniqueName: \"kubernetes.io/projected/5f4c8a03-1614-4b6a-93b4-75bfe25b9de0-kube-api-access-7kcxn\") pod \"5f4c8a03-1614-4b6a-93b4-75bfe25b9de0\" (UID: \"5f4c8a03-1614-4b6a-93b4-75bfe25b9de0\") " Feb 17 20:57:19 crc kubenswrapper[4793]: I0217 20:57:19.410978 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f4c8a03-1614-4b6a-93b4-75bfe25b9de0-utilities" (OuterVolumeSpecName: "utilities") pod "5f4c8a03-1614-4b6a-93b4-75bfe25b9de0" (UID: "5f4c8a03-1614-4b6a-93b4-75bfe25b9de0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:57:19 crc kubenswrapper[4793]: I0217 20:57:19.415231 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f4c8a03-1614-4b6a-93b4-75bfe25b9de0-kube-api-access-7kcxn" (OuterVolumeSpecName: "kube-api-access-7kcxn") pod "5f4c8a03-1614-4b6a-93b4-75bfe25b9de0" (UID: "5f4c8a03-1614-4b6a-93b4-75bfe25b9de0"). InnerVolumeSpecName "kube-api-access-7kcxn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 20:57:19 crc kubenswrapper[4793]: I0217 20:57:19.475023 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f4c8a03-1614-4b6a-93b4-75bfe25b9de0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5f4c8a03-1614-4b6a-93b4-75bfe25b9de0" (UID: "5f4c8a03-1614-4b6a-93b4-75bfe25b9de0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 20:57:19 crc kubenswrapper[4793]: I0217 20:57:19.512605 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f4c8a03-1614-4b6a-93b4-75bfe25b9de0-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 20:57:19 crc kubenswrapper[4793]: I0217 20:57:19.512649 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f4c8a03-1614-4b6a-93b4-75bfe25b9de0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 20:57:19 crc kubenswrapper[4793]: I0217 20:57:19.512761 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kcxn\" (UniqueName: \"kubernetes.io/projected/5f4c8a03-1614-4b6a-93b4-75bfe25b9de0-kube-api-access-7kcxn\") on node \"crc\" DevicePath \"\"" Feb 17 20:57:19 crc kubenswrapper[4793]: I0217 20:57:19.871326 4793 generic.go:334] "Generic (PLEG): container finished" podID="5f4c8a03-1614-4b6a-93b4-75bfe25b9de0" containerID="a59ba6c2547fed9c3040e511351610de0e95ee7987300bf1925e6ea99b8cee91" exitCode=0 Feb 17 20:57:19 crc kubenswrapper[4793]: I0217 20:57:19.871392 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w64c2" event={"ID":"5f4c8a03-1614-4b6a-93b4-75bfe25b9de0","Type":"ContainerDied","Data":"a59ba6c2547fed9c3040e511351610de0e95ee7987300bf1925e6ea99b8cee91"} Feb 17 20:57:19 crc kubenswrapper[4793]: I0217 20:57:19.871433 4793 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-w64c2" event={"ID":"5f4c8a03-1614-4b6a-93b4-75bfe25b9de0","Type":"ContainerDied","Data":"0a216a02349f61545de7665a9173577d0d6cd633a10edc39c2f4ff4d991b5b30"} Feb 17 20:57:19 crc kubenswrapper[4793]: I0217 20:57:19.871462 4793 scope.go:117] "RemoveContainer" containerID="a59ba6c2547fed9c3040e511351610de0e95ee7987300bf1925e6ea99b8cee91" Feb 17 20:57:19 crc kubenswrapper[4793]: I0217 20:57:19.873339 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w64c2" Feb 17 20:57:19 crc kubenswrapper[4793]: I0217 20:57:19.912777 4793 scope.go:117] "RemoveContainer" containerID="3fbc6e2ec56f467ed42cec364540bd9c9c5e3cdf1aeca2e7a4a81fa8abcf9d00" Feb 17 20:57:19 crc kubenswrapper[4793]: I0217 20:57:19.915293 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w64c2"] Feb 17 20:57:19 crc kubenswrapper[4793]: I0217 20:57:19.932559 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-w64c2"] Feb 17 20:57:19 crc kubenswrapper[4793]: I0217 20:57:19.958779 4793 scope.go:117] "RemoveContainer" containerID="5a229c817cee85af408be469d61501338663980222e3e609ed9b0e5d93c9eea4" Feb 17 20:57:20 crc kubenswrapper[4793]: I0217 20:57:20.004429 4793 scope.go:117] "RemoveContainer" containerID="a59ba6c2547fed9c3040e511351610de0e95ee7987300bf1925e6ea99b8cee91" Feb 17 20:57:20 crc kubenswrapper[4793]: E0217 20:57:20.004994 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a59ba6c2547fed9c3040e511351610de0e95ee7987300bf1925e6ea99b8cee91\": container with ID starting with a59ba6c2547fed9c3040e511351610de0e95ee7987300bf1925e6ea99b8cee91 not found: ID does not exist" containerID="a59ba6c2547fed9c3040e511351610de0e95ee7987300bf1925e6ea99b8cee91" Feb 17 20:57:20 crc kubenswrapper[4793]: I0217 
20:57:20.005072 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a59ba6c2547fed9c3040e511351610de0e95ee7987300bf1925e6ea99b8cee91"} err="failed to get container status \"a59ba6c2547fed9c3040e511351610de0e95ee7987300bf1925e6ea99b8cee91\": rpc error: code = NotFound desc = could not find container \"a59ba6c2547fed9c3040e511351610de0e95ee7987300bf1925e6ea99b8cee91\": container with ID starting with a59ba6c2547fed9c3040e511351610de0e95ee7987300bf1925e6ea99b8cee91 not found: ID does not exist" Feb 17 20:57:20 crc kubenswrapper[4793]: I0217 20:57:20.005115 4793 scope.go:117] "RemoveContainer" containerID="3fbc6e2ec56f467ed42cec364540bd9c9c5e3cdf1aeca2e7a4a81fa8abcf9d00" Feb 17 20:57:20 crc kubenswrapper[4793]: E0217 20:57:20.005796 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fbc6e2ec56f467ed42cec364540bd9c9c5e3cdf1aeca2e7a4a81fa8abcf9d00\": container with ID starting with 3fbc6e2ec56f467ed42cec364540bd9c9c5e3cdf1aeca2e7a4a81fa8abcf9d00 not found: ID does not exist" containerID="3fbc6e2ec56f467ed42cec364540bd9c9c5e3cdf1aeca2e7a4a81fa8abcf9d00" Feb 17 20:57:20 crc kubenswrapper[4793]: I0217 20:57:20.005846 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fbc6e2ec56f467ed42cec364540bd9c9c5e3cdf1aeca2e7a4a81fa8abcf9d00"} err="failed to get container status \"3fbc6e2ec56f467ed42cec364540bd9c9c5e3cdf1aeca2e7a4a81fa8abcf9d00\": rpc error: code = NotFound desc = could not find container \"3fbc6e2ec56f467ed42cec364540bd9c9c5e3cdf1aeca2e7a4a81fa8abcf9d00\": container with ID starting with 3fbc6e2ec56f467ed42cec364540bd9c9c5e3cdf1aeca2e7a4a81fa8abcf9d00 not found: ID does not exist" Feb 17 20:57:20 crc kubenswrapper[4793]: I0217 20:57:20.005882 4793 scope.go:117] "RemoveContainer" containerID="5a229c817cee85af408be469d61501338663980222e3e609ed9b0e5d93c9eea4" Feb 17 20:57:20 crc 
kubenswrapper[4793]: E0217 20:57:20.006268 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a229c817cee85af408be469d61501338663980222e3e609ed9b0e5d93c9eea4\": container with ID starting with 5a229c817cee85af408be469d61501338663980222e3e609ed9b0e5d93c9eea4 not found: ID does not exist" containerID="5a229c817cee85af408be469d61501338663980222e3e609ed9b0e5d93c9eea4" Feb 17 20:57:20 crc kubenswrapper[4793]: I0217 20:57:20.006299 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a229c817cee85af408be469d61501338663980222e3e609ed9b0e5d93c9eea4"} err="failed to get container status \"5a229c817cee85af408be469d61501338663980222e3e609ed9b0e5d93c9eea4\": rpc error: code = NotFound desc = could not find container \"5a229c817cee85af408be469d61501338663980222e3e609ed9b0e5d93c9eea4\": container with ID starting with 5a229c817cee85af408be469d61501338663980222e3e609ed9b0e5d93c9eea4 not found: ID does not exist" Feb 17 20:57:21 crc kubenswrapper[4793]: I0217 20:57:21.549759 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f4c8a03-1614-4b6a-93b4-75bfe25b9de0" path="/var/lib/kubelet/pods/5f4c8a03-1614-4b6a-93b4-75bfe25b9de0/volumes" Feb 17 20:57:24 crc kubenswrapper[4793]: I0217 20:57:24.539871 4793 scope.go:117] "RemoveContainer" containerID="95b948c18605db461183c2c00273f31744bce4f3b032bf8cd00379007b79b89b" Feb 17 20:57:24 crc kubenswrapper[4793]: E0217 20:57:24.540571 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:57:30 crc kubenswrapper[4793]: I0217 20:57:30.539389 4793 scope.go:117] "RemoveContainer" 
containerID="112ebd7a8b03931981411cafe86346aa245df9facfa9f0e7e930cbc0e71b7747" Feb 17 20:57:30 crc kubenswrapper[4793]: E0217 20:57:30.540305 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 20:57:36 crc kubenswrapper[4793]: I0217 20:57:36.538965 4793 scope.go:117] "RemoveContainer" containerID="95b948c18605db461183c2c00273f31744bce4f3b032bf8cd00379007b79b89b" Feb 17 20:57:36 crc kubenswrapper[4793]: E0217 20:57:36.539851 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 20:57:41 crc kubenswrapper[4793]: I0217 20:57:41.539309 4793 scope.go:117] "RemoveContainer" containerID="112ebd7a8b03931981411cafe86346aa245df9facfa9f0e7e930cbc0e71b7747" Feb 17 20:57:41 crc kubenswrapper[4793]: E0217 20:57:41.540298 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 20:57:51 crc kubenswrapper[4793]: I0217 20:57:51.540458 4793 scope.go:117] "RemoveContainer" 
containerID="95b948c18605db461183c2c00273f31744bce4f3b032bf8cd00379007b79b89b"
Feb 17 20:57:51 crc kubenswrapper[4793]: E0217 20:57:51.541807 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 20:57:55 crc kubenswrapper[4793]: I0217 20:57:55.545792 4793 scope.go:117] "RemoveContainer" containerID="112ebd7a8b03931981411cafe86346aa245df9facfa9f0e7e930cbc0e71b7747"
Feb 17 20:57:55 crc kubenswrapper[4793]: E0217 20:57:55.546511 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 20:58:03 crc kubenswrapper[4793]: I0217 20:58:03.503136 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-z7gwt"]
Feb 17 20:58:03 crc kubenswrapper[4793]: E0217 20:58:03.504365 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f4c8a03-1614-4b6a-93b4-75bfe25b9de0" containerName="registry-server"
Feb 17 20:58:03 crc kubenswrapper[4793]: I0217 20:58:03.504390 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f4c8a03-1614-4b6a-93b4-75bfe25b9de0" containerName="registry-server"
Feb 17 20:58:03 crc kubenswrapper[4793]: E0217 20:58:03.504434 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f4c8a03-1614-4b6a-93b4-75bfe25b9de0" containerName="extract-utilities"
Feb 17 20:58:03 crc kubenswrapper[4793]: I0217 20:58:03.504447 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f4c8a03-1614-4b6a-93b4-75bfe25b9de0" containerName="extract-utilities"
Feb 17 20:58:03 crc kubenswrapper[4793]: E0217 20:58:03.504492 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f4c8a03-1614-4b6a-93b4-75bfe25b9de0" containerName="extract-content"
Feb 17 20:58:03 crc kubenswrapper[4793]: I0217 20:58:03.504504 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f4c8a03-1614-4b6a-93b4-75bfe25b9de0" containerName="extract-content"
Feb 17 20:58:03 crc kubenswrapper[4793]: I0217 20:58:03.504986 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f4c8a03-1614-4b6a-93b4-75bfe25b9de0" containerName="registry-server"
Feb 17 20:58:03 crc kubenswrapper[4793]: I0217 20:58:03.508150 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z7gwt"
Feb 17 20:58:03 crc kubenswrapper[4793]: I0217 20:58:03.518063 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z7gwt"]
Feb 17 20:58:03 crc kubenswrapper[4793]: I0217 20:58:03.609424 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32b8d2fa-3944-4ed1-a5eb-8619919207bc-catalog-content\") pod \"community-operators-z7gwt\" (UID: \"32b8d2fa-3944-4ed1-a5eb-8619919207bc\") " pod="openshift-marketplace/community-operators-z7gwt"
Feb 17 20:58:03 crc kubenswrapper[4793]: I0217 20:58:03.609625 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8xrh\" (UniqueName: \"kubernetes.io/projected/32b8d2fa-3944-4ed1-a5eb-8619919207bc-kube-api-access-x8xrh\") pod \"community-operators-z7gwt\" (UID: \"32b8d2fa-3944-4ed1-a5eb-8619919207bc\") " pod="openshift-marketplace/community-operators-z7gwt"
Feb 17 20:58:03 crc kubenswrapper[4793]: I0217 20:58:03.609789 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32b8d2fa-3944-4ed1-a5eb-8619919207bc-utilities\") pod \"community-operators-z7gwt\" (UID: \"32b8d2fa-3944-4ed1-a5eb-8619919207bc\") " pod="openshift-marketplace/community-operators-z7gwt"
Feb 17 20:58:03 crc kubenswrapper[4793]: I0217 20:58:03.712003 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8xrh\" (UniqueName: \"kubernetes.io/projected/32b8d2fa-3944-4ed1-a5eb-8619919207bc-kube-api-access-x8xrh\") pod \"community-operators-z7gwt\" (UID: \"32b8d2fa-3944-4ed1-a5eb-8619919207bc\") " pod="openshift-marketplace/community-operators-z7gwt"
Feb 17 20:58:03 crc kubenswrapper[4793]: I0217 20:58:03.712123 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32b8d2fa-3944-4ed1-a5eb-8619919207bc-utilities\") pod \"community-operators-z7gwt\" (UID: \"32b8d2fa-3944-4ed1-a5eb-8619919207bc\") " pod="openshift-marketplace/community-operators-z7gwt"
Feb 17 20:58:03 crc kubenswrapper[4793]: I0217 20:58:03.712225 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32b8d2fa-3944-4ed1-a5eb-8619919207bc-catalog-content\") pod \"community-operators-z7gwt\" (UID: \"32b8d2fa-3944-4ed1-a5eb-8619919207bc\") " pod="openshift-marketplace/community-operators-z7gwt"
Feb 17 20:58:03 crc kubenswrapper[4793]: I0217 20:58:03.712995 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32b8d2fa-3944-4ed1-a5eb-8619919207bc-catalog-content\") pod \"community-operators-z7gwt\" (UID: \"32b8d2fa-3944-4ed1-a5eb-8619919207bc\") " pod="openshift-marketplace/community-operators-z7gwt"
Feb 17 20:58:03 crc kubenswrapper[4793]: I0217 20:58:03.713377 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32b8d2fa-3944-4ed1-a5eb-8619919207bc-utilities\") pod \"community-operators-z7gwt\" (UID: \"32b8d2fa-3944-4ed1-a5eb-8619919207bc\") " pod="openshift-marketplace/community-operators-z7gwt"
Feb 17 20:58:03 crc kubenswrapper[4793]: I0217 20:58:03.741450 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8xrh\" (UniqueName: \"kubernetes.io/projected/32b8d2fa-3944-4ed1-a5eb-8619919207bc-kube-api-access-x8xrh\") pod \"community-operators-z7gwt\" (UID: \"32b8d2fa-3944-4ed1-a5eb-8619919207bc\") " pod="openshift-marketplace/community-operators-z7gwt"
Feb 17 20:58:03 crc kubenswrapper[4793]: I0217 20:58:03.854982 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z7gwt"
Feb 17 20:58:04 crc kubenswrapper[4793]: I0217 20:58:04.453114 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z7gwt"]
Feb 17 20:58:04 crc kubenswrapper[4793]: I0217 20:58:04.538456 4793 scope.go:117] "RemoveContainer" containerID="95b948c18605db461183c2c00273f31744bce4f3b032bf8cd00379007b79b89b"
Feb 17 20:58:04 crc kubenswrapper[4793]: E0217 20:58:04.539197 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 20:58:04 crc kubenswrapper[4793]: I0217 20:58:04.943076 4793 generic.go:334] "Generic (PLEG): container finished" podID="32b8d2fa-3944-4ed1-a5eb-8619919207bc" containerID="0190f18f84f39948260a4b0ec15794edb517480b262988f51507cc4fa74ffed3" exitCode=0
Feb 17 20:58:04 crc kubenswrapper[4793]: I0217 20:58:04.943122 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z7gwt" event={"ID":"32b8d2fa-3944-4ed1-a5eb-8619919207bc","Type":"ContainerDied","Data":"0190f18f84f39948260a4b0ec15794edb517480b262988f51507cc4fa74ffed3"}
Feb 17 20:58:04 crc kubenswrapper[4793]: I0217 20:58:04.943152 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z7gwt" event={"ID":"32b8d2fa-3944-4ed1-a5eb-8619919207bc","Type":"ContainerStarted","Data":"32f9c32d9f7a66e7036bb7ed46a2c8272497101bf5e12295ff54c24e01a36e74"}
Feb 17 20:58:04 crc kubenswrapper[4793]: I0217 20:58:04.947643 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 17 20:58:05 crc kubenswrapper[4793]: I0217 20:58:05.954176 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z7gwt" event={"ID":"32b8d2fa-3944-4ed1-a5eb-8619919207bc","Type":"ContainerStarted","Data":"e7a17247143d5367fa97c6de5049d5ed9c864ccbf0930b928020b465ee4e2322"}
Feb 17 20:58:07 crc kubenswrapper[4793]: I0217 20:58:07.983171 4793 generic.go:334] "Generic (PLEG): container finished" podID="32b8d2fa-3944-4ed1-a5eb-8619919207bc" containerID="e7a17247143d5367fa97c6de5049d5ed9c864ccbf0930b928020b465ee4e2322" exitCode=0
Feb 17 20:58:07 crc kubenswrapper[4793]: I0217 20:58:07.983224 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z7gwt" event={"ID":"32b8d2fa-3944-4ed1-a5eb-8619919207bc","Type":"ContainerDied","Data":"e7a17247143d5367fa97c6de5049d5ed9c864ccbf0930b928020b465ee4e2322"}
Feb 17 20:58:08 crc kubenswrapper[4793]: I0217 20:58:08.539725 4793 scope.go:117] "RemoveContainer" containerID="112ebd7a8b03931981411cafe86346aa245df9facfa9f0e7e930cbc0e71b7747"
Feb 17 20:58:08 crc kubenswrapper[4793]: E0217 20:58:08.540209 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 20:58:08 crc kubenswrapper[4793]: I0217 20:58:08.998119 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z7gwt" event={"ID":"32b8d2fa-3944-4ed1-a5eb-8619919207bc","Type":"ContainerStarted","Data":"66fe7d3fd4a0b36ce6ccd59931cf6be2b03817cd1c802bc7e8629bebe3920357"}
Feb 17 20:58:09 crc kubenswrapper[4793]: I0217 20:58:09.033846 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-z7gwt" podStartSLOduration=2.576829935 podStartE2EDuration="6.033821323s" podCreationTimestamp="2026-02-17 20:58:03 +0000 UTC" firstStartedPulling="2026-02-17 20:58:04.94698222 +0000 UTC m=+2960.238680571" lastFinishedPulling="2026-02-17 20:58:08.403973608 +0000 UTC m=+2963.695671959" observedRunningTime="2026-02-17 20:58:09.029014475 +0000 UTC m=+2964.320712826" watchObservedRunningTime="2026-02-17 20:58:09.033821323 +0000 UTC m=+2964.325519674"
Feb 17 20:58:13 crc kubenswrapper[4793]: I0217 20:58:13.855793 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-z7gwt"
Feb 17 20:58:13 crc kubenswrapper[4793]: I0217 20:58:13.857789 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-z7gwt"
Feb 17 20:58:13 crc kubenswrapper[4793]: I0217 20:58:13.960976 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-z7gwt"
Feb 17 20:58:14 crc kubenswrapper[4793]: I0217 20:58:14.121517 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-z7gwt"
Feb 17 20:58:14 crc kubenswrapper[4793]: I0217 20:58:14.221328 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z7gwt"]
Feb 17 20:58:15 crc kubenswrapper[4793]: I0217 20:58:15.556835 4793 scope.go:117] "RemoveContainer" containerID="95b948c18605db461183c2c00273f31744bce4f3b032bf8cd00379007b79b89b"
Feb 17 20:58:15 crc kubenswrapper[4793]: E0217 20:58:15.557410 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 20:58:16 crc kubenswrapper[4793]: I0217 20:58:16.072369 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-z7gwt" podUID="32b8d2fa-3944-4ed1-a5eb-8619919207bc" containerName="registry-server" containerID="cri-o://66fe7d3fd4a0b36ce6ccd59931cf6be2b03817cd1c802bc7e8629bebe3920357" gracePeriod=2
Feb 17 20:58:16 crc kubenswrapper[4793]: I0217 20:58:16.627829 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z7gwt"
Feb 17 20:58:16 crc kubenswrapper[4793]: I0217 20:58:16.759360 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32b8d2fa-3944-4ed1-a5eb-8619919207bc-catalog-content\") pod \"32b8d2fa-3944-4ed1-a5eb-8619919207bc\" (UID: \"32b8d2fa-3944-4ed1-a5eb-8619919207bc\") "
Feb 17 20:58:16 crc kubenswrapper[4793]: I0217 20:58:16.759568 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32b8d2fa-3944-4ed1-a5eb-8619919207bc-utilities\") pod \"32b8d2fa-3944-4ed1-a5eb-8619919207bc\" (UID: \"32b8d2fa-3944-4ed1-a5eb-8619919207bc\") "
Feb 17 20:58:16 crc kubenswrapper[4793]: I0217 20:58:16.759705 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8xrh\" (UniqueName: \"kubernetes.io/projected/32b8d2fa-3944-4ed1-a5eb-8619919207bc-kube-api-access-x8xrh\") pod \"32b8d2fa-3944-4ed1-a5eb-8619919207bc\" (UID: \"32b8d2fa-3944-4ed1-a5eb-8619919207bc\") "
Feb 17 20:58:16 crc kubenswrapper[4793]: I0217 20:58:16.760350 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32b8d2fa-3944-4ed1-a5eb-8619919207bc-utilities" (OuterVolumeSpecName: "utilities") pod "32b8d2fa-3944-4ed1-a5eb-8619919207bc" (UID: "32b8d2fa-3944-4ed1-a5eb-8619919207bc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 20:58:16 crc kubenswrapper[4793]: I0217 20:58:16.773966 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32b8d2fa-3944-4ed1-a5eb-8619919207bc-kube-api-access-x8xrh" (OuterVolumeSpecName: "kube-api-access-x8xrh") pod "32b8d2fa-3944-4ed1-a5eb-8619919207bc" (UID: "32b8d2fa-3944-4ed1-a5eb-8619919207bc"). InnerVolumeSpecName "kube-api-access-x8xrh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 20:58:16 crc kubenswrapper[4793]: I0217 20:58:16.862859 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8xrh\" (UniqueName: \"kubernetes.io/projected/32b8d2fa-3944-4ed1-a5eb-8619919207bc-kube-api-access-x8xrh\") on node \"crc\" DevicePath \"\""
Feb 17 20:58:16 crc kubenswrapper[4793]: I0217 20:58:16.863086 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32b8d2fa-3944-4ed1-a5eb-8619919207bc-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 20:58:16 crc kubenswrapper[4793]: I0217 20:58:16.901805 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32b8d2fa-3944-4ed1-a5eb-8619919207bc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "32b8d2fa-3944-4ed1-a5eb-8619919207bc" (UID: "32b8d2fa-3944-4ed1-a5eb-8619919207bc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 20:58:16 crc kubenswrapper[4793]: I0217 20:58:16.964630 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32b8d2fa-3944-4ed1-a5eb-8619919207bc-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 20:58:17 crc kubenswrapper[4793]: I0217 20:58:17.088912 4793 generic.go:334] "Generic (PLEG): container finished" podID="32b8d2fa-3944-4ed1-a5eb-8619919207bc" containerID="66fe7d3fd4a0b36ce6ccd59931cf6be2b03817cd1c802bc7e8629bebe3920357" exitCode=0
Feb 17 20:58:17 crc kubenswrapper[4793]: I0217 20:58:17.089241 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z7gwt"
Feb 17 20:58:17 crc kubenswrapper[4793]: I0217 20:58:17.089255 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z7gwt" event={"ID":"32b8d2fa-3944-4ed1-a5eb-8619919207bc","Type":"ContainerDied","Data":"66fe7d3fd4a0b36ce6ccd59931cf6be2b03817cd1c802bc7e8629bebe3920357"}
Feb 17 20:58:17 crc kubenswrapper[4793]: I0217 20:58:17.089791 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z7gwt" event={"ID":"32b8d2fa-3944-4ed1-a5eb-8619919207bc","Type":"ContainerDied","Data":"32f9c32d9f7a66e7036bb7ed46a2c8272497101bf5e12295ff54c24e01a36e74"}
Feb 17 20:58:17 crc kubenswrapper[4793]: I0217 20:58:17.089829 4793 scope.go:117] "RemoveContainer" containerID="66fe7d3fd4a0b36ce6ccd59931cf6be2b03817cd1c802bc7e8629bebe3920357"
Feb 17 20:58:17 crc kubenswrapper[4793]: I0217 20:58:17.127187 4793 scope.go:117] "RemoveContainer" containerID="e7a17247143d5367fa97c6de5049d5ed9c864ccbf0930b928020b465ee4e2322"
Feb 17 20:58:17 crc kubenswrapper[4793]: I0217 20:58:17.159619 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z7gwt"]
Feb 17 20:58:17 crc kubenswrapper[4793]: I0217 20:58:17.168960 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-z7gwt"]
Feb 17 20:58:17 crc kubenswrapper[4793]: I0217 20:58:17.173108 4793 scope.go:117] "RemoveContainer" containerID="0190f18f84f39948260a4b0ec15794edb517480b262988f51507cc4fa74ffed3"
Feb 17 20:58:17 crc kubenswrapper[4793]: I0217 20:58:17.223407 4793 scope.go:117] "RemoveContainer" containerID="66fe7d3fd4a0b36ce6ccd59931cf6be2b03817cd1c802bc7e8629bebe3920357"
Feb 17 20:58:17 crc kubenswrapper[4793]: E0217 20:58:17.223998 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66fe7d3fd4a0b36ce6ccd59931cf6be2b03817cd1c802bc7e8629bebe3920357\": container with ID starting with 66fe7d3fd4a0b36ce6ccd59931cf6be2b03817cd1c802bc7e8629bebe3920357 not found: ID does not exist" containerID="66fe7d3fd4a0b36ce6ccd59931cf6be2b03817cd1c802bc7e8629bebe3920357"
Feb 17 20:58:17 crc kubenswrapper[4793]: I0217 20:58:17.224187 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66fe7d3fd4a0b36ce6ccd59931cf6be2b03817cd1c802bc7e8629bebe3920357"} err="failed to get container status \"66fe7d3fd4a0b36ce6ccd59931cf6be2b03817cd1c802bc7e8629bebe3920357\": rpc error: code = NotFound desc = could not find container \"66fe7d3fd4a0b36ce6ccd59931cf6be2b03817cd1c802bc7e8629bebe3920357\": container with ID starting with 66fe7d3fd4a0b36ce6ccd59931cf6be2b03817cd1c802bc7e8629bebe3920357 not found: ID does not exist"
Feb 17 20:58:17 crc kubenswrapper[4793]: I0217 20:58:17.224405 4793 scope.go:117] "RemoveContainer" containerID="e7a17247143d5367fa97c6de5049d5ed9c864ccbf0930b928020b465ee4e2322"
Feb 17 20:58:17 crc kubenswrapper[4793]: E0217 20:58:17.224978 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7a17247143d5367fa97c6de5049d5ed9c864ccbf0930b928020b465ee4e2322\": container with ID starting with e7a17247143d5367fa97c6de5049d5ed9c864ccbf0930b928020b465ee4e2322 not found: ID does not exist" containerID="e7a17247143d5367fa97c6de5049d5ed9c864ccbf0930b928020b465ee4e2322"
Feb 17 20:58:17 crc kubenswrapper[4793]: I0217 20:58:17.225210 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7a17247143d5367fa97c6de5049d5ed9c864ccbf0930b928020b465ee4e2322"} err="failed to get container status \"e7a17247143d5367fa97c6de5049d5ed9c864ccbf0930b928020b465ee4e2322\": rpc error: code = NotFound desc = could not find container \"e7a17247143d5367fa97c6de5049d5ed9c864ccbf0930b928020b465ee4e2322\": container with ID starting with e7a17247143d5367fa97c6de5049d5ed9c864ccbf0930b928020b465ee4e2322 not found: ID does not exist"
Feb 17 20:58:17 crc kubenswrapper[4793]: I0217 20:58:17.225316 4793 scope.go:117] "RemoveContainer" containerID="0190f18f84f39948260a4b0ec15794edb517480b262988f51507cc4fa74ffed3"
Feb 17 20:58:17 crc kubenswrapper[4793]: E0217 20:58:17.226124 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0190f18f84f39948260a4b0ec15794edb517480b262988f51507cc4fa74ffed3\": container with ID starting with 0190f18f84f39948260a4b0ec15794edb517480b262988f51507cc4fa74ffed3 not found: ID does not exist" containerID="0190f18f84f39948260a4b0ec15794edb517480b262988f51507cc4fa74ffed3"
Feb 17 20:58:17 crc kubenswrapper[4793]: I0217 20:58:17.226320 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0190f18f84f39948260a4b0ec15794edb517480b262988f51507cc4fa74ffed3"} err="failed to get container status \"0190f18f84f39948260a4b0ec15794edb517480b262988f51507cc4fa74ffed3\": rpc error: code = NotFound desc = could not find container \"0190f18f84f39948260a4b0ec15794edb517480b262988f51507cc4fa74ffed3\": container with ID starting with 0190f18f84f39948260a4b0ec15794edb517480b262988f51507cc4fa74ffed3 not found: ID does not exist"
Feb 17 20:58:17 crc kubenswrapper[4793]: I0217 20:58:17.550112 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32b8d2fa-3944-4ed1-a5eb-8619919207bc" path="/var/lib/kubelet/pods/32b8d2fa-3944-4ed1-a5eb-8619919207bc/volumes"
Feb 17 20:58:19 crc kubenswrapper[4793]: I0217 20:58:19.538539 4793 scope.go:117] "RemoveContainer" containerID="112ebd7a8b03931981411cafe86346aa245df9facfa9f0e7e930cbc0e71b7747"
Feb 17 20:58:19 crc kubenswrapper[4793]: E0217 20:58:19.539091 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 20:58:30 crc kubenswrapper[4793]: I0217 20:58:30.539611 4793 scope.go:117] "RemoveContainer" containerID="95b948c18605db461183c2c00273f31744bce4f3b032bf8cd00379007b79b89b"
Feb 17 20:58:30 crc kubenswrapper[4793]: E0217 20:58:30.540775 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 20:58:32 crc kubenswrapper[4793]: I0217 20:58:32.540960 4793 scope.go:117] "RemoveContainer" containerID="112ebd7a8b03931981411cafe86346aa245df9facfa9f0e7e930cbc0e71b7747"
Feb 17 20:58:32 crc kubenswrapper[4793]: E0217 20:58:32.541774 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 20:58:44 crc kubenswrapper[4793]: I0217 20:58:44.539991 4793 scope.go:117] "RemoveContainer" containerID="95b948c18605db461183c2c00273f31744bce4f3b032bf8cd00379007b79b89b"
Feb 17 20:58:44 crc kubenswrapper[4793]: E0217 20:58:44.541205 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 20:58:46 crc kubenswrapper[4793]: I0217 20:58:46.539082 4793 scope.go:117] "RemoveContainer" containerID="112ebd7a8b03931981411cafe86346aa245df9facfa9f0e7e930cbc0e71b7747"
Feb 17 20:58:46 crc kubenswrapper[4793]: E0217 20:58:46.539886 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 20:58:57 crc kubenswrapper[4793]: I0217 20:58:57.538522 4793 scope.go:117] "RemoveContainer" containerID="95b948c18605db461183c2c00273f31744bce4f3b032bf8cd00379007b79b89b"
Feb 17 20:58:58 crc kubenswrapper[4793]: I0217 20:58:58.540541 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"06ecbc8e-aa9f-4025-883d-65e4c000d986","Type":"ContainerStarted","Data":"2f49b2ce7fdb91bdb720b464fe6c5a5b25ba65aacf3eb660f70135f8de8e2fc0"}
Feb 17 20:58:59 crc kubenswrapper[4793]: I0217 20:58:59.538942 4793 scope.go:117] "RemoveContainer" containerID="112ebd7a8b03931981411cafe86346aa245df9facfa9f0e7e930cbc0e71b7747"
Feb 17 20:58:59 crc kubenswrapper[4793]: E0217 20:58:59.539609 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 20:59:00 crc kubenswrapper[4793]: I0217 20:59:00.558678 4793 generic.go:334] "Generic (PLEG): container finished" podID="06ecbc8e-aa9f-4025-883d-65e4c000d986" containerID="2f49b2ce7fdb91bdb720b464fe6c5a5b25ba65aacf3eb660f70135f8de8e2fc0" exitCode=1
Feb 17 20:59:00 crc kubenswrapper[4793]: I0217 20:59:00.558776 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"06ecbc8e-aa9f-4025-883d-65e4c000d986","Type":"ContainerDied","Data":"2f49b2ce7fdb91bdb720b464fe6c5a5b25ba65aacf3eb660f70135f8de8e2fc0"}
Feb 17 20:59:00 crc kubenswrapper[4793]: I0217 20:59:00.558859 4793 scope.go:117] "RemoveContainer" containerID="95b948c18605db461183c2c00273f31744bce4f3b032bf8cd00379007b79b89b"
Feb 17 20:59:00 crc kubenswrapper[4793]: I0217 20:59:00.559591 4793 scope.go:117] "RemoveContainer" containerID="2f49b2ce7fdb91bdb720b464fe6c5a5b25ba65aacf3eb660f70135f8de8e2fc0"
Feb 17 20:59:00 crc kubenswrapper[4793]: E0217 20:59:00.560117 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 20:59:01 crc kubenswrapper[4793]: I0217 20:59:01.962744 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-applier-0"
Feb 17 20:59:01 crc kubenswrapper[4793]: I0217 20:59:01.963113 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0"
Feb 17 20:59:01 crc kubenswrapper[4793]: I0217 20:59:01.963136 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0"
Feb 17 20:59:01 crc kubenswrapper[4793]: I0217 20:59:01.963154 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0"
Feb 17 20:59:01 crc kubenswrapper[4793]: I0217 20:59:01.964056 4793 scope.go:117] "RemoveContainer" containerID="2f49b2ce7fdb91bdb720b464fe6c5a5b25ba65aacf3eb660f70135f8de8e2fc0"
Feb 17 20:59:01 crc kubenswrapper[4793]: E0217 20:59:01.964444 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 20:59:14 crc kubenswrapper[4793]: I0217 20:59:14.540182 4793 scope.go:117] "RemoveContainer" containerID="112ebd7a8b03931981411cafe86346aa245df9facfa9f0e7e930cbc0e71b7747"
Feb 17 20:59:14 crc kubenswrapper[4793]: E0217 20:59:14.541025 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 20:59:15 crc kubenswrapper[4793]: I0217 20:59:15.539066 4793 scope.go:117] "RemoveContainer" containerID="2f49b2ce7fdb91bdb720b464fe6c5a5b25ba65aacf3eb660f70135f8de8e2fc0"
Feb 17 20:59:15 crc kubenswrapper[4793]: E0217 20:59:15.539394 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 20:59:26 crc kubenswrapper[4793]: I0217 20:59:26.539427 4793 scope.go:117] "RemoveContainer" containerID="2f49b2ce7fdb91bdb720b464fe6c5a5b25ba65aacf3eb660f70135f8de8e2fc0"
Feb 17 20:59:26 crc kubenswrapper[4793]: E0217 20:59:26.540549 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 20:59:27 crc kubenswrapper[4793]: I0217 20:59:27.538667 4793 scope.go:117] "RemoveContainer" containerID="112ebd7a8b03931981411cafe86346aa245df9facfa9f0e7e930cbc0e71b7747"
Feb 17 20:59:27 crc kubenswrapper[4793]: E0217 20:59:27.539241 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 20:59:38 crc kubenswrapper[4793]: I0217 20:59:38.539355 4793 scope.go:117] "RemoveContainer" containerID="112ebd7a8b03931981411cafe86346aa245df9facfa9f0e7e930cbc0e71b7747"
Feb 17 20:59:38 crc kubenswrapper[4793]: E0217 20:59:38.540835 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 20:59:40 crc kubenswrapper[4793]: I0217 20:59:40.540095 4793 scope.go:117] "RemoveContainer" containerID="2f49b2ce7fdb91bdb720b464fe6c5a5b25ba65aacf3eb660f70135f8de8e2fc0"
Feb 17 20:59:40 crc kubenswrapper[4793]: E0217 20:59:40.540886 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 20:59:51 crc kubenswrapper[4793]: I0217 20:59:51.539820 4793 scope.go:117] "RemoveContainer" containerID="2f49b2ce7fdb91bdb720b464fe6c5a5b25ba65aacf3eb660f70135f8de8e2fc0"
Feb 17 20:59:51 crc kubenswrapper[4793]: E0217 20:59:51.540841 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 20:59:52 crc kubenswrapper[4793]: I0217 20:59:52.542912 4793 scope.go:117] "RemoveContainer" containerID="112ebd7a8b03931981411cafe86346aa245df9facfa9f0e7e930cbc0e71b7747"
Feb 17 20:59:52 crc kubenswrapper[4793]: E0217 20:59:52.543612 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 21:00:00 crc kubenswrapper[4793]: I0217 21:00:00.165196 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522700-kvqnr"]
Feb 17 21:00:00 crc kubenswrapper[4793]: E0217 21:00:00.167739 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32b8d2fa-3944-4ed1-a5eb-8619919207bc" containerName="registry-server"
Feb 17 21:00:00 crc kubenswrapper[4793]: I0217 21:00:00.167813 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="32b8d2fa-3944-4ed1-a5eb-8619919207bc" containerName="registry-server"
Feb 17 21:00:00 crc kubenswrapper[4793]: E0217 21:00:00.167854 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32b8d2fa-3944-4ed1-a5eb-8619919207bc" containerName="extract-content"
Feb 17 21:00:00 crc kubenswrapper[4793]: I0217 21:00:00.167865 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="32b8d2fa-3944-4ed1-a5eb-8619919207bc" containerName="extract-content"
Feb 17 21:00:00 crc kubenswrapper[4793]: E0217 21:00:00.167900 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32b8d2fa-3944-4ed1-a5eb-8619919207bc" containerName="extract-utilities"
Feb 17 21:00:00 crc kubenswrapper[4793]: I0217 21:00:00.167911 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="32b8d2fa-3944-4ed1-a5eb-8619919207bc" containerName="extract-utilities"
Feb 17 21:00:00 crc kubenswrapper[4793]: I0217 21:00:00.168302 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="32b8d2fa-3944-4ed1-a5eb-8619919207bc" containerName="registry-server"
Feb 17 21:00:00 crc kubenswrapper[4793]: I0217 21:00:00.169659 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522700-kvqnr"
Feb 17 21:00:00 crc kubenswrapper[4793]: I0217 21:00:00.182779 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522700-kvqnr"]
Feb 17 21:00:00 crc kubenswrapper[4793]: I0217 21:00:00.183329 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 17 21:00:00 crc kubenswrapper[4793]: I0217 21:00:00.183366 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 17 21:00:00 crc kubenswrapper[4793]: I0217 21:00:00.273706 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dd1ae9b3-e8ae-4b56-8d96-2a0e5ff51954-secret-volume\") pod \"collect-profiles-29522700-kvqnr\" (UID: \"dd1ae9b3-e8ae-4b56-8d96-2a0e5ff51954\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522700-kvqnr"
Feb 17 21:00:00 crc kubenswrapper[4793]: I0217 21:00:00.273995 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dd1ae9b3-e8ae-4b56-8d96-2a0e5ff51954-config-volume\") pod \"collect-profiles-29522700-kvqnr\" (UID: \"dd1ae9b3-e8ae-4b56-8d96-2a0e5ff51954\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522700-kvqnr"
Feb 17 21:00:00 crc kubenswrapper[4793]: I0217 21:00:00.274089 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxjs2\" (UniqueName: \"kubernetes.io/projected/dd1ae9b3-e8ae-4b56-8d96-2a0e5ff51954-kube-api-access-pxjs2\") pod \"collect-profiles-29522700-kvqnr\" (UID: \"dd1ae9b3-e8ae-4b56-8d96-2a0e5ff51954\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522700-kvqnr"
Feb 17 21:00:00 crc kubenswrapper[4793]: I0217 21:00:00.376084 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxjs2\" (UniqueName: \"kubernetes.io/projected/dd1ae9b3-e8ae-4b56-8d96-2a0e5ff51954-kube-api-access-pxjs2\") pod \"collect-profiles-29522700-kvqnr\" (UID: \"dd1ae9b3-e8ae-4b56-8d96-2a0e5ff51954\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522700-kvqnr"
Feb 17 21:00:00 crc kubenswrapper[4793]: I0217 21:00:00.376324 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dd1ae9b3-e8ae-4b56-8d96-2a0e5ff51954-secret-volume\") pod \"collect-profiles-29522700-kvqnr\" (UID: \"dd1ae9b3-e8ae-4b56-8d96-2a0e5ff51954\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522700-kvqnr"
Feb 17 21:00:00 crc kubenswrapper[4793]: I0217 21:00:00.376507 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dd1ae9b3-e8ae-4b56-8d96-2a0e5ff51954-config-volume\") pod \"collect-profiles-29522700-kvqnr\" (UID: \"dd1ae9b3-e8ae-4b56-8d96-2a0e5ff51954\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522700-kvqnr"
Feb 17 21:00:00 crc kubenswrapper[4793]: I0217 21:00:00.377289 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dd1ae9b3-e8ae-4b56-8d96-2a0e5ff51954-config-volume\") pod \"collect-profiles-29522700-kvqnr\" (UID: \"dd1ae9b3-e8ae-4b56-8d96-2a0e5ff51954\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522700-kvqnr"
Feb 17 21:00:00 crc kubenswrapper[4793]: I0217 21:00:00.383469 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName:
\"kubernetes.io/secret/dd1ae9b3-e8ae-4b56-8d96-2a0e5ff51954-secret-volume\") pod \"collect-profiles-29522700-kvqnr\" (UID: \"dd1ae9b3-e8ae-4b56-8d96-2a0e5ff51954\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522700-kvqnr" Feb 17 21:00:00 crc kubenswrapper[4793]: I0217 21:00:00.400284 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxjs2\" (UniqueName: \"kubernetes.io/projected/dd1ae9b3-e8ae-4b56-8d96-2a0e5ff51954-kube-api-access-pxjs2\") pod \"collect-profiles-29522700-kvqnr\" (UID: \"dd1ae9b3-e8ae-4b56-8d96-2a0e5ff51954\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522700-kvqnr" Feb 17 21:00:00 crc kubenswrapper[4793]: I0217 21:00:00.515452 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522700-kvqnr" Feb 17 21:00:01 crc kubenswrapper[4793]: I0217 21:00:01.033465 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522700-kvqnr"] Feb 17 21:00:01 crc kubenswrapper[4793]: I0217 21:00:01.232511 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522700-kvqnr" event={"ID":"dd1ae9b3-e8ae-4b56-8d96-2a0e5ff51954","Type":"ContainerStarted","Data":"d0c2be443e488121b83ed1192a0c135db74dd16c9577c6f7171518596105af75"} Feb 17 21:00:01 crc kubenswrapper[4793]: I0217 21:00:01.232547 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522700-kvqnr" event={"ID":"dd1ae9b3-e8ae-4b56-8d96-2a0e5ff51954","Type":"ContainerStarted","Data":"69537f164f445d8815ace163419460645e4a650de08ea31ed7297d5340ac8ea9"} Feb 17 21:00:01 crc kubenswrapper[4793]: I0217 21:00:01.270513 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29522700-kvqnr" 
podStartSLOduration=1.270495925 podStartE2EDuration="1.270495925s" podCreationTimestamp="2026-02-17 21:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 21:00:01.264331474 +0000 UTC m=+3076.556029795" watchObservedRunningTime="2026-02-17 21:00:01.270495925 +0000 UTC m=+3076.562194236" Feb 17 21:00:02 crc kubenswrapper[4793]: I0217 21:00:02.245607 4793 generic.go:334] "Generic (PLEG): container finished" podID="dd1ae9b3-e8ae-4b56-8d96-2a0e5ff51954" containerID="d0c2be443e488121b83ed1192a0c135db74dd16c9577c6f7171518596105af75" exitCode=0 Feb 17 21:00:02 crc kubenswrapper[4793]: I0217 21:00:02.245756 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522700-kvqnr" event={"ID":"dd1ae9b3-e8ae-4b56-8d96-2a0e5ff51954","Type":"ContainerDied","Data":"d0c2be443e488121b83ed1192a0c135db74dd16c9577c6f7171518596105af75"} Feb 17 21:00:03 crc kubenswrapper[4793]: I0217 21:00:03.541340 4793 scope.go:117] "RemoveContainer" containerID="2f49b2ce7fdb91bdb720b464fe6c5a5b25ba65aacf3eb660f70135f8de8e2fc0" Feb 17 21:00:03 crc kubenswrapper[4793]: E0217 21:00:03.541925 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 21:00:03 crc kubenswrapper[4793]: I0217 21:00:03.708720 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522700-kvqnr" Feb 17 21:00:03 crc kubenswrapper[4793]: I0217 21:00:03.858311 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxjs2\" (UniqueName: \"kubernetes.io/projected/dd1ae9b3-e8ae-4b56-8d96-2a0e5ff51954-kube-api-access-pxjs2\") pod \"dd1ae9b3-e8ae-4b56-8d96-2a0e5ff51954\" (UID: \"dd1ae9b3-e8ae-4b56-8d96-2a0e5ff51954\") " Feb 17 21:00:03 crc kubenswrapper[4793]: I0217 21:00:03.858476 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dd1ae9b3-e8ae-4b56-8d96-2a0e5ff51954-config-volume\") pod \"dd1ae9b3-e8ae-4b56-8d96-2a0e5ff51954\" (UID: \"dd1ae9b3-e8ae-4b56-8d96-2a0e5ff51954\") " Feb 17 21:00:03 crc kubenswrapper[4793]: I0217 21:00:03.858516 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dd1ae9b3-e8ae-4b56-8d96-2a0e5ff51954-secret-volume\") pod \"dd1ae9b3-e8ae-4b56-8d96-2a0e5ff51954\" (UID: \"dd1ae9b3-e8ae-4b56-8d96-2a0e5ff51954\") " Feb 17 21:00:03 crc kubenswrapper[4793]: I0217 21:00:03.859288 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd1ae9b3-e8ae-4b56-8d96-2a0e5ff51954-config-volume" (OuterVolumeSpecName: "config-volume") pod "dd1ae9b3-e8ae-4b56-8d96-2a0e5ff51954" (UID: "dd1ae9b3-e8ae-4b56-8d96-2a0e5ff51954"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 21:00:03 crc kubenswrapper[4793]: I0217 21:00:03.868206 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd1ae9b3-e8ae-4b56-8d96-2a0e5ff51954-kube-api-access-pxjs2" (OuterVolumeSpecName: "kube-api-access-pxjs2") pod "dd1ae9b3-e8ae-4b56-8d96-2a0e5ff51954" (UID: "dd1ae9b3-e8ae-4b56-8d96-2a0e5ff51954"). 
InnerVolumeSpecName "kube-api-access-pxjs2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 21:00:03 crc kubenswrapper[4793]: I0217 21:00:03.868848 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd1ae9b3-e8ae-4b56-8d96-2a0e5ff51954-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "dd1ae9b3-e8ae-4b56-8d96-2a0e5ff51954" (UID: "dd1ae9b3-e8ae-4b56-8d96-2a0e5ff51954"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 21:00:03 crc kubenswrapper[4793]: I0217 21:00:03.960811 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxjs2\" (UniqueName: \"kubernetes.io/projected/dd1ae9b3-e8ae-4b56-8d96-2a0e5ff51954-kube-api-access-pxjs2\") on node \"crc\" DevicePath \"\"" Feb 17 21:00:03 crc kubenswrapper[4793]: I0217 21:00:03.960849 4793 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dd1ae9b3-e8ae-4b56-8d96-2a0e5ff51954-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 21:00:03 crc kubenswrapper[4793]: I0217 21:00:03.960861 4793 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dd1ae9b3-e8ae-4b56-8d96-2a0e5ff51954-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 21:00:04 crc kubenswrapper[4793]: I0217 21:00:04.269880 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522700-kvqnr" event={"ID":"dd1ae9b3-e8ae-4b56-8d96-2a0e5ff51954","Type":"ContainerDied","Data":"69537f164f445d8815ace163419460645e4a650de08ea31ed7297d5340ac8ea9"} Feb 17 21:00:04 crc kubenswrapper[4793]: I0217 21:00:04.270170 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69537f164f445d8815ace163419460645e4a650de08ea31ed7297d5340ac8ea9" Feb 17 21:00:04 crc kubenswrapper[4793]: I0217 21:00:04.269936 4793 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522700-kvqnr" Feb 17 21:00:04 crc kubenswrapper[4793]: I0217 21:00:04.330918 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522655-vshxq"] Feb 17 21:00:04 crc kubenswrapper[4793]: I0217 21:00:04.337377 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522655-vshxq"] Feb 17 21:00:05 crc kubenswrapper[4793]: I0217 21:00:05.584437 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78ff4728-b246-4b6c-ae17-a509146ee214" path="/var/lib/kubelet/pods/78ff4728-b246-4b6c-ae17-a509146ee214/volumes" Feb 17 21:00:06 crc kubenswrapper[4793]: I0217 21:00:06.539106 4793 scope.go:117] "RemoveContainer" containerID="112ebd7a8b03931981411cafe86346aa245df9facfa9f0e7e930cbc0e71b7747" Feb 17 21:00:06 crc kubenswrapper[4793]: E0217 21:00:06.539416 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:00:15 crc kubenswrapper[4793]: I0217 21:00:15.551222 4793 scope.go:117] "RemoveContainer" containerID="2f49b2ce7fdb91bdb720b464fe6c5a5b25ba65aacf3eb660f70135f8de8e2fc0" Feb 17 21:00:15 crc kubenswrapper[4793]: E0217 21:00:15.552466 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" 
podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 21:00:21 crc kubenswrapper[4793]: I0217 21:00:21.539037 4793 scope.go:117] "RemoveContainer" containerID="112ebd7a8b03931981411cafe86346aa245df9facfa9f0e7e930cbc0e71b7747" Feb 17 21:00:21 crc kubenswrapper[4793]: E0217 21:00:21.539909 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:00:23 crc kubenswrapper[4793]: I0217 21:00:23.473621 4793 scope.go:117] "RemoveContainer" containerID="c274383356b493a5c808da1818e35b1a1248aeb58c36df86b11924e2d70cdfd7" Feb 17 21:00:28 crc kubenswrapper[4793]: I0217 21:00:28.539314 4793 scope.go:117] "RemoveContainer" containerID="2f49b2ce7fdb91bdb720b464fe6c5a5b25ba65aacf3eb660f70135f8de8e2fc0" Feb 17 21:00:28 crc kubenswrapper[4793]: E0217 21:00:28.540517 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 21:00:32 crc kubenswrapper[4793]: I0217 21:00:32.539303 4793 scope.go:117] "RemoveContainer" containerID="112ebd7a8b03931981411cafe86346aa245df9facfa9f0e7e930cbc0e71b7747" Feb 17 21:00:32 crc kubenswrapper[4793]: E0217 21:00:32.540137 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:00:43 crc kubenswrapper[4793]: I0217 21:00:43.539193 4793 scope.go:117] "RemoveContainer" containerID="2f49b2ce7fdb91bdb720b464fe6c5a5b25ba65aacf3eb660f70135f8de8e2fc0" Feb 17 21:00:43 crc kubenswrapper[4793]: E0217 21:00:43.540340 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 21:00:46 crc kubenswrapper[4793]: I0217 21:00:46.539535 4793 scope.go:117] "RemoveContainer" containerID="112ebd7a8b03931981411cafe86346aa245df9facfa9f0e7e930cbc0e71b7747" Feb 17 21:00:46 crc kubenswrapper[4793]: E0217 21:00:46.540453 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:00:55 crc kubenswrapper[4793]: I0217 21:00:55.549949 4793 scope.go:117] "RemoveContainer" containerID="2f49b2ce7fdb91bdb720b464fe6c5a5b25ba65aacf3eb660f70135f8de8e2fc0" Feb 17 21:00:55 crc kubenswrapper[4793]: E0217 21:00:55.550952 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" 
pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" Feb 17 21:00:59 crc kubenswrapper[4793]: I0217 21:00:59.539112 4793 scope.go:117] "RemoveContainer" containerID="112ebd7a8b03931981411cafe86346aa245df9facfa9f0e7e930cbc0e71b7747" Feb 17 21:00:59 crc kubenswrapper[4793]: E0217 21:00:59.539804 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:01:00 crc kubenswrapper[4793]: I0217 21:01:00.167919 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29522701-8dn59"] Feb 17 21:01:00 crc kubenswrapper[4793]: E0217 21:01:00.168882 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd1ae9b3-e8ae-4b56-8d96-2a0e5ff51954" containerName="collect-profiles" Feb 17 21:01:00 crc kubenswrapper[4793]: I0217 21:01:00.169012 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd1ae9b3-e8ae-4b56-8d96-2a0e5ff51954" containerName="collect-profiles" Feb 17 21:01:00 crc kubenswrapper[4793]: I0217 21:01:00.169336 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd1ae9b3-e8ae-4b56-8d96-2a0e5ff51954" containerName="collect-profiles" Feb 17 21:01:00 crc kubenswrapper[4793]: I0217 21:01:00.170280 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29522701-8dn59" Feb 17 21:01:00 crc kubenswrapper[4793]: I0217 21:01:00.179806 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29522701-8dn59"] Feb 17 21:01:00 crc kubenswrapper[4793]: I0217 21:01:00.354297 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c5dd548-1c99-4d67-aec0-c2f4052aed79-combined-ca-bundle\") pod \"keystone-cron-29522701-8dn59\" (UID: \"0c5dd548-1c99-4d67-aec0-c2f4052aed79\") " pod="openstack/keystone-cron-29522701-8dn59" Feb 17 21:01:00 crc kubenswrapper[4793]: I0217 21:01:00.354365 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwnf2\" (UniqueName: \"kubernetes.io/projected/0c5dd548-1c99-4d67-aec0-c2f4052aed79-kube-api-access-kwnf2\") pod \"keystone-cron-29522701-8dn59\" (UID: \"0c5dd548-1c99-4d67-aec0-c2f4052aed79\") " pod="openstack/keystone-cron-29522701-8dn59" Feb 17 21:01:00 crc kubenswrapper[4793]: I0217 21:01:00.354470 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0c5dd548-1c99-4d67-aec0-c2f4052aed79-fernet-keys\") pod \"keystone-cron-29522701-8dn59\" (UID: \"0c5dd548-1c99-4d67-aec0-c2f4052aed79\") " pod="openstack/keystone-cron-29522701-8dn59" Feb 17 21:01:00 crc kubenswrapper[4793]: I0217 21:01:00.354562 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c5dd548-1c99-4d67-aec0-c2f4052aed79-config-data\") pod \"keystone-cron-29522701-8dn59\" (UID: \"0c5dd548-1c99-4d67-aec0-c2f4052aed79\") " pod="openstack/keystone-cron-29522701-8dn59" Feb 17 21:01:00 crc kubenswrapper[4793]: I0217 21:01:00.456264 4793 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0c5dd548-1c99-4d67-aec0-c2f4052aed79-fernet-keys\") pod \"keystone-cron-29522701-8dn59\" (UID: \"0c5dd548-1c99-4d67-aec0-c2f4052aed79\") " pod="openstack/keystone-cron-29522701-8dn59" Feb 17 21:01:00 crc kubenswrapper[4793]: I0217 21:01:00.456368 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c5dd548-1c99-4d67-aec0-c2f4052aed79-config-data\") pod \"keystone-cron-29522701-8dn59\" (UID: \"0c5dd548-1c99-4d67-aec0-c2f4052aed79\") " pod="openstack/keystone-cron-29522701-8dn59" Feb 17 21:01:00 crc kubenswrapper[4793]: I0217 21:01:00.456437 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c5dd548-1c99-4d67-aec0-c2f4052aed79-combined-ca-bundle\") pod \"keystone-cron-29522701-8dn59\" (UID: \"0c5dd548-1c99-4d67-aec0-c2f4052aed79\") " pod="openstack/keystone-cron-29522701-8dn59" Feb 17 21:01:00 crc kubenswrapper[4793]: I0217 21:01:00.456457 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwnf2\" (UniqueName: \"kubernetes.io/projected/0c5dd548-1c99-4d67-aec0-c2f4052aed79-kube-api-access-kwnf2\") pod \"keystone-cron-29522701-8dn59\" (UID: \"0c5dd548-1c99-4d67-aec0-c2f4052aed79\") " pod="openstack/keystone-cron-29522701-8dn59" Feb 17 21:01:00 crc kubenswrapper[4793]: I0217 21:01:00.463634 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0c5dd548-1c99-4d67-aec0-c2f4052aed79-fernet-keys\") pod \"keystone-cron-29522701-8dn59\" (UID: \"0c5dd548-1c99-4d67-aec0-c2f4052aed79\") " pod="openstack/keystone-cron-29522701-8dn59" Feb 17 21:01:00 crc kubenswrapper[4793]: I0217 21:01:00.466945 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0c5dd548-1c99-4d67-aec0-c2f4052aed79-combined-ca-bundle\") pod \"keystone-cron-29522701-8dn59\" (UID: \"0c5dd548-1c99-4d67-aec0-c2f4052aed79\") " pod="openstack/keystone-cron-29522701-8dn59" Feb 17 21:01:00 crc kubenswrapper[4793]: I0217 21:01:00.468565 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c5dd548-1c99-4d67-aec0-c2f4052aed79-config-data\") pod \"keystone-cron-29522701-8dn59\" (UID: \"0c5dd548-1c99-4d67-aec0-c2f4052aed79\") " pod="openstack/keystone-cron-29522701-8dn59" Feb 17 21:01:00 crc kubenswrapper[4793]: I0217 21:01:00.480285 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwnf2\" (UniqueName: \"kubernetes.io/projected/0c5dd548-1c99-4d67-aec0-c2f4052aed79-kube-api-access-kwnf2\") pod \"keystone-cron-29522701-8dn59\" (UID: \"0c5dd548-1c99-4d67-aec0-c2f4052aed79\") " pod="openstack/keystone-cron-29522701-8dn59" Feb 17 21:01:00 crc kubenswrapper[4793]: I0217 21:01:00.493942 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29522701-8dn59" Feb 17 21:01:00 crc kubenswrapper[4793]: I0217 21:01:00.992814 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29522701-8dn59"] Feb 17 21:01:01 crc kubenswrapper[4793]: I0217 21:01:01.884951 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29522701-8dn59" event={"ID":"0c5dd548-1c99-4d67-aec0-c2f4052aed79","Type":"ContainerStarted","Data":"d702d0518f857ec9266b7274ad36b1d1f6e881c0b1e4446b16aeea08f50d2465"} Feb 17 21:01:01 crc kubenswrapper[4793]: I0217 21:01:01.885375 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29522701-8dn59" event={"ID":"0c5dd548-1c99-4d67-aec0-c2f4052aed79","Type":"ContainerStarted","Data":"7c679a0493a79095d0f784fed98c297481789997bbb087719d33ec632ff3f866"} Feb 17 21:01:01 crc kubenswrapper[4793]: I0217 21:01:01.911954 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29522701-8dn59" podStartSLOduration=1.9119315129999999 podStartE2EDuration="1.911931513s" podCreationTimestamp="2026-02-17 21:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 21:01:01.90977766 +0000 UTC m=+3137.201476011" watchObservedRunningTime="2026-02-17 21:01:01.911931513 +0000 UTC m=+3137.203629834" Feb 17 21:01:04 crc kubenswrapper[4793]: I0217 21:01:04.918204 4793 generic.go:334] "Generic (PLEG): container finished" podID="0c5dd548-1c99-4d67-aec0-c2f4052aed79" containerID="d702d0518f857ec9266b7274ad36b1d1f6e881c0b1e4446b16aeea08f50d2465" exitCode=0 Feb 17 21:01:04 crc kubenswrapper[4793]: I0217 21:01:04.918288 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29522701-8dn59" 
event={"ID":"0c5dd548-1c99-4d67-aec0-c2f4052aed79","Type":"ContainerDied","Data":"d702d0518f857ec9266b7274ad36b1d1f6e881c0b1e4446b16aeea08f50d2465"} Feb 17 21:01:06 crc kubenswrapper[4793]: I0217 21:01:06.298938 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29522701-8dn59" Feb 17 21:01:06 crc kubenswrapper[4793]: I0217 21:01:06.498955 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwnf2\" (UniqueName: \"kubernetes.io/projected/0c5dd548-1c99-4d67-aec0-c2f4052aed79-kube-api-access-kwnf2\") pod \"0c5dd548-1c99-4d67-aec0-c2f4052aed79\" (UID: \"0c5dd548-1c99-4d67-aec0-c2f4052aed79\") " Feb 17 21:01:06 crc kubenswrapper[4793]: I0217 21:01:06.499087 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c5dd548-1c99-4d67-aec0-c2f4052aed79-combined-ca-bundle\") pod \"0c5dd548-1c99-4d67-aec0-c2f4052aed79\" (UID: \"0c5dd548-1c99-4d67-aec0-c2f4052aed79\") " Feb 17 21:01:06 crc kubenswrapper[4793]: I0217 21:01:06.499282 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c5dd548-1c99-4d67-aec0-c2f4052aed79-config-data\") pod \"0c5dd548-1c99-4d67-aec0-c2f4052aed79\" (UID: \"0c5dd548-1c99-4d67-aec0-c2f4052aed79\") " Feb 17 21:01:06 crc kubenswrapper[4793]: I0217 21:01:06.499333 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0c5dd548-1c99-4d67-aec0-c2f4052aed79-fernet-keys\") pod \"0c5dd548-1c99-4d67-aec0-c2f4052aed79\" (UID: \"0c5dd548-1c99-4d67-aec0-c2f4052aed79\") " Feb 17 21:01:06 crc kubenswrapper[4793]: I0217 21:01:06.505542 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c5dd548-1c99-4d67-aec0-c2f4052aed79-fernet-keys" (OuterVolumeSpecName: 
"fernet-keys") pod "0c5dd548-1c99-4d67-aec0-c2f4052aed79" (UID: "0c5dd548-1c99-4d67-aec0-c2f4052aed79"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 21:01:06 crc kubenswrapper[4793]: I0217 21:01:06.508227 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c5dd548-1c99-4d67-aec0-c2f4052aed79-kube-api-access-kwnf2" (OuterVolumeSpecName: "kube-api-access-kwnf2") pod "0c5dd548-1c99-4d67-aec0-c2f4052aed79" (UID: "0c5dd548-1c99-4d67-aec0-c2f4052aed79"). InnerVolumeSpecName "kube-api-access-kwnf2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 21:01:06 crc kubenswrapper[4793]: I0217 21:01:06.535176 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c5dd548-1c99-4d67-aec0-c2f4052aed79-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0c5dd548-1c99-4d67-aec0-c2f4052aed79" (UID: "0c5dd548-1c99-4d67-aec0-c2f4052aed79"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 21:01:06 crc kubenswrapper[4793]: I0217 21:01:06.558118 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c5dd548-1c99-4d67-aec0-c2f4052aed79-config-data" (OuterVolumeSpecName: "config-data") pod "0c5dd548-1c99-4d67-aec0-c2f4052aed79" (UID: "0c5dd548-1c99-4d67-aec0-c2f4052aed79"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 21:01:06 crc kubenswrapper[4793]: I0217 21:01:06.602137 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c5dd548-1c99-4d67-aec0-c2f4052aed79-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 21:01:06 crc kubenswrapper[4793]: I0217 21:01:06.602171 4793 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0c5dd548-1c99-4d67-aec0-c2f4052aed79-fernet-keys\") on node \"crc\" DevicePath \"\""
Feb 17 21:01:06 crc kubenswrapper[4793]: I0217 21:01:06.602180 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwnf2\" (UniqueName: \"kubernetes.io/projected/0c5dd548-1c99-4d67-aec0-c2f4052aed79-kube-api-access-kwnf2\") on node \"crc\" DevicePath \"\""
Feb 17 21:01:06 crc kubenswrapper[4793]: I0217 21:01:06.602190 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c5dd548-1c99-4d67-aec0-c2f4052aed79-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 21:01:06 crc kubenswrapper[4793]: I0217 21:01:06.940632 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29522701-8dn59" event={"ID":"0c5dd548-1c99-4d67-aec0-c2f4052aed79","Type":"ContainerDied","Data":"7c679a0493a79095d0f784fed98c297481789997bbb087719d33ec632ff3f866"}
Feb 17 21:01:06 crc kubenswrapper[4793]: I0217 21:01:06.941047 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c679a0493a79095d0f784fed98c297481789997bbb087719d33ec632ff3f866"
Feb 17 21:01:06 crc kubenswrapper[4793]: I0217 21:01:06.940768 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29522701-8dn59"
Feb 17 21:01:09 crc kubenswrapper[4793]: I0217 21:01:09.538998 4793 scope.go:117] "RemoveContainer" containerID="2f49b2ce7fdb91bdb720b464fe6c5a5b25ba65aacf3eb660f70135f8de8e2fc0"
Feb 17 21:01:09 crc kubenswrapper[4793]: E0217 21:01:09.539588 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 21:01:11 crc kubenswrapper[4793]: I0217 21:01:11.538399 4793 scope.go:117] "RemoveContainer" containerID="112ebd7a8b03931981411cafe86346aa245df9facfa9f0e7e930cbc0e71b7747"
Feb 17 21:01:11 crc kubenswrapper[4793]: E0217 21:01:11.539250 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 21:01:22 crc kubenswrapper[4793]: I0217 21:01:22.540135 4793 scope.go:117] "RemoveContainer" containerID="112ebd7a8b03931981411cafe86346aa245df9facfa9f0e7e930cbc0e71b7747"
Feb 17 21:01:23 crc kubenswrapper[4793]: I0217 21:01:23.140529 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerStarted","Data":"9df5ee479ef9692a62e5c2da98082da72107bf4c0ed624a84085c35ae2963246"}
Feb 17 21:01:23 crc kubenswrapper[4793]: I0217 21:01:23.541667 4793 scope.go:117] "RemoveContainer" containerID="2f49b2ce7fdb91bdb720b464fe6c5a5b25ba65aacf3eb660f70135f8de8e2fc0"
Feb 17 21:01:23 crc kubenswrapper[4793]: E0217 21:01:23.542399 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 21:01:38 crc kubenswrapper[4793]: I0217 21:01:38.539402 4793 scope.go:117] "RemoveContainer" containerID="2f49b2ce7fdb91bdb720b464fe6c5a5b25ba65aacf3eb660f70135f8de8e2fc0"
Feb 17 21:01:38 crc kubenswrapper[4793]: E0217 21:01:38.540212 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 21:01:52 crc kubenswrapper[4793]: I0217 21:01:52.677865 4793 scope.go:117] "RemoveContainer" containerID="2f49b2ce7fdb91bdb720b464fe6c5a5b25ba65aacf3eb660f70135f8de8e2fc0"
Feb 17 21:01:52 crc kubenswrapper[4793]: E0217 21:01:52.679021 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 21:02:06 crc kubenswrapper[4793]: I0217 21:02:06.539809 4793 scope.go:117] "RemoveContainer" containerID="2f49b2ce7fdb91bdb720b464fe6c5a5b25ba65aacf3eb660f70135f8de8e2fc0"
Feb 17 21:02:06 crc kubenswrapper[4793]: E0217 21:02:06.540963 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 21:02:17 crc kubenswrapper[4793]: I0217 21:02:17.539446 4793 scope.go:117] "RemoveContainer" containerID="2f49b2ce7fdb91bdb720b464fe6c5a5b25ba65aacf3eb660f70135f8de8e2fc0"
Feb 17 21:02:17 crc kubenswrapper[4793]: E0217 21:02:17.540843 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 21:02:30 crc kubenswrapper[4793]: I0217 21:02:30.539778 4793 scope.go:117] "RemoveContainer" containerID="2f49b2ce7fdb91bdb720b464fe6c5a5b25ba65aacf3eb660f70135f8de8e2fc0"
Feb 17 21:02:30 crc kubenswrapper[4793]: E0217 21:02:30.540806 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 21:02:42 crc kubenswrapper[4793]: I0217 21:02:42.539157 4793 scope.go:117] "RemoveContainer" containerID="2f49b2ce7fdb91bdb720b464fe6c5a5b25ba65aacf3eb660f70135f8de8e2fc0"
Feb 17 21:02:42 crc kubenswrapper[4793]: E0217 21:02:42.540043 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 21:02:56 crc kubenswrapper[4793]: I0217 21:02:56.539111 4793 scope.go:117] "RemoveContainer" containerID="2f49b2ce7fdb91bdb720b464fe6c5a5b25ba65aacf3eb660f70135f8de8e2fc0"
Feb 17 21:02:56 crc kubenswrapper[4793]: E0217 21:02:56.540176 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 21:03:09 crc kubenswrapper[4793]: I0217 21:03:09.540256 4793 scope.go:117] "RemoveContainer" containerID="2f49b2ce7fdb91bdb720b464fe6c5a5b25ba65aacf3eb660f70135f8de8e2fc0"
Feb 17 21:03:09 crc kubenswrapper[4793]: E0217 21:03:09.541501 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 21:03:11 crc kubenswrapper[4793]: I0217 21:03:11.585977 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7fn8w"]
Feb 17 21:03:11 crc kubenswrapper[4793]: E0217 21:03:11.586960 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c5dd548-1c99-4d67-aec0-c2f4052aed79" containerName="keystone-cron"
Feb 17 21:03:11 crc kubenswrapper[4793]: I0217 21:03:11.586976 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c5dd548-1c99-4d67-aec0-c2f4052aed79" containerName="keystone-cron"
Feb 17 21:03:11 crc kubenswrapper[4793]: I0217 21:03:11.587206 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c5dd548-1c99-4d67-aec0-c2f4052aed79" containerName="keystone-cron"
Feb 17 21:03:11 crc kubenswrapper[4793]: I0217 21:03:11.590184 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7fn8w"
Feb 17 21:03:11 crc kubenswrapper[4793]: I0217 21:03:11.605602 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7fn8w"]
Feb 17 21:03:11 crc kubenswrapper[4793]: I0217 21:03:11.721777 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2eb49f05-3e98-474c-b613-640874dc3e54-catalog-content\") pod \"redhat-marketplace-7fn8w\" (UID: \"2eb49f05-3e98-474c-b613-640874dc3e54\") " pod="openshift-marketplace/redhat-marketplace-7fn8w"
Feb 17 21:03:11 crc kubenswrapper[4793]: I0217 21:03:11.721874 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2eb49f05-3e98-474c-b613-640874dc3e54-utilities\") pod \"redhat-marketplace-7fn8w\" (UID: \"2eb49f05-3e98-474c-b613-640874dc3e54\") " pod="openshift-marketplace/redhat-marketplace-7fn8w"
Feb 17 21:03:11 crc kubenswrapper[4793]: I0217 21:03:11.722103 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8gnp\" (UniqueName: \"kubernetes.io/projected/2eb49f05-3e98-474c-b613-640874dc3e54-kube-api-access-f8gnp\") pod \"redhat-marketplace-7fn8w\" (UID: \"2eb49f05-3e98-474c-b613-640874dc3e54\") " pod="openshift-marketplace/redhat-marketplace-7fn8w"
Feb 17 21:03:11 crc kubenswrapper[4793]: I0217 21:03:11.824316 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8gnp\" (UniqueName: \"kubernetes.io/projected/2eb49f05-3e98-474c-b613-640874dc3e54-kube-api-access-f8gnp\") pod \"redhat-marketplace-7fn8w\" (UID: \"2eb49f05-3e98-474c-b613-640874dc3e54\") " pod="openshift-marketplace/redhat-marketplace-7fn8w"
Feb 17 21:03:11 crc kubenswrapper[4793]: I0217 21:03:11.824404 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2eb49f05-3e98-474c-b613-640874dc3e54-catalog-content\") pod \"redhat-marketplace-7fn8w\" (UID: \"2eb49f05-3e98-474c-b613-640874dc3e54\") " pod="openshift-marketplace/redhat-marketplace-7fn8w"
Feb 17 21:03:11 crc kubenswrapper[4793]: I0217 21:03:11.824459 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2eb49f05-3e98-474c-b613-640874dc3e54-utilities\") pod \"redhat-marketplace-7fn8w\" (UID: \"2eb49f05-3e98-474c-b613-640874dc3e54\") " pod="openshift-marketplace/redhat-marketplace-7fn8w"
Feb 17 21:03:11 crc kubenswrapper[4793]: I0217 21:03:11.824951 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2eb49f05-3e98-474c-b613-640874dc3e54-utilities\") pod \"redhat-marketplace-7fn8w\" (UID: \"2eb49f05-3e98-474c-b613-640874dc3e54\") " pod="openshift-marketplace/redhat-marketplace-7fn8w"
Feb 17 21:03:11 crc kubenswrapper[4793]: I0217 21:03:11.825174 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2eb49f05-3e98-474c-b613-640874dc3e54-catalog-content\") pod \"redhat-marketplace-7fn8w\" (UID: \"2eb49f05-3e98-474c-b613-640874dc3e54\") " pod="openshift-marketplace/redhat-marketplace-7fn8w"
Feb 17 21:03:11 crc kubenswrapper[4793]: I0217 21:03:11.843743 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8gnp\" (UniqueName: \"kubernetes.io/projected/2eb49f05-3e98-474c-b613-640874dc3e54-kube-api-access-f8gnp\") pod \"redhat-marketplace-7fn8w\" (UID: \"2eb49f05-3e98-474c-b613-640874dc3e54\") " pod="openshift-marketplace/redhat-marketplace-7fn8w"
Feb 17 21:03:11 crc kubenswrapper[4793]: I0217 21:03:11.930752 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7fn8w"
Feb 17 21:03:12 crc kubenswrapper[4793]: I0217 21:03:12.429758 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7fn8w"]
Feb 17 21:03:13 crc kubenswrapper[4793]: I0217 21:03:13.457619 4793 generic.go:334] "Generic (PLEG): container finished" podID="2eb49f05-3e98-474c-b613-640874dc3e54" containerID="688f2616168e0a6cc62750e6b85263466edf6d5799c2692ed057ab6378c0cde5" exitCode=0
Feb 17 21:03:13 crc kubenswrapper[4793]: I0217 21:03:13.457711 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7fn8w" event={"ID":"2eb49f05-3e98-474c-b613-640874dc3e54","Type":"ContainerDied","Data":"688f2616168e0a6cc62750e6b85263466edf6d5799c2692ed057ab6378c0cde5"}
Feb 17 21:03:13 crc kubenswrapper[4793]: I0217 21:03:13.457935 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7fn8w" event={"ID":"2eb49f05-3e98-474c-b613-640874dc3e54","Type":"ContainerStarted","Data":"aacf6533f8e2f58d368315178efe9b3db9da230b2e4189bdc16daf03f4cb6e1f"}
Feb 17 21:03:13 crc kubenswrapper[4793]: I0217 21:03:13.467807 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 17 21:03:15 crc kubenswrapper[4793]: I0217 21:03:15.482671 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7fn8w" event={"ID":"2eb49f05-3e98-474c-b613-640874dc3e54","Type":"ContainerStarted","Data":"32b260c90fdd67d8cefe001ded522893cedc17384899047760b04c9393156b0a"}
Feb 17 21:03:16 crc kubenswrapper[4793]: I0217 21:03:16.496130 4793 generic.go:334] "Generic (PLEG): container finished" podID="2eb49f05-3e98-474c-b613-640874dc3e54" containerID="32b260c90fdd67d8cefe001ded522893cedc17384899047760b04c9393156b0a" exitCode=0
Feb 17 21:03:16 crc kubenswrapper[4793]: I0217 21:03:16.496322 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7fn8w" event={"ID":"2eb49f05-3e98-474c-b613-640874dc3e54","Type":"ContainerDied","Data":"32b260c90fdd67d8cefe001ded522893cedc17384899047760b04c9393156b0a"}
Feb 17 21:03:17 crc kubenswrapper[4793]: I0217 21:03:17.512780 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7fn8w" event={"ID":"2eb49f05-3e98-474c-b613-640874dc3e54","Type":"ContainerStarted","Data":"c354f8f118e8d10aa095532d0f07c10eb6bf34a4b6f89da27d75e473f86703ea"}
Feb 17 21:03:17 crc kubenswrapper[4793]: I0217 21:03:17.532301 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7fn8w" podStartSLOduration=2.749266655 podStartE2EDuration="6.532278064s" podCreationTimestamp="2026-02-17 21:03:11 +0000 UTC" firstStartedPulling="2026-02-17 21:03:13.467326245 +0000 UTC m=+3268.759024596" lastFinishedPulling="2026-02-17 21:03:17.250337654 +0000 UTC m=+3272.542036005" observedRunningTime="2026-02-17 21:03:17.52726092 +0000 UTC m=+3272.818959231" watchObservedRunningTime="2026-02-17 21:03:17.532278064 +0000 UTC m=+3272.823976395"
Feb 17 21:03:21 crc kubenswrapper[4793]: I0217 21:03:21.931416 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7fn8w"
Feb 17 21:03:21 crc kubenswrapper[4793]: I0217 21:03:21.931964 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7fn8w"
Feb 17 21:03:22 crc kubenswrapper[4793]: I0217 21:03:22.008400 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7fn8w"
Feb 17 21:03:22 crc kubenswrapper[4793]: I0217 21:03:22.646118 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7fn8w"
Feb 17 21:03:22 crc kubenswrapper[4793]: I0217 21:03:22.698294 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7fn8w"]
Feb 17 21:03:23 crc kubenswrapper[4793]: I0217 21:03:23.539755 4793 scope.go:117] "RemoveContainer" containerID="2f49b2ce7fdb91bdb720b464fe6c5a5b25ba65aacf3eb660f70135f8de8e2fc0"
Feb 17 21:03:23 crc kubenswrapper[4793]: E0217 21:03:23.540205 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 21:03:24 crc kubenswrapper[4793]: I0217 21:03:24.583059 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7fn8w" podUID="2eb49f05-3e98-474c-b613-640874dc3e54" containerName="registry-server" containerID="cri-o://c354f8f118e8d10aa095532d0f07c10eb6bf34a4b6f89da27d75e473f86703ea" gracePeriod=2
Feb 17 21:03:25 crc kubenswrapper[4793]: I0217 21:03:25.132706 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7fn8w"
Feb 17 21:03:25 crc kubenswrapper[4793]: I0217 21:03:25.159281 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2eb49f05-3e98-474c-b613-640874dc3e54-utilities\") pod \"2eb49f05-3e98-474c-b613-640874dc3e54\" (UID: \"2eb49f05-3e98-474c-b613-640874dc3e54\") "
Feb 17 21:03:25 crc kubenswrapper[4793]: I0217 21:03:25.159375 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8gnp\" (UniqueName: \"kubernetes.io/projected/2eb49f05-3e98-474c-b613-640874dc3e54-kube-api-access-f8gnp\") pod \"2eb49f05-3e98-474c-b613-640874dc3e54\" (UID: \"2eb49f05-3e98-474c-b613-640874dc3e54\") "
Feb 17 21:03:25 crc kubenswrapper[4793]: I0217 21:03:25.159505 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2eb49f05-3e98-474c-b613-640874dc3e54-catalog-content\") pod \"2eb49f05-3e98-474c-b613-640874dc3e54\" (UID: \"2eb49f05-3e98-474c-b613-640874dc3e54\") "
Feb 17 21:03:25 crc kubenswrapper[4793]: I0217 21:03:25.160956 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2eb49f05-3e98-474c-b613-640874dc3e54-utilities" (OuterVolumeSpecName: "utilities") pod "2eb49f05-3e98-474c-b613-640874dc3e54" (UID: "2eb49f05-3e98-474c-b613-640874dc3e54"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 21:03:25 crc kubenswrapper[4793]: I0217 21:03:25.186409 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2eb49f05-3e98-474c-b613-640874dc3e54-kube-api-access-f8gnp" (OuterVolumeSpecName: "kube-api-access-f8gnp") pod "2eb49f05-3e98-474c-b613-640874dc3e54" (UID: "2eb49f05-3e98-474c-b613-640874dc3e54"). InnerVolumeSpecName "kube-api-access-f8gnp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 21:03:25 crc kubenswrapper[4793]: I0217 21:03:25.194253 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2eb49f05-3e98-474c-b613-640874dc3e54-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2eb49f05-3e98-474c-b613-640874dc3e54" (UID: "2eb49f05-3e98-474c-b613-640874dc3e54"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 21:03:25 crc kubenswrapper[4793]: I0217 21:03:25.262980 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2eb49f05-3e98-474c-b613-640874dc3e54-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 21:03:25 crc kubenswrapper[4793]: I0217 21:03:25.263027 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2eb49f05-3e98-474c-b613-640874dc3e54-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 21:03:25 crc kubenswrapper[4793]: I0217 21:03:25.263043 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8gnp\" (UniqueName: \"kubernetes.io/projected/2eb49f05-3e98-474c-b613-640874dc3e54-kube-api-access-f8gnp\") on node \"crc\" DevicePath \"\""
Feb 17 21:03:25 crc kubenswrapper[4793]: I0217 21:03:25.599174 4793 generic.go:334] "Generic (PLEG): container finished" podID="2eb49f05-3e98-474c-b613-640874dc3e54" containerID="c354f8f118e8d10aa095532d0f07c10eb6bf34a4b6f89da27d75e473f86703ea" exitCode=0
Feb 17 21:03:25 crc kubenswrapper[4793]: I0217 21:03:25.599240 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7fn8w" event={"ID":"2eb49f05-3e98-474c-b613-640874dc3e54","Type":"ContainerDied","Data":"c354f8f118e8d10aa095532d0f07c10eb6bf34a4b6f89da27d75e473f86703ea"}
Feb 17 21:03:25 crc kubenswrapper[4793]: I0217 21:03:25.599553 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7fn8w" event={"ID":"2eb49f05-3e98-474c-b613-640874dc3e54","Type":"ContainerDied","Data":"aacf6533f8e2f58d368315178efe9b3db9da230b2e4189bdc16daf03f4cb6e1f"}
Feb 17 21:03:25 crc kubenswrapper[4793]: I0217 21:03:25.599582 4793 scope.go:117] "RemoveContainer" containerID="c354f8f118e8d10aa095532d0f07c10eb6bf34a4b6f89da27d75e473f86703ea"
Feb 17 21:03:25 crc kubenswrapper[4793]: I0217 21:03:25.599369 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7fn8w"
Feb 17 21:03:25 crc kubenswrapper[4793]: I0217 21:03:25.634487 4793 scope.go:117] "RemoveContainer" containerID="32b260c90fdd67d8cefe001ded522893cedc17384899047760b04c9393156b0a"
Feb 17 21:03:25 crc kubenswrapper[4793]: I0217 21:03:25.635103 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7fn8w"]
Feb 17 21:03:25 crc kubenswrapper[4793]: I0217 21:03:25.644520 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7fn8w"]
Feb 17 21:03:25 crc kubenswrapper[4793]: I0217 21:03:25.652525 4793 scope.go:117] "RemoveContainer" containerID="688f2616168e0a6cc62750e6b85263466edf6d5799c2692ed057ab6378c0cde5"
Feb 17 21:03:25 crc kubenswrapper[4793]: I0217 21:03:25.716887 4793 scope.go:117] "RemoveContainer" containerID="c354f8f118e8d10aa095532d0f07c10eb6bf34a4b6f89da27d75e473f86703ea"
Feb 17 21:03:25 crc kubenswrapper[4793]: E0217 21:03:25.717372 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c354f8f118e8d10aa095532d0f07c10eb6bf34a4b6f89da27d75e473f86703ea\": container with ID starting with c354f8f118e8d10aa095532d0f07c10eb6bf34a4b6f89da27d75e473f86703ea not found: ID does not exist" containerID="c354f8f118e8d10aa095532d0f07c10eb6bf34a4b6f89da27d75e473f86703ea"
Feb 17 21:03:25 crc kubenswrapper[4793]: I0217 21:03:25.717410 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c354f8f118e8d10aa095532d0f07c10eb6bf34a4b6f89da27d75e473f86703ea"} err="failed to get container status \"c354f8f118e8d10aa095532d0f07c10eb6bf34a4b6f89da27d75e473f86703ea\": rpc error: code = NotFound desc = could not find container \"c354f8f118e8d10aa095532d0f07c10eb6bf34a4b6f89da27d75e473f86703ea\": container with ID starting with c354f8f118e8d10aa095532d0f07c10eb6bf34a4b6f89da27d75e473f86703ea not found: ID does not exist"
Feb 17 21:03:25 crc kubenswrapper[4793]: I0217 21:03:25.717432 4793 scope.go:117] "RemoveContainer" containerID="32b260c90fdd67d8cefe001ded522893cedc17384899047760b04c9393156b0a"
Feb 17 21:03:25 crc kubenswrapper[4793]: E0217 21:03:25.717680 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32b260c90fdd67d8cefe001ded522893cedc17384899047760b04c9393156b0a\": container with ID starting with 32b260c90fdd67d8cefe001ded522893cedc17384899047760b04c9393156b0a not found: ID does not exist" containerID="32b260c90fdd67d8cefe001ded522893cedc17384899047760b04c9393156b0a"
Feb 17 21:03:25 crc kubenswrapper[4793]: I0217 21:03:25.717706 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32b260c90fdd67d8cefe001ded522893cedc17384899047760b04c9393156b0a"} err="failed to get container status \"32b260c90fdd67d8cefe001ded522893cedc17384899047760b04c9393156b0a\": rpc error: code = NotFound desc = could not find container \"32b260c90fdd67d8cefe001ded522893cedc17384899047760b04c9393156b0a\": container with ID starting with 32b260c90fdd67d8cefe001ded522893cedc17384899047760b04c9393156b0a not found: ID does not exist"
Feb 17 21:03:25 crc kubenswrapper[4793]: I0217 21:03:25.717718 4793 scope.go:117] "RemoveContainer" containerID="688f2616168e0a6cc62750e6b85263466edf6d5799c2692ed057ab6378c0cde5"
Feb 17 21:03:25 crc kubenswrapper[4793]: E0217 21:03:25.717939 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"688f2616168e0a6cc62750e6b85263466edf6d5799c2692ed057ab6378c0cde5\": container with ID starting with 688f2616168e0a6cc62750e6b85263466edf6d5799c2692ed057ab6378c0cde5 not found: ID does not exist" containerID="688f2616168e0a6cc62750e6b85263466edf6d5799c2692ed057ab6378c0cde5"
Feb 17 21:03:25 crc kubenswrapper[4793]: I0217 21:03:25.717961 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"688f2616168e0a6cc62750e6b85263466edf6d5799c2692ed057ab6378c0cde5"} err="failed to get container status \"688f2616168e0a6cc62750e6b85263466edf6d5799c2692ed057ab6378c0cde5\": rpc error: code = NotFound desc = could not find container \"688f2616168e0a6cc62750e6b85263466edf6d5799c2692ed057ab6378c0cde5\": container with ID starting with 688f2616168e0a6cc62750e6b85263466edf6d5799c2692ed057ab6378c0cde5 not found: ID does not exist"
Feb 17 21:03:27 crc kubenswrapper[4793]: I0217 21:03:27.552581 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2eb49f05-3e98-474c-b613-640874dc3e54" path="/var/lib/kubelet/pods/2eb49f05-3e98-474c-b613-640874dc3e54/volumes"
Feb 17 21:03:35 crc kubenswrapper[4793]: I0217 21:03:35.551823 4793 scope.go:117] "RemoveContainer" containerID="2f49b2ce7fdb91bdb720b464fe6c5a5b25ba65aacf3eb660f70135f8de8e2fc0"
Feb 17 21:03:35 crc kubenswrapper[4793]: E0217 21:03:35.553466 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 21:03:46 crc kubenswrapper[4793]: I0217 21:03:46.539384 4793 scope.go:117] "RemoveContainer" containerID="2f49b2ce7fdb91bdb720b464fe6c5a5b25ba65aacf3eb660f70135f8de8e2fc0"
Feb 17 21:03:46 crc kubenswrapper[4793]: E0217 21:03:46.540074 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 21:03:50 crc kubenswrapper[4793]: I0217 21:03:50.102291 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 21:03:50 crc kubenswrapper[4793]: I0217 21:03:50.103030 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 21:03:59 crc kubenswrapper[4793]: I0217 21:03:59.545632 4793 scope.go:117] "RemoveContainer" containerID="2f49b2ce7fdb91bdb720b464fe6c5a5b25ba65aacf3eb660f70135f8de8e2fc0"
Feb 17 21:03:59 crc kubenswrapper[4793]: E0217 21:03:59.546590 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(06ecbc8e-aa9f-4025-883d-65e4c000d986)\"" pod="openstack/watcher-applier-0" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986"
Feb 17 21:04:10 crc kubenswrapper[4793]: I0217 21:04:10.539502 4793 scope.go:117] "RemoveContainer" containerID="2f49b2ce7fdb91bdb720b464fe6c5a5b25ba65aacf3eb660f70135f8de8e2fc0"
Feb 17 21:04:11 crc kubenswrapper[4793]: I0217 21:04:11.083449 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"06ecbc8e-aa9f-4025-883d-65e4c000d986","Type":"ContainerStarted","Data":"3a06348bf9cf66a29daa14de1578bc3b2a65b24cda4371b3fa27c4291241bcbc"}
Feb 17 21:04:11 crc kubenswrapper[4793]: I0217 21:04:11.962832 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0"
Feb 17 21:04:11 crc kubenswrapper[4793]: I0217 21:04:11.962887 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0"
Feb 17 21:04:11 crc kubenswrapper[4793]: I0217 21:04:11.995108 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0"
Feb 17 21:04:12 crc kubenswrapper[4793]: I0217 21:04:12.124981 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0"
Feb 17 21:04:12 crc kubenswrapper[4793]: I0217 21:04:12.170219 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"]
Feb 17 21:04:14 crc kubenswrapper[4793]: I0217 21:04:14.112145 4793 generic.go:334] "Generic (PLEG): container finished" podID="06ecbc8e-aa9f-4025-883d-65e4c000d986" containerID="3a06348bf9cf66a29daa14de1578bc3b2a65b24cda4371b3fa27c4291241bcbc" exitCode=1
Feb 17 21:04:14 crc kubenswrapper[4793]: I0217 21:04:14.112197 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"06ecbc8e-aa9f-4025-883d-65e4c000d986","Type":"ContainerDied","Data":"3a06348bf9cf66a29daa14de1578bc3b2a65b24cda4371b3fa27c4291241bcbc"}
Feb 17 21:04:14 crc kubenswrapper[4793]: I0217 21:04:14.112604 4793 scope.go:117] "RemoveContainer" containerID="2f49b2ce7fdb91bdb720b464fe6c5a5b25ba65aacf3eb660f70135f8de8e2fc0"
Feb 17 21:04:14 crc kubenswrapper[4793]: I0217 21:04:14.602358 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0"
Feb 17 21:04:14 crc kubenswrapper[4793]: I0217 21:04:14.763009 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06ecbc8e-aa9f-4025-883d-65e4c000d986-config-data\") pod \"06ecbc8e-aa9f-4025-883d-65e4c000d986\" (UID: \"06ecbc8e-aa9f-4025-883d-65e4c000d986\") "
Feb 17 21:04:14 crc kubenswrapper[4793]: I0217 21:04:14.763071 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06ecbc8e-aa9f-4025-883d-65e4c000d986-logs\") pod \"06ecbc8e-aa9f-4025-883d-65e4c000d986\" (UID: \"06ecbc8e-aa9f-4025-883d-65e4c000d986\") "
Feb 17 21:04:14 crc kubenswrapper[4793]: I0217 21:04:14.763107 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmhm9\" (UniqueName: \"kubernetes.io/projected/06ecbc8e-aa9f-4025-883d-65e4c000d986-kube-api-access-nmhm9\") pod \"06ecbc8e-aa9f-4025-883d-65e4c000d986\" (UID: \"06ecbc8e-aa9f-4025-883d-65e4c000d986\") "
Feb 17 21:04:14 crc kubenswrapper[4793]: I0217 21:04:14.763294 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06ecbc8e-aa9f-4025-883d-65e4c000d986-combined-ca-bundle\") pod \"06ecbc8e-aa9f-4025-883d-65e4c000d986\" (UID: \"06ecbc8e-aa9f-4025-883d-65e4c000d986\") "
Feb 17 21:04:14 crc kubenswrapper[4793]: I0217 21:04:14.763459 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06ecbc8e-aa9f-4025-883d-65e4c000d986-logs" (OuterVolumeSpecName: "logs") pod "06ecbc8e-aa9f-4025-883d-65e4c000d986" (UID: "06ecbc8e-aa9f-4025-883d-65e4c000d986"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 21:04:14 crc kubenswrapper[4793]: I0217 21:04:14.763829 4793 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06ecbc8e-aa9f-4025-883d-65e4c000d986-logs\") on node \"crc\" DevicePath \"\""
Feb 17 21:04:14 crc kubenswrapper[4793]: I0217 21:04:14.768118 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06ecbc8e-aa9f-4025-883d-65e4c000d986-kube-api-access-nmhm9" (OuterVolumeSpecName: "kube-api-access-nmhm9") pod "06ecbc8e-aa9f-4025-883d-65e4c000d986" (UID: "06ecbc8e-aa9f-4025-883d-65e4c000d986"). InnerVolumeSpecName "kube-api-access-nmhm9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 21:04:14 crc kubenswrapper[4793]: I0217 21:04:14.801529 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06ecbc8e-aa9f-4025-883d-65e4c000d986-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "06ecbc8e-aa9f-4025-883d-65e4c000d986" (UID: "06ecbc8e-aa9f-4025-883d-65e4c000d986"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 21:04:14 crc kubenswrapper[4793]: I0217 21:04:14.821323 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06ecbc8e-aa9f-4025-883d-65e4c000d986-config-data" (OuterVolumeSpecName: "config-data") pod "06ecbc8e-aa9f-4025-883d-65e4c000d986" (UID: "06ecbc8e-aa9f-4025-883d-65e4c000d986"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 21:04:14 crc kubenswrapper[4793]: I0217 21:04:14.865856 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06ecbc8e-aa9f-4025-883d-65e4c000d986-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 21:04:14 crc kubenswrapper[4793]: I0217 21:04:14.865885 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmhm9\" (UniqueName: \"kubernetes.io/projected/06ecbc8e-aa9f-4025-883d-65e4c000d986-kube-api-access-nmhm9\") on node \"crc\" DevicePath \"\""
Feb 17 21:04:14 crc kubenswrapper[4793]: I0217 21:04:14.865898 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06ecbc8e-aa9f-4025-883d-65e4c000d986-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 21:04:15 crc kubenswrapper[4793]: I0217 21:04:15.123857 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"06ecbc8e-aa9f-4025-883d-65e4c000d986","Type":"ContainerDied","Data":"b2292ac6d886eac4ab9a1d439a8f84c5aeab8ebf7f9d482054408ecb24a57950"}
Feb 17 21:04:15 crc kubenswrapper[4793]: I0217 21:04:15.123915 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0"
Feb 17 21:04:15 crc kubenswrapper[4793]: I0217 21:04:15.123921 4793 scope.go:117] "RemoveContainer" containerID="3a06348bf9cf66a29daa14de1578bc3b2a65b24cda4371b3fa27c4291241bcbc"
Feb 17 21:04:15 crc kubenswrapper[4793]: I0217 21:04:15.239307 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"]
Feb 17 21:04:15 crc kubenswrapper[4793]: I0217 21:04:15.253576 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-applier-0"]
Feb 17 21:04:15 crc kubenswrapper[4793]: I0217 21:04:15.266758 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"]
Feb 17 21:04:15 crc kubenswrapper[4793]: E0217 21:04:15.267230 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" containerName="watcher-applier"
Feb 17 21:04:15 crc kubenswrapper[4793]: I0217 21:04:15.267250 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" containerName="watcher-applier"
Feb 17 21:04:15 crc kubenswrapper[4793]: E0217 21:04:15.267261 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2eb49f05-3e98-474c-b613-640874dc3e54" containerName="extract-utilities"
Feb 17 21:04:15 crc kubenswrapper[4793]: I0217 21:04:15.267268 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eb49f05-3e98-474c-b613-640874dc3e54" containerName="extract-utilities"
Feb 17 21:04:15 crc kubenswrapper[4793]: E0217 21:04:15.267278 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" containerName="watcher-applier"
Feb 17 21:04:15 crc kubenswrapper[4793]: I0217 21:04:15.267283 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" containerName="watcher-applier"
Feb 17 21:04:15 crc kubenswrapper[4793]: E0217 21:04:15.267293 4793 cpu_manager.go:410] "RemoveStaleState:
removing container" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" containerName="watcher-applier" Feb 17 21:04:15 crc kubenswrapper[4793]: I0217 21:04:15.267299 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" containerName="watcher-applier" Feb 17 21:04:15 crc kubenswrapper[4793]: E0217 21:04:15.267310 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2eb49f05-3e98-474c-b613-640874dc3e54" containerName="registry-server" Feb 17 21:04:15 crc kubenswrapper[4793]: I0217 21:04:15.267316 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eb49f05-3e98-474c-b613-640874dc3e54" containerName="registry-server" Feb 17 21:04:15 crc kubenswrapper[4793]: E0217 21:04:15.267339 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" containerName="watcher-applier" Feb 17 21:04:15 crc kubenswrapper[4793]: I0217 21:04:15.267346 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" containerName="watcher-applier" Feb 17 21:04:15 crc kubenswrapper[4793]: E0217 21:04:15.267356 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" containerName="watcher-applier" Feb 17 21:04:15 crc kubenswrapper[4793]: I0217 21:04:15.267363 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" containerName="watcher-applier" Feb 17 21:04:15 crc kubenswrapper[4793]: E0217 21:04:15.267375 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" containerName="watcher-applier" Feb 17 21:04:15 crc kubenswrapper[4793]: I0217 21:04:15.267382 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" containerName="watcher-applier" Feb 17 21:04:15 crc kubenswrapper[4793]: E0217 21:04:15.267395 4793 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" containerName="watcher-applier" Feb 17 21:04:15 crc kubenswrapper[4793]: I0217 21:04:15.267402 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" containerName="watcher-applier" Feb 17 21:04:15 crc kubenswrapper[4793]: E0217 21:04:15.267408 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" containerName="watcher-applier" Feb 17 21:04:15 crc kubenswrapper[4793]: I0217 21:04:15.267413 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" containerName="watcher-applier" Feb 17 21:04:15 crc kubenswrapper[4793]: E0217 21:04:15.267429 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2eb49f05-3e98-474c-b613-640874dc3e54" containerName="extract-content" Feb 17 21:04:15 crc kubenswrapper[4793]: I0217 21:04:15.267435 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eb49f05-3e98-474c-b613-640874dc3e54" containerName="extract-content" Feb 17 21:04:15 crc kubenswrapper[4793]: E0217 21:04:15.267448 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" containerName="watcher-applier" Feb 17 21:04:15 crc kubenswrapper[4793]: I0217 21:04:15.267453 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" containerName="watcher-applier" Feb 17 21:04:15 crc kubenswrapper[4793]: I0217 21:04:15.267614 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" containerName="watcher-applier" Feb 17 21:04:15 crc kubenswrapper[4793]: I0217 21:04:15.267632 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" containerName="watcher-applier" Feb 17 21:04:15 crc kubenswrapper[4793]: I0217 21:04:15.267640 4793 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" containerName="watcher-applier" Feb 17 21:04:15 crc kubenswrapper[4793]: I0217 21:04:15.267650 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" containerName="watcher-applier" Feb 17 21:04:15 crc kubenswrapper[4793]: I0217 21:04:15.267656 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eb49f05-3e98-474c-b613-640874dc3e54" containerName="registry-server" Feb 17 21:04:15 crc kubenswrapper[4793]: I0217 21:04:15.267666 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" containerName="watcher-applier" Feb 17 21:04:15 crc kubenswrapper[4793]: I0217 21:04:15.267673 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" containerName="watcher-applier" Feb 17 21:04:15 crc kubenswrapper[4793]: I0217 21:04:15.267707 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" containerName="watcher-applier" Feb 17 21:04:15 crc kubenswrapper[4793]: I0217 21:04:15.267715 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" containerName="watcher-applier" Feb 17 21:04:15 crc kubenswrapper[4793]: I0217 21:04:15.267722 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" containerName="watcher-applier" Feb 17 21:04:15 crc kubenswrapper[4793]: I0217 21:04:15.267730 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" containerName="watcher-applier" Feb 17 21:04:15 crc kubenswrapper[4793]: I0217 21:04:15.267741 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" containerName="watcher-applier" Feb 17 21:04:15 crc kubenswrapper[4793]: I0217 21:04:15.268437 4793 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Feb 17 21:04:15 crc kubenswrapper[4793]: I0217 21:04:15.272501 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 17 21:04:15 crc kubenswrapper[4793]: I0217 21:04:15.274407 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Feb 17 21:04:15 crc kubenswrapper[4793]: I0217 21:04:15.377535 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02d26164-0fa4-4020-9224-b7760a490987-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"02d26164-0fa4-4020-9224-b7760a490987\") " pod="openstack/watcher-applier-0" Feb 17 21:04:15 crc kubenswrapper[4793]: I0217 21:04:15.377893 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbz9p\" (UniqueName: \"kubernetes.io/projected/02d26164-0fa4-4020-9224-b7760a490987-kube-api-access-bbz9p\") pod \"watcher-applier-0\" (UID: \"02d26164-0fa4-4020-9224-b7760a490987\") " pod="openstack/watcher-applier-0" Feb 17 21:04:15 crc kubenswrapper[4793]: I0217 21:04:15.377959 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02d26164-0fa4-4020-9224-b7760a490987-config-data\") pod \"watcher-applier-0\" (UID: \"02d26164-0fa4-4020-9224-b7760a490987\") " pod="openstack/watcher-applier-0" Feb 17 21:04:15 crc kubenswrapper[4793]: I0217 21:04:15.377990 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02d26164-0fa4-4020-9224-b7760a490987-logs\") pod \"watcher-applier-0\" (UID: \"02d26164-0fa4-4020-9224-b7760a490987\") " pod="openstack/watcher-applier-0" Feb 17 21:04:15 crc kubenswrapper[4793]: I0217 
21:04:15.479813 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbz9p\" (UniqueName: \"kubernetes.io/projected/02d26164-0fa4-4020-9224-b7760a490987-kube-api-access-bbz9p\") pod \"watcher-applier-0\" (UID: \"02d26164-0fa4-4020-9224-b7760a490987\") " pod="openstack/watcher-applier-0" Feb 17 21:04:15 crc kubenswrapper[4793]: I0217 21:04:15.479926 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02d26164-0fa4-4020-9224-b7760a490987-config-data\") pod \"watcher-applier-0\" (UID: \"02d26164-0fa4-4020-9224-b7760a490987\") " pod="openstack/watcher-applier-0" Feb 17 21:04:15 crc kubenswrapper[4793]: I0217 21:04:15.479960 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02d26164-0fa4-4020-9224-b7760a490987-logs\") pod \"watcher-applier-0\" (UID: \"02d26164-0fa4-4020-9224-b7760a490987\") " pod="openstack/watcher-applier-0" Feb 17 21:04:15 crc kubenswrapper[4793]: I0217 21:04:15.480074 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02d26164-0fa4-4020-9224-b7760a490987-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"02d26164-0fa4-4020-9224-b7760a490987\") " pod="openstack/watcher-applier-0" Feb 17 21:04:15 crc kubenswrapper[4793]: I0217 21:04:15.481387 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02d26164-0fa4-4020-9224-b7760a490987-logs\") pod \"watcher-applier-0\" (UID: \"02d26164-0fa4-4020-9224-b7760a490987\") " pod="openstack/watcher-applier-0" Feb 17 21:04:15 crc kubenswrapper[4793]: I0217 21:04:15.485143 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/02d26164-0fa4-4020-9224-b7760a490987-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"02d26164-0fa4-4020-9224-b7760a490987\") " pod="openstack/watcher-applier-0" Feb 17 21:04:15 crc kubenswrapper[4793]: I0217 21:04:15.485424 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02d26164-0fa4-4020-9224-b7760a490987-config-data\") pod \"watcher-applier-0\" (UID: \"02d26164-0fa4-4020-9224-b7760a490987\") " pod="openstack/watcher-applier-0" Feb 17 21:04:15 crc kubenswrapper[4793]: I0217 21:04:15.499914 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbz9p\" (UniqueName: \"kubernetes.io/projected/02d26164-0fa4-4020-9224-b7760a490987-kube-api-access-bbz9p\") pod \"watcher-applier-0\" (UID: \"02d26164-0fa4-4020-9224-b7760a490987\") " pod="openstack/watcher-applier-0" Feb 17 21:04:15 crc kubenswrapper[4793]: I0217 21:04:15.549407 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" path="/var/lib/kubelet/pods/06ecbc8e-aa9f-4025-883d-65e4c000d986/volumes" Feb 17 21:04:15 crc kubenswrapper[4793]: I0217 21:04:15.595636 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Feb 17 21:04:16 crc kubenswrapper[4793]: I0217 21:04:16.085426 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 17 21:04:16 crc kubenswrapper[4793]: I0217 21:04:16.131982 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerStarted","Data":"e356ad2586d97c9d78435ba68d170b04bc05dc6573ae8c212c9242f95d62a149"} Feb 17 21:04:17 crc kubenswrapper[4793]: I0217 21:04:17.145358 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerStarted","Data":"08063053eccbfbe77dd48af7c50502cf5d8d796edb80eb172900e09da0f36d9e"} Feb 17 21:04:17 crc kubenswrapper[4793]: I0217 21:04:17.169943 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=2.169925579 podStartE2EDuration="2.169925579s" podCreationTimestamp="2026-02-17 21:04:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 21:04:17.162916056 +0000 UTC m=+3332.454614367" watchObservedRunningTime="2026-02-17 21:04:17.169925579 +0000 UTC m=+3332.461623890" Feb 17 21:04:19 crc kubenswrapper[4793]: I0217 21:04:19.171021 4793 generic.go:334] "Generic (PLEG): container finished" podID="02d26164-0fa4-4020-9224-b7760a490987" containerID="08063053eccbfbe77dd48af7c50502cf5d8d796edb80eb172900e09da0f36d9e" exitCode=1 Feb 17 21:04:19 crc kubenswrapper[4793]: I0217 21:04:19.171147 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerDied","Data":"08063053eccbfbe77dd48af7c50502cf5d8d796edb80eb172900e09da0f36d9e"} Feb 17 21:04:19 crc kubenswrapper[4793]: I0217 21:04:19.172264 
4793 scope.go:117] "RemoveContainer" containerID="08063053eccbfbe77dd48af7c50502cf5d8d796edb80eb172900e09da0f36d9e" Feb 17 21:04:20 crc kubenswrapper[4793]: I0217 21:04:20.019585 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 21:04:20 crc kubenswrapper[4793]: I0217 21:04:20.102133 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 21:04:20 crc kubenswrapper[4793]: I0217 21:04:20.102194 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 21:04:20 crc kubenswrapper[4793]: I0217 21:04:20.181152 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerStarted","Data":"a705ce00d5a1695a33e71585d38dc5394c847aa5d41439cb28ee9e97ad9bdfec"} Feb 17 21:04:20 crc kubenswrapper[4793]: I0217 21:04:20.596134 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 17 21:04:21 crc kubenswrapper[4793]: I0217 21:04:21.002643 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 21:04:22 crc kubenswrapper[4793]: I0217 21:04:22.197932 4793 generic.go:334] "Generic (PLEG): container finished" podID="02d26164-0fa4-4020-9224-b7760a490987" containerID="a705ce00d5a1695a33e71585d38dc5394c847aa5d41439cb28ee9e97ad9bdfec" exitCode=1 Feb 17 21:04:22 crc kubenswrapper[4793]: I0217 21:04:22.198019 4793 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerDied","Data":"a705ce00d5a1695a33e71585d38dc5394c847aa5d41439cb28ee9e97ad9bdfec"} Feb 17 21:04:22 crc kubenswrapper[4793]: I0217 21:04:22.198280 4793 scope.go:117] "RemoveContainer" containerID="08063053eccbfbe77dd48af7c50502cf5d8d796edb80eb172900e09da0f36d9e" Feb 17 21:04:22 crc kubenswrapper[4793]: I0217 21:04:22.198937 4793 scope.go:117] "RemoveContainer" containerID="a705ce00d5a1695a33e71585d38dc5394c847aa5d41439cb28ee9e97ad9bdfec" Feb 17 21:04:22 crc kubenswrapper[4793]: E0217 21:04:22.201378 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:04:23 crc kubenswrapper[4793]: I0217 21:04:23.569232 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="74e2a040-552e-4736-986f-2abac7315e6a" containerName="rabbitmq" containerID="cri-o://e9f214418cb401df68682faa9adc2e5dfeea2a0cb1b5bd8cf604fbbd37e27f98" gracePeriod=604797 Feb 17 21:04:24 crc kubenswrapper[4793]: I0217 21:04:24.194165 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="9eaaf278-e1ca-4fbe-ab46-478d8846293d" containerName="rabbitmq" containerID="cri-o://2ca6abb240fde91e626ada749897f0a392e721389d81e92af50c16587b06850c" gracePeriod=604797 Feb 17 21:04:25 crc kubenswrapper[4793]: I0217 21:04:25.230247 4793 generic.go:334] "Generic (PLEG): container finished" podID="74e2a040-552e-4736-986f-2abac7315e6a" containerID="e9f214418cb401df68682faa9adc2e5dfeea2a0cb1b5bd8cf604fbbd37e27f98" exitCode=0 Feb 17 21:04:25 crc kubenswrapper[4793]: I0217 21:04:25.230645 
4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"74e2a040-552e-4736-986f-2abac7315e6a","Type":"ContainerDied","Data":"e9f214418cb401df68682faa9adc2e5dfeea2a0cb1b5bd8cf604fbbd37e27f98"} Feb 17 21:04:25 crc kubenswrapper[4793]: I0217 21:04:25.230676 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"74e2a040-552e-4736-986f-2abac7315e6a","Type":"ContainerDied","Data":"f74f2a617ffe0673ac814a61998a9c8fd217a231866f10b2e80a00d1c20c87c5"} Feb 17 21:04:25 crc kubenswrapper[4793]: I0217 21:04:25.230706 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f74f2a617ffe0673ac814a61998a9c8fd217a231866f10b2e80a00d1c20c87c5" Feb 17 21:04:25 crc kubenswrapper[4793]: I0217 21:04:25.297570 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 21:04:25 crc kubenswrapper[4793]: I0217 21:04:25.396495 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/74e2a040-552e-4736-986f-2abac7315e6a-rabbitmq-confd\") pod \"74e2a040-552e-4736-986f-2abac7315e6a\" (UID: \"74e2a040-552e-4736-986f-2abac7315e6a\") " Feb 17 21:04:25 crc kubenswrapper[4793]: I0217 21:04:25.396547 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/74e2a040-552e-4736-986f-2abac7315e6a-rabbitmq-plugins\") pod \"74e2a040-552e-4736-986f-2abac7315e6a\" (UID: \"74e2a040-552e-4736-986f-2abac7315e6a\") " Feb 17 21:04:25 crc kubenswrapper[4793]: I0217 21:04:25.396571 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/74e2a040-552e-4736-986f-2abac7315e6a-rabbitmq-tls\") pod \"74e2a040-552e-4736-986f-2abac7315e6a\" (UID: \"74e2a040-552e-4736-986f-2abac7315e6a\") " 
Feb 17 21:04:25 crc kubenswrapper[4793]: I0217 21:04:25.396649 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74e2a040-552e-4736-986f-2abac7315e6a-config-data\") pod \"74e2a040-552e-4736-986f-2abac7315e6a\" (UID: \"74e2a040-552e-4736-986f-2abac7315e6a\") " Feb 17 21:04:25 crc kubenswrapper[4793]: I0217 21:04:25.396680 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/74e2a040-552e-4736-986f-2abac7315e6a-erlang-cookie-secret\") pod \"74e2a040-552e-4736-986f-2abac7315e6a\" (UID: \"74e2a040-552e-4736-986f-2abac7315e6a\") " Feb 17 21:04:25 crc kubenswrapper[4793]: I0217 21:04:25.396751 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"74e2a040-552e-4736-986f-2abac7315e6a\" (UID: \"74e2a040-552e-4736-986f-2abac7315e6a\") " Feb 17 21:04:25 crc kubenswrapper[4793]: I0217 21:04:25.396778 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/74e2a040-552e-4736-986f-2abac7315e6a-pod-info\") pod \"74e2a040-552e-4736-986f-2abac7315e6a\" (UID: \"74e2a040-552e-4736-986f-2abac7315e6a\") " Feb 17 21:04:25 crc kubenswrapper[4793]: I0217 21:04:25.396797 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/74e2a040-552e-4736-986f-2abac7315e6a-server-conf\") pod \"74e2a040-552e-4736-986f-2abac7315e6a\" (UID: \"74e2a040-552e-4736-986f-2abac7315e6a\") " Feb 17 21:04:25 crc kubenswrapper[4793]: I0217 21:04:25.396835 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/74e2a040-552e-4736-986f-2abac7315e6a-plugins-conf\") pod 
\"74e2a040-552e-4736-986f-2abac7315e6a\" (UID: \"74e2a040-552e-4736-986f-2abac7315e6a\") " Feb 17 21:04:25 crc kubenswrapper[4793]: I0217 21:04:25.396913 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/74e2a040-552e-4736-986f-2abac7315e6a-rabbitmq-erlang-cookie\") pod \"74e2a040-552e-4736-986f-2abac7315e6a\" (UID: \"74e2a040-552e-4736-986f-2abac7315e6a\") " Feb 17 21:04:25 crc kubenswrapper[4793]: I0217 21:04:25.396931 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjknx\" (UniqueName: \"kubernetes.io/projected/74e2a040-552e-4736-986f-2abac7315e6a-kube-api-access-rjknx\") pod \"74e2a040-552e-4736-986f-2abac7315e6a\" (UID: \"74e2a040-552e-4736-986f-2abac7315e6a\") " Feb 17 21:04:25 crc kubenswrapper[4793]: I0217 21:04:25.398584 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74e2a040-552e-4736-986f-2abac7315e6a-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "74e2a040-552e-4736-986f-2abac7315e6a" (UID: "74e2a040-552e-4736-986f-2abac7315e6a"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 21:04:25 crc kubenswrapper[4793]: I0217 21:04:25.399192 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74e2a040-552e-4736-986f-2abac7315e6a-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "74e2a040-552e-4736-986f-2abac7315e6a" (UID: "74e2a040-552e-4736-986f-2abac7315e6a"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 21:04:25 crc kubenswrapper[4793]: I0217 21:04:25.399424 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74e2a040-552e-4736-986f-2abac7315e6a-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "74e2a040-552e-4736-986f-2abac7315e6a" (UID: "74e2a040-552e-4736-986f-2abac7315e6a"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 21:04:25 crc kubenswrapper[4793]: I0217 21:04:25.408436 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "74e2a040-552e-4736-986f-2abac7315e6a" (UID: "74e2a040-552e-4736-986f-2abac7315e6a"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 17 21:04:25 crc kubenswrapper[4793]: I0217 21:04:25.412651 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/74e2a040-552e-4736-986f-2abac7315e6a-pod-info" (OuterVolumeSpecName: "pod-info") pod "74e2a040-552e-4736-986f-2abac7315e6a" (UID: "74e2a040-552e-4736-986f-2abac7315e6a"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 17 21:04:25 crc kubenswrapper[4793]: I0217 21:04:25.412886 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74e2a040-552e-4736-986f-2abac7315e6a-kube-api-access-rjknx" (OuterVolumeSpecName: "kube-api-access-rjknx") pod "74e2a040-552e-4736-986f-2abac7315e6a" (UID: "74e2a040-552e-4736-986f-2abac7315e6a"). InnerVolumeSpecName "kube-api-access-rjknx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 21:04:25 crc kubenswrapper[4793]: I0217 21:04:25.416180 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74e2a040-552e-4736-986f-2abac7315e6a-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "74e2a040-552e-4736-986f-2abac7315e6a" (UID: "74e2a040-552e-4736-986f-2abac7315e6a"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 21:04:25 crc kubenswrapper[4793]: I0217 21:04:25.431879 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74e2a040-552e-4736-986f-2abac7315e6a-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "74e2a040-552e-4736-986f-2abac7315e6a" (UID: "74e2a040-552e-4736-986f-2abac7315e6a"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 21:04:25 crc kubenswrapper[4793]: I0217 21:04:25.483419 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74e2a040-552e-4736-986f-2abac7315e6a-config-data" (OuterVolumeSpecName: "config-data") pod "74e2a040-552e-4736-986f-2abac7315e6a" (UID: "74e2a040-552e-4736-986f-2abac7315e6a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 21:04:25 crc kubenswrapper[4793]: I0217 21:04:25.499761 4793 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/74e2a040-552e-4736-986f-2abac7315e6a-plugins-conf\") on node \"crc\" DevicePath \"\""
Feb 17 21:04:25 crc kubenswrapper[4793]: I0217 21:04:25.499791 4793 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/74e2a040-552e-4736-986f-2abac7315e6a-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Feb 17 21:04:25 crc kubenswrapper[4793]: I0217 21:04:25.499801 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjknx\" (UniqueName: \"kubernetes.io/projected/74e2a040-552e-4736-986f-2abac7315e6a-kube-api-access-rjknx\") on node \"crc\" DevicePath \"\""
Feb 17 21:04:25 crc kubenswrapper[4793]: I0217 21:04:25.499809 4793 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/74e2a040-552e-4736-986f-2abac7315e6a-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Feb 17 21:04:25 crc kubenswrapper[4793]: I0217 21:04:25.499817 4793 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/74e2a040-552e-4736-986f-2abac7315e6a-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Feb 17 21:04:25 crc kubenswrapper[4793]: I0217 21:04:25.499825 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74e2a040-552e-4736-986f-2abac7315e6a-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 21:04:25 crc kubenswrapper[4793]: I0217 21:04:25.499832 4793 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/74e2a040-552e-4736-986f-2abac7315e6a-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Feb 17 21:04:25 crc kubenswrapper[4793]: I0217 21:04:25.499854 4793 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" "
Feb 17 21:04:25 crc kubenswrapper[4793]: I0217 21:04:25.499862 4793 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/74e2a040-552e-4736-986f-2abac7315e6a-pod-info\") on node \"crc\" DevicePath \"\""
Feb 17 21:04:25 crc kubenswrapper[4793]: I0217 21:04:25.551326 4793 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc"
Feb 17 21:04:25 crc kubenswrapper[4793]: I0217 21:04:25.552158 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74e2a040-552e-4736-986f-2abac7315e6a-server-conf" (OuterVolumeSpecName: "server-conf") pod "74e2a040-552e-4736-986f-2abac7315e6a" (UID: "74e2a040-552e-4736-986f-2abac7315e6a"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 21:04:25 crc kubenswrapper[4793]: I0217 21:04:25.596265 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0"
Feb 17 21:04:25 crc kubenswrapper[4793]: I0217 21:04:25.596307 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0"
Feb 17 21:04:25 crc kubenswrapper[4793]: I0217 21:04:25.596632 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-applier-0"
Feb 17 21:04:25 crc kubenswrapper[4793]: I0217 21:04:25.596906 4793 scope.go:117] "RemoveContainer" containerID="a705ce00d5a1695a33e71585d38dc5394c847aa5d41439cb28ee9e97ad9bdfec"
Feb 17 21:04:25 crc kubenswrapper[4793]: E0217 21:04:25.597205 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:04:25 crc kubenswrapper[4793]: I0217 21:04:25.602439 4793 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\""
Feb 17 21:04:25 crc kubenswrapper[4793]: I0217 21:04:25.602473 4793 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/74e2a040-552e-4736-986f-2abac7315e6a-server-conf\") on node \"crc\" DevicePath \"\""
Feb 17 21:04:25 crc kubenswrapper[4793]: I0217 21:04:25.611208 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74e2a040-552e-4736-986f-2abac7315e6a-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "74e2a040-552e-4736-986f-2abac7315e6a" (UID: "74e2a040-552e-4736-986f-2abac7315e6a"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 21:04:25 crc kubenswrapper[4793]: I0217 21:04:25.703910 4793 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/74e2a040-552e-4736-986f-2abac7315e6a-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Feb 17 21:04:25 crc kubenswrapper[4793]: I0217 21:04:25.934298 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.110711 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9eaaf278-e1ca-4fbe-ab46-478d8846293d-erlang-cookie-secret\") pod \"9eaaf278-e1ca-4fbe-ab46-478d8846293d\" (UID: \"9eaaf278-e1ca-4fbe-ab46-478d8846293d\") "
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.110833 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9eaaf278-e1ca-4fbe-ab46-478d8846293d-pod-info\") pod \"9eaaf278-e1ca-4fbe-ab46-478d8846293d\" (UID: \"9eaaf278-e1ca-4fbe-ab46-478d8846293d\") "
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.110859 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9eaaf278-e1ca-4fbe-ab46-478d8846293d-server-conf\") pod \"9eaaf278-e1ca-4fbe-ab46-478d8846293d\" (UID: \"9eaaf278-e1ca-4fbe-ab46-478d8846293d\") "
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.110887 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54tp2\" (UniqueName: \"kubernetes.io/projected/9eaaf278-e1ca-4fbe-ab46-478d8846293d-kube-api-access-54tp2\") pod \"9eaaf278-e1ca-4fbe-ab46-478d8846293d\" (UID: \"9eaaf278-e1ca-4fbe-ab46-478d8846293d\") "
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.110918 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9eaaf278-e1ca-4fbe-ab46-478d8846293d-rabbitmq-plugins\") pod \"9eaaf278-e1ca-4fbe-ab46-478d8846293d\" (UID: \"9eaaf278-e1ca-4fbe-ab46-478d8846293d\") "
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.110957 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9eaaf278-e1ca-4fbe-ab46-478d8846293d-config-data\") pod \"9eaaf278-e1ca-4fbe-ab46-478d8846293d\" (UID: \"9eaaf278-e1ca-4fbe-ab46-478d8846293d\") "
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.110998 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9eaaf278-e1ca-4fbe-ab46-478d8846293d-rabbitmq-confd\") pod \"9eaaf278-e1ca-4fbe-ab46-478d8846293d\" (UID: \"9eaaf278-e1ca-4fbe-ab46-478d8846293d\") "
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.111015 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9eaaf278-e1ca-4fbe-ab46-478d8846293d-rabbitmq-erlang-cookie\") pod \"9eaaf278-e1ca-4fbe-ab46-478d8846293d\" (UID: \"9eaaf278-e1ca-4fbe-ab46-478d8846293d\") "
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.111062 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"9eaaf278-e1ca-4fbe-ab46-478d8846293d\" (UID: \"9eaaf278-e1ca-4fbe-ab46-478d8846293d\") "
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.111108 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9eaaf278-e1ca-4fbe-ab46-478d8846293d-rabbitmq-tls\") pod \"9eaaf278-e1ca-4fbe-ab46-478d8846293d\" (UID: \"9eaaf278-e1ca-4fbe-ab46-478d8846293d\") "
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.111146 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9eaaf278-e1ca-4fbe-ab46-478d8846293d-plugins-conf\") pod \"9eaaf278-e1ca-4fbe-ab46-478d8846293d\" (UID: \"9eaaf278-e1ca-4fbe-ab46-478d8846293d\") "
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.115448 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9eaaf278-e1ca-4fbe-ab46-478d8846293d-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "9eaaf278-e1ca-4fbe-ab46-478d8846293d" (UID: "9eaaf278-e1ca-4fbe-ab46-478d8846293d"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.118076 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eaaf278-e1ca-4fbe-ab46-478d8846293d-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "9eaaf278-e1ca-4fbe-ab46-478d8846293d" (UID: "9eaaf278-e1ca-4fbe-ab46-478d8846293d"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.118988 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "9eaaf278-e1ca-4fbe-ab46-478d8846293d" (UID: "9eaaf278-e1ca-4fbe-ab46-478d8846293d"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.122380 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9eaaf278-e1ca-4fbe-ab46-478d8846293d-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "9eaaf278-e1ca-4fbe-ab46-478d8846293d" (UID: "9eaaf278-e1ca-4fbe-ab46-478d8846293d"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.125866 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9eaaf278-e1ca-4fbe-ab46-478d8846293d-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "9eaaf278-e1ca-4fbe-ab46-478d8846293d" (UID: "9eaaf278-e1ca-4fbe-ab46-478d8846293d"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.126854 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9eaaf278-e1ca-4fbe-ab46-478d8846293d-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "9eaaf278-e1ca-4fbe-ab46-478d8846293d" (UID: "9eaaf278-e1ca-4fbe-ab46-478d8846293d"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.129904 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9eaaf278-e1ca-4fbe-ab46-478d8846293d-kube-api-access-54tp2" (OuterVolumeSpecName: "kube-api-access-54tp2") pod "9eaaf278-e1ca-4fbe-ab46-478d8846293d" (UID: "9eaaf278-e1ca-4fbe-ab46-478d8846293d"). InnerVolumeSpecName "kube-api-access-54tp2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.137657 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/9eaaf278-e1ca-4fbe-ab46-478d8846293d-pod-info" (OuterVolumeSpecName: "pod-info") pod "9eaaf278-e1ca-4fbe-ab46-478d8846293d" (UID: "9eaaf278-e1ca-4fbe-ab46-478d8846293d"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.152165 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9eaaf278-e1ca-4fbe-ab46-478d8846293d-config-data" (OuterVolumeSpecName: "config-data") pod "9eaaf278-e1ca-4fbe-ab46-478d8846293d" (UID: "9eaaf278-e1ca-4fbe-ab46-478d8846293d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.202204 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9eaaf278-e1ca-4fbe-ab46-478d8846293d-server-conf" (OuterVolumeSpecName: "server-conf") pod "9eaaf278-e1ca-4fbe-ab46-478d8846293d" (UID: "9eaaf278-e1ca-4fbe-ab46-478d8846293d"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.213018 4793 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9eaaf278-e1ca-4fbe-ab46-478d8846293d-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.213077 4793 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9eaaf278-e1ca-4fbe-ab46-478d8846293d-pod-info\") on node \"crc\" DevicePath \"\""
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.213087 4793 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9eaaf278-e1ca-4fbe-ab46-478d8846293d-server-conf\") on node \"crc\" DevicePath \"\""
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.213096 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54tp2\" (UniqueName: \"kubernetes.io/projected/9eaaf278-e1ca-4fbe-ab46-478d8846293d-kube-api-access-54tp2\") on node \"crc\" DevicePath \"\""
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.213104 4793 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9eaaf278-e1ca-4fbe-ab46-478d8846293d-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.213112 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9eaaf278-e1ca-4fbe-ab46-478d8846293d-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.213122 4793 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9eaaf278-e1ca-4fbe-ab46-478d8846293d-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.213145 4793 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" "
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.213154 4793 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9eaaf278-e1ca-4fbe-ab46-478d8846293d-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.213162 4793 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9eaaf278-e1ca-4fbe-ab46-478d8846293d-plugins-conf\") on node \"crc\" DevicePath \"\""
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.236673 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9eaaf278-e1ca-4fbe-ab46-478d8846293d-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "9eaaf278-e1ca-4fbe-ab46-478d8846293d" (UID: "9eaaf278-e1ca-4fbe-ab46-478d8846293d"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.267382 4793 generic.go:334] "Generic (PLEG): container finished" podID="9eaaf278-e1ca-4fbe-ab46-478d8846293d" containerID="2ca6abb240fde91e626ada749897f0a392e721389d81e92af50c16587b06850c" exitCode=0
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.267467 4793 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.267523 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9eaaf278-e1ca-4fbe-ab46-478d8846293d","Type":"ContainerDied","Data":"2ca6abb240fde91e626ada749897f0a392e721389d81e92af50c16587b06850c"}
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.267554 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.267571 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9eaaf278-e1ca-4fbe-ab46-478d8846293d","Type":"ContainerDied","Data":"d365add9301fa453f7f5054af2aa5a761315ea75fff22a5f4ac44b6c18a259d4"}
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.267592 4793 scope.go:117] "RemoveContainer" containerID="2ca6abb240fde91e626ada749897f0a392e721389d81e92af50c16587b06850c"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.267877 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.275466 4793 scope.go:117] "RemoveContainer" containerID="a705ce00d5a1695a33e71585d38dc5394c847aa5d41439cb28ee9e97ad9bdfec"
Feb 17 21:04:26 crc kubenswrapper[4793]: E0217 21:04:26.277182 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.305276 4793 scope.go:117] "RemoveContainer" containerID="2e3789ad83d9b9654744085f451c48d2166e66a20fc309b47ee01c24138e1c41"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.314586 4793 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9eaaf278-e1ca-4fbe-ab46-478d8846293d-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.314610 4793 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\""
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.317983 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.330222 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.338582 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.341561 4793 scope.go:117] "RemoveContainer" containerID="2ca6abb240fde91e626ada749897f0a392e721389d81e92af50c16587b06850c"
Feb 17 21:04:26 crc kubenswrapper[4793]: E0217 21:04:26.344582 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ca6abb240fde91e626ada749897f0a392e721389d81e92af50c16587b06850c\": container with ID starting with 2ca6abb240fde91e626ada749897f0a392e721389d81e92af50c16587b06850c not found: ID does not exist" containerID="2ca6abb240fde91e626ada749897f0a392e721389d81e92af50c16587b06850c"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.344772 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ca6abb240fde91e626ada749897f0a392e721389d81e92af50c16587b06850c"} err="failed to get container status \"2ca6abb240fde91e626ada749897f0a392e721389d81e92af50c16587b06850c\": rpc error: code = NotFound desc = could not find container \"2ca6abb240fde91e626ada749897f0a392e721389d81e92af50c16587b06850c\": container with ID starting with 2ca6abb240fde91e626ada749897f0a392e721389d81e92af50c16587b06850c not found: ID does not exist"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.344873 4793 scope.go:117] "RemoveContainer" containerID="2e3789ad83d9b9654744085f451c48d2166e66a20fc309b47ee01c24138e1c41"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.345984 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 17 21:04:26 crc kubenswrapper[4793]: E0217 21:04:26.346073 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e3789ad83d9b9654744085f451c48d2166e66a20fc309b47ee01c24138e1c41\": container with ID starting with 2e3789ad83d9b9654744085f451c48d2166e66a20fc309b47ee01c24138e1c41 not found: ID does not exist" containerID="2e3789ad83d9b9654744085f451c48d2166e66a20fc309b47ee01c24138e1c41"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.346141 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e3789ad83d9b9654744085f451c48d2166e66a20fc309b47ee01c24138e1c41"} err="failed to get container status \"2e3789ad83d9b9654744085f451c48d2166e66a20fc309b47ee01c24138e1c41\": rpc error: code = NotFound desc = could not find container \"2e3789ad83d9b9654744085f451c48d2166e66a20fc309b47ee01c24138e1c41\": container with ID starting with 2e3789ad83d9b9654744085f451c48d2166e66a20fc309b47ee01c24138e1c41 not found: ID does not exist"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.383901 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 17 21:04:26 crc kubenswrapper[4793]: E0217 21:04:26.384545 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" containerName="watcher-applier"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.389728 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" containerName="watcher-applier"
Feb 17 21:04:26 crc kubenswrapper[4793]: E0217 21:04:26.389869 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" containerName="watcher-applier"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.389921 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" containerName="watcher-applier"
Feb 17 21:04:26 crc kubenswrapper[4793]: E0217 21:04:26.389976 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eaaf278-e1ca-4fbe-ab46-478d8846293d" containerName="setup-container"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.390023 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eaaf278-e1ca-4fbe-ab46-478d8846293d" containerName="setup-container"
Feb 17 21:04:26 crc kubenswrapper[4793]: E0217 21:04:26.390111 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74e2a040-552e-4736-986f-2abac7315e6a" containerName="rabbitmq"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.390169 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="74e2a040-552e-4736-986f-2abac7315e6a" containerName="rabbitmq"
Feb 17 21:04:26 crc kubenswrapper[4793]: E0217 21:04:26.390234 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" containerName="watcher-applier"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.390282 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" containerName="watcher-applier"
Feb 17 21:04:26 crc kubenswrapper[4793]: E0217 21:04:26.390351 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74e2a040-552e-4736-986f-2abac7315e6a" containerName="setup-container"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.390399 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="74e2a040-552e-4736-986f-2abac7315e6a" containerName="setup-container"
Feb 17 21:04:26 crc kubenswrapper[4793]: E0217 21:04:26.390467 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eaaf278-e1ca-4fbe-ab46-478d8846293d" containerName="rabbitmq"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.390528 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eaaf278-e1ca-4fbe-ab46-478d8846293d" containerName="rabbitmq"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.390871 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" containerName="watcher-applier"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.390942 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="9eaaf278-e1ca-4fbe-ab46-478d8846293d" containerName="rabbitmq"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.391010 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="74e2a040-552e-4736-986f-2abac7315e6a" containerName="rabbitmq"
Feb 17 21:04:26 crc kubenswrapper[4793]: E0217 21:04:26.391229 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" containerName="watcher-applier"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.391282 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" containerName="watcher-applier"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.391516 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="06ecbc8e-aa9f-4025-883d-65e4c000d986" containerName="watcher-applier"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.392309 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.399239 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.399595 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-44658"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.399956 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.400038 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.400226 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.400417 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.414749 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.416338 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.426345 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.427078 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-jx5bq"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.427236 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.427297 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.427390 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.427423 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.427236 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.427471 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.458013 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.486950 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.524616 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"080e91ed-ca1e-4b2a-948a-7f4e5ded62f6\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.524661 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/080e91ed-ca1e-4b2a-948a-7f4e5ded62f6-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"080e91ed-ca1e-4b2a-948a-7f4e5ded62f6\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.524697 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3bf6d755-b67b-421f-8405-350b53e03a92-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3bf6d755-b67b-421f-8405-350b53e03a92\") " pod="openstack/rabbitmq-server-0"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.524720 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tndn\" (UniqueName: \"kubernetes.io/projected/3bf6d755-b67b-421f-8405-350b53e03a92-kube-api-access-8tndn\") pod \"rabbitmq-server-0\" (UID: \"3bf6d755-b67b-421f-8405-350b53e03a92\") " pod="openstack/rabbitmq-server-0"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.524741 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"3bf6d755-b67b-421f-8405-350b53e03a92\") " pod="openstack/rabbitmq-server-0"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.524898 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/080e91ed-ca1e-4b2a-948a-7f4e5ded62f6-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"080e91ed-ca1e-4b2a-948a-7f4e5ded62f6\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.524933 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3bf6d755-b67b-421f-8405-350b53e03a92-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3bf6d755-b67b-421f-8405-350b53e03a92\") " pod="openstack/rabbitmq-server-0"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.524954 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grmvg\" (UniqueName: \"kubernetes.io/projected/080e91ed-ca1e-4b2a-948a-7f4e5ded62f6-kube-api-access-grmvg\") pod \"rabbitmq-cell1-server-0\" (UID: \"080e91ed-ca1e-4b2a-948a-7f4e5ded62f6\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.524980 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3bf6d755-b67b-421f-8405-350b53e03a92-config-data\") pod \"rabbitmq-server-0\" (UID: \"3bf6d755-b67b-421f-8405-350b53e03a92\") " pod="openstack/rabbitmq-server-0"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.525012 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3bf6d755-b67b-421f-8405-350b53e03a92-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3bf6d755-b67b-421f-8405-350b53e03a92\") " pod="openstack/rabbitmq-server-0"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.525030 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3bf6d755-b67b-421f-8405-350b53e03a92-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3bf6d755-b67b-421f-8405-350b53e03a92\") " pod="openstack/rabbitmq-server-0"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.525049 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/080e91ed-ca1e-4b2a-948a-7f4e5ded62f6-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"080e91ed-ca1e-4b2a-948a-7f4e5ded62f6\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.525064 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3bf6d755-b67b-421f-8405-350b53e03a92-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3bf6d755-b67b-421f-8405-350b53e03a92\") " pod="openstack/rabbitmq-server-0"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.525088 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/080e91ed-ca1e-4b2a-948a-7f4e5ded62f6-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"080e91ed-ca1e-4b2a-948a-7f4e5ded62f6\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.525104 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/080e91ed-ca1e-4b2a-948a-7f4e5ded62f6-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"080e91ed-ca1e-4b2a-948a-7f4e5ded62f6\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.525117 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/080e91ed-ca1e-4b2a-948a-7f4e5ded62f6-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"080e91ed-ca1e-4b2a-948a-7f4e5ded62f6\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.525134 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3bf6d755-b67b-421f-8405-350b53e03a92-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3bf6d755-b67b-421f-8405-350b53e03a92\") " pod="openstack/rabbitmq-server-0"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.525149 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/080e91ed-ca1e-4b2a-948a-7f4e5ded62f6-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"080e91ed-ca1e-4b2a-948a-7f4e5ded62f6\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.525166 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/080e91ed-ca1e-4b2a-948a-7f4e5ded62f6-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"080e91ed-ca1e-4b2a-948a-7f4e5ded62f6\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.525183 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3bf6d755-b67b-421f-8405-350b53e03a92-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3bf6d755-b67b-421f-8405-350b53e03a92\") " pod="openstack/rabbitmq-server-0"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.525203 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3bf6d755-b67b-421f-8405-350b53e03a92-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3bf6d755-b67b-421f-8405-350b53e03a92\") " pod="openstack/rabbitmq-server-0"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.525219 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/080e91ed-ca1e-4b2a-948a-7f4e5ded62f6-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"080e91ed-ca1e-4b2a-948a-7f4e5ded62f6\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.627094 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3bf6d755-b67b-421f-8405-350b53e03a92-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3bf6d755-b67b-421f-8405-350b53e03a92\") " pod="openstack/rabbitmq-server-0"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.627140 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3bf6d755-b67b-421f-8405-350b53e03a92-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3bf6d755-b67b-421f-8405-350b53e03a92\") " pod="openstack/rabbitmq-server-0"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.627161 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/080e91ed-ca1e-4b2a-948a-7f4e5ded62f6-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"080e91ed-ca1e-4b2a-948a-7f4e5ded62f6\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.627185 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3bf6d755-b67b-421f-8405-350b53e03a92-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3bf6d755-b67b-421f-8405-350b53e03a92\") " pod="openstack/rabbitmq-server-0"
Feb 17 21:04:26 crc
kubenswrapper[4793]: I0217 21:04:26.627205 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/080e91ed-ca1e-4b2a-948a-7f4e5ded62f6-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"080e91ed-ca1e-4b2a-948a-7f4e5ded62f6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.627229 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/080e91ed-ca1e-4b2a-948a-7f4e5ded62f6-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"080e91ed-ca1e-4b2a-948a-7f4e5ded62f6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.627245 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/080e91ed-ca1e-4b2a-948a-7f4e5ded62f6-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"080e91ed-ca1e-4b2a-948a-7f4e5ded62f6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.627265 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3bf6d755-b67b-421f-8405-350b53e03a92-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3bf6d755-b67b-421f-8405-350b53e03a92\") " pod="openstack/rabbitmq-server-0" Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.627282 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/080e91ed-ca1e-4b2a-948a-7f4e5ded62f6-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"080e91ed-ca1e-4b2a-948a-7f4e5ded62f6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.627298 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/080e91ed-ca1e-4b2a-948a-7f4e5ded62f6-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"080e91ed-ca1e-4b2a-948a-7f4e5ded62f6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.627324 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3bf6d755-b67b-421f-8405-350b53e03a92-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3bf6d755-b67b-421f-8405-350b53e03a92\") " pod="openstack/rabbitmq-server-0" Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.627347 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3bf6d755-b67b-421f-8405-350b53e03a92-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3bf6d755-b67b-421f-8405-350b53e03a92\") " pod="openstack/rabbitmq-server-0" Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.627362 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/080e91ed-ca1e-4b2a-948a-7f4e5ded62f6-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"080e91ed-ca1e-4b2a-948a-7f4e5ded62f6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.627403 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"080e91ed-ca1e-4b2a-948a-7f4e5ded62f6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.627431 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/080e91ed-ca1e-4b2a-948a-7f4e5ded62f6-rabbitmq-tls\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"080e91ed-ca1e-4b2a-948a-7f4e5ded62f6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.627452 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3bf6d755-b67b-421f-8405-350b53e03a92-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3bf6d755-b67b-421f-8405-350b53e03a92\") " pod="openstack/rabbitmq-server-0" Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.627471 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tndn\" (UniqueName: \"kubernetes.io/projected/3bf6d755-b67b-421f-8405-350b53e03a92-kube-api-access-8tndn\") pod \"rabbitmq-server-0\" (UID: \"3bf6d755-b67b-421f-8405-350b53e03a92\") " pod="openstack/rabbitmq-server-0" Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.627501 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"3bf6d755-b67b-421f-8405-350b53e03a92\") " pod="openstack/rabbitmq-server-0" Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.627560 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/080e91ed-ca1e-4b2a-948a-7f4e5ded62f6-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"080e91ed-ca1e-4b2a-948a-7f4e5ded62f6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.627601 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3bf6d755-b67b-421f-8405-350b53e03a92-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3bf6d755-b67b-421f-8405-350b53e03a92\") " pod="openstack/rabbitmq-server-0" Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.627621 
4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grmvg\" (UniqueName: \"kubernetes.io/projected/080e91ed-ca1e-4b2a-948a-7f4e5ded62f6-kube-api-access-grmvg\") pod \"rabbitmq-cell1-server-0\" (UID: \"080e91ed-ca1e-4b2a-948a-7f4e5ded62f6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.627661 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3bf6d755-b67b-421f-8405-350b53e03a92-config-data\") pod \"rabbitmq-server-0\" (UID: \"3bf6d755-b67b-421f-8405-350b53e03a92\") " pod="openstack/rabbitmq-server-0" Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.627764 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3bf6d755-b67b-421f-8405-350b53e03a92-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3bf6d755-b67b-421f-8405-350b53e03a92\") " pod="openstack/rabbitmq-server-0" Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.628020 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"080e91ed-ca1e-4b2a-948a-7f4e5ded62f6\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.629023 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/080e91ed-ca1e-4b2a-948a-7f4e5ded62f6-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"080e91ed-ca1e-4b2a-948a-7f4e5ded62f6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.631265 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/080e91ed-ca1e-4b2a-948a-7f4e5ded62f6-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"080e91ed-ca1e-4b2a-948a-7f4e5ded62f6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.631748 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/080e91ed-ca1e-4b2a-948a-7f4e5ded62f6-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"080e91ed-ca1e-4b2a-948a-7f4e5ded62f6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.631960 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/080e91ed-ca1e-4b2a-948a-7f4e5ded62f6-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"080e91ed-ca1e-4b2a-948a-7f4e5ded62f6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.632232 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/080e91ed-ca1e-4b2a-948a-7f4e5ded62f6-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"080e91ed-ca1e-4b2a-948a-7f4e5ded62f6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.632352 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"3bf6d755-b67b-421f-8405-350b53e03a92\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.632641 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3bf6d755-b67b-421f-8405-350b53e03a92-plugins-conf\") pod \"rabbitmq-server-0\" (UID: 
\"3bf6d755-b67b-421f-8405-350b53e03a92\") " pod="openstack/rabbitmq-server-0" Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.632794 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/080e91ed-ca1e-4b2a-948a-7f4e5ded62f6-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"080e91ed-ca1e-4b2a-948a-7f4e5ded62f6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.634089 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3bf6d755-b67b-421f-8405-350b53e03a92-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3bf6d755-b67b-421f-8405-350b53e03a92\") " pod="openstack/rabbitmq-server-0" Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.634436 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3bf6d755-b67b-421f-8405-350b53e03a92-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3bf6d755-b67b-421f-8405-350b53e03a92\") " pod="openstack/rabbitmq-server-0" Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.634651 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3bf6d755-b67b-421f-8405-350b53e03a92-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3bf6d755-b67b-421f-8405-350b53e03a92\") " pod="openstack/rabbitmq-server-0" Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.634722 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3bf6d755-b67b-421f-8405-350b53e03a92-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3bf6d755-b67b-421f-8405-350b53e03a92\") " pod="openstack/rabbitmq-server-0" Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.635034 4793 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3bf6d755-b67b-421f-8405-350b53e03a92-config-data\") pod \"rabbitmq-server-0\" (UID: \"3bf6d755-b67b-421f-8405-350b53e03a92\") " pod="openstack/rabbitmq-server-0" Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.635136 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/080e91ed-ca1e-4b2a-948a-7f4e5ded62f6-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"080e91ed-ca1e-4b2a-948a-7f4e5ded62f6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.635350 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/080e91ed-ca1e-4b2a-948a-7f4e5ded62f6-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"080e91ed-ca1e-4b2a-948a-7f4e5ded62f6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.636413 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3bf6d755-b67b-421f-8405-350b53e03a92-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3bf6d755-b67b-421f-8405-350b53e03a92\") " pod="openstack/rabbitmq-server-0" Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.636867 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/080e91ed-ca1e-4b2a-948a-7f4e5ded62f6-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"080e91ed-ca1e-4b2a-948a-7f4e5ded62f6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.636948 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3bf6d755-b67b-421f-8405-350b53e03a92-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: 
\"3bf6d755-b67b-421f-8405-350b53e03a92\") " pod="openstack/rabbitmq-server-0" Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.652324 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grmvg\" (UniqueName: \"kubernetes.io/projected/080e91ed-ca1e-4b2a-948a-7f4e5ded62f6-kube-api-access-grmvg\") pod \"rabbitmq-cell1-server-0\" (UID: \"080e91ed-ca1e-4b2a-948a-7f4e5ded62f6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.652338 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tndn\" (UniqueName: \"kubernetes.io/projected/3bf6d755-b67b-421f-8405-350b53e03a92-kube-api-access-8tndn\") pod \"rabbitmq-server-0\" (UID: \"3bf6d755-b67b-421f-8405-350b53e03a92\") " pod="openstack/rabbitmq-server-0" Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.663886 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"080e91ed-ca1e-4b2a-948a-7f4e5ded62f6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.680240 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"3bf6d755-b67b-421f-8405-350b53e03a92\") " pod="openstack/rabbitmq-server-0" Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.744264 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 17 21:04:26 crc kubenswrapper[4793]: I0217 21:04:26.803604 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 21:04:27 crc kubenswrapper[4793]: I0217 21:04:27.235259 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 21:04:27 crc kubenswrapper[4793]: W0217 21:04:27.252857 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod080e91ed_ca1e_4b2a_948a_7f4e5ded62f6.slice/crio-5949b9d7058f8f5f6fc0f4208a8055875a54955d3966e6caadb70e53ca3e22d5 WatchSource:0}: Error finding container 5949b9d7058f8f5f6fc0f4208a8055875a54955d3966e6caadb70e53ca3e22d5: Status 404 returned error can't find the container with id 5949b9d7058f8f5f6fc0f4208a8055875a54955d3966e6caadb70e53ca3e22d5 Feb 17 21:04:27 crc kubenswrapper[4793]: I0217 21:04:27.284379 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"080e91ed-ca1e-4b2a-948a-7f4e5ded62f6","Type":"ContainerStarted","Data":"5949b9d7058f8f5f6fc0f4208a8055875a54955d3966e6caadb70e53ca3e22d5"} Feb 17 21:04:27 crc kubenswrapper[4793]: I0217 21:04:27.334422 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 21:04:27 crc kubenswrapper[4793]: W0217 21:04:27.344305 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3bf6d755_b67b_421f_8405_350b53e03a92.slice/crio-07b4103fc0a21dcd7964995f30938f57aae17538e85ef3706b59973793481fbe WatchSource:0}: Error finding container 07b4103fc0a21dcd7964995f30938f57aae17538e85ef3706b59973793481fbe: Status 404 returned error can't find the container with id 07b4103fc0a21dcd7964995f30938f57aae17538e85ef3706b59973793481fbe Feb 17 21:04:27 crc kubenswrapper[4793]: I0217 21:04:27.548861 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74e2a040-552e-4736-986f-2abac7315e6a" 
path="/var/lib/kubelet/pods/74e2a040-552e-4736-986f-2abac7315e6a/volumes" Feb 17 21:04:27 crc kubenswrapper[4793]: I0217 21:04:27.549907 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9eaaf278-e1ca-4fbe-ab46-478d8846293d" path="/var/lib/kubelet/pods/9eaaf278-e1ca-4fbe-ab46-478d8846293d/volumes" Feb 17 21:04:28 crc kubenswrapper[4793]: I0217 21:04:28.295316 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3bf6d755-b67b-421f-8405-350b53e03a92","Type":"ContainerStarted","Data":"07b4103fc0a21dcd7964995f30938f57aae17538e85ef3706b59973793481fbe"} Feb 17 21:04:29 crc kubenswrapper[4793]: I0217 21:04:29.306730 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"080e91ed-ca1e-4b2a-948a-7f4e5ded62f6","Type":"ContainerStarted","Data":"fc1fc89097994a1f72a06f75ea0d83e88bdf3ab4fb1a87d7232b96ed7d7543ef"} Feb 17 21:04:29 crc kubenswrapper[4793]: I0217 21:04:29.309953 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3bf6d755-b67b-421f-8405-350b53e03a92","Type":"ContainerStarted","Data":"8085e2c4888a0d1ec6a5d7b25e2dbb41a7df7394bba1fa4a0a9e46d37064bec5"} Feb 17 21:04:33 crc kubenswrapper[4793]: I0217 21:04:33.376413 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7bfbff7c47-qr4xc"] Feb 17 21:04:33 crc kubenswrapper[4793]: I0217 21:04:33.379917 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bfbff7c47-qr4xc" Feb 17 21:04:33 crc kubenswrapper[4793]: I0217 21:04:33.383379 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 17 21:04:33 crc kubenswrapper[4793]: I0217 21:04:33.398371 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bfbff7c47-qr4xc"] Feb 17 21:04:33 crc kubenswrapper[4793]: I0217 21:04:33.478119 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cda505f6-0edc-4637-9077-a8c2a0ac6dfa-dns-svc\") pod \"dnsmasq-dns-7bfbff7c47-qr4xc\" (UID: \"cda505f6-0edc-4637-9077-a8c2a0ac6dfa\") " pod="openstack/dnsmasq-dns-7bfbff7c47-qr4xc" Feb 17 21:04:33 crc kubenswrapper[4793]: I0217 21:04:33.478194 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cda505f6-0edc-4637-9077-a8c2a0ac6dfa-config\") pod \"dnsmasq-dns-7bfbff7c47-qr4xc\" (UID: \"cda505f6-0edc-4637-9077-a8c2a0ac6dfa\") " pod="openstack/dnsmasq-dns-7bfbff7c47-qr4xc" Feb 17 21:04:33 crc kubenswrapper[4793]: I0217 21:04:33.478307 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cda505f6-0edc-4637-9077-a8c2a0ac6dfa-ovsdbserver-nb\") pod \"dnsmasq-dns-7bfbff7c47-qr4xc\" (UID: \"cda505f6-0edc-4637-9077-a8c2a0ac6dfa\") " pod="openstack/dnsmasq-dns-7bfbff7c47-qr4xc" Feb 17 21:04:33 crc kubenswrapper[4793]: I0217 21:04:33.478370 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cda505f6-0edc-4637-9077-a8c2a0ac6dfa-dns-swift-storage-0\") pod \"dnsmasq-dns-7bfbff7c47-qr4xc\" (UID: \"cda505f6-0edc-4637-9077-a8c2a0ac6dfa\") " 
pod="openstack/dnsmasq-dns-7bfbff7c47-qr4xc" Feb 17 21:04:33 crc kubenswrapper[4793]: I0217 21:04:33.478401 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/cda505f6-0edc-4637-9077-a8c2a0ac6dfa-openstack-edpm-ipam\") pod \"dnsmasq-dns-7bfbff7c47-qr4xc\" (UID: \"cda505f6-0edc-4637-9077-a8c2a0ac6dfa\") " pod="openstack/dnsmasq-dns-7bfbff7c47-qr4xc" Feb 17 21:04:33 crc kubenswrapper[4793]: I0217 21:04:33.478417 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdjpw\" (UniqueName: \"kubernetes.io/projected/cda505f6-0edc-4637-9077-a8c2a0ac6dfa-kube-api-access-pdjpw\") pod \"dnsmasq-dns-7bfbff7c47-qr4xc\" (UID: \"cda505f6-0edc-4637-9077-a8c2a0ac6dfa\") " pod="openstack/dnsmasq-dns-7bfbff7c47-qr4xc" Feb 17 21:04:33 crc kubenswrapper[4793]: I0217 21:04:33.478641 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cda505f6-0edc-4637-9077-a8c2a0ac6dfa-ovsdbserver-sb\") pod \"dnsmasq-dns-7bfbff7c47-qr4xc\" (UID: \"cda505f6-0edc-4637-9077-a8c2a0ac6dfa\") " pod="openstack/dnsmasq-dns-7bfbff7c47-qr4xc" Feb 17 21:04:33 crc kubenswrapper[4793]: I0217 21:04:33.580513 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cda505f6-0edc-4637-9077-a8c2a0ac6dfa-dns-svc\") pod \"dnsmasq-dns-7bfbff7c47-qr4xc\" (UID: \"cda505f6-0edc-4637-9077-a8c2a0ac6dfa\") " pod="openstack/dnsmasq-dns-7bfbff7c47-qr4xc" Feb 17 21:04:33 crc kubenswrapper[4793]: I0217 21:04:33.580567 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cda505f6-0edc-4637-9077-a8c2a0ac6dfa-config\") pod \"dnsmasq-dns-7bfbff7c47-qr4xc\" (UID: \"cda505f6-0edc-4637-9077-a8c2a0ac6dfa\") " 
pod="openstack/dnsmasq-dns-7bfbff7c47-qr4xc" Feb 17 21:04:33 crc kubenswrapper[4793]: I0217 21:04:33.580600 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cda505f6-0edc-4637-9077-a8c2a0ac6dfa-ovsdbserver-nb\") pod \"dnsmasq-dns-7bfbff7c47-qr4xc\" (UID: \"cda505f6-0edc-4637-9077-a8c2a0ac6dfa\") " pod="openstack/dnsmasq-dns-7bfbff7c47-qr4xc" Feb 17 21:04:33 crc kubenswrapper[4793]: I0217 21:04:33.580627 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cda505f6-0edc-4637-9077-a8c2a0ac6dfa-dns-swift-storage-0\") pod \"dnsmasq-dns-7bfbff7c47-qr4xc\" (UID: \"cda505f6-0edc-4637-9077-a8c2a0ac6dfa\") " pod="openstack/dnsmasq-dns-7bfbff7c47-qr4xc" Feb 17 21:04:33 crc kubenswrapper[4793]: I0217 21:04:33.580645 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/cda505f6-0edc-4637-9077-a8c2a0ac6dfa-openstack-edpm-ipam\") pod \"dnsmasq-dns-7bfbff7c47-qr4xc\" (UID: \"cda505f6-0edc-4637-9077-a8c2a0ac6dfa\") " pod="openstack/dnsmasq-dns-7bfbff7c47-qr4xc" Feb 17 21:04:33 crc kubenswrapper[4793]: I0217 21:04:33.580662 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdjpw\" (UniqueName: \"kubernetes.io/projected/cda505f6-0edc-4637-9077-a8c2a0ac6dfa-kube-api-access-pdjpw\") pod \"dnsmasq-dns-7bfbff7c47-qr4xc\" (UID: \"cda505f6-0edc-4637-9077-a8c2a0ac6dfa\") " pod="openstack/dnsmasq-dns-7bfbff7c47-qr4xc" Feb 17 21:04:33 crc kubenswrapper[4793]: I0217 21:04:33.580745 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cda505f6-0edc-4637-9077-a8c2a0ac6dfa-ovsdbserver-sb\") pod \"dnsmasq-dns-7bfbff7c47-qr4xc\" (UID: \"cda505f6-0edc-4637-9077-a8c2a0ac6dfa\") " 
pod="openstack/dnsmasq-dns-7bfbff7c47-qr4xc" Feb 17 21:04:33 crc kubenswrapper[4793]: I0217 21:04:33.581443 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cda505f6-0edc-4637-9077-a8c2a0ac6dfa-dns-svc\") pod \"dnsmasq-dns-7bfbff7c47-qr4xc\" (UID: \"cda505f6-0edc-4637-9077-a8c2a0ac6dfa\") " pod="openstack/dnsmasq-dns-7bfbff7c47-qr4xc" Feb 17 21:04:33 crc kubenswrapper[4793]: I0217 21:04:33.581496 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cda505f6-0edc-4637-9077-a8c2a0ac6dfa-ovsdbserver-nb\") pod \"dnsmasq-dns-7bfbff7c47-qr4xc\" (UID: \"cda505f6-0edc-4637-9077-a8c2a0ac6dfa\") " pod="openstack/dnsmasq-dns-7bfbff7c47-qr4xc" Feb 17 21:04:33 crc kubenswrapper[4793]: I0217 21:04:33.581720 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cda505f6-0edc-4637-9077-a8c2a0ac6dfa-config\") pod \"dnsmasq-dns-7bfbff7c47-qr4xc\" (UID: \"cda505f6-0edc-4637-9077-a8c2a0ac6dfa\") " pod="openstack/dnsmasq-dns-7bfbff7c47-qr4xc" Feb 17 21:04:33 crc kubenswrapper[4793]: I0217 21:04:33.581726 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cda505f6-0edc-4637-9077-a8c2a0ac6dfa-ovsdbserver-sb\") pod \"dnsmasq-dns-7bfbff7c47-qr4xc\" (UID: \"cda505f6-0edc-4637-9077-a8c2a0ac6dfa\") " pod="openstack/dnsmasq-dns-7bfbff7c47-qr4xc" Feb 17 21:04:33 crc kubenswrapper[4793]: I0217 21:04:33.582035 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/cda505f6-0edc-4637-9077-a8c2a0ac6dfa-openstack-edpm-ipam\") pod \"dnsmasq-dns-7bfbff7c47-qr4xc\" (UID: \"cda505f6-0edc-4637-9077-a8c2a0ac6dfa\") " pod="openstack/dnsmasq-dns-7bfbff7c47-qr4xc" Feb 17 21:04:33 crc kubenswrapper[4793]: I0217 21:04:33.582239 4793 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cda505f6-0edc-4637-9077-a8c2a0ac6dfa-dns-swift-storage-0\") pod \"dnsmasq-dns-7bfbff7c47-qr4xc\" (UID: \"cda505f6-0edc-4637-9077-a8c2a0ac6dfa\") " pod="openstack/dnsmasq-dns-7bfbff7c47-qr4xc" Feb 17 21:04:33 crc kubenswrapper[4793]: I0217 21:04:33.613573 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdjpw\" (UniqueName: \"kubernetes.io/projected/cda505f6-0edc-4637-9077-a8c2a0ac6dfa-kube-api-access-pdjpw\") pod \"dnsmasq-dns-7bfbff7c47-qr4xc\" (UID: \"cda505f6-0edc-4637-9077-a8c2a0ac6dfa\") " pod="openstack/dnsmasq-dns-7bfbff7c47-qr4xc" Feb 17 21:04:33 crc kubenswrapper[4793]: I0217 21:04:33.747158 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bfbff7c47-qr4xc" Feb 17 21:04:34 crc kubenswrapper[4793]: I0217 21:04:34.294095 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bfbff7c47-qr4xc"] Feb 17 21:04:34 crc kubenswrapper[4793]: I0217 21:04:34.355877 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bfbff7c47-qr4xc" event={"ID":"cda505f6-0edc-4637-9077-a8c2a0ac6dfa","Type":"ContainerStarted","Data":"8536d0273c44b81da22ca3751f649d3179fe3102c493596ea27a2c5e26a0cfa4"} Feb 17 21:04:35 crc kubenswrapper[4793]: I0217 21:04:35.366065 4793 generic.go:334] "Generic (PLEG): container finished" podID="cda505f6-0edc-4637-9077-a8c2a0ac6dfa" containerID="41d985cbda58f85244e0121830e1ed162ce2505f030c31cd2acc0c2785f11c04" exitCode=0 Feb 17 21:04:35 crc kubenswrapper[4793]: I0217 21:04:35.366139 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bfbff7c47-qr4xc" event={"ID":"cda505f6-0edc-4637-9077-a8c2a0ac6dfa","Type":"ContainerDied","Data":"41d985cbda58f85244e0121830e1ed162ce2505f030c31cd2acc0c2785f11c04"} Feb 17 21:04:36 crc kubenswrapper[4793]: I0217 
21:04:36.377648 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bfbff7c47-qr4xc" event={"ID":"cda505f6-0edc-4637-9077-a8c2a0ac6dfa","Type":"ContainerStarted","Data":"45e87588f7224b38c1fefca6f4fd94822eedbb93c2d5e357fbd51e0edb80f836"} Feb 17 21:04:36 crc kubenswrapper[4793]: I0217 21:04:36.378104 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7bfbff7c47-qr4xc" Feb 17 21:04:36 crc kubenswrapper[4793]: I0217 21:04:36.407372 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7bfbff7c47-qr4xc" podStartSLOduration=3.407350741 podStartE2EDuration="3.407350741s" podCreationTimestamp="2026-02-17 21:04:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 21:04:36.396037222 +0000 UTC m=+3351.687735523" watchObservedRunningTime="2026-02-17 21:04:36.407350741 +0000 UTC m=+3351.699049062" Feb 17 21:04:36 crc kubenswrapper[4793]: I0217 21:04:36.538396 4793 scope.go:117] "RemoveContainer" containerID="a705ce00d5a1695a33e71585d38dc5394c847aa5d41439cb28ee9e97ad9bdfec" Feb 17 21:04:37 crc kubenswrapper[4793]: I0217 21:04:37.388197 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerStarted","Data":"851a91d22dd4a0bffdc7c49c5d6609adb09c67368cc712f19b587c0f24ee1553"} Feb 17 21:04:39 crc kubenswrapper[4793]: I0217 21:04:39.409074 4793 generic.go:334] "Generic (PLEG): container finished" podID="02d26164-0fa4-4020-9224-b7760a490987" containerID="851a91d22dd4a0bffdc7c49c5d6609adb09c67368cc712f19b587c0f24ee1553" exitCode=1 Feb 17 21:04:39 crc kubenswrapper[4793]: I0217 21:04:39.409113 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" 
event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerDied","Data":"851a91d22dd4a0bffdc7c49c5d6609adb09c67368cc712f19b587c0f24ee1553"} Feb 17 21:04:39 crc kubenswrapper[4793]: I0217 21:04:39.409144 4793 scope.go:117] "RemoveContainer" containerID="a705ce00d5a1695a33e71585d38dc5394c847aa5d41439cb28ee9e97ad9bdfec" Feb 17 21:04:39 crc kubenswrapper[4793]: I0217 21:04:39.409636 4793 scope.go:117] "RemoveContainer" containerID="851a91d22dd4a0bffdc7c49c5d6609adb09c67368cc712f19b587c0f24ee1553" Feb 17 21:04:39 crc kubenswrapper[4793]: E0217 21:04:39.409871 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:04:40 crc kubenswrapper[4793]: I0217 21:04:40.595982 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 17 21:04:40 crc kubenswrapper[4793]: I0217 21:04:40.597318 4793 scope.go:117] "RemoveContainer" containerID="851a91d22dd4a0bffdc7c49c5d6609adb09c67368cc712f19b587c0f24ee1553" Feb 17 21:04:40 crc kubenswrapper[4793]: E0217 21:04:40.597777 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:04:43 crc kubenswrapper[4793]: I0217 21:04:43.751080 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7bfbff7c47-qr4xc" Feb 17 21:04:43 crc kubenswrapper[4793]: I0217 21:04:43.845660 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-6cb4f6457c-ffpdz"] Feb 17 21:04:43 crc kubenswrapper[4793]: I0217 21:04:43.845942 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6cb4f6457c-ffpdz" podUID="24051b7d-6e2d-41f9-b38b-e5afb3937af9" containerName="dnsmasq-dns" containerID="cri-o://40377059c240f331d382e77328c9033e9d4a3da20c0da77e8e3b0cf6804a24b0" gracePeriod=10 Feb 17 21:04:44 crc kubenswrapper[4793]: I0217 21:04:44.169524 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77b7b9cf89-q7lv9"] Feb 17 21:04:44 crc kubenswrapper[4793]: I0217 21:04:44.171851 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77b7b9cf89-q7lv9" Feb 17 21:04:44 crc kubenswrapper[4793]: I0217 21:04:44.180562 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77b7b9cf89-q7lv9"] Feb 17 21:04:44 crc kubenswrapper[4793]: I0217 21:04:44.311285 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/413f0882-e8fb-47a6-939a-576f0ccc09f2-openstack-edpm-ipam\") pod \"dnsmasq-dns-77b7b9cf89-q7lv9\" (UID: \"413f0882-e8fb-47a6-939a-576f0ccc09f2\") " pod="openstack/dnsmasq-dns-77b7b9cf89-q7lv9" Feb 17 21:04:44 crc kubenswrapper[4793]: I0217 21:04:44.311331 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll4g8\" (UniqueName: \"kubernetes.io/projected/413f0882-e8fb-47a6-939a-576f0ccc09f2-kube-api-access-ll4g8\") pod \"dnsmasq-dns-77b7b9cf89-q7lv9\" (UID: \"413f0882-e8fb-47a6-939a-576f0ccc09f2\") " pod="openstack/dnsmasq-dns-77b7b9cf89-q7lv9" Feb 17 21:04:44 crc kubenswrapper[4793]: I0217 21:04:44.311372 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/413f0882-e8fb-47a6-939a-576f0ccc09f2-dns-swift-storage-0\") pod \"dnsmasq-dns-77b7b9cf89-q7lv9\" (UID: \"413f0882-e8fb-47a6-939a-576f0ccc09f2\") " pod="openstack/dnsmasq-dns-77b7b9cf89-q7lv9" Feb 17 21:04:44 crc kubenswrapper[4793]: I0217 21:04:44.311396 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/413f0882-e8fb-47a6-939a-576f0ccc09f2-ovsdbserver-nb\") pod \"dnsmasq-dns-77b7b9cf89-q7lv9\" (UID: \"413f0882-e8fb-47a6-939a-576f0ccc09f2\") " pod="openstack/dnsmasq-dns-77b7b9cf89-q7lv9" Feb 17 21:04:44 crc kubenswrapper[4793]: I0217 21:04:44.311413 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/413f0882-e8fb-47a6-939a-576f0ccc09f2-dns-svc\") pod \"dnsmasq-dns-77b7b9cf89-q7lv9\" (UID: \"413f0882-e8fb-47a6-939a-576f0ccc09f2\") " pod="openstack/dnsmasq-dns-77b7b9cf89-q7lv9" Feb 17 21:04:44 crc kubenswrapper[4793]: I0217 21:04:44.311468 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/413f0882-e8fb-47a6-939a-576f0ccc09f2-ovsdbserver-sb\") pod \"dnsmasq-dns-77b7b9cf89-q7lv9\" (UID: \"413f0882-e8fb-47a6-939a-576f0ccc09f2\") " pod="openstack/dnsmasq-dns-77b7b9cf89-q7lv9" Feb 17 21:04:44 crc kubenswrapper[4793]: I0217 21:04:44.311522 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/413f0882-e8fb-47a6-939a-576f0ccc09f2-config\") pod \"dnsmasq-dns-77b7b9cf89-q7lv9\" (UID: \"413f0882-e8fb-47a6-939a-576f0ccc09f2\") " pod="openstack/dnsmasq-dns-77b7b9cf89-q7lv9" Feb 17 21:04:44 crc kubenswrapper[4793]: I0217 21:04:44.413270 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/413f0882-e8fb-47a6-939a-576f0ccc09f2-ovsdbserver-sb\") pod \"dnsmasq-dns-77b7b9cf89-q7lv9\" (UID: \"413f0882-e8fb-47a6-939a-576f0ccc09f2\") " pod="openstack/dnsmasq-dns-77b7b9cf89-q7lv9" Feb 17 21:04:44 crc kubenswrapper[4793]: I0217 21:04:44.413360 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/413f0882-e8fb-47a6-939a-576f0ccc09f2-config\") pod \"dnsmasq-dns-77b7b9cf89-q7lv9\" (UID: \"413f0882-e8fb-47a6-939a-576f0ccc09f2\") " pod="openstack/dnsmasq-dns-77b7b9cf89-q7lv9" Feb 17 21:04:44 crc kubenswrapper[4793]: I0217 21:04:44.413425 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/413f0882-e8fb-47a6-939a-576f0ccc09f2-openstack-edpm-ipam\") pod \"dnsmasq-dns-77b7b9cf89-q7lv9\" (UID: \"413f0882-e8fb-47a6-939a-576f0ccc09f2\") " pod="openstack/dnsmasq-dns-77b7b9cf89-q7lv9" Feb 17 21:04:44 crc kubenswrapper[4793]: I0217 21:04:44.413450 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll4g8\" (UniqueName: \"kubernetes.io/projected/413f0882-e8fb-47a6-939a-576f0ccc09f2-kube-api-access-ll4g8\") pod \"dnsmasq-dns-77b7b9cf89-q7lv9\" (UID: \"413f0882-e8fb-47a6-939a-576f0ccc09f2\") " pod="openstack/dnsmasq-dns-77b7b9cf89-q7lv9" Feb 17 21:04:44 crc kubenswrapper[4793]: I0217 21:04:44.413476 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/413f0882-e8fb-47a6-939a-576f0ccc09f2-dns-swift-storage-0\") pod \"dnsmasq-dns-77b7b9cf89-q7lv9\" (UID: \"413f0882-e8fb-47a6-939a-576f0ccc09f2\") " pod="openstack/dnsmasq-dns-77b7b9cf89-q7lv9" Feb 17 21:04:44 crc kubenswrapper[4793]: I0217 21:04:44.413496 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/413f0882-e8fb-47a6-939a-576f0ccc09f2-ovsdbserver-nb\") pod \"dnsmasq-dns-77b7b9cf89-q7lv9\" (UID: \"413f0882-e8fb-47a6-939a-576f0ccc09f2\") " pod="openstack/dnsmasq-dns-77b7b9cf89-q7lv9" Feb 17 21:04:44 crc kubenswrapper[4793]: I0217 21:04:44.413511 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/413f0882-e8fb-47a6-939a-576f0ccc09f2-dns-svc\") pod \"dnsmasq-dns-77b7b9cf89-q7lv9\" (UID: \"413f0882-e8fb-47a6-939a-576f0ccc09f2\") " pod="openstack/dnsmasq-dns-77b7b9cf89-q7lv9" Feb 17 21:04:44 crc kubenswrapper[4793]: I0217 21:04:44.414427 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/413f0882-e8fb-47a6-939a-576f0ccc09f2-dns-svc\") pod \"dnsmasq-dns-77b7b9cf89-q7lv9\" (UID: \"413f0882-e8fb-47a6-939a-576f0ccc09f2\") " pod="openstack/dnsmasq-dns-77b7b9cf89-q7lv9" Feb 17 21:04:44 crc kubenswrapper[4793]: I0217 21:04:44.414529 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/413f0882-e8fb-47a6-939a-576f0ccc09f2-openstack-edpm-ipam\") pod \"dnsmasq-dns-77b7b9cf89-q7lv9\" (UID: \"413f0882-e8fb-47a6-939a-576f0ccc09f2\") " pod="openstack/dnsmasq-dns-77b7b9cf89-q7lv9" Feb 17 21:04:44 crc kubenswrapper[4793]: I0217 21:04:44.415150 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/413f0882-e8fb-47a6-939a-576f0ccc09f2-ovsdbserver-nb\") pod \"dnsmasq-dns-77b7b9cf89-q7lv9\" (UID: \"413f0882-e8fb-47a6-939a-576f0ccc09f2\") " pod="openstack/dnsmasq-dns-77b7b9cf89-q7lv9" Feb 17 21:04:44 crc kubenswrapper[4793]: I0217 21:04:44.415222 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/413f0882-e8fb-47a6-939a-576f0ccc09f2-ovsdbserver-sb\") pod 
\"dnsmasq-dns-77b7b9cf89-q7lv9\" (UID: \"413f0882-e8fb-47a6-939a-576f0ccc09f2\") " pod="openstack/dnsmasq-dns-77b7b9cf89-q7lv9" Feb 17 21:04:44 crc kubenswrapper[4793]: I0217 21:04:44.415260 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/413f0882-e8fb-47a6-939a-576f0ccc09f2-dns-swift-storage-0\") pod \"dnsmasq-dns-77b7b9cf89-q7lv9\" (UID: \"413f0882-e8fb-47a6-939a-576f0ccc09f2\") " pod="openstack/dnsmasq-dns-77b7b9cf89-q7lv9" Feb 17 21:04:44 crc kubenswrapper[4793]: I0217 21:04:44.415751 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/413f0882-e8fb-47a6-939a-576f0ccc09f2-config\") pod \"dnsmasq-dns-77b7b9cf89-q7lv9\" (UID: \"413f0882-e8fb-47a6-939a-576f0ccc09f2\") " pod="openstack/dnsmasq-dns-77b7b9cf89-q7lv9" Feb 17 21:04:44 crc kubenswrapper[4793]: I0217 21:04:44.436250 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll4g8\" (UniqueName: \"kubernetes.io/projected/413f0882-e8fb-47a6-939a-576f0ccc09f2-kube-api-access-ll4g8\") pod \"dnsmasq-dns-77b7b9cf89-q7lv9\" (UID: \"413f0882-e8fb-47a6-939a-576f0ccc09f2\") " pod="openstack/dnsmasq-dns-77b7b9cf89-q7lv9" Feb 17 21:04:44 crc kubenswrapper[4793]: I0217 21:04:44.474216 4793 generic.go:334] "Generic (PLEG): container finished" podID="24051b7d-6e2d-41f9-b38b-e5afb3937af9" containerID="40377059c240f331d382e77328c9033e9d4a3da20c0da77e8e3b0cf6804a24b0" exitCode=0 Feb 17 21:04:44 crc kubenswrapper[4793]: I0217 21:04:44.474283 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb4f6457c-ffpdz" event={"ID":"24051b7d-6e2d-41f9-b38b-e5afb3937af9","Type":"ContainerDied","Data":"40377059c240f331d382e77328c9033e9d4a3da20c0da77e8e3b0cf6804a24b0"} Feb 17 21:04:44 crc kubenswrapper[4793]: I0217 21:04:44.474311 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-6cb4f6457c-ffpdz" event={"ID":"24051b7d-6e2d-41f9-b38b-e5afb3937af9","Type":"ContainerDied","Data":"50f845e2d811259ab3edda05523a296190dda632e30e329af9a529f90a2d192a"} Feb 17 21:04:44 crc kubenswrapper[4793]: I0217 21:04:44.474322 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50f845e2d811259ab3edda05523a296190dda632e30e329af9a529f90a2d192a" Feb 17 21:04:44 crc kubenswrapper[4793]: I0217 21:04:44.493343 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77b7b9cf89-q7lv9" Feb 17 21:04:44 crc kubenswrapper[4793]: I0217 21:04:44.512179 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb4f6457c-ffpdz" Feb 17 21:04:44 crc kubenswrapper[4793]: I0217 21:04:44.615589 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/24051b7d-6e2d-41f9-b38b-e5afb3937af9-dns-swift-storage-0\") pod \"24051b7d-6e2d-41f9-b38b-e5afb3937af9\" (UID: \"24051b7d-6e2d-41f9-b38b-e5afb3937af9\") " Feb 17 21:04:44 crc kubenswrapper[4793]: I0217 21:04:44.615648 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24051b7d-6e2d-41f9-b38b-e5afb3937af9-config\") pod \"24051b7d-6e2d-41f9-b38b-e5afb3937af9\" (UID: \"24051b7d-6e2d-41f9-b38b-e5afb3937af9\") " Feb 17 21:04:44 crc kubenswrapper[4793]: I0217 21:04:44.615756 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24051b7d-6e2d-41f9-b38b-e5afb3937af9-dns-svc\") pod \"24051b7d-6e2d-41f9-b38b-e5afb3937af9\" (UID: \"24051b7d-6e2d-41f9-b38b-e5afb3937af9\") " Feb 17 21:04:44 crc kubenswrapper[4793]: I0217 21:04:44.615776 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/24051b7d-6e2d-41f9-b38b-e5afb3937af9-ovsdbserver-nb\") pod \"24051b7d-6e2d-41f9-b38b-e5afb3937af9\" (UID: \"24051b7d-6e2d-41f9-b38b-e5afb3937af9\") " Feb 17 21:04:44 crc kubenswrapper[4793]: I0217 21:04:44.615895 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24051b7d-6e2d-41f9-b38b-e5afb3937af9-ovsdbserver-sb\") pod \"24051b7d-6e2d-41f9-b38b-e5afb3937af9\" (UID: \"24051b7d-6e2d-41f9-b38b-e5afb3937af9\") " Feb 17 21:04:44 crc kubenswrapper[4793]: I0217 21:04:44.615915 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxw8v\" (UniqueName: \"kubernetes.io/projected/24051b7d-6e2d-41f9-b38b-e5afb3937af9-kube-api-access-vxw8v\") pod \"24051b7d-6e2d-41f9-b38b-e5afb3937af9\" (UID: \"24051b7d-6e2d-41f9-b38b-e5afb3937af9\") " Feb 17 21:04:44 crc kubenswrapper[4793]: I0217 21:04:44.621583 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24051b7d-6e2d-41f9-b38b-e5afb3937af9-kube-api-access-vxw8v" (OuterVolumeSpecName: "kube-api-access-vxw8v") pod "24051b7d-6e2d-41f9-b38b-e5afb3937af9" (UID: "24051b7d-6e2d-41f9-b38b-e5afb3937af9"). InnerVolumeSpecName "kube-api-access-vxw8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 21:04:44 crc kubenswrapper[4793]: I0217 21:04:44.665938 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24051b7d-6e2d-41f9-b38b-e5afb3937af9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "24051b7d-6e2d-41f9-b38b-e5afb3937af9" (UID: "24051b7d-6e2d-41f9-b38b-e5afb3937af9"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 21:04:44 crc kubenswrapper[4793]: I0217 21:04:44.674186 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24051b7d-6e2d-41f9-b38b-e5afb3937af9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "24051b7d-6e2d-41f9-b38b-e5afb3937af9" (UID: "24051b7d-6e2d-41f9-b38b-e5afb3937af9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 21:04:44 crc kubenswrapper[4793]: I0217 21:04:44.678279 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24051b7d-6e2d-41f9-b38b-e5afb3937af9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "24051b7d-6e2d-41f9-b38b-e5afb3937af9" (UID: "24051b7d-6e2d-41f9-b38b-e5afb3937af9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 21:04:44 crc kubenswrapper[4793]: I0217 21:04:44.711985 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24051b7d-6e2d-41f9-b38b-e5afb3937af9-config" (OuterVolumeSpecName: "config") pod "24051b7d-6e2d-41f9-b38b-e5afb3937af9" (UID: "24051b7d-6e2d-41f9-b38b-e5afb3937af9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 21:04:44 crc kubenswrapper[4793]: I0217 21:04:44.712168 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24051b7d-6e2d-41f9-b38b-e5afb3937af9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "24051b7d-6e2d-41f9-b38b-e5afb3937af9" (UID: "24051b7d-6e2d-41f9-b38b-e5afb3937af9"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 21:04:44 crc kubenswrapper[4793]: I0217 21:04:44.717672 4793 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24051b7d-6e2d-41f9-b38b-e5afb3937af9-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 21:04:44 crc kubenswrapper[4793]: I0217 21:04:44.717717 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24051b7d-6e2d-41f9-b38b-e5afb3937af9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 21:04:44 crc kubenswrapper[4793]: I0217 21:04:44.717727 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24051b7d-6e2d-41f9-b38b-e5afb3937af9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 21:04:44 crc kubenswrapper[4793]: I0217 21:04:44.717736 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxw8v\" (UniqueName: \"kubernetes.io/projected/24051b7d-6e2d-41f9-b38b-e5afb3937af9-kube-api-access-vxw8v\") on node \"crc\" DevicePath \"\"" Feb 17 21:04:44 crc kubenswrapper[4793]: I0217 21:04:44.717744 4793 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/24051b7d-6e2d-41f9-b38b-e5afb3937af9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 21:04:44 crc kubenswrapper[4793]: I0217 21:04:44.717752 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24051b7d-6e2d-41f9-b38b-e5afb3937af9-config\") on node \"crc\" DevicePath \"\"" Feb 17 21:04:45 crc kubenswrapper[4793]: I0217 21:04:45.049311 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77b7b9cf89-q7lv9"] Feb 17 21:04:45 crc kubenswrapper[4793]: I0217 21:04:45.493303 4793 generic.go:334] "Generic (PLEG): container finished" podID="413f0882-e8fb-47a6-939a-576f0ccc09f2" 
containerID="0f8d8cdc83f83210ad63f4d0100cace4d2c9c24261e36d9d4ed4e3539823108c" exitCode=0 Feb 17 21:04:45 crc kubenswrapper[4793]: I0217 21:04:45.493343 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77b7b9cf89-q7lv9" event={"ID":"413f0882-e8fb-47a6-939a-576f0ccc09f2","Type":"ContainerDied","Data":"0f8d8cdc83f83210ad63f4d0100cace4d2c9c24261e36d9d4ed4e3539823108c"} Feb 17 21:04:45 crc kubenswrapper[4793]: I0217 21:04:45.493537 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77b7b9cf89-q7lv9" event={"ID":"413f0882-e8fb-47a6-939a-576f0ccc09f2","Type":"ContainerStarted","Data":"16c4fe2613ec2d77eae9f1b1632134a6ef6a59f2f80373c3efb2d85f86282dd5"} Feb 17 21:04:45 crc kubenswrapper[4793]: I0217 21:04:45.493554 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb4f6457c-ffpdz" Feb 17 21:04:45 crc kubenswrapper[4793]: I0217 21:04:45.555545 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cb4f6457c-ffpdz"] Feb 17 21:04:45 crc kubenswrapper[4793]: I0217 21:04:45.576306 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6cb4f6457c-ffpdz"] Feb 17 21:04:45 crc kubenswrapper[4793]: I0217 21:04:45.595717 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 21:04:45 crc kubenswrapper[4793]: I0217 21:04:45.595826 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 21:04:45 crc kubenswrapper[4793]: I0217 21:04:45.595920 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 21:04:45 crc kubenswrapper[4793]: I0217 21:04:45.596787 4793 scope.go:117] "RemoveContainer" containerID="851a91d22dd4a0bffdc7c49c5d6609adb09c67368cc712f19b587c0f24ee1553" Feb 17 21:04:45 crc kubenswrapper[4793]: E0217 21:04:45.597185 
4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:04:46 crc kubenswrapper[4793]: I0217 21:04:46.504779 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77b7b9cf89-q7lv9" event={"ID":"413f0882-e8fb-47a6-939a-576f0ccc09f2","Type":"ContainerStarted","Data":"3d0e844e7ac771e310cbe608ef5499800bd372b8a7c0b674dd535aeab6601b74"} Feb 17 21:04:46 crc kubenswrapper[4793]: I0217 21:04:46.505188 4793 scope.go:117] "RemoveContainer" containerID="851a91d22dd4a0bffdc7c49c5d6609adb09c67368cc712f19b587c0f24ee1553" Feb 17 21:04:46 crc kubenswrapper[4793]: I0217 21:04:46.505490 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77b7b9cf89-q7lv9" Feb 17 21:04:46 crc kubenswrapper[4793]: E0217 21:04:46.505979 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:04:46 crc kubenswrapper[4793]: I0217 21:04:46.531869 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77b7b9cf89-q7lv9" podStartSLOduration=2.531849084 podStartE2EDuration="2.531849084s" podCreationTimestamp="2026-02-17 21:04:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 21:04:46.521641423 +0000 UTC m=+3361.813339744" watchObservedRunningTime="2026-02-17 21:04:46.531849084 +0000 UTC 
m=+3361.823547405" Feb 17 21:04:47 crc kubenswrapper[4793]: I0217 21:04:47.563338 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24051b7d-6e2d-41f9-b38b-e5afb3937af9" path="/var/lib/kubelet/pods/24051b7d-6e2d-41f9-b38b-e5afb3937af9/volumes" Feb 17 21:04:50 crc kubenswrapper[4793]: I0217 21:04:50.102050 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 21:04:50 crc kubenswrapper[4793]: I0217 21:04:50.102725 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 21:04:50 crc kubenswrapper[4793]: I0217 21:04:50.102789 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" Feb 17 21:04:50 crc kubenswrapper[4793]: I0217 21:04:50.103902 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9df5ee479ef9692a62e5c2da98082da72107bf4c0ed624a84085c35ae2963246"} pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 21:04:50 crc kubenswrapper[4793]: I0217 21:04:50.103998 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" 
containerID="cri-o://9df5ee479ef9692a62e5c2da98082da72107bf4c0ed624a84085c35ae2963246" gracePeriod=600 Feb 17 21:04:50 crc kubenswrapper[4793]: I0217 21:04:50.556153 4793 generic.go:334] "Generic (PLEG): container finished" podID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerID="9df5ee479ef9692a62e5c2da98082da72107bf4c0ed624a84085c35ae2963246" exitCode=0 Feb 17 21:04:50 crc kubenswrapper[4793]: I0217 21:04:50.556224 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerDied","Data":"9df5ee479ef9692a62e5c2da98082da72107bf4c0ed624a84085c35ae2963246"} Feb 17 21:04:50 crc kubenswrapper[4793]: I0217 21:04:50.556673 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerStarted","Data":"7796641c695371650a96e90298de39c311ff235164cc763f3489adc1bdb31328"} Feb 17 21:04:50 crc kubenswrapper[4793]: I0217 21:04:50.556745 4793 scope.go:117] "RemoveContainer" containerID="112ebd7a8b03931981411cafe86346aa245df9facfa9f0e7e930cbc0e71b7747" Feb 17 21:04:54 crc kubenswrapper[4793]: I0217 21:04:54.496139 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-77b7b9cf89-q7lv9" Feb 17 21:04:54 crc kubenswrapper[4793]: I0217 21:04:54.555640 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bfbff7c47-qr4xc"] Feb 17 21:04:54 crc kubenswrapper[4793]: I0217 21:04:54.555910 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7bfbff7c47-qr4xc" podUID="cda505f6-0edc-4637-9077-a8c2a0ac6dfa" containerName="dnsmasq-dns" containerID="cri-o://45e87588f7224b38c1fefca6f4fd94822eedbb93c2d5e357fbd51e0edb80f836" gracePeriod=10 Feb 17 21:04:55 crc kubenswrapper[4793]: I0217 21:04:55.072202 4793 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bfbff7c47-qr4xc" Feb 17 21:04:55 crc kubenswrapper[4793]: I0217 21:04:55.252497 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cda505f6-0edc-4637-9077-a8c2a0ac6dfa-dns-svc\") pod \"cda505f6-0edc-4637-9077-a8c2a0ac6dfa\" (UID: \"cda505f6-0edc-4637-9077-a8c2a0ac6dfa\") " Feb 17 21:04:55 crc kubenswrapper[4793]: I0217 21:04:55.253007 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/cda505f6-0edc-4637-9077-a8c2a0ac6dfa-openstack-edpm-ipam\") pod \"cda505f6-0edc-4637-9077-a8c2a0ac6dfa\" (UID: \"cda505f6-0edc-4637-9077-a8c2a0ac6dfa\") " Feb 17 21:04:55 crc kubenswrapper[4793]: I0217 21:04:55.253167 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cda505f6-0edc-4637-9077-a8c2a0ac6dfa-ovsdbserver-nb\") pod \"cda505f6-0edc-4637-9077-a8c2a0ac6dfa\" (UID: \"cda505f6-0edc-4637-9077-a8c2a0ac6dfa\") " Feb 17 21:04:55 crc kubenswrapper[4793]: I0217 21:04:55.253319 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cda505f6-0edc-4637-9077-a8c2a0ac6dfa-ovsdbserver-sb\") pod \"cda505f6-0edc-4637-9077-a8c2a0ac6dfa\" (UID: \"cda505f6-0edc-4637-9077-a8c2a0ac6dfa\") " Feb 17 21:04:55 crc kubenswrapper[4793]: I0217 21:04:55.253420 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cda505f6-0edc-4637-9077-a8c2a0ac6dfa-config\") pod \"cda505f6-0edc-4637-9077-a8c2a0ac6dfa\" (UID: \"cda505f6-0edc-4637-9077-a8c2a0ac6dfa\") " Feb 17 21:04:55 crc kubenswrapper[4793]: I0217 21:04:55.253736 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-pdjpw\" (UniqueName: \"kubernetes.io/projected/cda505f6-0edc-4637-9077-a8c2a0ac6dfa-kube-api-access-pdjpw\") pod \"cda505f6-0edc-4637-9077-a8c2a0ac6dfa\" (UID: \"cda505f6-0edc-4637-9077-a8c2a0ac6dfa\") " Feb 17 21:04:55 crc kubenswrapper[4793]: I0217 21:04:55.253851 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cda505f6-0edc-4637-9077-a8c2a0ac6dfa-dns-swift-storage-0\") pod \"cda505f6-0edc-4637-9077-a8c2a0ac6dfa\" (UID: \"cda505f6-0edc-4637-9077-a8c2a0ac6dfa\") " Feb 17 21:04:55 crc kubenswrapper[4793]: I0217 21:04:55.262157 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cda505f6-0edc-4637-9077-a8c2a0ac6dfa-kube-api-access-pdjpw" (OuterVolumeSpecName: "kube-api-access-pdjpw") pod "cda505f6-0edc-4637-9077-a8c2a0ac6dfa" (UID: "cda505f6-0edc-4637-9077-a8c2a0ac6dfa"). InnerVolumeSpecName "kube-api-access-pdjpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 21:04:55 crc kubenswrapper[4793]: I0217 21:04:55.313474 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cda505f6-0edc-4637-9077-a8c2a0ac6dfa-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cda505f6-0edc-4637-9077-a8c2a0ac6dfa" (UID: "cda505f6-0edc-4637-9077-a8c2a0ac6dfa"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 21:04:55 crc kubenswrapper[4793]: I0217 21:04:55.314787 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cda505f6-0edc-4637-9077-a8c2a0ac6dfa-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cda505f6-0edc-4637-9077-a8c2a0ac6dfa" (UID: "cda505f6-0edc-4637-9077-a8c2a0ac6dfa"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 21:04:55 crc kubenswrapper[4793]: I0217 21:04:55.315513 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cda505f6-0edc-4637-9077-a8c2a0ac6dfa-config" (OuterVolumeSpecName: "config") pod "cda505f6-0edc-4637-9077-a8c2a0ac6dfa" (UID: "cda505f6-0edc-4637-9077-a8c2a0ac6dfa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 21:04:55 crc kubenswrapper[4793]: I0217 21:04:55.321042 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cda505f6-0edc-4637-9077-a8c2a0ac6dfa-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cda505f6-0edc-4637-9077-a8c2a0ac6dfa" (UID: "cda505f6-0edc-4637-9077-a8c2a0ac6dfa"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 21:04:55 crc kubenswrapper[4793]: I0217 21:04:55.324445 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cda505f6-0edc-4637-9077-a8c2a0ac6dfa-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cda505f6-0edc-4637-9077-a8c2a0ac6dfa" (UID: "cda505f6-0edc-4637-9077-a8c2a0ac6dfa"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 21:04:55 crc kubenswrapper[4793]: I0217 21:04:55.352619 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cda505f6-0edc-4637-9077-a8c2a0ac6dfa-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "cda505f6-0edc-4637-9077-a8c2a0ac6dfa" (UID: "cda505f6-0edc-4637-9077-a8c2a0ac6dfa"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 21:04:55 crc kubenswrapper[4793]: I0217 21:04:55.356796 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdjpw\" (UniqueName: \"kubernetes.io/projected/cda505f6-0edc-4637-9077-a8c2a0ac6dfa-kube-api-access-pdjpw\") on node \"crc\" DevicePath \"\"" Feb 17 21:04:55 crc kubenswrapper[4793]: I0217 21:04:55.356828 4793 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cda505f6-0edc-4637-9077-a8c2a0ac6dfa-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 21:04:55 crc kubenswrapper[4793]: I0217 21:04:55.356840 4793 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cda505f6-0edc-4637-9077-a8c2a0ac6dfa-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 21:04:55 crc kubenswrapper[4793]: I0217 21:04:55.356850 4793 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/cda505f6-0edc-4637-9077-a8c2a0ac6dfa-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 21:04:55 crc kubenswrapper[4793]: I0217 21:04:55.356859 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cda505f6-0edc-4637-9077-a8c2a0ac6dfa-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 21:04:55 crc kubenswrapper[4793]: I0217 21:04:55.356867 4793 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cda505f6-0edc-4637-9077-a8c2a0ac6dfa-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 21:04:55 crc kubenswrapper[4793]: I0217 21:04:55.356875 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cda505f6-0edc-4637-9077-a8c2a0ac6dfa-config\") on node \"crc\" DevicePath \"\"" Feb 17 21:04:55 crc kubenswrapper[4793]: I0217 21:04:55.622457 
4793 generic.go:334] "Generic (PLEG): container finished" podID="cda505f6-0edc-4637-9077-a8c2a0ac6dfa" containerID="45e87588f7224b38c1fefca6f4fd94822eedbb93c2d5e357fbd51e0edb80f836" exitCode=0 Feb 17 21:04:55 crc kubenswrapper[4793]: I0217 21:04:55.622564 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bfbff7c47-qr4xc" Feb 17 21:04:55 crc kubenswrapper[4793]: I0217 21:04:55.622572 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bfbff7c47-qr4xc" event={"ID":"cda505f6-0edc-4637-9077-a8c2a0ac6dfa","Type":"ContainerDied","Data":"45e87588f7224b38c1fefca6f4fd94822eedbb93c2d5e357fbd51e0edb80f836"} Feb 17 21:04:55 crc kubenswrapper[4793]: I0217 21:04:55.623790 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bfbff7c47-qr4xc" event={"ID":"cda505f6-0edc-4637-9077-a8c2a0ac6dfa","Type":"ContainerDied","Data":"8536d0273c44b81da22ca3751f649d3179fe3102c493596ea27a2c5e26a0cfa4"} Feb 17 21:04:55 crc kubenswrapper[4793]: I0217 21:04:55.623840 4793 scope.go:117] "RemoveContainer" containerID="45e87588f7224b38c1fefca6f4fd94822eedbb93c2d5e357fbd51e0edb80f836" Feb 17 21:04:55 crc kubenswrapper[4793]: I0217 21:04:55.656888 4793 scope.go:117] "RemoveContainer" containerID="41d985cbda58f85244e0121830e1ed162ce2505f030c31cd2acc0c2785f11c04" Feb 17 21:04:55 crc kubenswrapper[4793]: I0217 21:04:55.660635 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bfbff7c47-qr4xc"] Feb 17 21:04:55 crc kubenswrapper[4793]: I0217 21:04:55.671874 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7bfbff7c47-qr4xc"] Feb 17 21:04:55 crc kubenswrapper[4793]: I0217 21:04:55.696238 4793 scope.go:117] "RemoveContainer" containerID="45e87588f7224b38c1fefca6f4fd94822eedbb93c2d5e357fbd51e0edb80f836" Feb 17 21:04:55 crc kubenswrapper[4793]: E0217 21:04:55.697015 4793 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"45e87588f7224b38c1fefca6f4fd94822eedbb93c2d5e357fbd51e0edb80f836\": container with ID starting with 45e87588f7224b38c1fefca6f4fd94822eedbb93c2d5e357fbd51e0edb80f836 not found: ID does not exist" containerID="45e87588f7224b38c1fefca6f4fd94822eedbb93c2d5e357fbd51e0edb80f836" Feb 17 21:04:55 crc kubenswrapper[4793]: I0217 21:04:55.697088 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45e87588f7224b38c1fefca6f4fd94822eedbb93c2d5e357fbd51e0edb80f836"} err="failed to get container status \"45e87588f7224b38c1fefca6f4fd94822eedbb93c2d5e357fbd51e0edb80f836\": rpc error: code = NotFound desc = could not find container \"45e87588f7224b38c1fefca6f4fd94822eedbb93c2d5e357fbd51e0edb80f836\": container with ID starting with 45e87588f7224b38c1fefca6f4fd94822eedbb93c2d5e357fbd51e0edb80f836 not found: ID does not exist" Feb 17 21:04:55 crc kubenswrapper[4793]: I0217 21:04:55.697132 4793 scope.go:117] "RemoveContainer" containerID="41d985cbda58f85244e0121830e1ed162ce2505f030c31cd2acc0c2785f11c04" Feb 17 21:04:55 crc kubenswrapper[4793]: E0217 21:04:55.697666 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41d985cbda58f85244e0121830e1ed162ce2505f030c31cd2acc0c2785f11c04\": container with ID starting with 41d985cbda58f85244e0121830e1ed162ce2505f030c31cd2acc0c2785f11c04 not found: ID does not exist" containerID="41d985cbda58f85244e0121830e1ed162ce2505f030c31cd2acc0c2785f11c04" Feb 17 21:04:55 crc kubenswrapper[4793]: I0217 21:04:55.697787 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41d985cbda58f85244e0121830e1ed162ce2505f030c31cd2acc0c2785f11c04"} err="failed to get container status \"41d985cbda58f85244e0121830e1ed162ce2505f030c31cd2acc0c2785f11c04\": rpc error: code = NotFound desc = could not find container 
\"41d985cbda58f85244e0121830e1ed162ce2505f030c31cd2acc0c2785f11c04\": container with ID starting with 41d985cbda58f85244e0121830e1ed162ce2505f030c31cd2acc0c2785f11c04 not found: ID does not exist" Feb 17 21:04:57 crc kubenswrapper[4793]: I0217 21:04:57.555495 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cda505f6-0edc-4637-9077-a8c2a0ac6dfa" path="/var/lib/kubelet/pods/cda505f6-0edc-4637-9077-a8c2a0ac6dfa/volumes" Feb 17 21:04:58 crc kubenswrapper[4793]: I0217 21:04:58.540300 4793 scope.go:117] "RemoveContainer" containerID="851a91d22dd4a0bffdc7c49c5d6609adb09c67368cc712f19b587c0f24ee1553" Feb 17 21:04:58 crc kubenswrapper[4793]: E0217 21:04:58.540989 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:05:01 crc kubenswrapper[4793]: I0217 21:05:01.699517 4793 generic.go:334] "Generic (PLEG): container finished" podID="080e91ed-ca1e-4b2a-948a-7f4e5ded62f6" containerID="fc1fc89097994a1f72a06f75ea0d83e88bdf3ab4fb1a87d7232b96ed7d7543ef" exitCode=0 Feb 17 21:05:01 crc kubenswrapper[4793]: I0217 21:05:01.702830 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"080e91ed-ca1e-4b2a-948a-7f4e5ded62f6","Type":"ContainerDied","Data":"fc1fc89097994a1f72a06f75ea0d83e88bdf3ab4fb1a87d7232b96ed7d7543ef"} Feb 17 21:05:01 crc kubenswrapper[4793]: I0217 21:05:01.720082 4793 generic.go:334] "Generic (PLEG): container finished" podID="3bf6d755-b67b-421f-8405-350b53e03a92" containerID="8085e2c4888a0d1ec6a5d7b25e2dbb41a7df7394bba1fa4a0a9e46d37064bec5" exitCode=0 Feb 17 21:05:01 crc kubenswrapper[4793]: I0217 21:05:01.720154 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-server-0" event={"ID":"3bf6d755-b67b-421f-8405-350b53e03a92","Type":"ContainerDied","Data":"8085e2c4888a0d1ec6a5d7b25e2dbb41a7df7394bba1fa4a0a9e46d37064bec5"} Feb 17 21:05:02 crc kubenswrapper[4793]: I0217 21:05:02.728847 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3bf6d755-b67b-421f-8405-350b53e03a92","Type":"ContainerStarted","Data":"314402d73076825cc2a052e5e5e09201f492daf2c4b1dd6dec53a087c84530a8"} Feb 17 21:05:02 crc kubenswrapper[4793]: I0217 21:05:02.729250 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 17 21:05:02 crc kubenswrapper[4793]: I0217 21:05:02.731482 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"080e91ed-ca1e-4b2a-948a-7f4e5ded62f6","Type":"ContainerStarted","Data":"10de4e6a52bd2ed539088fefb3ae33d37d30d7882aae9550cb17a2b211ab4bb4"} Feb 17 21:05:02 crc kubenswrapper[4793]: I0217 21:05:02.731645 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 17 21:05:02 crc kubenswrapper[4793]: I0217 21:05:02.776336 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.776312112 podStartE2EDuration="36.776312112s" podCreationTimestamp="2026-02-17 21:04:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 21:05:02.769184996 +0000 UTC m=+3378.060883317" watchObservedRunningTime="2026-02-17 21:05:02.776312112 +0000 UTC m=+3378.068010423" Feb 17 21:05:02 crc kubenswrapper[4793]: I0217 21:05:02.805993 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.805974253 podStartE2EDuration="36.805974253s" podCreationTimestamp="2026-02-17 21:04:26 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 21:05:02.805746017 +0000 UTC m=+3378.097444328" watchObservedRunningTime="2026-02-17 21:05:02.805974253 +0000 UTC m=+3378.097672574" Feb 17 21:05:11 crc kubenswrapper[4793]: I0217 21:05:11.539132 4793 scope.go:117] "RemoveContainer" containerID="851a91d22dd4a0bffdc7c49c5d6609adb09c67368cc712f19b587c0f24ee1553" Feb 17 21:05:11 crc kubenswrapper[4793]: I0217 21:05:11.814785 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerStarted","Data":"cd8d38e7a01ad1ae360e088c5ee98372c8db231f2695466dc3d2feb053ed3c0d"} Feb 17 21:05:12 crc kubenswrapper[4793]: I0217 21:05:12.545242 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-njncr"] Feb 17 21:05:12 crc kubenswrapper[4793]: E0217 21:05:12.545659 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cda505f6-0edc-4637-9077-a8c2a0ac6dfa" containerName="dnsmasq-dns" Feb 17 21:05:12 crc kubenswrapper[4793]: I0217 21:05:12.545673 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="cda505f6-0edc-4637-9077-a8c2a0ac6dfa" containerName="dnsmasq-dns" Feb 17 21:05:12 crc kubenswrapper[4793]: E0217 21:05:12.545710 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24051b7d-6e2d-41f9-b38b-e5afb3937af9" containerName="init" Feb 17 21:05:12 crc kubenswrapper[4793]: I0217 21:05:12.545720 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="24051b7d-6e2d-41f9-b38b-e5afb3937af9" containerName="init" Feb 17 21:05:12 crc kubenswrapper[4793]: E0217 21:05:12.545735 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24051b7d-6e2d-41f9-b38b-e5afb3937af9" containerName="dnsmasq-dns" Feb 17 21:05:12 crc kubenswrapper[4793]: I0217 21:05:12.545744 4793 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="24051b7d-6e2d-41f9-b38b-e5afb3937af9" containerName="dnsmasq-dns" Feb 17 21:05:12 crc kubenswrapper[4793]: E0217 21:05:12.545764 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cda505f6-0edc-4637-9077-a8c2a0ac6dfa" containerName="init" Feb 17 21:05:12 crc kubenswrapper[4793]: I0217 21:05:12.545771 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="cda505f6-0edc-4637-9077-a8c2a0ac6dfa" containerName="init" Feb 17 21:05:12 crc kubenswrapper[4793]: I0217 21:05:12.545974 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="cda505f6-0edc-4637-9077-a8c2a0ac6dfa" containerName="dnsmasq-dns" Feb 17 21:05:12 crc kubenswrapper[4793]: I0217 21:05:12.545991 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="24051b7d-6e2d-41f9-b38b-e5afb3937af9" containerName="dnsmasq-dns" Feb 17 21:05:12 crc kubenswrapper[4793]: I0217 21:05:12.546787 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-njncr" Feb 17 21:05:12 crc kubenswrapper[4793]: I0217 21:05:12.548884 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 21:05:12 crc kubenswrapper[4793]: I0217 21:05:12.548980 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6zcmz" Feb 17 21:05:12 crc kubenswrapper[4793]: I0217 21:05:12.549146 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 21:05:12 crc kubenswrapper[4793]: I0217 21:05:12.549238 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 21:05:12 crc kubenswrapper[4793]: I0217 21:05:12.568216 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-njncr"] Feb 17 21:05:12 
crc kubenswrapper[4793]: I0217 21:05:12.734384 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1be07a95-98d1-4655-a7d8-f851afe8a947-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-njncr\" (UID: \"1be07a95-98d1-4655-a7d8-f851afe8a947\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-njncr" Feb 17 21:05:12 crc kubenswrapper[4793]: I0217 21:05:12.734452 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfhvc\" (UniqueName: \"kubernetes.io/projected/1be07a95-98d1-4655-a7d8-f851afe8a947-kube-api-access-mfhvc\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-njncr\" (UID: \"1be07a95-98d1-4655-a7d8-f851afe8a947\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-njncr" Feb 17 21:05:12 crc kubenswrapper[4793]: I0217 21:05:12.734624 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1be07a95-98d1-4655-a7d8-f851afe8a947-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-njncr\" (UID: \"1be07a95-98d1-4655-a7d8-f851afe8a947\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-njncr" Feb 17 21:05:12 crc kubenswrapper[4793]: I0217 21:05:12.734762 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1be07a95-98d1-4655-a7d8-f851afe8a947-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-njncr\" (UID: \"1be07a95-98d1-4655-a7d8-f851afe8a947\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-njncr" Feb 17 21:05:12 crc kubenswrapper[4793]: I0217 21:05:12.836912 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1be07a95-98d1-4655-a7d8-f851afe8a947-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-njncr\" (UID: \"1be07a95-98d1-4655-a7d8-f851afe8a947\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-njncr" Feb 17 21:05:12 crc kubenswrapper[4793]: I0217 21:05:12.837311 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfhvc\" (UniqueName: \"kubernetes.io/projected/1be07a95-98d1-4655-a7d8-f851afe8a947-kube-api-access-mfhvc\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-njncr\" (UID: \"1be07a95-98d1-4655-a7d8-f851afe8a947\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-njncr" Feb 17 21:05:12 crc kubenswrapper[4793]: I0217 21:05:12.837390 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1be07a95-98d1-4655-a7d8-f851afe8a947-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-njncr\" (UID: \"1be07a95-98d1-4655-a7d8-f851afe8a947\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-njncr" Feb 17 21:05:12 crc kubenswrapper[4793]: I0217 21:05:12.837436 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1be07a95-98d1-4655-a7d8-f851afe8a947-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-njncr\" (UID: \"1be07a95-98d1-4655-a7d8-f851afe8a947\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-njncr" Feb 17 21:05:12 crc kubenswrapper[4793]: I0217 21:05:12.848550 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1be07a95-98d1-4655-a7d8-f851afe8a947-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-njncr\" 
(UID: \"1be07a95-98d1-4655-a7d8-f851afe8a947\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-njncr" Feb 17 21:05:12 crc kubenswrapper[4793]: I0217 21:05:12.848660 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1be07a95-98d1-4655-a7d8-f851afe8a947-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-njncr\" (UID: \"1be07a95-98d1-4655-a7d8-f851afe8a947\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-njncr" Feb 17 21:05:12 crc kubenswrapper[4793]: I0217 21:05:12.858824 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1be07a95-98d1-4655-a7d8-f851afe8a947-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-njncr\" (UID: \"1be07a95-98d1-4655-a7d8-f851afe8a947\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-njncr" Feb 17 21:05:12 crc kubenswrapper[4793]: I0217 21:05:12.860852 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfhvc\" (UniqueName: \"kubernetes.io/projected/1be07a95-98d1-4655-a7d8-f851afe8a947-kube-api-access-mfhvc\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-njncr\" (UID: \"1be07a95-98d1-4655-a7d8-f851afe8a947\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-njncr" Feb 17 21:05:12 crc kubenswrapper[4793]: I0217 21:05:12.870783 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-njncr" Feb 17 21:05:13 crc kubenswrapper[4793]: I0217 21:05:13.482699 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-njncr"] Feb 17 21:05:13 crc kubenswrapper[4793]: W0217 21:05:13.488860 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1be07a95_98d1_4655_a7d8_f851afe8a947.slice/crio-7ed3511c48d56c566fcc9f079f612579ec50adba0a350a8da88d3b9cb505a5ad WatchSource:0}: Error finding container 7ed3511c48d56c566fcc9f079f612579ec50adba0a350a8da88d3b9cb505a5ad: Status 404 returned error can't find the container with id 7ed3511c48d56c566fcc9f079f612579ec50adba0a350a8da88d3b9cb505a5ad Feb 17 21:05:13 crc kubenswrapper[4793]: I0217 21:05:13.834424 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-njncr" event={"ID":"1be07a95-98d1-4655-a7d8-f851afe8a947","Type":"ContainerStarted","Data":"7ed3511c48d56c566fcc9f079f612579ec50adba0a350a8da88d3b9cb505a5ad"} Feb 17 21:05:14 crc kubenswrapper[4793]: I0217 21:05:14.845744 4793 generic.go:334] "Generic (PLEG): container finished" podID="02d26164-0fa4-4020-9224-b7760a490987" containerID="cd8d38e7a01ad1ae360e088c5ee98372c8db231f2695466dc3d2feb053ed3c0d" exitCode=1 Feb 17 21:05:14 crc kubenswrapper[4793]: I0217 21:05:14.845804 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerDied","Data":"cd8d38e7a01ad1ae360e088c5ee98372c8db231f2695466dc3d2feb053ed3c0d"} Feb 17 21:05:14 crc kubenswrapper[4793]: I0217 21:05:14.845850 4793 scope.go:117] "RemoveContainer" containerID="851a91d22dd4a0bffdc7c49c5d6609adb09c67368cc712f19b587c0f24ee1553" Feb 17 21:05:14 crc kubenswrapper[4793]: I0217 21:05:14.846757 4793 scope.go:117] "RemoveContainer" 
containerID="cd8d38e7a01ad1ae360e088c5ee98372c8db231f2695466dc3d2feb053ed3c0d" Feb 17 21:05:14 crc kubenswrapper[4793]: E0217 21:05:14.847137 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 40s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:05:15 crc kubenswrapper[4793]: I0217 21:05:15.595855 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 21:05:15 crc kubenswrapper[4793]: I0217 21:05:15.596200 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 21:05:15 crc kubenswrapper[4793]: I0217 21:05:15.596211 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 21:05:15 crc kubenswrapper[4793]: I0217 21:05:15.596223 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 17 21:05:15 crc kubenswrapper[4793]: I0217 21:05:15.857469 4793 scope.go:117] "RemoveContainer" containerID="cd8d38e7a01ad1ae360e088c5ee98372c8db231f2695466dc3d2feb053ed3c0d" Feb 17 21:05:15 crc kubenswrapper[4793]: E0217 21:05:15.857721 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 40s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:05:16 crc kubenswrapper[4793]: I0217 21:05:16.749030 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 17 21:05:16 crc kubenswrapper[4793]: I0217 
21:05:16.814972 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 17 21:05:23 crc kubenswrapper[4793]: I0217 21:05:23.689647 4793 scope.go:117] "RemoveContainer" containerID="40377059c240f331d382e77328c9033e9d4a3da20c0da77e8e3b0cf6804a24b0" Feb 17 21:05:23 crc kubenswrapper[4793]: I0217 21:05:23.719528 4793 scope.go:117] "RemoveContainer" containerID="e9f214418cb401df68682faa9adc2e5dfeea2a0cb1b5bd8cf604fbbd37e27f98" Feb 17 21:05:23 crc kubenswrapper[4793]: I0217 21:05:23.745232 4793 scope.go:117] "RemoveContainer" containerID="730d2c79651f19c8eaeeb42546c4e843d9972999df7621c641d7093cb41bc2f8" Feb 17 21:05:23 crc kubenswrapper[4793]: I0217 21:05:23.771087 4793 scope.go:117] "RemoveContainer" containerID="230e80f95b84e476421f8af18e8658b8d127727464a442cc06d1ad632b99e031" Feb 17 21:05:23 crc kubenswrapper[4793]: I0217 21:05:23.941619 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-njncr" event={"ID":"1be07a95-98d1-4655-a7d8-f851afe8a947","Type":"ContainerStarted","Data":"7f8ca0c070a77aae44b84e2ef7e8213ebf0e79968b2b15cba0df45e00602ef2f"} Feb 17 21:05:23 crc kubenswrapper[4793]: I0217 21:05:23.972560 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-njncr" podStartSLOduration=2.204950302 podStartE2EDuration="11.972530678s" podCreationTimestamp="2026-02-17 21:05:12 +0000 UTC" firstStartedPulling="2026-02-17 21:05:13.491442975 +0000 UTC m=+3388.783141296" lastFinishedPulling="2026-02-17 21:05:23.259023331 +0000 UTC m=+3398.550721672" observedRunningTime="2026-02-17 21:05:23.96813604 +0000 UTC m=+3399.259834381" watchObservedRunningTime="2026-02-17 21:05:23.972530678 +0000 UTC m=+3399.264229029" Feb 17 21:05:30 crc kubenswrapper[4793]: I0217 21:05:30.540042 4793 scope.go:117] "RemoveContainer" 
containerID="cd8d38e7a01ad1ae360e088c5ee98372c8db231f2695466dc3d2feb053ed3c0d" Feb 17 21:05:30 crc kubenswrapper[4793]: E0217 21:05:30.541042 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 40s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:05:35 crc kubenswrapper[4793]: I0217 21:05:35.067629 4793 generic.go:334] "Generic (PLEG): container finished" podID="1be07a95-98d1-4655-a7d8-f851afe8a947" containerID="7f8ca0c070a77aae44b84e2ef7e8213ebf0e79968b2b15cba0df45e00602ef2f" exitCode=0 Feb 17 21:05:35 crc kubenswrapper[4793]: I0217 21:05:35.067768 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-njncr" event={"ID":"1be07a95-98d1-4655-a7d8-f851afe8a947","Type":"ContainerDied","Data":"7f8ca0c070a77aae44b84e2ef7e8213ebf0e79968b2b15cba0df45e00602ef2f"} Feb 17 21:05:36 crc kubenswrapper[4793]: I0217 21:05:36.641364 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-njncr" Feb 17 21:05:36 crc kubenswrapper[4793]: I0217 21:05:36.777042 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1be07a95-98d1-4655-a7d8-f851afe8a947-ssh-key-openstack-edpm-ipam\") pod \"1be07a95-98d1-4655-a7d8-f851afe8a947\" (UID: \"1be07a95-98d1-4655-a7d8-f851afe8a947\") " Feb 17 21:05:36 crc kubenswrapper[4793]: I0217 21:05:36.777121 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1be07a95-98d1-4655-a7d8-f851afe8a947-inventory\") pod \"1be07a95-98d1-4655-a7d8-f851afe8a947\" (UID: \"1be07a95-98d1-4655-a7d8-f851afe8a947\") " Feb 17 21:05:36 crc kubenswrapper[4793]: I0217 21:05:36.777296 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfhvc\" (UniqueName: \"kubernetes.io/projected/1be07a95-98d1-4655-a7d8-f851afe8a947-kube-api-access-mfhvc\") pod \"1be07a95-98d1-4655-a7d8-f851afe8a947\" (UID: \"1be07a95-98d1-4655-a7d8-f851afe8a947\") " Feb 17 21:05:36 crc kubenswrapper[4793]: I0217 21:05:36.777347 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1be07a95-98d1-4655-a7d8-f851afe8a947-repo-setup-combined-ca-bundle\") pod \"1be07a95-98d1-4655-a7d8-f851afe8a947\" (UID: \"1be07a95-98d1-4655-a7d8-f851afe8a947\") " Feb 17 21:05:36 crc kubenswrapper[4793]: I0217 21:05:36.784136 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1be07a95-98d1-4655-a7d8-f851afe8a947-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "1be07a95-98d1-4655-a7d8-f851afe8a947" (UID: "1be07a95-98d1-4655-a7d8-f851afe8a947"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 21:05:36 crc kubenswrapper[4793]: I0217 21:05:36.792929 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1be07a95-98d1-4655-a7d8-f851afe8a947-kube-api-access-mfhvc" (OuterVolumeSpecName: "kube-api-access-mfhvc") pod "1be07a95-98d1-4655-a7d8-f851afe8a947" (UID: "1be07a95-98d1-4655-a7d8-f851afe8a947"). InnerVolumeSpecName "kube-api-access-mfhvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 21:05:36 crc kubenswrapper[4793]: I0217 21:05:36.814110 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1be07a95-98d1-4655-a7d8-f851afe8a947-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1be07a95-98d1-4655-a7d8-f851afe8a947" (UID: "1be07a95-98d1-4655-a7d8-f851afe8a947"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 21:05:36 crc kubenswrapper[4793]: I0217 21:05:36.814798 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1be07a95-98d1-4655-a7d8-f851afe8a947-inventory" (OuterVolumeSpecName: "inventory") pod "1be07a95-98d1-4655-a7d8-f851afe8a947" (UID: "1be07a95-98d1-4655-a7d8-f851afe8a947"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 21:05:36 crc kubenswrapper[4793]: I0217 21:05:36.879654 4793 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1be07a95-98d1-4655-a7d8-f851afe8a947-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 21:05:36 crc kubenswrapper[4793]: I0217 21:05:36.879707 4793 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1be07a95-98d1-4655-a7d8-f851afe8a947-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 21:05:36 crc kubenswrapper[4793]: I0217 21:05:36.879721 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfhvc\" (UniqueName: \"kubernetes.io/projected/1be07a95-98d1-4655-a7d8-f851afe8a947-kube-api-access-mfhvc\") on node \"crc\" DevicePath \"\"" Feb 17 21:05:36 crc kubenswrapper[4793]: I0217 21:05:36.879734 4793 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1be07a95-98d1-4655-a7d8-f851afe8a947-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 21:05:37 crc kubenswrapper[4793]: I0217 21:05:37.095630 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-njncr" event={"ID":"1be07a95-98d1-4655-a7d8-f851afe8a947","Type":"ContainerDied","Data":"7ed3511c48d56c566fcc9f079f612579ec50adba0a350a8da88d3b9cb505a5ad"} Feb 17 21:05:37 crc kubenswrapper[4793]: I0217 21:05:37.095757 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ed3511c48d56c566fcc9f079f612579ec50adba0a350a8da88d3b9cb505a5ad" Feb 17 21:05:37 crc kubenswrapper[4793]: I0217 21:05:37.095811 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-njncr" Feb 17 21:05:37 crc kubenswrapper[4793]: I0217 21:05:37.236203 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-d5hnw"] Feb 17 21:05:37 crc kubenswrapper[4793]: E0217 21:05:37.236836 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1be07a95-98d1-4655-a7d8-f851afe8a947" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 17 21:05:37 crc kubenswrapper[4793]: I0217 21:05:37.236860 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="1be07a95-98d1-4655-a7d8-f851afe8a947" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 17 21:05:37 crc kubenswrapper[4793]: I0217 21:05:37.237521 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="1be07a95-98d1-4655-a7d8-f851afe8a947" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 17 21:05:37 crc kubenswrapper[4793]: I0217 21:05:37.238618 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d5hnw" Feb 17 21:05:37 crc kubenswrapper[4793]: I0217 21:05:37.241795 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 21:05:37 crc kubenswrapper[4793]: I0217 21:05:37.242059 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 21:05:37 crc kubenswrapper[4793]: I0217 21:05:37.242139 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6zcmz" Feb 17 21:05:37 crc kubenswrapper[4793]: I0217 21:05:37.243960 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 21:05:37 crc kubenswrapper[4793]: I0217 21:05:37.253440 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-d5hnw"] Feb 17 21:05:37 crc kubenswrapper[4793]: I0217 21:05:37.389017 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmvpw\" (UniqueName: \"kubernetes.io/projected/b0a9d8f0-3566-420a-b070-5a86b798dbee-kube-api-access-qmvpw\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d5hnw\" (UID: \"b0a9d8f0-3566-420a-b070-5a86b798dbee\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d5hnw" Feb 17 21:05:37 crc kubenswrapper[4793]: I0217 21:05:37.389219 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b0a9d8f0-3566-420a-b070-5a86b798dbee-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d5hnw\" (UID: \"b0a9d8f0-3566-420a-b070-5a86b798dbee\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d5hnw" Feb 17 21:05:37 crc kubenswrapper[4793]: I0217 21:05:37.389323 4793 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b0a9d8f0-3566-420a-b070-5a86b798dbee-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d5hnw\" (UID: \"b0a9d8f0-3566-420a-b070-5a86b798dbee\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d5hnw" Feb 17 21:05:37 crc kubenswrapper[4793]: I0217 21:05:37.491056 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b0a9d8f0-3566-420a-b070-5a86b798dbee-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d5hnw\" (UID: \"b0a9d8f0-3566-420a-b070-5a86b798dbee\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d5hnw" Feb 17 21:05:37 crc kubenswrapper[4793]: I0217 21:05:37.491176 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b0a9d8f0-3566-420a-b070-5a86b798dbee-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d5hnw\" (UID: \"b0a9d8f0-3566-420a-b070-5a86b798dbee\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d5hnw" Feb 17 21:05:37 crc kubenswrapper[4793]: I0217 21:05:37.491293 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmvpw\" (UniqueName: \"kubernetes.io/projected/b0a9d8f0-3566-420a-b070-5a86b798dbee-kube-api-access-qmvpw\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d5hnw\" (UID: \"b0a9d8f0-3566-420a-b070-5a86b798dbee\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d5hnw" Feb 17 21:05:37 crc kubenswrapper[4793]: I0217 21:05:37.500164 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b0a9d8f0-3566-420a-b070-5a86b798dbee-ssh-key-openstack-edpm-ipam\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-d5hnw\" (UID: \"b0a9d8f0-3566-420a-b070-5a86b798dbee\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d5hnw" Feb 17 21:05:37 crc kubenswrapper[4793]: I0217 21:05:37.501468 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b0a9d8f0-3566-420a-b070-5a86b798dbee-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d5hnw\" (UID: \"b0a9d8f0-3566-420a-b070-5a86b798dbee\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d5hnw" Feb 17 21:05:37 crc kubenswrapper[4793]: I0217 21:05:37.508148 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmvpw\" (UniqueName: \"kubernetes.io/projected/b0a9d8f0-3566-420a-b070-5a86b798dbee-kube-api-access-qmvpw\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d5hnw\" (UID: \"b0a9d8f0-3566-420a-b070-5a86b798dbee\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d5hnw" Feb 17 21:05:37 crc kubenswrapper[4793]: I0217 21:05:37.573064 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d5hnw" Feb 17 21:05:38 crc kubenswrapper[4793]: I0217 21:05:38.123185 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-d5hnw"] Feb 17 21:05:39 crc kubenswrapper[4793]: I0217 21:05:39.130574 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d5hnw" event={"ID":"b0a9d8f0-3566-420a-b070-5a86b798dbee","Type":"ContainerStarted","Data":"08d8d44c346e8b30e7bab3380a1012cd1816ff264b43726916377de0cb1d3438"} Feb 17 21:05:39 crc kubenswrapper[4793]: I0217 21:05:39.130913 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d5hnw" event={"ID":"b0a9d8f0-3566-420a-b070-5a86b798dbee","Type":"ContainerStarted","Data":"2d22c106601a7f9fcb566e6320b1204982326937774178c152e82d2595f67b48"} Feb 17 21:05:39 crc kubenswrapper[4793]: I0217 21:05:39.170854 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d5hnw" podStartSLOduration=1.7022722080000001 podStartE2EDuration="2.170831038s" podCreationTimestamp="2026-02-17 21:05:37 +0000 UTC" firstStartedPulling="2026-02-17 21:05:38.12383687 +0000 UTC m=+3413.415535221" lastFinishedPulling="2026-02-17 21:05:38.5923957 +0000 UTC m=+3413.884094051" observedRunningTime="2026-02-17 21:05:39.153616304 +0000 UTC m=+3414.445314655" watchObservedRunningTime="2026-02-17 21:05:39.170831038 +0000 UTC m=+3414.462529359" Feb 17 21:05:42 crc kubenswrapper[4793]: I0217 21:05:42.171056 4793 generic.go:334] "Generic (PLEG): container finished" podID="b0a9d8f0-3566-420a-b070-5a86b798dbee" containerID="08d8d44c346e8b30e7bab3380a1012cd1816ff264b43726916377de0cb1d3438" exitCode=0 Feb 17 21:05:42 crc kubenswrapper[4793]: I0217 21:05:42.171160 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d5hnw" event={"ID":"b0a9d8f0-3566-420a-b070-5a86b798dbee","Type":"ContainerDied","Data":"08d8d44c346e8b30e7bab3380a1012cd1816ff264b43726916377de0cb1d3438"} Feb 17 21:05:42 crc kubenswrapper[4793]: I0217 21:05:42.538251 4793 scope.go:117] "RemoveContainer" containerID="cd8d38e7a01ad1ae360e088c5ee98372c8db231f2695466dc3d2feb053ed3c0d" Feb 17 21:05:42 crc kubenswrapper[4793]: E0217 21:05:42.538496 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 40s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:05:43 crc kubenswrapper[4793]: I0217 21:05:43.636822 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d5hnw" Feb 17 21:05:43 crc kubenswrapper[4793]: I0217 21:05:43.734248 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b0a9d8f0-3566-420a-b070-5a86b798dbee-ssh-key-openstack-edpm-ipam\") pod \"b0a9d8f0-3566-420a-b070-5a86b798dbee\" (UID: \"b0a9d8f0-3566-420a-b070-5a86b798dbee\") " Feb 17 21:05:43 crc kubenswrapper[4793]: I0217 21:05:43.734420 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b0a9d8f0-3566-420a-b070-5a86b798dbee-inventory\") pod \"b0a9d8f0-3566-420a-b070-5a86b798dbee\" (UID: \"b0a9d8f0-3566-420a-b070-5a86b798dbee\") " Feb 17 21:05:43 crc kubenswrapper[4793]: I0217 21:05:43.734458 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmvpw\" (UniqueName: 
\"kubernetes.io/projected/b0a9d8f0-3566-420a-b070-5a86b798dbee-kube-api-access-qmvpw\") pod \"b0a9d8f0-3566-420a-b070-5a86b798dbee\" (UID: \"b0a9d8f0-3566-420a-b070-5a86b798dbee\") " Feb 17 21:05:43 crc kubenswrapper[4793]: I0217 21:05:43.746414 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0a9d8f0-3566-420a-b070-5a86b798dbee-kube-api-access-qmvpw" (OuterVolumeSpecName: "kube-api-access-qmvpw") pod "b0a9d8f0-3566-420a-b070-5a86b798dbee" (UID: "b0a9d8f0-3566-420a-b070-5a86b798dbee"). InnerVolumeSpecName "kube-api-access-qmvpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 21:05:43 crc kubenswrapper[4793]: I0217 21:05:43.761767 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0a9d8f0-3566-420a-b070-5a86b798dbee-inventory" (OuterVolumeSpecName: "inventory") pod "b0a9d8f0-3566-420a-b070-5a86b798dbee" (UID: "b0a9d8f0-3566-420a-b070-5a86b798dbee"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 21:05:43 crc kubenswrapper[4793]: I0217 21:05:43.762103 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0a9d8f0-3566-420a-b070-5a86b798dbee-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b0a9d8f0-3566-420a-b070-5a86b798dbee" (UID: "b0a9d8f0-3566-420a-b070-5a86b798dbee"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 21:05:43 crc kubenswrapper[4793]: I0217 21:05:43.837438 4793 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b0a9d8f0-3566-420a-b070-5a86b798dbee-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 21:05:43 crc kubenswrapper[4793]: I0217 21:05:43.837526 4793 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b0a9d8f0-3566-420a-b070-5a86b798dbee-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 21:05:43 crc kubenswrapper[4793]: I0217 21:05:43.837545 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmvpw\" (UniqueName: \"kubernetes.io/projected/b0a9d8f0-3566-420a-b070-5a86b798dbee-kube-api-access-qmvpw\") on node \"crc\" DevicePath \"\"" Feb 17 21:05:44 crc kubenswrapper[4793]: I0217 21:05:44.199100 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d5hnw" event={"ID":"b0a9d8f0-3566-420a-b070-5a86b798dbee","Type":"ContainerDied","Data":"2d22c106601a7f9fcb566e6320b1204982326937774178c152e82d2595f67b48"} Feb 17 21:05:44 crc kubenswrapper[4793]: I0217 21:05:44.199160 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d22c106601a7f9fcb566e6320b1204982326937774178c152e82d2595f67b48" Feb 17 21:05:44 crc kubenswrapper[4793]: I0217 21:05:44.199243 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d5hnw" Feb 17 21:05:44 crc kubenswrapper[4793]: I0217 21:05:44.327796 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fgjx4"] Feb 17 21:05:44 crc kubenswrapper[4793]: E0217 21:05:44.328254 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0a9d8f0-3566-420a-b070-5a86b798dbee" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 17 21:05:44 crc kubenswrapper[4793]: I0217 21:05:44.328274 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0a9d8f0-3566-420a-b070-5a86b798dbee" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 17 21:05:44 crc kubenswrapper[4793]: I0217 21:05:44.328506 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0a9d8f0-3566-420a-b070-5a86b798dbee" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 17 21:05:44 crc kubenswrapper[4793]: I0217 21:05:44.329300 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fgjx4" Feb 17 21:05:44 crc kubenswrapper[4793]: I0217 21:05:44.333252 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 21:05:44 crc kubenswrapper[4793]: I0217 21:05:44.333513 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 21:05:44 crc kubenswrapper[4793]: I0217 21:05:44.333921 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6zcmz" Feb 17 21:05:44 crc kubenswrapper[4793]: I0217 21:05:44.333922 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 21:05:44 crc kubenswrapper[4793]: I0217 21:05:44.337643 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fgjx4"] Feb 17 21:05:44 crc kubenswrapper[4793]: I0217 21:05:44.454528 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d07405f-37a4-410c-b7bf-ab35fc791d56-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fgjx4\" (UID: \"1d07405f-37a4-410c-b7bf-ab35fc791d56\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fgjx4" Feb 17 21:05:44 crc kubenswrapper[4793]: I0217 21:05:44.454604 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d07405f-37a4-410c-b7bf-ab35fc791d56-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fgjx4\" (UID: \"1d07405f-37a4-410c-b7bf-ab35fc791d56\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fgjx4" Feb 17 21:05:44 crc kubenswrapper[4793]: I0217 21:05:44.454771 4793 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1d07405f-37a4-410c-b7bf-ab35fc791d56-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fgjx4\" (UID: \"1d07405f-37a4-410c-b7bf-ab35fc791d56\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fgjx4" Feb 17 21:05:44 crc kubenswrapper[4793]: I0217 21:05:44.454801 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwxhd\" (UniqueName: \"kubernetes.io/projected/1d07405f-37a4-410c-b7bf-ab35fc791d56-kube-api-access-bwxhd\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fgjx4\" (UID: \"1d07405f-37a4-410c-b7bf-ab35fc791d56\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fgjx4" Feb 17 21:05:44 crc kubenswrapper[4793]: I0217 21:05:44.557039 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d07405f-37a4-410c-b7bf-ab35fc791d56-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fgjx4\" (UID: \"1d07405f-37a4-410c-b7bf-ab35fc791d56\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fgjx4" Feb 17 21:05:44 crc kubenswrapper[4793]: I0217 21:05:44.557144 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d07405f-37a4-410c-b7bf-ab35fc791d56-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fgjx4\" (UID: \"1d07405f-37a4-410c-b7bf-ab35fc791d56\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fgjx4" Feb 17 21:05:44 crc kubenswrapper[4793]: I0217 21:05:44.557261 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/1d07405f-37a4-410c-b7bf-ab35fc791d56-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fgjx4\" (UID: \"1d07405f-37a4-410c-b7bf-ab35fc791d56\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fgjx4" Feb 17 21:05:44 crc kubenswrapper[4793]: I0217 21:05:44.557295 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwxhd\" (UniqueName: \"kubernetes.io/projected/1d07405f-37a4-410c-b7bf-ab35fc791d56-kube-api-access-bwxhd\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fgjx4\" (UID: \"1d07405f-37a4-410c-b7bf-ab35fc791d56\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fgjx4" Feb 17 21:05:44 crc kubenswrapper[4793]: I0217 21:05:44.562892 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1d07405f-37a4-410c-b7bf-ab35fc791d56-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fgjx4\" (UID: \"1d07405f-37a4-410c-b7bf-ab35fc791d56\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fgjx4" Feb 17 21:05:44 crc kubenswrapper[4793]: I0217 21:05:44.562973 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d07405f-37a4-410c-b7bf-ab35fc791d56-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fgjx4\" (UID: \"1d07405f-37a4-410c-b7bf-ab35fc791d56\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fgjx4" Feb 17 21:05:44 crc kubenswrapper[4793]: I0217 21:05:44.563535 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d07405f-37a4-410c-b7bf-ab35fc791d56-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fgjx4\" (UID: \"1d07405f-37a4-410c-b7bf-ab35fc791d56\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fgjx4" Feb 17 21:05:44 crc kubenswrapper[4793]: I0217 21:05:44.574588 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwxhd\" (UniqueName: \"kubernetes.io/projected/1d07405f-37a4-410c-b7bf-ab35fc791d56-kube-api-access-bwxhd\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fgjx4\" (UID: \"1d07405f-37a4-410c-b7bf-ab35fc791d56\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fgjx4" Feb 17 21:05:44 crc kubenswrapper[4793]: I0217 21:05:44.654172 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fgjx4" Feb 17 21:05:45 crc kubenswrapper[4793]: I0217 21:05:45.188546 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fgjx4"] Feb 17 21:05:45 crc kubenswrapper[4793]: I0217 21:05:45.210313 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fgjx4" event={"ID":"1d07405f-37a4-410c-b7bf-ab35fc791d56","Type":"ContainerStarted","Data":"c871e5911c00844cfa68413cfeee79a17cbcc72606ae6f20056ec06d3d3959d5"} Feb 17 21:05:45 crc kubenswrapper[4793]: I0217 21:05:45.665365 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 21:05:46 crc kubenswrapper[4793]: I0217 21:05:46.225855 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fgjx4" event={"ID":"1d07405f-37a4-410c-b7bf-ab35fc791d56","Type":"ContainerStarted","Data":"c87663a6b9a7034068cb8b92f8ef221e059de54d5217a9ea310fa7e35d337446"} Feb 17 21:05:46 crc kubenswrapper[4793]: I0217 21:05:46.255465 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fgjx4" podStartSLOduration=1.789393293 
podStartE2EDuration="2.25543288s" podCreationTimestamp="2026-02-17 21:05:44 +0000 UTC" firstStartedPulling="2026-02-17 21:05:45.196247912 +0000 UTC m=+3420.487946223" lastFinishedPulling="2026-02-17 21:05:45.662287459 +0000 UTC m=+3420.953985810" observedRunningTime="2026-02-17 21:05:46.24243622 +0000 UTC m=+3421.534134591" watchObservedRunningTime="2026-02-17 21:05:46.25543288 +0000 UTC m=+3421.547131221" Feb 17 21:05:57 crc kubenswrapper[4793]: I0217 21:05:57.539520 4793 scope.go:117] "RemoveContainer" containerID="cd8d38e7a01ad1ae360e088c5ee98372c8db231f2695466dc3d2feb053ed3c0d" Feb 17 21:05:58 crc kubenswrapper[4793]: I0217 21:05:58.359572 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerStarted","Data":"fa2faedeac89660bcfdf29afa5ce4ae10aa15331b605f8f44801998713e678d4"} Feb 17 21:06:00 crc kubenswrapper[4793]: I0217 21:06:00.394387 4793 generic.go:334] "Generic (PLEG): container finished" podID="02d26164-0fa4-4020-9224-b7760a490987" containerID="fa2faedeac89660bcfdf29afa5ce4ae10aa15331b605f8f44801998713e678d4" exitCode=1 Feb 17 21:06:00 crc kubenswrapper[4793]: I0217 21:06:00.394499 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerDied","Data":"fa2faedeac89660bcfdf29afa5ce4ae10aa15331b605f8f44801998713e678d4"} Feb 17 21:06:00 crc kubenswrapper[4793]: I0217 21:06:00.395876 4793 scope.go:117] "RemoveContainer" containerID="cd8d38e7a01ad1ae360e088c5ee98372c8db231f2695466dc3d2feb053ed3c0d" Feb 17 21:06:00 crc kubenswrapper[4793]: I0217 21:06:00.396506 4793 scope.go:117] "RemoveContainer" containerID="fa2faedeac89660bcfdf29afa5ce4ae10aa15331b605f8f44801998713e678d4" Feb 17 21:06:00 crc kubenswrapper[4793]: E0217 21:06:00.396783 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with 
CrashLoopBackOff: \"back-off 1m20s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:06:00 crc kubenswrapper[4793]: I0217 21:06:00.596183 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 17 21:06:01 crc kubenswrapper[4793]: I0217 21:06:01.407748 4793 scope.go:117] "RemoveContainer" containerID="fa2faedeac89660bcfdf29afa5ce4ae10aa15331b605f8f44801998713e678d4" Feb 17 21:06:01 crc kubenswrapper[4793]: E0217 21:06:01.409073 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:06:05 crc kubenswrapper[4793]: I0217 21:06:05.596417 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 21:06:05 crc kubenswrapper[4793]: I0217 21:06:05.596898 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 21:06:05 crc kubenswrapper[4793]: I0217 21:06:05.596914 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 21:06:05 crc kubenswrapper[4793]: I0217 21:06:05.597574 4793 scope.go:117] "RemoveContainer" containerID="fa2faedeac89660bcfdf29afa5ce4ae10aa15331b605f8f44801998713e678d4" Feb 17 21:06:05 crc kubenswrapper[4793]: E0217 21:06:05.597790 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=watcher-applier 
pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:06:20 crc kubenswrapper[4793]: I0217 21:06:20.538775 4793 scope.go:117] "RemoveContainer" containerID="fa2faedeac89660bcfdf29afa5ce4ae10aa15331b605f8f44801998713e678d4" Feb 17 21:06:20 crc kubenswrapper[4793]: E0217 21:06:20.540970 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:06:35 crc kubenswrapper[4793]: I0217 21:06:35.546669 4793 scope.go:117] "RemoveContainer" containerID="fa2faedeac89660bcfdf29afa5ce4ae10aa15331b605f8f44801998713e678d4" Feb 17 21:06:35 crc kubenswrapper[4793]: E0217 21:06:35.547645 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:06:46 crc kubenswrapper[4793]: I0217 21:06:46.539455 4793 scope.go:117] "RemoveContainer" containerID="fa2faedeac89660bcfdf29afa5ce4ae10aa15331b605f8f44801998713e678d4" Feb 17 21:06:46 crc kubenswrapper[4793]: E0217 21:06:46.540570 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:06:50 crc kubenswrapper[4793]: I0217 21:06:50.101818 4793 
patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 21:06:50 crc kubenswrapper[4793]: I0217 21:06:50.102288 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 21:06:58 crc kubenswrapper[4793]: I0217 21:06:58.539150 4793 scope.go:117] "RemoveContainer" containerID="fa2faedeac89660bcfdf29afa5ce4ae10aa15331b605f8f44801998713e678d4" Feb 17 21:06:58 crc kubenswrapper[4793]: E0217 21:06:58.540255 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:07:00 crc kubenswrapper[4793]: I0217 21:07:00.352273 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bwttp"] Feb 17 21:07:00 crc kubenswrapper[4793]: I0217 21:07:00.358543 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bwttp" Feb 17 21:07:00 crc kubenswrapper[4793]: I0217 21:07:00.376619 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bwttp"] Feb 17 21:07:00 crc kubenswrapper[4793]: I0217 21:07:00.501518 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0de4f213-9aeb-471b-bdc3-ca39fb9460ed-utilities\") pod \"redhat-operators-bwttp\" (UID: \"0de4f213-9aeb-471b-bdc3-ca39fb9460ed\") " pod="openshift-marketplace/redhat-operators-bwttp" Feb 17 21:07:00 crc kubenswrapper[4793]: I0217 21:07:00.501966 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0de4f213-9aeb-471b-bdc3-ca39fb9460ed-catalog-content\") pod \"redhat-operators-bwttp\" (UID: \"0de4f213-9aeb-471b-bdc3-ca39fb9460ed\") " pod="openshift-marketplace/redhat-operators-bwttp" Feb 17 21:07:00 crc kubenswrapper[4793]: I0217 21:07:00.502067 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbzpd\" (UniqueName: \"kubernetes.io/projected/0de4f213-9aeb-471b-bdc3-ca39fb9460ed-kube-api-access-vbzpd\") pod \"redhat-operators-bwttp\" (UID: \"0de4f213-9aeb-471b-bdc3-ca39fb9460ed\") " pod="openshift-marketplace/redhat-operators-bwttp" Feb 17 21:07:00 crc kubenswrapper[4793]: I0217 21:07:00.604619 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0de4f213-9aeb-471b-bdc3-ca39fb9460ed-catalog-content\") pod \"redhat-operators-bwttp\" (UID: \"0de4f213-9aeb-471b-bdc3-ca39fb9460ed\") " pod="openshift-marketplace/redhat-operators-bwttp" Feb 17 21:07:00 crc kubenswrapper[4793]: I0217 21:07:00.604987 4793 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-vbzpd\" (UniqueName: \"kubernetes.io/projected/0de4f213-9aeb-471b-bdc3-ca39fb9460ed-kube-api-access-vbzpd\") pod \"redhat-operators-bwttp\" (UID: \"0de4f213-9aeb-471b-bdc3-ca39fb9460ed\") " pod="openshift-marketplace/redhat-operators-bwttp" Feb 17 21:07:00 crc kubenswrapper[4793]: I0217 21:07:00.605129 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0de4f213-9aeb-471b-bdc3-ca39fb9460ed-utilities\") pod \"redhat-operators-bwttp\" (UID: \"0de4f213-9aeb-471b-bdc3-ca39fb9460ed\") " pod="openshift-marketplace/redhat-operators-bwttp" Feb 17 21:07:00 crc kubenswrapper[4793]: I0217 21:07:00.605353 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0de4f213-9aeb-471b-bdc3-ca39fb9460ed-catalog-content\") pod \"redhat-operators-bwttp\" (UID: \"0de4f213-9aeb-471b-bdc3-ca39fb9460ed\") " pod="openshift-marketplace/redhat-operators-bwttp" Feb 17 21:07:00 crc kubenswrapper[4793]: I0217 21:07:00.605507 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0de4f213-9aeb-471b-bdc3-ca39fb9460ed-utilities\") pod \"redhat-operators-bwttp\" (UID: \"0de4f213-9aeb-471b-bdc3-ca39fb9460ed\") " pod="openshift-marketplace/redhat-operators-bwttp" Feb 17 21:07:00 crc kubenswrapper[4793]: I0217 21:07:00.625444 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbzpd\" (UniqueName: \"kubernetes.io/projected/0de4f213-9aeb-471b-bdc3-ca39fb9460ed-kube-api-access-vbzpd\") pod \"redhat-operators-bwttp\" (UID: \"0de4f213-9aeb-471b-bdc3-ca39fb9460ed\") " pod="openshift-marketplace/redhat-operators-bwttp" Feb 17 21:07:00 crc kubenswrapper[4793]: I0217 21:07:00.699481 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bwttp" Feb 17 21:07:01 crc kubenswrapper[4793]: I0217 21:07:01.206543 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bwttp"] Feb 17 21:07:01 crc kubenswrapper[4793]: W0217 21:07:01.220109 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0de4f213_9aeb_471b_bdc3_ca39fb9460ed.slice/crio-7b31ccd49bb8e43886340d03c49d838e0121635ae5a8f49e98898b82adc19b8e WatchSource:0}: Error finding container 7b31ccd49bb8e43886340d03c49d838e0121635ae5a8f49e98898b82adc19b8e: Status 404 returned error can't find the container with id 7b31ccd49bb8e43886340d03c49d838e0121635ae5a8f49e98898b82adc19b8e Feb 17 21:07:02 crc kubenswrapper[4793]: I0217 21:07:02.126148 4793 generic.go:334] "Generic (PLEG): container finished" podID="0de4f213-9aeb-471b-bdc3-ca39fb9460ed" containerID="d11ad1bb226da9e1869c516b14a557a894b11910a9a5428d9cba20d6de6c2a04" exitCode=0 Feb 17 21:07:02 crc kubenswrapper[4793]: I0217 21:07:02.126217 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bwttp" event={"ID":"0de4f213-9aeb-471b-bdc3-ca39fb9460ed","Type":"ContainerDied","Data":"d11ad1bb226da9e1869c516b14a557a894b11910a9a5428d9cba20d6de6c2a04"} Feb 17 21:07:02 crc kubenswrapper[4793]: I0217 21:07:02.126256 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bwttp" event={"ID":"0de4f213-9aeb-471b-bdc3-ca39fb9460ed","Type":"ContainerStarted","Data":"7b31ccd49bb8e43886340d03c49d838e0121635ae5a8f49e98898b82adc19b8e"} Feb 17 21:07:03 crc kubenswrapper[4793]: I0217 21:07:03.144063 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bwttp" 
event={"ID":"0de4f213-9aeb-471b-bdc3-ca39fb9460ed","Type":"ContainerStarted","Data":"af27f1eb2f26c6a4b4cbaf5f8144693429f9d1ba3b2a2f93c5fadf447b4b985a"} Feb 17 21:07:07 crc kubenswrapper[4793]: I0217 21:07:07.193525 4793 generic.go:334] "Generic (PLEG): container finished" podID="0de4f213-9aeb-471b-bdc3-ca39fb9460ed" containerID="af27f1eb2f26c6a4b4cbaf5f8144693429f9d1ba3b2a2f93c5fadf447b4b985a" exitCode=0 Feb 17 21:07:07 crc kubenswrapper[4793]: I0217 21:07:07.193664 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bwttp" event={"ID":"0de4f213-9aeb-471b-bdc3-ca39fb9460ed","Type":"ContainerDied","Data":"af27f1eb2f26c6a4b4cbaf5f8144693429f9d1ba3b2a2f93c5fadf447b4b985a"} Feb 17 21:07:08 crc kubenswrapper[4793]: I0217 21:07:08.226971 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bwttp" event={"ID":"0de4f213-9aeb-471b-bdc3-ca39fb9460ed","Type":"ContainerStarted","Data":"325a31b34bf36017319b5574ca8761b2f4098a6630e5b56af7846e1e6d88646e"} Feb 17 21:07:08 crc kubenswrapper[4793]: I0217 21:07:08.270227 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bwttp" podStartSLOduration=2.765615263 podStartE2EDuration="8.27020113s" podCreationTimestamp="2026-02-17 21:07:00 +0000 UTC" firstStartedPulling="2026-02-17 21:07:02.128055426 +0000 UTC m=+3497.419753767" lastFinishedPulling="2026-02-17 21:07:07.632641313 +0000 UTC m=+3502.924339634" observedRunningTime="2026-02-17 21:07:08.262873969 +0000 UTC m=+3503.554572300" watchObservedRunningTime="2026-02-17 21:07:08.27020113 +0000 UTC m=+3503.561899481" Feb 17 21:07:09 crc kubenswrapper[4793]: I0217 21:07:09.539480 4793 scope.go:117] "RemoveContainer" containerID="fa2faedeac89660bcfdf29afa5ce4ae10aa15331b605f8f44801998713e678d4" Feb 17 21:07:09 crc kubenswrapper[4793]: E0217 21:07:09.539944 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:07:10 crc kubenswrapper[4793]: I0217 21:07:10.700255 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bwttp" Feb 17 21:07:10 crc kubenswrapper[4793]: I0217 21:07:10.700327 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bwttp" Feb 17 21:07:11 crc kubenswrapper[4793]: I0217 21:07:11.757344 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bwttp" podUID="0de4f213-9aeb-471b-bdc3-ca39fb9460ed" containerName="registry-server" probeResult="failure" output=< Feb 17 21:07:11 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Feb 17 21:07:11 crc kubenswrapper[4793]: > Feb 17 21:07:20 crc kubenswrapper[4793]: I0217 21:07:20.101807 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 21:07:20 crc kubenswrapper[4793]: I0217 21:07:20.102727 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 21:07:20 crc kubenswrapper[4793]: I0217 21:07:20.784242 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-bwttp" Feb 17 21:07:20 crc kubenswrapper[4793]: I0217 21:07:20.863634 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bwttp" Feb 17 21:07:21 crc kubenswrapper[4793]: I0217 21:07:21.035900 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bwttp"] Feb 17 21:07:22 crc kubenswrapper[4793]: I0217 21:07:22.372243 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bwttp" podUID="0de4f213-9aeb-471b-bdc3-ca39fb9460ed" containerName="registry-server" containerID="cri-o://325a31b34bf36017319b5574ca8761b2f4098a6630e5b56af7846e1e6d88646e" gracePeriod=2 Feb 17 21:07:22 crc kubenswrapper[4793]: I0217 21:07:22.856777 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bwttp" Feb 17 21:07:23 crc kubenswrapper[4793]: I0217 21:07:23.006614 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0de4f213-9aeb-471b-bdc3-ca39fb9460ed-utilities\") pod \"0de4f213-9aeb-471b-bdc3-ca39fb9460ed\" (UID: \"0de4f213-9aeb-471b-bdc3-ca39fb9460ed\") " Feb 17 21:07:23 crc kubenswrapper[4793]: I0217 21:07:23.006771 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbzpd\" (UniqueName: \"kubernetes.io/projected/0de4f213-9aeb-471b-bdc3-ca39fb9460ed-kube-api-access-vbzpd\") pod \"0de4f213-9aeb-471b-bdc3-ca39fb9460ed\" (UID: \"0de4f213-9aeb-471b-bdc3-ca39fb9460ed\") " Feb 17 21:07:23 crc kubenswrapper[4793]: I0217 21:07:23.006878 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0de4f213-9aeb-471b-bdc3-ca39fb9460ed-catalog-content\") pod \"0de4f213-9aeb-471b-bdc3-ca39fb9460ed\" (UID: 
\"0de4f213-9aeb-471b-bdc3-ca39fb9460ed\") " Feb 17 21:07:23 crc kubenswrapper[4793]: I0217 21:07:23.008147 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0de4f213-9aeb-471b-bdc3-ca39fb9460ed-utilities" (OuterVolumeSpecName: "utilities") pod "0de4f213-9aeb-471b-bdc3-ca39fb9460ed" (UID: "0de4f213-9aeb-471b-bdc3-ca39fb9460ed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 21:07:23 crc kubenswrapper[4793]: I0217 21:07:23.016137 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0de4f213-9aeb-471b-bdc3-ca39fb9460ed-kube-api-access-vbzpd" (OuterVolumeSpecName: "kube-api-access-vbzpd") pod "0de4f213-9aeb-471b-bdc3-ca39fb9460ed" (UID: "0de4f213-9aeb-471b-bdc3-ca39fb9460ed"). InnerVolumeSpecName "kube-api-access-vbzpd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 21:07:23 crc kubenswrapper[4793]: I0217 21:07:23.110479 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0de4f213-9aeb-471b-bdc3-ca39fb9460ed-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 21:07:23 crc kubenswrapper[4793]: I0217 21:07:23.110524 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbzpd\" (UniqueName: \"kubernetes.io/projected/0de4f213-9aeb-471b-bdc3-ca39fb9460ed-kube-api-access-vbzpd\") on node \"crc\" DevicePath \"\"" Feb 17 21:07:23 crc kubenswrapper[4793]: I0217 21:07:23.163913 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0de4f213-9aeb-471b-bdc3-ca39fb9460ed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0de4f213-9aeb-471b-bdc3-ca39fb9460ed" (UID: "0de4f213-9aeb-471b-bdc3-ca39fb9460ed"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 21:07:23 crc kubenswrapper[4793]: I0217 21:07:23.212344 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0de4f213-9aeb-471b-bdc3-ca39fb9460ed-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 21:07:23 crc kubenswrapper[4793]: I0217 21:07:23.387160 4793 generic.go:334] "Generic (PLEG): container finished" podID="0de4f213-9aeb-471b-bdc3-ca39fb9460ed" containerID="325a31b34bf36017319b5574ca8761b2f4098a6630e5b56af7846e1e6d88646e" exitCode=0 Feb 17 21:07:23 crc kubenswrapper[4793]: I0217 21:07:23.387232 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bwttp" event={"ID":"0de4f213-9aeb-471b-bdc3-ca39fb9460ed","Type":"ContainerDied","Data":"325a31b34bf36017319b5574ca8761b2f4098a6630e5b56af7846e1e6d88646e"} Feb 17 21:07:23 crc kubenswrapper[4793]: I0217 21:07:23.387291 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bwttp" event={"ID":"0de4f213-9aeb-471b-bdc3-ca39fb9460ed","Type":"ContainerDied","Data":"7b31ccd49bb8e43886340d03c49d838e0121635ae5a8f49e98898b82adc19b8e"} Feb 17 21:07:23 crc kubenswrapper[4793]: I0217 21:07:23.387316 4793 scope.go:117] "RemoveContainer" containerID="325a31b34bf36017319b5574ca8761b2f4098a6630e5b56af7846e1e6d88646e" Feb 17 21:07:23 crc kubenswrapper[4793]: I0217 21:07:23.389031 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bwttp" Feb 17 21:07:23 crc kubenswrapper[4793]: I0217 21:07:23.425905 4793 scope.go:117] "RemoveContainer" containerID="af27f1eb2f26c6a4b4cbaf5f8144693429f9d1ba3b2a2f93c5fadf447b4b985a" Feb 17 21:07:23 crc kubenswrapper[4793]: I0217 21:07:23.431803 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bwttp"] Feb 17 21:07:23 crc kubenswrapper[4793]: I0217 21:07:23.441037 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bwttp"] Feb 17 21:07:23 crc kubenswrapper[4793]: I0217 21:07:23.451094 4793 scope.go:117] "RemoveContainer" containerID="d11ad1bb226da9e1869c516b14a557a894b11910a9a5428d9cba20d6de6c2a04" Feb 17 21:07:23 crc kubenswrapper[4793]: I0217 21:07:23.504253 4793 scope.go:117] "RemoveContainer" containerID="325a31b34bf36017319b5574ca8761b2f4098a6630e5b56af7846e1e6d88646e" Feb 17 21:07:23 crc kubenswrapper[4793]: E0217 21:07:23.505172 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"325a31b34bf36017319b5574ca8761b2f4098a6630e5b56af7846e1e6d88646e\": container with ID starting with 325a31b34bf36017319b5574ca8761b2f4098a6630e5b56af7846e1e6d88646e not found: ID does not exist" containerID="325a31b34bf36017319b5574ca8761b2f4098a6630e5b56af7846e1e6d88646e" Feb 17 21:07:23 crc kubenswrapper[4793]: I0217 21:07:23.505264 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"325a31b34bf36017319b5574ca8761b2f4098a6630e5b56af7846e1e6d88646e"} err="failed to get container status \"325a31b34bf36017319b5574ca8761b2f4098a6630e5b56af7846e1e6d88646e\": rpc error: code = NotFound desc = could not find container \"325a31b34bf36017319b5574ca8761b2f4098a6630e5b56af7846e1e6d88646e\": container with ID starting with 325a31b34bf36017319b5574ca8761b2f4098a6630e5b56af7846e1e6d88646e not found: ID does 
not exist" Feb 17 21:07:23 crc kubenswrapper[4793]: I0217 21:07:23.505309 4793 scope.go:117] "RemoveContainer" containerID="af27f1eb2f26c6a4b4cbaf5f8144693429f9d1ba3b2a2f93c5fadf447b4b985a" Feb 17 21:07:23 crc kubenswrapper[4793]: E0217 21:07:23.505743 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af27f1eb2f26c6a4b4cbaf5f8144693429f9d1ba3b2a2f93c5fadf447b4b985a\": container with ID starting with af27f1eb2f26c6a4b4cbaf5f8144693429f9d1ba3b2a2f93c5fadf447b4b985a not found: ID does not exist" containerID="af27f1eb2f26c6a4b4cbaf5f8144693429f9d1ba3b2a2f93c5fadf447b4b985a" Feb 17 21:07:23 crc kubenswrapper[4793]: I0217 21:07:23.505792 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af27f1eb2f26c6a4b4cbaf5f8144693429f9d1ba3b2a2f93c5fadf447b4b985a"} err="failed to get container status \"af27f1eb2f26c6a4b4cbaf5f8144693429f9d1ba3b2a2f93c5fadf447b4b985a\": rpc error: code = NotFound desc = could not find container \"af27f1eb2f26c6a4b4cbaf5f8144693429f9d1ba3b2a2f93c5fadf447b4b985a\": container with ID starting with af27f1eb2f26c6a4b4cbaf5f8144693429f9d1ba3b2a2f93c5fadf447b4b985a not found: ID does not exist" Feb 17 21:07:23 crc kubenswrapper[4793]: I0217 21:07:23.505825 4793 scope.go:117] "RemoveContainer" containerID="d11ad1bb226da9e1869c516b14a557a894b11910a9a5428d9cba20d6de6c2a04" Feb 17 21:07:23 crc kubenswrapper[4793]: E0217 21:07:23.506145 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d11ad1bb226da9e1869c516b14a557a894b11910a9a5428d9cba20d6de6c2a04\": container with ID starting with d11ad1bb226da9e1869c516b14a557a894b11910a9a5428d9cba20d6de6c2a04 not found: ID does not exist" containerID="d11ad1bb226da9e1869c516b14a557a894b11910a9a5428d9cba20d6de6c2a04" Feb 17 21:07:23 crc kubenswrapper[4793]: I0217 21:07:23.506349 4793 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d11ad1bb226da9e1869c516b14a557a894b11910a9a5428d9cba20d6de6c2a04"} err="failed to get container status \"d11ad1bb226da9e1869c516b14a557a894b11910a9a5428d9cba20d6de6c2a04\": rpc error: code = NotFound desc = could not find container \"d11ad1bb226da9e1869c516b14a557a894b11910a9a5428d9cba20d6de6c2a04\": container with ID starting with d11ad1bb226da9e1869c516b14a557a894b11910a9a5428d9cba20d6de6c2a04 not found: ID does not exist" Feb 17 21:07:23 crc kubenswrapper[4793]: I0217 21:07:23.552129 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0de4f213-9aeb-471b-bdc3-ca39fb9460ed" path="/var/lib/kubelet/pods/0de4f213-9aeb-471b-bdc3-ca39fb9460ed/volumes" Feb 17 21:07:24 crc kubenswrapper[4793]: I0217 21:07:24.539419 4793 scope.go:117] "RemoveContainer" containerID="fa2faedeac89660bcfdf29afa5ce4ae10aa15331b605f8f44801998713e678d4" Feb 17 21:07:25 crc kubenswrapper[4793]: I0217 21:07:25.415504 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerStarted","Data":"f8783260b99d812eafacfddbf87be628831d52753405f636580c71f44653bb06"} Feb 17 21:07:25 crc kubenswrapper[4793]: I0217 21:07:25.596509 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 17 21:07:25 crc kubenswrapper[4793]: I0217 21:07:25.596890 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 21:07:25 crc kubenswrapper[4793]: I0217 21:07:25.640959 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Feb 17 21:07:26 crc kubenswrapper[4793]: I0217 21:07:26.486789 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Feb 17 21:07:27 crc kubenswrapper[4793]: I0217 21:07:27.445792 4793 generic.go:334] 
"Generic (PLEG): container finished" podID="02d26164-0fa4-4020-9224-b7760a490987" containerID="f8783260b99d812eafacfddbf87be628831d52753405f636580c71f44653bb06" exitCode=1 Feb 17 21:07:27 crc kubenswrapper[4793]: I0217 21:07:27.445874 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerDied","Data":"f8783260b99d812eafacfddbf87be628831d52753405f636580c71f44653bb06"} Feb 17 21:07:27 crc kubenswrapper[4793]: I0217 21:07:27.445934 4793 scope.go:117] "RemoveContainer" containerID="fa2faedeac89660bcfdf29afa5ce4ae10aa15331b605f8f44801998713e678d4" Feb 17 21:07:27 crc kubenswrapper[4793]: I0217 21:07:27.446990 4793 scope.go:117] "RemoveContainer" containerID="f8783260b99d812eafacfddbf87be628831d52753405f636580c71f44653bb06" Feb 17 21:07:27 crc kubenswrapper[4793]: E0217 21:07:27.447503 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:07:28 crc kubenswrapper[4793]: I0217 21:07:28.459013 4793 scope.go:117] "RemoveContainer" containerID="f8783260b99d812eafacfddbf87be628831d52753405f636580c71f44653bb06" Feb 17 21:07:28 crc kubenswrapper[4793]: E0217 21:07:28.459778 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:07:30 crc kubenswrapper[4793]: I0217 21:07:30.596039 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/watcher-applier-0" Feb 17 21:07:30 crc kubenswrapper[4793]: I0217 21:07:30.597391 4793 scope.go:117] "RemoveContainer" containerID="f8783260b99d812eafacfddbf87be628831d52753405f636580c71f44653bb06" Feb 17 21:07:30 crc kubenswrapper[4793]: E0217 21:07:30.597646 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:07:35 crc kubenswrapper[4793]: I0217 21:07:35.595866 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 21:07:35 crc kubenswrapper[4793]: I0217 21:07:35.596310 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 21:07:35 crc kubenswrapper[4793]: I0217 21:07:35.596914 4793 scope.go:117] "RemoveContainer" containerID="f8783260b99d812eafacfddbf87be628831d52753405f636580c71f44653bb06" Feb 17 21:07:35 crc kubenswrapper[4793]: E0217 21:07:35.597129 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:07:49 crc kubenswrapper[4793]: I0217 21:07:49.539666 4793 scope.go:117] "RemoveContainer" containerID="f8783260b99d812eafacfddbf87be628831d52753405f636580c71f44653bb06" Feb 17 21:07:49 crc kubenswrapper[4793]: E0217 21:07:49.541195 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 2m40s restarting failed 
container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:07:50 crc kubenswrapper[4793]: I0217 21:07:50.102671 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 21:07:50 crc kubenswrapper[4793]: I0217 21:07:50.103212 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 21:07:50 crc kubenswrapper[4793]: I0217 21:07:50.103278 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" Feb 17 21:07:50 crc kubenswrapper[4793]: I0217 21:07:50.104492 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7796641c695371650a96e90298de39c311ff235164cc763f3489adc1bdb31328"} pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 21:07:50 crc kubenswrapper[4793]: I0217 21:07:50.104587 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" containerID="cri-o://7796641c695371650a96e90298de39c311ff235164cc763f3489adc1bdb31328" gracePeriod=600 Feb 17 21:07:50 crc 
kubenswrapper[4793]: E0217 21:07:50.235899 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:07:50 crc kubenswrapper[4793]: I0217 21:07:50.702956 4793 generic.go:334] "Generic (PLEG): container finished" podID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerID="7796641c695371650a96e90298de39c311ff235164cc763f3489adc1bdb31328" exitCode=0 Feb 17 21:07:50 crc kubenswrapper[4793]: I0217 21:07:50.703000 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerDied","Data":"7796641c695371650a96e90298de39c311ff235164cc763f3489adc1bdb31328"} Feb 17 21:07:50 crc kubenswrapper[4793]: I0217 21:07:50.703033 4793 scope.go:117] "RemoveContainer" containerID="9df5ee479ef9692a62e5c2da98082da72107bf4c0ed624a84085c35ae2963246" Feb 17 21:07:50 crc kubenswrapper[4793]: I0217 21:07:50.704395 4793 scope.go:117] "RemoveContainer" containerID="7796641c695371650a96e90298de39c311ff235164cc763f3489adc1bdb31328" Feb 17 21:07:50 crc kubenswrapper[4793]: E0217 21:07:50.705360 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:08:03 crc kubenswrapper[4793]: I0217 21:08:03.539759 4793 scope.go:117] 
"RemoveContainer" containerID="f8783260b99d812eafacfddbf87be628831d52753405f636580c71f44653bb06" Feb 17 21:08:03 crc kubenswrapper[4793]: E0217 21:08:03.541111 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:08:03 crc kubenswrapper[4793]: I0217 21:08:03.541617 4793 scope.go:117] "RemoveContainer" containerID="7796641c695371650a96e90298de39c311ff235164cc763f3489adc1bdb31328" Feb 17 21:08:03 crc kubenswrapper[4793]: E0217 21:08:03.541851 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:08:15 crc kubenswrapper[4793]: I0217 21:08:15.547171 4793 scope.go:117] "RemoveContainer" containerID="7796641c695371650a96e90298de39c311ff235164cc763f3489adc1bdb31328" Feb 17 21:08:15 crc kubenswrapper[4793]: E0217 21:08:15.547919 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:08:18 crc kubenswrapper[4793]: I0217 21:08:18.539078 4793 scope.go:117] "RemoveContainer" 
containerID="f8783260b99d812eafacfddbf87be628831d52753405f636580c71f44653bb06" Feb 17 21:08:18 crc kubenswrapper[4793]: E0217 21:08:18.539895 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:08:27 crc kubenswrapper[4793]: I0217 21:08:27.539202 4793 scope.go:117] "RemoveContainer" containerID="7796641c695371650a96e90298de39c311ff235164cc763f3489adc1bdb31328" Feb 17 21:08:27 crc kubenswrapper[4793]: E0217 21:08:27.540444 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:08:27 crc kubenswrapper[4793]: I0217 21:08:27.610372 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hbbjs"] Feb 17 21:08:27 crc kubenswrapper[4793]: E0217 21:08:27.610807 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0de4f213-9aeb-471b-bdc3-ca39fb9460ed" containerName="extract-content" Feb 17 21:08:27 crc kubenswrapper[4793]: I0217 21:08:27.610823 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="0de4f213-9aeb-471b-bdc3-ca39fb9460ed" containerName="extract-content" Feb 17 21:08:27 crc kubenswrapper[4793]: E0217 21:08:27.610896 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0de4f213-9aeb-471b-bdc3-ca39fb9460ed" containerName="extract-utilities" Feb 17 21:08:27 crc kubenswrapper[4793]: I0217 
21:08:27.610905 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="0de4f213-9aeb-471b-bdc3-ca39fb9460ed" containerName="extract-utilities"
Feb 17 21:08:27 crc kubenswrapper[4793]: E0217 21:08:27.610915 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0de4f213-9aeb-471b-bdc3-ca39fb9460ed" containerName="registry-server"
Feb 17 21:08:27 crc kubenswrapper[4793]: I0217 21:08:27.610920 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="0de4f213-9aeb-471b-bdc3-ca39fb9460ed" containerName="registry-server"
Feb 17 21:08:27 crc kubenswrapper[4793]: I0217 21:08:27.611151 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="0de4f213-9aeb-471b-bdc3-ca39fb9460ed" containerName="registry-server"
Feb 17 21:08:27 crc kubenswrapper[4793]: I0217 21:08:27.617114 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hbbjs"
Feb 17 21:08:27 crc kubenswrapper[4793]: I0217 21:08:27.628125 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hbbjs"]
Feb 17 21:08:27 crc kubenswrapper[4793]: I0217 21:08:27.731455 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfb96f74-ddcd-408b-a753-8823d3488062-catalog-content\") pod \"certified-operators-hbbjs\" (UID: \"bfb96f74-ddcd-408b-a753-8823d3488062\") " pod="openshift-marketplace/certified-operators-hbbjs"
Feb 17 21:08:27 crc kubenswrapper[4793]: I0217 21:08:27.731534 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfb96f74-ddcd-408b-a753-8823d3488062-utilities\") pod \"certified-operators-hbbjs\" (UID: \"bfb96f74-ddcd-408b-a753-8823d3488062\") " pod="openshift-marketplace/certified-operators-hbbjs"
Feb 17 21:08:27 crc kubenswrapper[4793]: I0217 21:08:27.731704 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48x9z\" (UniqueName: \"kubernetes.io/projected/bfb96f74-ddcd-408b-a753-8823d3488062-kube-api-access-48x9z\") pod \"certified-operators-hbbjs\" (UID: \"bfb96f74-ddcd-408b-a753-8823d3488062\") " pod="openshift-marketplace/certified-operators-hbbjs"
Feb 17 21:08:27 crc kubenswrapper[4793]: I0217 21:08:27.833223 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48x9z\" (UniqueName: \"kubernetes.io/projected/bfb96f74-ddcd-408b-a753-8823d3488062-kube-api-access-48x9z\") pod \"certified-operators-hbbjs\" (UID: \"bfb96f74-ddcd-408b-a753-8823d3488062\") " pod="openshift-marketplace/certified-operators-hbbjs"
Feb 17 21:08:27 crc kubenswrapper[4793]: I0217 21:08:27.833362 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfb96f74-ddcd-408b-a753-8823d3488062-catalog-content\") pod \"certified-operators-hbbjs\" (UID: \"bfb96f74-ddcd-408b-a753-8823d3488062\") " pod="openshift-marketplace/certified-operators-hbbjs"
Feb 17 21:08:27 crc kubenswrapper[4793]: I0217 21:08:27.833418 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfb96f74-ddcd-408b-a753-8823d3488062-utilities\") pod \"certified-operators-hbbjs\" (UID: \"bfb96f74-ddcd-408b-a753-8823d3488062\") " pod="openshift-marketplace/certified-operators-hbbjs"
Feb 17 21:08:27 crc kubenswrapper[4793]: I0217 21:08:27.834073 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfb96f74-ddcd-408b-a753-8823d3488062-utilities\") pod \"certified-operators-hbbjs\" (UID: \"bfb96f74-ddcd-408b-a753-8823d3488062\") " pod="openshift-marketplace/certified-operators-hbbjs"
Feb 17 21:08:27 crc kubenswrapper[4793]: I0217 21:08:27.834671 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfb96f74-ddcd-408b-a753-8823d3488062-catalog-content\") pod \"certified-operators-hbbjs\" (UID: \"bfb96f74-ddcd-408b-a753-8823d3488062\") " pod="openshift-marketplace/certified-operators-hbbjs"
Feb 17 21:08:27 crc kubenswrapper[4793]: I0217 21:08:27.856761 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48x9z\" (UniqueName: \"kubernetes.io/projected/bfb96f74-ddcd-408b-a753-8823d3488062-kube-api-access-48x9z\") pod \"certified-operators-hbbjs\" (UID: \"bfb96f74-ddcd-408b-a753-8823d3488062\") " pod="openshift-marketplace/certified-operators-hbbjs"
Feb 17 21:08:27 crc kubenswrapper[4793]: I0217 21:08:27.939047 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hbbjs"
Feb 17 21:08:28 crc kubenswrapper[4793]: I0217 21:08:28.429938 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hbbjs"]
Feb 17 21:08:29 crc kubenswrapper[4793]: I0217 21:08:29.141145 4793 generic.go:334] "Generic (PLEG): container finished" podID="bfb96f74-ddcd-408b-a753-8823d3488062" containerID="a7f7a79d1e1a8f2a39cdb709bb3f34714df1b65d78e26dcf728f0bdc0c40cc96" exitCode=0
Feb 17 21:08:29 crc kubenswrapper[4793]: I0217 21:08:29.141199 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hbbjs" event={"ID":"bfb96f74-ddcd-408b-a753-8823d3488062","Type":"ContainerDied","Data":"a7f7a79d1e1a8f2a39cdb709bb3f34714df1b65d78e26dcf728f0bdc0c40cc96"}
Feb 17 21:08:29 crc kubenswrapper[4793]: I0217 21:08:29.141476 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hbbjs" event={"ID":"bfb96f74-ddcd-408b-a753-8823d3488062","Type":"ContainerStarted","Data":"8de903fa9db39a792628440bc712238531b091ccd5dc32edd59fbf85f00a8358"}
Feb 17 21:08:29 crc kubenswrapper[4793]: I0217 21:08:29.144834 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 17 21:08:31 crc kubenswrapper[4793]: I0217 21:08:31.164667 4793 generic.go:334] "Generic (PLEG): container finished" podID="bfb96f74-ddcd-408b-a753-8823d3488062" containerID="6c54295dc813ba7c6bbe9eef31ad040c9ececd322b6f68559579faedab4e2e12" exitCode=0
Feb 17 21:08:31 crc kubenswrapper[4793]: I0217 21:08:31.164845 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hbbjs" event={"ID":"bfb96f74-ddcd-408b-a753-8823d3488062","Type":"ContainerDied","Data":"6c54295dc813ba7c6bbe9eef31ad040c9ececd322b6f68559579faedab4e2e12"}
Feb 17 21:08:32 crc kubenswrapper[4793]: I0217 21:08:32.195925 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hbbjs" event={"ID":"bfb96f74-ddcd-408b-a753-8823d3488062","Type":"ContainerStarted","Data":"3621d34587b87609fbb76e5c911b98d8e6c85fbb880630777b7b9ac2a08a594c"}
Feb 17 21:08:32 crc kubenswrapper[4793]: I0217 21:08:32.233228 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hbbjs" podStartSLOduration=2.788479514 podStartE2EDuration="5.233210891s" podCreationTimestamp="2026-02-17 21:08:27 +0000 UTC" firstStartedPulling="2026-02-17 21:08:29.144383968 +0000 UTC m=+3584.436082319" lastFinishedPulling="2026-02-17 21:08:31.589115345 +0000 UTC m=+3586.880813696" observedRunningTime="2026-02-17 21:08:32.22826962 +0000 UTC m=+3587.519967981" watchObservedRunningTime="2026-02-17 21:08:32.233210891 +0000 UTC m=+3587.524909202"
Feb 17 21:08:32 crc kubenswrapper[4793]: I0217 21:08:32.540019 4793 scope.go:117] "RemoveContainer" containerID="f8783260b99d812eafacfddbf87be628831d52753405f636580c71f44653bb06"
Feb 17 21:08:32 crc kubenswrapper[4793]: E0217 21:08:32.540284 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:08:37 crc kubenswrapper[4793]: I0217 21:08:37.940004 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hbbjs"
Feb 17 21:08:37 crc kubenswrapper[4793]: I0217 21:08:37.940997 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hbbjs"
Feb 17 21:08:38 crc kubenswrapper[4793]: I0217 21:08:38.014864 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hbbjs"
Feb 17 21:08:38 crc kubenswrapper[4793]: I0217 21:08:38.390519 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hbbjs"
Feb 17 21:08:38 crc kubenswrapper[4793]: I0217 21:08:38.450071 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hbbjs"]
Feb 17 21:08:40 crc kubenswrapper[4793]: I0217 21:08:40.328353 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hbbjs" podUID="bfb96f74-ddcd-408b-a753-8823d3488062" containerName="registry-server" containerID="cri-o://3621d34587b87609fbb76e5c911b98d8e6c85fbb880630777b7b9ac2a08a594c" gracePeriod=2
Feb 17 21:08:40 crc kubenswrapper[4793]: I0217 21:08:40.862826 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hbbjs"
Feb 17 21:08:41 crc kubenswrapper[4793]: I0217 21:08:41.061439 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfb96f74-ddcd-408b-a753-8823d3488062-catalog-content\") pod \"bfb96f74-ddcd-408b-a753-8823d3488062\" (UID: \"bfb96f74-ddcd-408b-a753-8823d3488062\") "
Feb 17 21:08:41 crc kubenswrapper[4793]: I0217 21:08:41.061525 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfb96f74-ddcd-408b-a753-8823d3488062-utilities\") pod \"bfb96f74-ddcd-408b-a753-8823d3488062\" (UID: \"bfb96f74-ddcd-408b-a753-8823d3488062\") "
Feb 17 21:08:41 crc kubenswrapper[4793]: I0217 21:08:41.061629 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48x9z\" (UniqueName: \"kubernetes.io/projected/bfb96f74-ddcd-408b-a753-8823d3488062-kube-api-access-48x9z\") pod \"bfb96f74-ddcd-408b-a753-8823d3488062\" (UID: \"bfb96f74-ddcd-408b-a753-8823d3488062\") "
Feb 17 21:08:41 crc kubenswrapper[4793]: I0217 21:08:41.062446 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfb96f74-ddcd-408b-a753-8823d3488062-utilities" (OuterVolumeSpecName: "utilities") pod "bfb96f74-ddcd-408b-a753-8823d3488062" (UID: "bfb96f74-ddcd-408b-a753-8823d3488062"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 21:08:41 crc kubenswrapper[4793]: I0217 21:08:41.067096 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfb96f74-ddcd-408b-a753-8823d3488062-kube-api-access-48x9z" (OuterVolumeSpecName: "kube-api-access-48x9z") pod "bfb96f74-ddcd-408b-a753-8823d3488062" (UID: "bfb96f74-ddcd-408b-a753-8823d3488062"). InnerVolumeSpecName "kube-api-access-48x9z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 21:08:41 crc kubenswrapper[4793]: I0217 21:08:41.111885 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfb96f74-ddcd-408b-a753-8823d3488062-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bfb96f74-ddcd-408b-a753-8823d3488062" (UID: "bfb96f74-ddcd-408b-a753-8823d3488062"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 21:08:41 crc kubenswrapper[4793]: I0217 21:08:41.163718 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfb96f74-ddcd-408b-a753-8823d3488062-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 21:08:41 crc kubenswrapper[4793]: I0217 21:08:41.163755 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfb96f74-ddcd-408b-a753-8823d3488062-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 21:08:41 crc kubenswrapper[4793]: I0217 21:08:41.163769 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48x9z\" (UniqueName: \"kubernetes.io/projected/bfb96f74-ddcd-408b-a753-8823d3488062-kube-api-access-48x9z\") on node \"crc\" DevicePath \"\""
Feb 17 21:08:41 crc kubenswrapper[4793]: I0217 21:08:41.355879 4793 generic.go:334] "Generic (PLEG): container finished" podID="bfb96f74-ddcd-408b-a753-8823d3488062" containerID="3621d34587b87609fbb76e5c911b98d8e6c85fbb880630777b7b9ac2a08a594c" exitCode=0
Feb 17 21:08:41 crc kubenswrapper[4793]: I0217 21:08:41.355927 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hbbjs" event={"ID":"bfb96f74-ddcd-408b-a753-8823d3488062","Type":"ContainerDied","Data":"3621d34587b87609fbb76e5c911b98d8e6c85fbb880630777b7b9ac2a08a594c"}
Feb 17 21:08:41 crc kubenswrapper[4793]: I0217 21:08:41.355954 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hbbjs" event={"ID":"bfb96f74-ddcd-408b-a753-8823d3488062","Type":"ContainerDied","Data":"8de903fa9db39a792628440bc712238531b091ccd5dc32edd59fbf85f00a8358"}
Feb 17 21:08:41 crc kubenswrapper[4793]: I0217 21:08:41.355975 4793 scope.go:117] "RemoveContainer" containerID="3621d34587b87609fbb76e5c911b98d8e6c85fbb880630777b7b9ac2a08a594c"
Feb 17 21:08:41 crc kubenswrapper[4793]: I0217 21:08:41.355988 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hbbjs"
Feb 17 21:08:41 crc kubenswrapper[4793]: I0217 21:08:41.377834 4793 scope.go:117] "RemoveContainer" containerID="6c54295dc813ba7c6bbe9eef31ad040c9ececd322b6f68559579faedab4e2e12"
Feb 17 21:08:41 crc kubenswrapper[4793]: I0217 21:08:41.397087 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hbbjs"]
Feb 17 21:08:41 crc kubenswrapper[4793]: I0217 21:08:41.405469 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hbbjs"]
Feb 17 21:08:41 crc kubenswrapper[4793]: I0217 21:08:41.412942 4793 scope.go:117] "RemoveContainer" containerID="a7f7a79d1e1a8f2a39cdb709bb3f34714df1b65d78e26dcf728f0bdc0c40cc96"
Feb 17 21:08:41 crc kubenswrapper[4793]: I0217 21:08:41.480849 4793 scope.go:117] "RemoveContainer" containerID="3621d34587b87609fbb76e5c911b98d8e6c85fbb880630777b7b9ac2a08a594c"
Feb 17 21:08:41 crc kubenswrapper[4793]: E0217 21:08:41.481336 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3621d34587b87609fbb76e5c911b98d8e6c85fbb880630777b7b9ac2a08a594c\": container with ID starting with 3621d34587b87609fbb76e5c911b98d8e6c85fbb880630777b7b9ac2a08a594c not found: ID does not exist" containerID="3621d34587b87609fbb76e5c911b98d8e6c85fbb880630777b7b9ac2a08a594c"
Feb 17 21:08:41 crc kubenswrapper[4793]: I0217 21:08:41.481380 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3621d34587b87609fbb76e5c911b98d8e6c85fbb880630777b7b9ac2a08a594c"} err="failed to get container status \"3621d34587b87609fbb76e5c911b98d8e6c85fbb880630777b7b9ac2a08a594c\": rpc error: code = NotFound desc = could not find container \"3621d34587b87609fbb76e5c911b98d8e6c85fbb880630777b7b9ac2a08a594c\": container with ID starting with 3621d34587b87609fbb76e5c911b98d8e6c85fbb880630777b7b9ac2a08a594c not found: ID does not exist"
Feb 17 21:08:41 crc kubenswrapper[4793]: I0217 21:08:41.481408 4793 scope.go:117] "RemoveContainer" containerID="6c54295dc813ba7c6bbe9eef31ad040c9ececd322b6f68559579faedab4e2e12"
Feb 17 21:08:41 crc kubenswrapper[4793]: E0217 21:08:41.481667 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c54295dc813ba7c6bbe9eef31ad040c9ececd322b6f68559579faedab4e2e12\": container with ID starting with 6c54295dc813ba7c6bbe9eef31ad040c9ececd322b6f68559579faedab4e2e12 not found: ID does not exist" containerID="6c54295dc813ba7c6bbe9eef31ad040c9ececd322b6f68559579faedab4e2e12"
Feb 17 21:08:41 crc kubenswrapper[4793]: I0217 21:08:41.481716 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c54295dc813ba7c6bbe9eef31ad040c9ececd322b6f68559579faedab4e2e12"} err="failed to get container status \"6c54295dc813ba7c6bbe9eef31ad040c9ececd322b6f68559579faedab4e2e12\": rpc error: code = NotFound desc = could not find container \"6c54295dc813ba7c6bbe9eef31ad040c9ececd322b6f68559579faedab4e2e12\": container with ID starting with 6c54295dc813ba7c6bbe9eef31ad040c9ececd322b6f68559579faedab4e2e12 not found: ID does not exist"
Feb 17 21:08:41 crc kubenswrapper[4793]: I0217 21:08:41.481735 4793 scope.go:117] "RemoveContainer" containerID="a7f7a79d1e1a8f2a39cdb709bb3f34714df1b65d78e26dcf728f0bdc0c40cc96"
Feb 17 21:08:41 crc kubenswrapper[4793]: E0217 21:08:41.482141 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7f7a79d1e1a8f2a39cdb709bb3f34714df1b65d78e26dcf728f0bdc0c40cc96\": container with ID starting with a7f7a79d1e1a8f2a39cdb709bb3f34714df1b65d78e26dcf728f0bdc0c40cc96 not found: ID does not exist" containerID="a7f7a79d1e1a8f2a39cdb709bb3f34714df1b65d78e26dcf728f0bdc0c40cc96"
Feb 17 21:08:41 crc kubenswrapper[4793]: I0217 21:08:41.482169 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7f7a79d1e1a8f2a39cdb709bb3f34714df1b65d78e26dcf728f0bdc0c40cc96"} err="failed to get container status \"a7f7a79d1e1a8f2a39cdb709bb3f34714df1b65d78e26dcf728f0bdc0c40cc96\": rpc error: code = NotFound desc = could not find container \"a7f7a79d1e1a8f2a39cdb709bb3f34714df1b65d78e26dcf728f0bdc0c40cc96\": container with ID starting with a7f7a79d1e1a8f2a39cdb709bb3f34714df1b65d78e26dcf728f0bdc0c40cc96 not found: ID does not exist"
Feb 17 21:08:41 crc kubenswrapper[4793]: I0217 21:08:41.551118 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfb96f74-ddcd-408b-a753-8823d3488062" path="/var/lib/kubelet/pods/bfb96f74-ddcd-408b-a753-8823d3488062/volumes"
Feb 17 21:08:42 crc kubenswrapper[4793]: I0217 21:08:42.538938 4793 scope.go:117] "RemoveContainer" containerID="7796641c695371650a96e90298de39c311ff235164cc763f3489adc1bdb31328"
Feb 17 21:08:42 crc kubenswrapper[4793]: E0217 21:08:42.539477 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 21:08:45 crc kubenswrapper[4793]: I0217 21:08:45.400970 4793 generic.go:334] "Generic (PLEG): container finished" podID="1d07405f-37a4-410c-b7bf-ab35fc791d56" containerID="c87663a6b9a7034068cb8b92f8ef221e059de54d5217a9ea310fa7e35d337446" exitCode=0
Feb 17 21:08:45 crc kubenswrapper[4793]: I0217 21:08:45.401083 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fgjx4" event={"ID":"1d07405f-37a4-410c-b7bf-ab35fc791d56","Type":"ContainerDied","Data":"c87663a6b9a7034068cb8b92f8ef221e059de54d5217a9ea310fa7e35d337446"}
Feb 17 21:08:46 crc kubenswrapper[4793]: I0217 21:08:46.539614 4793 scope.go:117] "RemoveContainer" containerID="f8783260b99d812eafacfddbf87be628831d52753405f636580c71f44653bb06"
Feb 17 21:08:46 crc kubenswrapper[4793]: E0217 21:08:46.540286 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:08:46 crc kubenswrapper[4793]: I0217 21:08:46.875544 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fgjx4"
Feb 17 21:08:46 crc kubenswrapper[4793]: I0217 21:08:46.991065 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwxhd\" (UniqueName: \"kubernetes.io/projected/1d07405f-37a4-410c-b7bf-ab35fc791d56-kube-api-access-bwxhd\") pod \"1d07405f-37a4-410c-b7bf-ab35fc791d56\" (UID: \"1d07405f-37a4-410c-b7bf-ab35fc791d56\") "
Feb 17 21:08:46 crc kubenswrapper[4793]: I0217 21:08:46.991195 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d07405f-37a4-410c-b7bf-ab35fc791d56-inventory\") pod \"1d07405f-37a4-410c-b7bf-ab35fc791d56\" (UID: \"1d07405f-37a4-410c-b7bf-ab35fc791d56\") "
Feb 17 21:08:46 crc kubenswrapper[4793]: I0217 21:08:46.991514 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1d07405f-37a4-410c-b7bf-ab35fc791d56-ssh-key-openstack-edpm-ipam\") pod \"1d07405f-37a4-410c-b7bf-ab35fc791d56\" (UID: \"1d07405f-37a4-410c-b7bf-ab35fc791d56\") "
Feb 17 21:08:46 crc kubenswrapper[4793]: I0217 21:08:46.991592 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d07405f-37a4-410c-b7bf-ab35fc791d56-bootstrap-combined-ca-bundle\") pod \"1d07405f-37a4-410c-b7bf-ab35fc791d56\" (UID: \"1d07405f-37a4-410c-b7bf-ab35fc791d56\") "
Feb 17 21:08:46 crc kubenswrapper[4793]: I0217 21:08:46.997857 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d07405f-37a4-410c-b7bf-ab35fc791d56-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "1d07405f-37a4-410c-b7bf-ab35fc791d56" (UID: "1d07405f-37a4-410c-b7bf-ab35fc791d56"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 21:08:46 crc kubenswrapper[4793]: I0217 21:08:46.998152 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d07405f-37a4-410c-b7bf-ab35fc791d56-kube-api-access-bwxhd" (OuterVolumeSpecName: "kube-api-access-bwxhd") pod "1d07405f-37a4-410c-b7bf-ab35fc791d56" (UID: "1d07405f-37a4-410c-b7bf-ab35fc791d56"). InnerVolumeSpecName "kube-api-access-bwxhd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 21:08:47 crc kubenswrapper[4793]: I0217 21:08:47.019904 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d07405f-37a4-410c-b7bf-ab35fc791d56-inventory" (OuterVolumeSpecName: "inventory") pod "1d07405f-37a4-410c-b7bf-ab35fc791d56" (UID: "1d07405f-37a4-410c-b7bf-ab35fc791d56"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 21:08:47 crc kubenswrapper[4793]: I0217 21:08:47.042139 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d07405f-37a4-410c-b7bf-ab35fc791d56-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1d07405f-37a4-410c-b7bf-ab35fc791d56" (UID: "1d07405f-37a4-410c-b7bf-ab35fc791d56"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 21:08:47 crc kubenswrapper[4793]: I0217 21:08:47.094833 4793 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d07405f-37a4-410c-b7bf-ab35fc791d56-inventory\") on node \"crc\" DevicePath \"\""
Feb 17 21:08:47 crc kubenswrapper[4793]: I0217 21:08:47.094872 4793 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1d07405f-37a4-410c-b7bf-ab35fc791d56-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 17 21:08:47 crc kubenswrapper[4793]: I0217 21:08:47.094885 4793 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d07405f-37a4-410c-b7bf-ab35fc791d56-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 21:08:47 crc kubenswrapper[4793]: I0217 21:08:47.094899 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwxhd\" (UniqueName: \"kubernetes.io/projected/1d07405f-37a4-410c-b7bf-ab35fc791d56-kube-api-access-bwxhd\") on node \"crc\" DevicePath \"\""
Feb 17 21:08:47 crc kubenswrapper[4793]: I0217 21:08:47.427861 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fgjx4" event={"ID":"1d07405f-37a4-410c-b7bf-ab35fc791d56","Type":"ContainerDied","Data":"c871e5911c00844cfa68413cfeee79a17cbcc72606ae6f20056ec06d3d3959d5"}
Feb 17 21:08:47 crc kubenswrapper[4793]: I0217 21:08:47.427900 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c871e5911c00844cfa68413cfeee79a17cbcc72606ae6f20056ec06d3d3959d5"
Feb 17 21:08:47 crc kubenswrapper[4793]: I0217 21:08:47.427978 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fgjx4"
Feb 17 21:08:47 crc kubenswrapper[4793]: I0217 21:08:47.559252 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m4vkg"]
Feb 17 21:08:47 crc kubenswrapper[4793]: E0217 21:08:47.559891 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d07405f-37a4-410c-b7bf-ab35fc791d56" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Feb 17 21:08:47 crc kubenswrapper[4793]: I0217 21:08:47.559907 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d07405f-37a4-410c-b7bf-ab35fc791d56" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Feb 17 21:08:47 crc kubenswrapper[4793]: E0217 21:08:47.559926 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfb96f74-ddcd-408b-a753-8823d3488062" containerName="extract-utilities"
Feb 17 21:08:47 crc kubenswrapper[4793]: I0217 21:08:47.559934 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfb96f74-ddcd-408b-a753-8823d3488062" containerName="extract-utilities"
Feb 17 21:08:47 crc kubenswrapper[4793]: E0217 21:08:47.559963 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfb96f74-ddcd-408b-a753-8823d3488062" containerName="extract-content"
Feb 17 21:08:47 crc kubenswrapper[4793]: I0217 21:08:47.559972 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfb96f74-ddcd-408b-a753-8823d3488062" containerName="extract-content"
Feb 17 21:08:47 crc kubenswrapper[4793]: E0217 21:08:47.560001 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfb96f74-ddcd-408b-a753-8823d3488062" containerName="registry-server"
Feb 17 21:08:47 crc kubenswrapper[4793]: I0217 21:08:47.560009 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfb96f74-ddcd-408b-a753-8823d3488062" containerName="registry-server"
Feb 17 21:08:47 crc kubenswrapper[4793]: I0217 21:08:47.560236 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfb96f74-ddcd-408b-a753-8823d3488062" containerName="registry-server"
Feb 17 21:08:47 crc kubenswrapper[4793]: I0217 21:08:47.560256 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d07405f-37a4-410c-b7bf-ab35fc791d56" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Feb 17 21:08:47 crc kubenswrapper[4793]: I0217 21:08:47.562700 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m4vkg"]
Feb 17 21:08:47 crc kubenswrapper[4793]: I0217 21:08:47.562804 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m4vkg"
Feb 17 21:08:47 crc kubenswrapper[4793]: I0217 21:08:47.569658 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 17 21:08:47 crc kubenswrapper[4793]: I0217 21:08:47.569870 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 17 21:08:47 crc kubenswrapper[4793]: I0217 21:08:47.572144 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 17 21:08:47 crc kubenswrapper[4793]: I0217 21:08:47.572341 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6zcmz"
Feb 17 21:08:47 crc kubenswrapper[4793]: I0217 21:08:47.705802 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxw6h\" (UniqueName: \"kubernetes.io/projected/ef25c491-c6e8-4fc8-948b-ad2de1484956-kube-api-access-hxw6h\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-m4vkg\" (UID: \"ef25c491-c6e8-4fc8-948b-ad2de1484956\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m4vkg"
Feb 17 21:08:47 crc kubenswrapper[4793]: I0217 21:08:47.706972 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ef25c491-c6e8-4fc8-948b-ad2de1484956-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-m4vkg\" (UID: \"ef25c491-c6e8-4fc8-948b-ad2de1484956\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m4vkg"
Feb 17 21:08:47 crc kubenswrapper[4793]: I0217 21:08:47.707083 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ef25c491-c6e8-4fc8-948b-ad2de1484956-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-m4vkg\" (UID: \"ef25c491-c6e8-4fc8-948b-ad2de1484956\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m4vkg"
Feb 17 21:08:47 crc kubenswrapper[4793]: I0217 21:08:47.809071 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ef25c491-c6e8-4fc8-948b-ad2de1484956-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-m4vkg\" (UID: \"ef25c491-c6e8-4fc8-948b-ad2de1484956\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m4vkg"
Feb 17 21:08:47 crc kubenswrapper[4793]: I0217 21:08:47.809552 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxw6h\" (UniqueName: \"kubernetes.io/projected/ef25c491-c6e8-4fc8-948b-ad2de1484956-kube-api-access-hxw6h\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-m4vkg\" (UID: \"ef25c491-c6e8-4fc8-948b-ad2de1484956\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m4vkg"
Feb 17 21:08:47 crc kubenswrapper[4793]: I0217 21:08:47.810106 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ef25c491-c6e8-4fc8-948b-ad2de1484956-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-m4vkg\" (UID: \"ef25c491-c6e8-4fc8-948b-ad2de1484956\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m4vkg"
Feb 17 21:08:47 crc kubenswrapper[4793]: I0217 21:08:47.815719 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ef25c491-c6e8-4fc8-948b-ad2de1484956-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-m4vkg\" (UID: \"ef25c491-c6e8-4fc8-948b-ad2de1484956\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m4vkg"
Feb 17 21:08:47 crc kubenswrapper[4793]: I0217 21:08:47.819017 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ef25c491-c6e8-4fc8-948b-ad2de1484956-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-m4vkg\" (UID: \"ef25c491-c6e8-4fc8-948b-ad2de1484956\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m4vkg"
Feb 17 21:08:47 crc kubenswrapper[4793]: I0217 21:08:47.849898 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxw6h\" (UniqueName: \"kubernetes.io/projected/ef25c491-c6e8-4fc8-948b-ad2de1484956-kube-api-access-hxw6h\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-m4vkg\" (UID: \"ef25c491-c6e8-4fc8-948b-ad2de1484956\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m4vkg"
Feb 17 21:08:47 crc kubenswrapper[4793]: I0217 21:08:47.887667 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m4vkg"
Feb 17 21:08:48 crc kubenswrapper[4793]: I0217 21:08:48.283484 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m4vkg"]
Feb 17 21:08:48 crc kubenswrapper[4793]: I0217 21:08:48.438922 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m4vkg" event={"ID":"ef25c491-c6e8-4fc8-948b-ad2de1484956","Type":"ContainerStarted","Data":"6a902b81c3e32c0de321fcba67706bef89c2424ea0889cb19c0f7101955af305"}
Feb 17 21:08:49 crc kubenswrapper[4793]: I0217 21:08:49.458954 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m4vkg" event={"ID":"ef25c491-c6e8-4fc8-948b-ad2de1484956","Type":"ContainerStarted","Data":"e6cff3a530d469770c66561acd54459576244533bfe0be14b4594c895373b97d"}
Feb 17 21:08:49 crc kubenswrapper[4793]: I0217 21:08:49.490559 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m4vkg" podStartSLOduration=2.025153674 podStartE2EDuration="2.490537537s" podCreationTimestamp="2026-02-17 21:08:47 +0000 UTC" firstStartedPulling="2026-02-17 21:08:48.289356304 +0000 UTC m=+3603.581054615" lastFinishedPulling="2026-02-17 21:08:48.754740167 +0000 UTC m=+3604.046438478" observedRunningTime="2026-02-17 21:08:49.476973944 +0000 UTC m=+3604.768672295" watchObservedRunningTime="2026-02-17 21:08:49.490537537 +0000 UTC m=+3604.782235869"
Feb 17 21:08:56 crc kubenswrapper[4793]: I0217 21:08:56.538568 4793 scope.go:117] "RemoveContainer" containerID="7796641c695371650a96e90298de39c311ff235164cc763f3489adc1bdb31328"
Feb 17 21:08:56 crc kubenswrapper[4793]: E0217 21:08:56.539658 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 21:09:00 crc kubenswrapper[4793]: I0217 21:09:00.539261 4793 scope.go:117] "RemoveContainer" containerID="f8783260b99d812eafacfddbf87be628831d52753405f636580c71f44653bb06"
Feb 17 21:09:00 crc kubenswrapper[4793]: E0217 21:09:00.540062 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:09:07 crc kubenswrapper[4793]: I0217 21:09:07.538729 4793 scope.go:117] "RemoveContainer" containerID="7796641c695371650a96e90298de39c311ff235164cc763f3489adc1bdb31328"
Feb 17 21:09:07 crc kubenswrapper[4793]: E0217 21:09:07.539782 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 21:09:14 crc kubenswrapper[4793]: I0217 21:09:14.539262 4793 scope.go:117] "RemoveContainer" containerID="f8783260b99d812eafacfddbf87be628831d52753405f636580c71f44653bb06"
Feb 17 21:09:14 crc kubenswrapper[4793]: E0217 21:09:14.540192 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:09:19 crc kubenswrapper[4793]: I0217 21:09:19.539813 4793 scope.go:117] "RemoveContainer" containerID="7796641c695371650a96e90298de39c311ff235164cc763f3489adc1bdb31328"
Feb 17 21:09:19 crc kubenswrapper[4793]: E0217 21:09:19.540411 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 21:09:22 crc kubenswrapper[4793]: I0217 21:09:22.710677 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rbg92"]
Feb 17 21:09:22 crc kubenswrapper[4793]: I0217 21:09:22.715729 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rbg92"
Feb 17 21:09:22 crc kubenswrapper[4793]: I0217 21:09:22.734065 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rbg92"]
Feb 17 21:09:22 crc kubenswrapper[4793]: I0217 21:09:22.859675 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmntv\" (UniqueName: \"kubernetes.io/projected/5a840e43-fe3d-4adf-8d7f-787454ea5109-kube-api-access-kmntv\") pod \"community-operators-rbg92\" (UID: \"5a840e43-fe3d-4adf-8d7f-787454ea5109\") " pod="openshift-marketplace/community-operators-rbg92"
Feb 17 21:09:22 crc kubenswrapper[4793]: I0217 21:09:22.859751 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a840e43-fe3d-4adf-8d7f-787454ea5109-utilities\") pod \"community-operators-rbg92\" (UID: \"5a840e43-fe3d-4adf-8d7f-787454ea5109\") " pod="openshift-marketplace/community-operators-rbg92"
Feb 17 21:09:22 crc kubenswrapper[4793]: I0217 21:09:22.860004 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a840e43-fe3d-4adf-8d7f-787454ea5109-catalog-content\") pod \"community-operators-rbg92\" (UID: \"5a840e43-fe3d-4adf-8d7f-787454ea5109\") " pod="openshift-marketplace/community-operators-rbg92"
Feb 17 21:09:22 crc kubenswrapper[4793]: I0217 21:09:22.961456 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a840e43-fe3d-4adf-8d7f-787454ea5109-catalog-content\") pod \"community-operators-rbg92\" (UID: \"5a840e43-fe3d-4adf-8d7f-787454ea5109\") " pod="openshift-marketplace/community-operators-rbg92"
Feb 17 21:09:22 crc kubenswrapper[4793]: I0217 21:09:22.961571 4793 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"kube-api-access-kmntv\" (UniqueName: \"kubernetes.io/projected/5a840e43-fe3d-4adf-8d7f-787454ea5109-kube-api-access-kmntv\") pod \"community-operators-rbg92\" (UID: \"5a840e43-fe3d-4adf-8d7f-787454ea5109\") " pod="openshift-marketplace/community-operators-rbg92" Feb 17 21:09:22 crc kubenswrapper[4793]: I0217 21:09:22.961604 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a840e43-fe3d-4adf-8d7f-787454ea5109-utilities\") pod \"community-operators-rbg92\" (UID: \"5a840e43-fe3d-4adf-8d7f-787454ea5109\") " pod="openshift-marketplace/community-operators-rbg92" Feb 17 21:09:22 crc kubenswrapper[4793]: I0217 21:09:22.962057 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a840e43-fe3d-4adf-8d7f-787454ea5109-catalog-content\") pod \"community-operators-rbg92\" (UID: \"5a840e43-fe3d-4adf-8d7f-787454ea5109\") " pod="openshift-marketplace/community-operators-rbg92" Feb 17 21:09:22 crc kubenswrapper[4793]: I0217 21:09:22.962122 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a840e43-fe3d-4adf-8d7f-787454ea5109-utilities\") pod \"community-operators-rbg92\" (UID: \"5a840e43-fe3d-4adf-8d7f-787454ea5109\") " pod="openshift-marketplace/community-operators-rbg92" Feb 17 21:09:22 crc kubenswrapper[4793]: I0217 21:09:22.983945 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmntv\" (UniqueName: \"kubernetes.io/projected/5a840e43-fe3d-4adf-8d7f-787454ea5109-kube-api-access-kmntv\") pod \"community-operators-rbg92\" (UID: \"5a840e43-fe3d-4adf-8d7f-787454ea5109\") " pod="openshift-marketplace/community-operators-rbg92" Feb 17 21:09:23 crc kubenswrapper[4793]: I0217 21:09:23.038855 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rbg92" Feb 17 21:09:23 crc kubenswrapper[4793]: W0217 21:09:23.591964 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a840e43_fe3d_4adf_8d7f_787454ea5109.slice/crio-0feae6f251cae80aa27d91544c7f678caef6a6ad138476026e6931464a83b493 WatchSource:0}: Error finding container 0feae6f251cae80aa27d91544c7f678caef6a6ad138476026e6931464a83b493: Status 404 returned error can't find the container with id 0feae6f251cae80aa27d91544c7f678caef6a6ad138476026e6931464a83b493 Feb 17 21:09:23 crc kubenswrapper[4793]: I0217 21:09:23.597476 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rbg92"] Feb 17 21:09:23 crc kubenswrapper[4793]: I0217 21:09:23.758522 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rbg92" event={"ID":"5a840e43-fe3d-4adf-8d7f-787454ea5109","Type":"ContainerStarted","Data":"0feae6f251cae80aa27d91544c7f678caef6a6ad138476026e6931464a83b493"} Feb 17 21:09:24 crc kubenswrapper[4793]: I0217 21:09:24.773331 4793 generic.go:334] "Generic (PLEG): container finished" podID="5a840e43-fe3d-4adf-8d7f-787454ea5109" containerID="575481ce9cddf172833f716f1221c3f469137b978e0d915228837997bd551786" exitCode=0 Feb 17 21:09:24 crc kubenswrapper[4793]: I0217 21:09:24.773409 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rbg92" event={"ID":"5a840e43-fe3d-4adf-8d7f-787454ea5109","Type":"ContainerDied","Data":"575481ce9cddf172833f716f1221c3f469137b978e0d915228837997bd551786"} Feb 17 21:09:25 crc kubenswrapper[4793]: I0217 21:09:25.547860 4793 scope.go:117] "RemoveContainer" containerID="f8783260b99d812eafacfddbf87be628831d52753405f636580c71f44653bb06" Feb 17 21:09:25 crc kubenswrapper[4793]: E0217 21:09:25.548523 4793 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:09:26 crc kubenswrapper[4793]: I0217 21:09:26.798491 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rbg92" event={"ID":"5a840e43-fe3d-4adf-8d7f-787454ea5109","Type":"ContainerStarted","Data":"0cb5ad15ae9e099573071319e41ca0745ef112da714226b2943f30567226fb94"} Feb 17 21:09:27 crc kubenswrapper[4793]: I0217 21:09:27.807573 4793 generic.go:334] "Generic (PLEG): container finished" podID="5a840e43-fe3d-4adf-8d7f-787454ea5109" containerID="0cb5ad15ae9e099573071319e41ca0745ef112da714226b2943f30567226fb94" exitCode=0 Feb 17 21:09:27 crc kubenswrapper[4793]: I0217 21:09:27.807621 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rbg92" event={"ID":"5a840e43-fe3d-4adf-8d7f-787454ea5109","Type":"ContainerDied","Data":"0cb5ad15ae9e099573071319e41ca0745ef112da714226b2943f30567226fb94"} Feb 17 21:09:28 crc kubenswrapper[4793]: I0217 21:09:28.819420 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rbg92" event={"ID":"5a840e43-fe3d-4adf-8d7f-787454ea5109","Type":"ContainerStarted","Data":"b354d7ef6d458f11e0598d105ce7442b7582619bc6c7229d8718bfb3ed4310bb"} Feb 17 21:09:28 crc kubenswrapper[4793]: I0217 21:09:28.854046 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rbg92" podStartSLOduration=3.424338223 podStartE2EDuration="6.854019956s" podCreationTimestamp="2026-02-17 21:09:22 +0000 UTC" firstStartedPulling="2026-02-17 21:09:24.775996832 +0000 UTC m=+3640.067695193" lastFinishedPulling="2026-02-17 21:09:28.205678575 +0000 UTC m=+3643.497376926" 
observedRunningTime="2026-02-17 21:09:28.838700289 +0000 UTC m=+3644.130398630" watchObservedRunningTime="2026-02-17 21:09:28.854019956 +0000 UTC m=+3644.145718297" Feb 17 21:09:31 crc kubenswrapper[4793]: I0217 21:09:31.539111 4793 scope.go:117] "RemoveContainer" containerID="7796641c695371650a96e90298de39c311ff235164cc763f3489adc1bdb31328" Feb 17 21:09:31 crc kubenswrapper[4793]: E0217 21:09:31.539861 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:09:33 crc kubenswrapper[4793]: I0217 21:09:33.039780 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rbg92" Feb 17 21:09:33 crc kubenswrapper[4793]: I0217 21:09:33.039889 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rbg92" Feb 17 21:09:33 crc kubenswrapper[4793]: I0217 21:09:33.082865 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rbg92" Feb 17 21:09:33 crc kubenswrapper[4793]: I0217 21:09:33.914622 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rbg92" Feb 17 21:09:33 crc kubenswrapper[4793]: I0217 21:09:33.966808 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rbg92"] Feb 17 21:09:35 crc kubenswrapper[4793]: I0217 21:09:35.879176 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rbg92" 
podUID="5a840e43-fe3d-4adf-8d7f-787454ea5109" containerName="registry-server" containerID="cri-o://b354d7ef6d458f11e0598d105ce7442b7582619bc6c7229d8718bfb3ed4310bb" gracePeriod=2 Feb 17 21:09:36 crc kubenswrapper[4793]: I0217 21:09:36.418324 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rbg92" Feb 17 21:09:36 crc kubenswrapper[4793]: I0217 21:09:36.539309 4793 scope.go:117] "RemoveContainer" containerID="f8783260b99d812eafacfddbf87be628831d52753405f636580c71f44653bb06" Feb 17 21:09:36 crc kubenswrapper[4793]: E0217 21:09:36.539725 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:09:36 crc kubenswrapper[4793]: I0217 21:09:36.564111 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmntv\" (UniqueName: \"kubernetes.io/projected/5a840e43-fe3d-4adf-8d7f-787454ea5109-kube-api-access-kmntv\") pod \"5a840e43-fe3d-4adf-8d7f-787454ea5109\" (UID: \"5a840e43-fe3d-4adf-8d7f-787454ea5109\") " Feb 17 21:09:36 crc kubenswrapper[4793]: I0217 21:09:36.564209 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a840e43-fe3d-4adf-8d7f-787454ea5109-utilities\") pod \"5a840e43-fe3d-4adf-8d7f-787454ea5109\" (UID: \"5a840e43-fe3d-4adf-8d7f-787454ea5109\") " Feb 17 21:09:36 crc kubenswrapper[4793]: I0217 21:09:36.564543 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a840e43-fe3d-4adf-8d7f-787454ea5109-catalog-content\") pod \"5a840e43-fe3d-4adf-8d7f-787454ea5109\" (UID: 
\"5a840e43-fe3d-4adf-8d7f-787454ea5109\") " Feb 17 21:09:36 crc kubenswrapper[4793]: I0217 21:09:36.566433 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a840e43-fe3d-4adf-8d7f-787454ea5109-utilities" (OuterVolumeSpecName: "utilities") pod "5a840e43-fe3d-4adf-8d7f-787454ea5109" (UID: "5a840e43-fe3d-4adf-8d7f-787454ea5109"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 21:09:36 crc kubenswrapper[4793]: I0217 21:09:36.571011 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a840e43-fe3d-4adf-8d7f-787454ea5109-kube-api-access-kmntv" (OuterVolumeSpecName: "kube-api-access-kmntv") pod "5a840e43-fe3d-4adf-8d7f-787454ea5109" (UID: "5a840e43-fe3d-4adf-8d7f-787454ea5109"). InnerVolumeSpecName "kube-api-access-kmntv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 21:09:36 crc kubenswrapper[4793]: I0217 21:09:36.667857 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmntv\" (UniqueName: \"kubernetes.io/projected/5a840e43-fe3d-4adf-8d7f-787454ea5109-kube-api-access-kmntv\") on node \"crc\" DevicePath \"\"" Feb 17 21:09:36 crc kubenswrapper[4793]: I0217 21:09:36.668178 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a840e43-fe3d-4adf-8d7f-787454ea5109-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 21:09:36 crc kubenswrapper[4793]: I0217 21:09:36.768487 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a840e43-fe3d-4adf-8d7f-787454ea5109-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5a840e43-fe3d-4adf-8d7f-787454ea5109" (UID: "5a840e43-fe3d-4adf-8d7f-787454ea5109"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 21:09:36 crc kubenswrapper[4793]: I0217 21:09:36.770637 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a840e43-fe3d-4adf-8d7f-787454ea5109-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 21:09:36 crc kubenswrapper[4793]: I0217 21:09:36.887543 4793 generic.go:334] "Generic (PLEG): container finished" podID="5a840e43-fe3d-4adf-8d7f-787454ea5109" containerID="b354d7ef6d458f11e0598d105ce7442b7582619bc6c7229d8718bfb3ed4310bb" exitCode=0 Feb 17 21:09:36 crc kubenswrapper[4793]: I0217 21:09:36.887591 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rbg92" Feb 17 21:09:36 crc kubenswrapper[4793]: I0217 21:09:36.887607 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rbg92" event={"ID":"5a840e43-fe3d-4adf-8d7f-787454ea5109","Type":"ContainerDied","Data":"b354d7ef6d458f11e0598d105ce7442b7582619bc6c7229d8718bfb3ed4310bb"} Feb 17 21:09:36 crc kubenswrapper[4793]: I0217 21:09:36.889032 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rbg92" event={"ID":"5a840e43-fe3d-4adf-8d7f-787454ea5109","Type":"ContainerDied","Data":"0feae6f251cae80aa27d91544c7f678caef6a6ad138476026e6931464a83b493"} Feb 17 21:09:36 crc kubenswrapper[4793]: I0217 21:09:36.889052 4793 scope.go:117] "RemoveContainer" containerID="b354d7ef6d458f11e0598d105ce7442b7582619bc6c7229d8718bfb3ed4310bb" Feb 17 21:09:36 crc kubenswrapper[4793]: I0217 21:09:36.939057 4793 scope.go:117] "RemoveContainer" containerID="0cb5ad15ae9e099573071319e41ca0745ef112da714226b2943f30567226fb94" Feb 17 21:09:36 crc kubenswrapper[4793]: I0217 21:09:36.990046 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rbg92"] Feb 17 21:09:36 crc kubenswrapper[4793]: 
I0217 21:09:36.999716 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rbg92"] Feb 17 21:09:37 crc kubenswrapper[4793]: I0217 21:09:37.000219 4793 scope.go:117] "RemoveContainer" containerID="575481ce9cddf172833f716f1221c3f469137b978e0d915228837997bd551786" Feb 17 21:09:37 crc kubenswrapper[4793]: I0217 21:09:37.024838 4793 scope.go:117] "RemoveContainer" containerID="b354d7ef6d458f11e0598d105ce7442b7582619bc6c7229d8718bfb3ed4310bb" Feb 17 21:09:37 crc kubenswrapper[4793]: E0217 21:09:37.025271 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b354d7ef6d458f11e0598d105ce7442b7582619bc6c7229d8718bfb3ed4310bb\": container with ID starting with b354d7ef6d458f11e0598d105ce7442b7582619bc6c7229d8718bfb3ed4310bb not found: ID does not exist" containerID="b354d7ef6d458f11e0598d105ce7442b7582619bc6c7229d8718bfb3ed4310bb" Feb 17 21:09:37 crc kubenswrapper[4793]: I0217 21:09:37.025327 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b354d7ef6d458f11e0598d105ce7442b7582619bc6c7229d8718bfb3ed4310bb"} err="failed to get container status \"b354d7ef6d458f11e0598d105ce7442b7582619bc6c7229d8718bfb3ed4310bb\": rpc error: code = NotFound desc = could not find container \"b354d7ef6d458f11e0598d105ce7442b7582619bc6c7229d8718bfb3ed4310bb\": container with ID starting with b354d7ef6d458f11e0598d105ce7442b7582619bc6c7229d8718bfb3ed4310bb not found: ID does not exist" Feb 17 21:09:37 crc kubenswrapper[4793]: I0217 21:09:37.025360 4793 scope.go:117] "RemoveContainer" containerID="0cb5ad15ae9e099573071319e41ca0745ef112da714226b2943f30567226fb94" Feb 17 21:09:37 crc kubenswrapper[4793]: E0217 21:09:37.025864 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cb5ad15ae9e099573071319e41ca0745ef112da714226b2943f30567226fb94\": container 
with ID starting with 0cb5ad15ae9e099573071319e41ca0745ef112da714226b2943f30567226fb94 not found: ID does not exist" containerID="0cb5ad15ae9e099573071319e41ca0745ef112da714226b2943f30567226fb94" Feb 17 21:09:37 crc kubenswrapper[4793]: I0217 21:09:37.025900 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cb5ad15ae9e099573071319e41ca0745ef112da714226b2943f30567226fb94"} err="failed to get container status \"0cb5ad15ae9e099573071319e41ca0745ef112da714226b2943f30567226fb94\": rpc error: code = NotFound desc = could not find container \"0cb5ad15ae9e099573071319e41ca0745ef112da714226b2943f30567226fb94\": container with ID starting with 0cb5ad15ae9e099573071319e41ca0745ef112da714226b2943f30567226fb94 not found: ID does not exist" Feb 17 21:09:37 crc kubenswrapper[4793]: I0217 21:09:37.025923 4793 scope.go:117] "RemoveContainer" containerID="575481ce9cddf172833f716f1221c3f469137b978e0d915228837997bd551786" Feb 17 21:09:37 crc kubenswrapper[4793]: E0217 21:09:37.026289 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"575481ce9cddf172833f716f1221c3f469137b978e0d915228837997bd551786\": container with ID starting with 575481ce9cddf172833f716f1221c3f469137b978e0d915228837997bd551786 not found: ID does not exist" containerID="575481ce9cddf172833f716f1221c3f469137b978e0d915228837997bd551786" Feb 17 21:09:37 crc kubenswrapper[4793]: I0217 21:09:37.026317 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"575481ce9cddf172833f716f1221c3f469137b978e0d915228837997bd551786"} err="failed to get container status \"575481ce9cddf172833f716f1221c3f469137b978e0d915228837997bd551786\": rpc error: code = NotFound desc = could not find container \"575481ce9cddf172833f716f1221c3f469137b978e0d915228837997bd551786\": container with ID starting with 575481ce9cddf172833f716f1221c3f469137b978e0d915228837997bd551786 not 
found: ID does not exist" Feb 17 21:09:37 crc kubenswrapper[4793]: I0217 21:09:37.558993 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a840e43-fe3d-4adf-8d7f-787454ea5109" path="/var/lib/kubelet/pods/5a840e43-fe3d-4adf-8d7f-787454ea5109/volumes" Feb 17 21:09:43 crc kubenswrapper[4793]: I0217 21:09:43.539000 4793 scope.go:117] "RemoveContainer" containerID="7796641c695371650a96e90298de39c311ff235164cc763f3489adc1bdb31328" Feb 17 21:09:43 crc kubenswrapper[4793]: E0217 21:09:43.540058 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:09:48 crc kubenswrapper[4793]: I0217 21:09:48.538985 4793 scope.go:117] "RemoveContainer" containerID="f8783260b99d812eafacfddbf87be628831d52753405f636580c71f44653bb06" Feb 17 21:09:48 crc kubenswrapper[4793]: E0217 21:09:48.539569 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:09:58 crc kubenswrapper[4793]: I0217 21:09:58.538671 4793 scope.go:117] "RemoveContainer" containerID="7796641c695371650a96e90298de39c311ff235164cc763f3489adc1bdb31328" Feb 17 21:09:58 crc kubenswrapper[4793]: E0217 21:09:58.539517 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:10:02 crc kubenswrapper[4793]: I0217 21:10:02.538932 4793 scope.go:117] "RemoveContainer" containerID="f8783260b99d812eafacfddbf87be628831d52753405f636580c71f44653bb06" Feb 17 21:10:02 crc kubenswrapper[4793]: E0217 21:10:02.540118 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:10:12 crc kubenswrapper[4793]: I0217 21:10:12.539226 4793 scope.go:117] "RemoveContainer" containerID="7796641c695371650a96e90298de39c311ff235164cc763f3489adc1bdb31328" Feb 17 21:10:12 crc kubenswrapper[4793]: E0217 21:10:12.540038 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:10:17 crc kubenswrapper[4793]: I0217 21:10:17.539174 4793 scope.go:117] "RemoveContainer" containerID="f8783260b99d812eafacfddbf87be628831d52753405f636580c71f44653bb06" Feb 17 21:10:18 crc kubenswrapper[4793]: I0217 21:10:18.347540 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerStarted","Data":"5fc3b8a993a7a358eb27c63823e446b64aff5986fcb5fa9f8e3e52d87809547e"} Feb 17 21:10:20 crc 
kubenswrapper[4793]: I0217 21:10:20.371517 4793 generic.go:334] "Generic (PLEG): container finished" podID="02d26164-0fa4-4020-9224-b7760a490987" containerID="5fc3b8a993a7a358eb27c63823e446b64aff5986fcb5fa9f8e3e52d87809547e" exitCode=1 Feb 17 21:10:20 crc kubenswrapper[4793]: I0217 21:10:20.371612 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerDied","Data":"5fc3b8a993a7a358eb27c63823e446b64aff5986fcb5fa9f8e3e52d87809547e"} Feb 17 21:10:20 crc kubenswrapper[4793]: I0217 21:10:20.372123 4793 scope.go:117] "RemoveContainer" containerID="f8783260b99d812eafacfddbf87be628831d52753405f636580c71f44653bb06" Feb 17 21:10:20 crc kubenswrapper[4793]: I0217 21:10:20.372989 4793 scope.go:117] "RemoveContainer" containerID="5fc3b8a993a7a358eb27c63823e446b64aff5986fcb5fa9f8e3e52d87809547e" Feb 17 21:10:20 crc kubenswrapper[4793]: E0217 21:10:20.373483 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:10:20 crc kubenswrapper[4793]: I0217 21:10:20.596806 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 17 21:10:21 crc kubenswrapper[4793]: I0217 21:10:21.380829 4793 scope.go:117] "RemoveContainer" containerID="5fc3b8a993a7a358eb27c63823e446b64aff5986fcb5fa9f8e3e52d87809547e" Feb 17 21:10:21 crc kubenswrapper[4793]: E0217 21:10:21.381052 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" 
pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:10:24 crc kubenswrapper[4793]: I0217 21:10:24.538864 4793 scope.go:117] "RemoveContainer" containerID="7796641c695371650a96e90298de39c311ff235164cc763f3489adc1bdb31328" Feb 17 21:10:24 crc kubenswrapper[4793]: E0217 21:10:24.539569 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:10:25 crc kubenswrapper[4793]: I0217 21:10:25.420504 4793 generic.go:334] "Generic (PLEG): container finished" podID="ef25c491-c6e8-4fc8-948b-ad2de1484956" containerID="e6cff3a530d469770c66561acd54459576244533bfe0be14b4594c895373b97d" exitCode=0 Feb 17 21:10:25 crc kubenswrapper[4793]: I0217 21:10:25.420557 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m4vkg" event={"ID":"ef25c491-c6e8-4fc8-948b-ad2de1484956","Type":"ContainerDied","Data":"e6cff3a530d469770c66561acd54459576244533bfe0be14b4594c895373b97d"} Feb 17 21:10:25 crc kubenswrapper[4793]: I0217 21:10:25.596092 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 21:10:25 crc kubenswrapper[4793]: I0217 21:10:25.596148 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 21:10:25 crc kubenswrapper[4793]: I0217 21:10:25.596163 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 21:10:25 crc kubenswrapper[4793]: I0217 21:10:25.597193 4793 scope.go:117] "RemoveContainer" 
containerID="5fc3b8a993a7a358eb27c63823e446b64aff5986fcb5fa9f8e3e52d87809547e" Feb 17 21:10:25 crc kubenswrapper[4793]: E0217 21:10:25.597620 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:10:26 crc kubenswrapper[4793]: I0217 21:10:26.434857 4793 scope.go:117] "RemoveContainer" containerID="5fc3b8a993a7a358eb27c63823e446b64aff5986fcb5fa9f8e3e52d87809547e" Feb 17 21:10:26 crc kubenswrapper[4793]: E0217 21:10:26.435381 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:10:26 crc kubenswrapper[4793]: I0217 21:10:26.872231 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m4vkg" Feb 17 21:10:26 crc kubenswrapper[4793]: I0217 21:10:26.995549 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxw6h\" (UniqueName: \"kubernetes.io/projected/ef25c491-c6e8-4fc8-948b-ad2de1484956-kube-api-access-hxw6h\") pod \"ef25c491-c6e8-4fc8-948b-ad2de1484956\" (UID: \"ef25c491-c6e8-4fc8-948b-ad2de1484956\") " Feb 17 21:10:26 crc kubenswrapper[4793]: I0217 21:10:26.995709 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ef25c491-c6e8-4fc8-948b-ad2de1484956-inventory\") pod \"ef25c491-c6e8-4fc8-948b-ad2de1484956\" (UID: \"ef25c491-c6e8-4fc8-948b-ad2de1484956\") " Feb 17 21:10:26 crc kubenswrapper[4793]: I0217 21:10:26.995863 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ef25c491-c6e8-4fc8-948b-ad2de1484956-ssh-key-openstack-edpm-ipam\") pod \"ef25c491-c6e8-4fc8-948b-ad2de1484956\" (UID: \"ef25c491-c6e8-4fc8-948b-ad2de1484956\") " Feb 17 21:10:27 crc kubenswrapper[4793]: I0217 21:10:27.002746 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef25c491-c6e8-4fc8-948b-ad2de1484956-kube-api-access-hxw6h" (OuterVolumeSpecName: "kube-api-access-hxw6h") pod "ef25c491-c6e8-4fc8-948b-ad2de1484956" (UID: "ef25c491-c6e8-4fc8-948b-ad2de1484956"). InnerVolumeSpecName "kube-api-access-hxw6h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 21:10:27 crc kubenswrapper[4793]: I0217 21:10:27.032051 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef25c491-c6e8-4fc8-948b-ad2de1484956-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ef25c491-c6e8-4fc8-948b-ad2de1484956" (UID: "ef25c491-c6e8-4fc8-948b-ad2de1484956"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 21:10:27 crc kubenswrapper[4793]: I0217 21:10:27.039659 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef25c491-c6e8-4fc8-948b-ad2de1484956-inventory" (OuterVolumeSpecName: "inventory") pod "ef25c491-c6e8-4fc8-948b-ad2de1484956" (UID: "ef25c491-c6e8-4fc8-948b-ad2de1484956"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 21:10:27 crc kubenswrapper[4793]: I0217 21:10:27.099580 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxw6h\" (UniqueName: \"kubernetes.io/projected/ef25c491-c6e8-4fc8-948b-ad2de1484956-kube-api-access-hxw6h\") on node \"crc\" DevicePath \"\"" Feb 17 21:10:27 crc kubenswrapper[4793]: I0217 21:10:27.099627 4793 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ef25c491-c6e8-4fc8-948b-ad2de1484956-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 21:10:27 crc kubenswrapper[4793]: I0217 21:10:27.099639 4793 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ef25c491-c6e8-4fc8-948b-ad2de1484956-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 21:10:27 crc kubenswrapper[4793]: I0217 21:10:27.449199 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m4vkg" 
event={"ID":"ef25c491-c6e8-4fc8-948b-ad2de1484956","Type":"ContainerDied","Data":"6a902b81c3e32c0de321fcba67706bef89c2424ea0889cb19c0f7101955af305"} Feb 17 21:10:27 crc kubenswrapper[4793]: I0217 21:10:27.449274 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-m4vkg" Feb 17 21:10:27 crc kubenswrapper[4793]: I0217 21:10:27.449289 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a902b81c3e32c0de321fcba67706bef89c2424ea0889cb19c0f7101955af305" Feb 17 21:10:27 crc kubenswrapper[4793]: I0217 21:10:27.586755 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ctf9z"] Feb 17 21:10:27 crc kubenswrapper[4793]: E0217 21:10:27.587135 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a840e43-fe3d-4adf-8d7f-787454ea5109" containerName="extract-content" Feb 17 21:10:27 crc kubenswrapper[4793]: I0217 21:10:27.587155 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a840e43-fe3d-4adf-8d7f-787454ea5109" containerName="extract-content" Feb 17 21:10:27 crc kubenswrapper[4793]: E0217 21:10:27.587167 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef25c491-c6e8-4fc8-948b-ad2de1484956" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 21:10:27 crc kubenswrapper[4793]: I0217 21:10:27.587176 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef25c491-c6e8-4fc8-948b-ad2de1484956" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 21:10:27 crc kubenswrapper[4793]: E0217 21:10:27.587190 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a840e43-fe3d-4adf-8d7f-787454ea5109" containerName="registry-server" Feb 17 21:10:27 crc kubenswrapper[4793]: I0217 21:10:27.587196 4793 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5a840e43-fe3d-4adf-8d7f-787454ea5109" containerName="registry-server" Feb 17 21:10:27 crc kubenswrapper[4793]: E0217 21:10:27.587210 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a840e43-fe3d-4adf-8d7f-787454ea5109" containerName="extract-utilities" Feb 17 21:10:27 crc kubenswrapper[4793]: I0217 21:10:27.587217 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a840e43-fe3d-4adf-8d7f-787454ea5109" containerName="extract-utilities" Feb 17 21:10:27 crc kubenswrapper[4793]: I0217 21:10:27.587386 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef25c491-c6e8-4fc8-948b-ad2de1484956" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 21:10:27 crc kubenswrapper[4793]: I0217 21:10:27.587400 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a840e43-fe3d-4adf-8d7f-787454ea5109" containerName="registry-server" Feb 17 21:10:27 crc kubenswrapper[4793]: I0217 21:10:27.588105 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ctf9z" Feb 17 21:10:27 crc kubenswrapper[4793]: I0217 21:10:27.592470 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 21:10:27 crc kubenswrapper[4793]: I0217 21:10:27.592646 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 21:10:27 crc kubenswrapper[4793]: I0217 21:10:27.592989 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 21:10:27 crc kubenswrapper[4793]: I0217 21:10:27.593184 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6zcmz" Feb 17 21:10:27 crc kubenswrapper[4793]: I0217 21:10:27.604214 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ctf9z"] Feb 17 21:10:27 crc kubenswrapper[4793]: I0217 21:10:27.609360 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3243a2e1-dbae-438c-934c-6ecb775b33b0-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ctf9z\" (UID: \"3243a2e1-dbae-438c-934c-6ecb775b33b0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ctf9z" Feb 17 21:10:27 crc kubenswrapper[4793]: I0217 21:10:27.609442 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3243a2e1-dbae-438c-934c-6ecb775b33b0-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ctf9z\" (UID: \"3243a2e1-dbae-438c-934c-6ecb775b33b0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ctf9z" Feb 17 21:10:27 crc 
kubenswrapper[4793]: I0217 21:10:27.609532 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grb8q\" (UniqueName: \"kubernetes.io/projected/3243a2e1-dbae-438c-934c-6ecb775b33b0-kube-api-access-grb8q\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ctf9z\" (UID: \"3243a2e1-dbae-438c-934c-6ecb775b33b0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ctf9z" Feb 17 21:10:27 crc kubenswrapper[4793]: I0217 21:10:27.711912 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grb8q\" (UniqueName: \"kubernetes.io/projected/3243a2e1-dbae-438c-934c-6ecb775b33b0-kube-api-access-grb8q\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ctf9z\" (UID: \"3243a2e1-dbae-438c-934c-6ecb775b33b0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ctf9z" Feb 17 21:10:27 crc kubenswrapper[4793]: I0217 21:10:27.711989 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3243a2e1-dbae-438c-934c-6ecb775b33b0-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ctf9z\" (UID: \"3243a2e1-dbae-438c-934c-6ecb775b33b0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ctf9z" Feb 17 21:10:27 crc kubenswrapper[4793]: I0217 21:10:27.712039 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3243a2e1-dbae-438c-934c-6ecb775b33b0-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ctf9z\" (UID: \"3243a2e1-dbae-438c-934c-6ecb775b33b0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ctf9z" Feb 17 21:10:27 crc kubenswrapper[4793]: I0217 21:10:27.716811 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/3243a2e1-dbae-438c-934c-6ecb775b33b0-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ctf9z\" (UID: \"3243a2e1-dbae-438c-934c-6ecb775b33b0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ctf9z" Feb 17 21:10:27 crc kubenswrapper[4793]: I0217 21:10:27.717027 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3243a2e1-dbae-438c-934c-6ecb775b33b0-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ctf9z\" (UID: \"3243a2e1-dbae-438c-934c-6ecb775b33b0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ctf9z" Feb 17 21:10:27 crc kubenswrapper[4793]: I0217 21:10:27.727565 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grb8q\" (UniqueName: \"kubernetes.io/projected/3243a2e1-dbae-438c-934c-6ecb775b33b0-kube-api-access-grb8q\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ctf9z\" (UID: \"3243a2e1-dbae-438c-934c-6ecb775b33b0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ctf9z" Feb 17 21:10:27 crc kubenswrapper[4793]: I0217 21:10:27.915718 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ctf9z" Feb 17 21:10:28 crc kubenswrapper[4793]: I0217 21:10:28.473925 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ctf9z"] Feb 17 21:10:29 crc kubenswrapper[4793]: I0217 21:10:29.494430 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ctf9z" event={"ID":"3243a2e1-dbae-438c-934c-6ecb775b33b0","Type":"ContainerStarted","Data":"3d1753c1aa4d2a10b2d73c3eb9a6da00916288500ffbfd18eb7927f2b8df358d"} Feb 17 21:10:29 crc kubenswrapper[4793]: I0217 21:10:29.494845 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ctf9z" event={"ID":"3243a2e1-dbae-438c-934c-6ecb775b33b0","Type":"ContainerStarted","Data":"89c5177308b22274f68cfb21f4228a4582968411be015bea5a8e99f501b4be8b"} Feb 17 21:10:29 crc kubenswrapper[4793]: I0217 21:10:29.521340 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ctf9z" podStartSLOduration=2.066606868 podStartE2EDuration="2.521324338s" podCreationTimestamp="2026-02-17 21:10:27 +0000 UTC" firstStartedPulling="2026-02-17 21:10:28.48182174 +0000 UTC m=+3703.773520071" lastFinishedPulling="2026-02-17 21:10:28.93653919 +0000 UTC m=+3704.228237541" observedRunningTime="2026-02-17 21:10:29.517061743 +0000 UTC m=+3704.808760074" watchObservedRunningTime="2026-02-17 21:10:29.521324338 +0000 UTC m=+3704.813022649" Feb 17 21:10:36 crc kubenswrapper[4793]: I0217 21:10:36.539242 4793 scope.go:117] "RemoveContainer" containerID="7796641c695371650a96e90298de39c311ff235164cc763f3489adc1bdb31328" Feb 17 21:10:36 crc kubenswrapper[4793]: E0217 21:10:36.540345 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:10:38 crc kubenswrapper[4793]: I0217 21:10:38.539991 4793 scope.go:117] "RemoveContainer" containerID="5fc3b8a993a7a358eb27c63823e446b64aff5986fcb5fa9f8e3e52d87809547e" Feb 17 21:10:38 crc kubenswrapper[4793]: E0217 21:10:38.540870 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:10:47 crc kubenswrapper[4793]: I0217 21:10:47.538846 4793 scope.go:117] "RemoveContainer" containerID="7796641c695371650a96e90298de39c311ff235164cc763f3489adc1bdb31328" Feb 17 21:10:47 crc kubenswrapper[4793]: E0217 21:10:47.539749 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:10:53 crc kubenswrapper[4793]: I0217 21:10:53.539626 4793 scope.go:117] "RemoveContainer" containerID="5fc3b8a993a7a358eb27c63823e446b64aff5986fcb5fa9f8e3e52d87809547e" Feb 17 21:10:53 crc kubenswrapper[4793]: E0217 21:10:53.542536 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:10:58 crc kubenswrapper[4793]: I0217 21:10:58.538383 4793 scope.go:117] "RemoveContainer" containerID="7796641c695371650a96e90298de39c311ff235164cc763f3489adc1bdb31328" Feb 17 21:10:58 crc kubenswrapper[4793]: E0217 21:10:58.539328 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:11:05 crc kubenswrapper[4793]: I0217 21:11:05.559753 4793 scope.go:117] "RemoveContainer" containerID="5fc3b8a993a7a358eb27c63823e446b64aff5986fcb5fa9f8e3e52d87809547e" Feb 17 21:11:05 crc kubenswrapper[4793]: E0217 21:11:05.560784 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:11:09 crc kubenswrapper[4793]: I0217 21:11:09.538743 4793 scope.go:117] "RemoveContainer" containerID="7796641c695371650a96e90298de39c311ff235164cc763f3489adc1bdb31328" Feb 17 21:11:09 crc kubenswrapper[4793]: E0217 21:11:09.539772 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:11:17 crc kubenswrapper[4793]: I0217 21:11:17.538449 4793 scope.go:117] "RemoveContainer" containerID="5fc3b8a993a7a358eb27c63823e446b64aff5986fcb5fa9f8e3e52d87809547e" Feb 17 21:11:17 crc kubenswrapper[4793]: E0217 21:11:17.539157 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:11:20 crc kubenswrapper[4793]: I0217 21:11:20.539781 4793 scope.go:117] "RemoveContainer" containerID="7796641c695371650a96e90298de39c311ff235164cc763f3489adc1bdb31328" Feb 17 21:11:20 crc kubenswrapper[4793]: E0217 21:11:20.541178 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:11:28 crc kubenswrapper[4793]: I0217 21:11:28.539283 4793 scope.go:117] "RemoveContainer" containerID="5fc3b8a993a7a358eb27c63823e446b64aff5986fcb5fa9f8e3e52d87809547e" Feb 17 21:11:28 crc kubenswrapper[4793]: E0217 21:11:28.540134 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" 
pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:11:31 crc kubenswrapper[4793]: I0217 21:11:31.541053 4793 scope.go:117] "RemoveContainer" containerID="7796641c695371650a96e90298de39c311ff235164cc763f3489adc1bdb31328" Feb 17 21:11:31 crc kubenswrapper[4793]: E0217 21:11:31.541617 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:11:40 crc kubenswrapper[4793]: I0217 21:11:40.539164 4793 scope.go:117] "RemoveContainer" containerID="5fc3b8a993a7a358eb27c63823e446b64aff5986fcb5fa9f8e3e52d87809547e" Feb 17 21:11:40 crc kubenswrapper[4793]: E0217 21:11:40.540050 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:11:41 crc kubenswrapper[4793]: I0217 21:11:41.293059 4793 generic.go:334] "Generic (PLEG): container finished" podID="3243a2e1-dbae-438c-934c-6ecb775b33b0" containerID="3d1753c1aa4d2a10b2d73c3eb9a6da00916288500ffbfd18eb7927f2b8df358d" exitCode=0 Feb 17 21:11:41 crc kubenswrapper[4793]: I0217 21:11:41.293154 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ctf9z" event={"ID":"3243a2e1-dbae-438c-934c-6ecb775b33b0","Type":"ContainerDied","Data":"3d1753c1aa4d2a10b2d73c3eb9a6da00916288500ffbfd18eb7927f2b8df358d"} Feb 17 21:11:42 crc kubenswrapper[4793]: I0217 
21:11:42.876587 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ctf9z" Feb 17 21:11:42 crc kubenswrapper[4793]: I0217 21:11:42.916571 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3243a2e1-dbae-438c-934c-6ecb775b33b0-ssh-key-openstack-edpm-ipam\") pod \"3243a2e1-dbae-438c-934c-6ecb775b33b0\" (UID: \"3243a2e1-dbae-438c-934c-6ecb775b33b0\") " Feb 17 21:11:42 crc kubenswrapper[4793]: I0217 21:11:42.916660 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3243a2e1-dbae-438c-934c-6ecb775b33b0-inventory\") pod \"3243a2e1-dbae-438c-934c-6ecb775b33b0\" (UID: \"3243a2e1-dbae-438c-934c-6ecb775b33b0\") " Feb 17 21:11:42 crc kubenswrapper[4793]: I0217 21:11:42.916716 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grb8q\" (UniqueName: \"kubernetes.io/projected/3243a2e1-dbae-438c-934c-6ecb775b33b0-kube-api-access-grb8q\") pod \"3243a2e1-dbae-438c-934c-6ecb775b33b0\" (UID: \"3243a2e1-dbae-438c-934c-6ecb775b33b0\") " Feb 17 21:11:42 crc kubenswrapper[4793]: I0217 21:11:42.934149 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3243a2e1-dbae-438c-934c-6ecb775b33b0-kube-api-access-grb8q" (OuterVolumeSpecName: "kube-api-access-grb8q") pod "3243a2e1-dbae-438c-934c-6ecb775b33b0" (UID: "3243a2e1-dbae-438c-934c-6ecb775b33b0"). InnerVolumeSpecName "kube-api-access-grb8q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 21:11:42 crc kubenswrapper[4793]: I0217 21:11:42.947045 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3243a2e1-dbae-438c-934c-6ecb775b33b0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3243a2e1-dbae-438c-934c-6ecb775b33b0" (UID: "3243a2e1-dbae-438c-934c-6ecb775b33b0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 21:11:42 crc kubenswrapper[4793]: I0217 21:11:42.950659 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3243a2e1-dbae-438c-934c-6ecb775b33b0-inventory" (OuterVolumeSpecName: "inventory") pod "3243a2e1-dbae-438c-934c-6ecb775b33b0" (UID: "3243a2e1-dbae-438c-934c-6ecb775b33b0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 21:11:43 crc kubenswrapper[4793]: I0217 21:11:43.019492 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grb8q\" (UniqueName: \"kubernetes.io/projected/3243a2e1-dbae-438c-934c-6ecb775b33b0-kube-api-access-grb8q\") on node \"crc\" DevicePath \"\"" Feb 17 21:11:43 crc kubenswrapper[4793]: I0217 21:11:43.019748 4793 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3243a2e1-dbae-438c-934c-6ecb775b33b0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 21:11:43 crc kubenswrapper[4793]: I0217 21:11:43.019838 4793 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3243a2e1-dbae-438c-934c-6ecb775b33b0-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 21:11:43 crc kubenswrapper[4793]: I0217 21:11:43.320071 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ctf9z" 
event={"ID":"3243a2e1-dbae-438c-934c-6ecb775b33b0","Type":"ContainerDied","Data":"89c5177308b22274f68cfb21f4228a4582968411be015bea5a8e99f501b4be8b"} Feb 17 21:11:43 crc kubenswrapper[4793]: I0217 21:11:43.320472 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89c5177308b22274f68cfb21f4228a4582968411be015bea5a8e99f501b4be8b" Feb 17 21:11:43 crc kubenswrapper[4793]: I0217 21:11:43.320210 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ctf9z" Feb 17 21:11:43 crc kubenswrapper[4793]: I0217 21:11:43.442423 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7rgf5"] Feb 17 21:11:43 crc kubenswrapper[4793]: E0217 21:11:43.443361 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3243a2e1-dbae-438c-934c-6ecb775b33b0" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 17 21:11:43 crc kubenswrapper[4793]: I0217 21:11:43.446506 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="3243a2e1-dbae-438c-934c-6ecb775b33b0" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 17 21:11:43 crc kubenswrapper[4793]: I0217 21:11:43.446909 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="3243a2e1-dbae-438c-934c-6ecb775b33b0" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 17 21:11:43 crc kubenswrapper[4793]: I0217 21:11:43.448110 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7rgf5" Feb 17 21:11:43 crc kubenswrapper[4793]: I0217 21:11:43.450347 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 21:11:43 crc kubenswrapper[4793]: I0217 21:11:43.451218 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 21:11:43 crc kubenswrapper[4793]: I0217 21:11:43.451605 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 21:11:43 crc kubenswrapper[4793]: I0217 21:11:43.451801 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6zcmz" Feb 17 21:11:43 crc kubenswrapper[4793]: I0217 21:11:43.456617 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7rgf5"] Feb 17 21:11:43 crc kubenswrapper[4793]: I0217 21:11:43.531469 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k8bc\" (UniqueName: \"kubernetes.io/projected/31fb2610-de86-45b5-adb3-12f0d23f90bd-kube-api-access-9k8bc\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7rgf5\" (UID: \"31fb2610-de86-45b5-adb3-12f0d23f90bd\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7rgf5" Feb 17 21:11:43 crc kubenswrapper[4793]: I0217 21:11:43.531551 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/31fb2610-de86-45b5-adb3-12f0d23f90bd-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7rgf5\" (UID: \"31fb2610-de86-45b5-adb3-12f0d23f90bd\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7rgf5" Feb 17 21:11:43 crc kubenswrapper[4793]: I0217 
21:11:43.531653 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/31fb2610-de86-45b5-adb3-12f0d23f90bd-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7rgf5\" (UID: \"31fb2610-de86-45b5-adb3-12f0d23f90bd\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7rgf5" Feb 17 21:11:43 crc kubenswrapper[4793]: I0217 21:11:43.633997 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/31fb2610-de86-45b5-adb3-12f0d23f90bd-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7rgf5\" (UID: \"31fb2610-de86-45b5-adb3-12f0d23f90bd\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7rgf5" Feb 17 21:11:43 crc kubenswrapper[4793]: I0217 21:11:43.634330 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9k8bc\" (UniqueName: \"kubernetes.io/projected/31fb2610-de86-45b5-adb3-12f0d23f90bd-kube-api-access-9k8bc\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7rgf5\" (UID: \"31fb2610-de86-45b5-adb3-12f0d23f90bd\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7rgf5" Feb 17 21:11:43 crc kubenswrapper[4793]: I0217 21:11:43.634470 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/31fb2610-de86-45b5-adb3-12f0d23f90bd-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7rgf5\" (UID: \"31fb2610-de86-45b5-adb3-12f0d23f90bd\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7rgf5" Feb 17 21:11:43 crc kubenswrapper[4793]: I0217 21:11:43.640816 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/31fb2610-de86-45b5-adb3-12f0d23f90bd-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7rgf5\" (UID: \"31fb2610-de86-45b5-adb3-12f0d23f90bd\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7rgf5" Feb 17 21:11:43 crc kubenswrapper[4793]: I0217 21:11:43.648517 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/31fb2610-de86-45b5-adb3-12f0d23f90bd-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7rgf5\" (UID: \"31fb2610-de86-45b5-adb3-12f0d23f90bd\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7rgf5" Feb 17 21:11:43 crc kubenswrapper[4793]: I0217 21:11:43.667995 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9k8bc\" (UniqueName: \"kubernetes.io/projected/31fb2610-de86-45b5-adb3-12f0d23f90bd-kube-api-access-9k8bc\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7rgf5\" (UID: \"31fb2610-de86-45b5-adb3-12f0d23f90bd\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7rgf5" Feb 17 21:11:43 crc kubenswrapper[4793]: I0217 21:11:43.774790 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7rgf5" Feb 17 21:11:44 crc kubenswrapper[4793]: I0217 21:11:44.200352 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7rgf5"] Feb 17 21:11:44 crc kubenswrapper[4793]: I0217 21:11:44.330430 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7rgf5" event={"ID":"31fb2610-de86-45b5-adb3-12f0d23f90bd","Type":"ContainerStarted","Data":"b2e04ca3f3b2a6d138edaa5d6451fcc903d114bd95983fae03a06e5bbe4ae7e8"} Feb 17 21:11:44 crc kubenswrapper[4793]: I0217 21:11:44.539539 4793 scope.go:117] "RemoveContainer" containerID="7796641c695371650a96e90298de39c311ff235164cc763f3489adc1bdb31328" Feb 17 21:11:44 crc kubenswrapper[4793]: E0217 21:11:44.540242 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:11:45 crc kubenswrapper[4793]: I0217 21:11:45.344539 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7rgf5" event={"ID":"31fb2610-de86-45b5-adb3-12f0d23f90bd","Type":"ContainerStarted","Data":"0f18605fdc7f8ca982c18bc332fba70136ed2332a1c5bd034158dd6b4a1d8856"} Feb 17 21:11:45 crc kubenswrapper[4793]: I0217 21:11:45.368330 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7rgf5" podStartSLOduration=1.794328235 podStartE2EDuration="2.368298927s" podCreationTimestamp="2026-02-17 21:11:43 +0000 UTC" 
firstStartedPulling="2026-02-17 21:11:44.207390114 +0000 UTC m=+3779.499088415" lastFinishedPulling="2026-02-17 21:11:44.781360756 +0000 UTC m=+3780.073059107" observedRunningTime="2026-02-17 21:11:45.362583677 +0000 UTC m=+3780.654281998" watchObservedRunningTime="2026-02-17 21:11:45.368298927 +0000 UTC m=+3780.659997268" Feb 17 21:11:50 crc kubenswrapper[4793]: I0217 21:11:50.407423 4793 generic.go:334] "Generic (PLEG): container finished" podID="31fb2610-de86-45b5-adb3-12f0d23f90bd" containerID="0f18605fdc7f8ca982c18bc332fba70136ed2332a1c5bd034158dd6b4a1d8856" exitCode=0 Feb 17 21:11:50 crc kubenswrapper[4793]: I0217 21:11:50.407564 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7rgf5" event={"ID":"31fb2610-de86-45b5-adb3-12f0d23f90bd","Type":"ContainerDied","Data":"0f18605fdc7f8ca982c18bc332fba70136ed2332a1c5bd034158dd6b4a1d8856"} Feb 17 21:11:51 crc kubenswrapper[4793]: I0217 21:11:51.539969 4793 scope.go:117] "RemoveContainer" containerID="5fc3b8a993a7a358eb27c63823e446b64aff5986fcb5fa9f8e3e52d87809547e" Feb 17 21:11:51 crc kubenswrapper[4793]: E0217 21:11:51.540672 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:11:51 crc kubenswrapper[4793]: I0217 21:11:51.929349 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7rgf5" Feb 17 21:11:52 crc kubenswrapper[4793]: I0217 21:11:52.114227 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/31fb2610-de86-45b5-adb3-12f0d23f90bd-ssh-key-openstack-edpm-ipam\") pod \"31fb2610-de86-45b5-adb3-12f0d23f90bd\" (UID: \"31fb2610-de86-45b5-adb3-12f0d23f90bd\") " Feb 17 21:11:52 crc kubenswrapper[4793]: I0217 21:11:52.114339 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/31fb2610-de86-45b5-adb3-12f0d23f90bd-inventory\") pod \"31fb2610-de86-45b5-adb3-12f0d23f90bd\" (UID: \"31fb2610-de86-45b5-adb3-12f0d23f90bd\") " Feb 17 21:11:52 crc kubenswrapper[4793]: I0217 21:11:52.114461 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9k8bc\" (UniqueName: \"kubernetes.io/projected/31fb2610-de86-45b5-adb3-12f0d23f90bd-kube-api-access-9k8bc\") pod \"31fb2610-de86-45b5-adb3-12f0d23f90bd\" (UID: \"31fb2610-de86-45b5-adb3-12f0d23f90bd\") " Feb 17 21:11:52 crc kubenswrapper[4793]: I0217 21:11:52.124338 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fb2610-de86-45b5-adb3-12f0d23f90bd-kube-api-access-9k8bc" (OuterVolumeSpecName: "kube-api-access-9k8bc") pod "31fb2610-de86-45b5-adb3-12f0d23f90bd" (UID: "31fb2610-de86-45b5-adb3-12f0d23f90bd"). InnerVolumeSpecName "kube-api-access-9k8bc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 21:11:52 crc kubenswrapper[4793]: I0217 21:11:52.161795 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31fb2610-de86-45b5-adb3-12f0d23f90bd-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "31fb2610-de86-45b5-adb3-12f0d23f90bd" (UID: "31fb2610-de86-45b5-adb3-12f0d23f90bd"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 21:11:52 crc kubenswrapper[4793]: I0217 21:11:52.170554 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31fb2610-de86-45b5-adb3-12f0d23f90bd-inventory" (OuterVolumeSpecName: "inventory") pod "31fb2610-de86-45b5-adb3-12f0d23f90bd" (UID: "31fb2610-de86-45b5-adb3-12f0d23f90bd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 21:11:52 crc kubenswrapper[4793]: I0217 21:11:52.217573 4793 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/31fb2610-de86-45b5-adb3-12f0d23f90bd-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 21:11:52 crc kubenswrapper[4793]: I0217 21:11:52.217622 4793 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/31fb2610-de86-45b5-adb3-12f0d23f90bd-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 21:11:52 crc kubenswrapper[4793]: I0217 21:11:52.217643 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9k8bc\" (UniqueName: \"kubernetes.io/projected/31fb2610-de86-45b5-adb3-12f0d23f90bd-kube-api-access-9k8bc\") on node \"crc\" DevicePath \"\"" Feb 17 21:11:52 crc kubenswrapper[4793]: I0217 21:11:52.428033 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7rgf5" 
event={"ID":"31fb2610-de86-45b5-adb3-12f0d23f90bd","Type":"ContainerDied","Data":"b2e04ca3f3b2a6d138edaa5d6451fcc903d114bd95983fae03a06e5bbe4ae7e8"} Feb 17 21:11:52 crc kubenswrapper[4793]: I0217 21:11:52.428353 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2e04ca3f3b2a6d138edaa5d6451fcc903d114bd95983fae03a06e5bbe4ae7e8" Feb 17 21:11:52 crc kubenswrapper[4793]: I0217 21:11:52.428099 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7rgf5" Feb 17 21:11:52 crc kubenswrapper[4793]: I0217 21:11:52.633734 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-dh7jc"] Feb 17 21:11:52 crc kubenswrapper[4793]: E0217 21:11:52.634205 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31fb2610-de86-45b5-adb3-12f0d23f90bd" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 17 21:11:52 crc kubenswrapper[4793]: I0217 21:11:52.634223 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="31fb2610-de86-45b5-adb3-12f0d23f90bd" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 17 21:11:52 crc kubenswrapper[4793]: I0217 21:11:52.634493 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="31fb2610-de86-45b5-adb3-12f0d23f90bd" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 17 21:11:52 crc kubenswrapper[4793]: I0217 21:11:52.635285 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dh7jc" Feb 17 21:11:52 crc kubenswrapper[4793]: I0217 21:11:52.638025 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6zcmz" Feb 17 21:11:52 crc kubenswrapper[4793]: I0217 21:11:52.639056 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 21:11:52 crc kubenswrapper[4793]: I0217 21:11:52.639912 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 21:11:52 crc kubenswrapper[4793]: I0217 21:11:52.644438 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 21:11:52 crc kubenswrapper[4793]: I0217 21:11:52.645636 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-dh7jc"] Feb 17 21:11:52 crc kubenswrapper[4793]: I0217 21:11:52.742922 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6398b432-f52f-4de5-b001-8252e5cfaf08-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dh7jc\" (UID: \"6398b432-f52f-4de5-b001-8252e5cfaf08\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dh7jc" Feb 17 21:11:52 crc kubenswrapper[4793]: I0217 21:11:52.743050 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlbqq\" (UniqueName: \"kubernetes.io/projected/6398b432-f52f-4de5-b001-8252e5cfaf08-kube-api-access-zlbqq\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dh7jc\" (UID: \"6398b432-f52f-4de5-b001-8252e5cfaf08\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dh7jc" Feb 17 21:11:52 crc kubenswrapper[4793]: I0217 21:11:52.743177 4793 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6398b432-f52f-4de5-b001-8252e5cfaf08-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dh7jc\" (UID: \"6398b432-f52f-4de5-b001-8252e5cfaf08\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dh7jc" Feb 17 21:11:52 crc kubenswrapper[4793]: I0217 21:11:52.845241 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6398b432-f52f-4de5-b001-8252e5cfaf08-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dh7jc\" (UID: \"6398b432-f52f-4de5-b001-8252e5cfaf08\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dh7jc" Feb 17 21:11:52 crc kubenswrapper[4793]: I0217 21:11:52.845325 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6398b432-f52f-4de5-b001-8252e5cfaf08-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dh7jc\" (UID: \"6398b432-f52f-4de5-b001-8252e5cfaf08\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dh7jc" Feb 17 21:11:52 crc kubenswrapper[4793]: I0217 21:11:52.845384 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlbqq\" (UniqueName: \"kubernetes.io/projected/6398b432-f52f-4de5-b001-8252e5cfaf08-kube-api-access-zlbqq\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dh7jc\" (UID: \"6398b432-f52f-4de5-b001-8252e5cfaf08\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dh7jc" Feb 17 21:11:52 crc kubenswrapper[4793]: I0217 21:11:52.853278 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6398b432-f52f-4de5-b001-8252e5cfaf08-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-dh7jc\" (UID: \"6398b432-f52f-4de5-b001-8252e5cfaf08\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dh7jc" Feb 17 21:11:52 crc kubenswrapper[4793]: I0217 21:11:52.862252 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6398b432-f52f-4de5-b001-8252e5cfaf08-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dh7jc\" (UID: \"6398b432-f52f-4de5-b001-8252e5cfaf08\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dh7jc" Feb 17 21:11:52 crc kubenswrapper[4793]: I0217 21:11:52.869410 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlbqq\" (UniqueName: \"kubernetes.io/projected/6398b432-f52f-4de5-b001-8252e5cfaf08-kube-api-access-zlbqq\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dh7jc\" (UID: \"6398b432-f52f-4de5-b001-8252e5cfaf08\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dh7jc" Feb 17 21:11:52 crc kubenswrapper[4793]: I0217 21:11:52.960023 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dh7jc" Feb 17 21:11:53 crc kubenswrapper[4793]: I0217 21:11:53.588842 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-dh7jc"] Feb 17 21:11:54 crc kubenswrapper[4793]: I0217 21:11:54.453979 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dh7jc" event={"ID":"6398b432-f52f-4de5-b001-8252e5cfaf08","Type":"ContainerStarted","Data":"95823d24d61fec2b2eb26b2b7fd55d2f0e2727bebb70e1a598116242e47579f8"} Feb 17 21:11:54 crc kubenswrapper[4793]: I0217 21:11:54.454408 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dh7jc" event={"ID":"6398b432-f52f-4de5-b001-8252e5cfaf08","Type":"ContainerStarted","Data":"68caa5731a8551961cc828d42f24930d142fc2f75d7ad59b86b75dde4dae6afa"} Feb 17 21:11:54 crc kubenswrapper[4793]: I0217 21:11:54.481289 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dh7jc" podStartSLOduration=2.061221915 podStartE2EDuration="2.481254233s" podCreationTimestamp="2026-02-17 21:11:52 +0000 UTC" firstStartedPulling="2026-02-17 21:11:53.588376439 +0000 UTC m=+3788.880074750" lastFinishedPulling="2026-02-17 21:11:54.008408757 +0000 UTC m=+3789.300107068" observedRunningTime="2026-02-17 21:11:54.477851639 +0000 UTC m=+3789.769549990" watchObservedRunningTime="2026-02-17 21:11:54.481254233 +0000 UTC m=+3789.772952614" Feb 17 21:11:58 crc kubenswrapper[4793]: I0217 21:11:58.539159 4793 scope.go:117] "RemoveContainer" containerID="7796641c695371650a96e90298de39c311ff235164cc763f3489adc1bdb31328" Feb 17 21:11:58 crc kubenswrapper[4793]: E0217 21:11:58.540022 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:12:06 crc kubenswrapper[4793]: I0217 21:12:06.539222 4793 scope.go:117] "RemoveContainer" containerID="5fc3b8a993a7a358eb27c63823e446b64aff5986fcb5fa9f8e3e52d87809547e" Feb 17 21:12:06 crc kubenswrapper[4793]: E0217 21:12:06.540218 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:12:10 crc kubenswrapper[4793]: I0217 21:12:10.539179 4793 scope.go:117] "RemoveContainer" containerID="7796641c695371650a96e90298de39c311ff235164cc763f3489adc1bdb31328" Feb 17 21:12:10 crc kubenswrapper[4793]: E0217 21:12:10.539619 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:12:17 crc kubenswrapper[4793]: I0217 21:12:17.539172 4793 scope.go:117] "RemoveContainer" containerID="5fc3b8a993a7a358eb27c63823e446b64aff5986fcb5fa9f8e3e52d87809547e" Feb 17 21:12:17 crc kubenswrapper[4793]: E0217 21:12:17.539752 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier 
pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:12:21 crc kubenswrapper[4793]: I0217 21:12:21.538851 4793 scope.go:117] "RemoveContainer" containerID="7796641c695371650a96e90298de39c311ff235164cc763f3489adc1bdb31328" Feb 17 21:12:21 crc kubenswrapper[4793]: E0217 21:12:21.539490 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:12:31 crc kubenswrapper[4793]: I0217 21:12:31.539635 4793 scope.go:117] "RemoveContainer" containerID="5fc3b8a993a7a358eb27c63823e446b64aff5986fcb5fa9f8e3e52d87809547e" Feb 17 21:12:31 crc kubenswrapper[4793]: E0217 21:12:31.540366 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:12:32 crc kubenswrapper[4793]: I0217 21:12:32.539813 4793 scope.go:117] "RemoveContainer" containerID="7796641c695371650a96e90298de39c311ff235164cc763f3489adc1bdb31328" Feb 17 21:12:32 crc kubenswrapper[4793]: E0217 21:12:32.540179 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:12:35 crc kubenswrapper[4793]: I0217 21:12:35.873098 4793 generic.go:334] "Generic (PLEG): container finished" podID="6398b432-f52f-4de5-b001-8252e5cfaf08" containerID="95823d24d61fec2b2eb26b2b7fd55d2f0e2727bebb70e1a598116242e47579f8" exitCode=0 Feb 17 21:12:35 crc kubenswrapper[4793]: I0217 21:12:35.873239 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dh7jc" event={"ID":"6398b432-f52f-4de5-b001-8252e5cfaf08","Type":"ContainerDied","Data":"95823d24d61fec2b2eb26b2b7fd55d2f0e2727bebb70e1a598116242e47579f8"} Feb 17 21:12:37 crc kubenswrapper[4793]: I0217 21:12:37.374482 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dh7jc" Feb 17 21:12:37 crc kubenswrapper[4793]: I0217 21:12:37.492235 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6398b432-f52f-4de5-b001-8252e5cfaf08-inventory\") pod \"6398b432-f52f-4de5-b001-8252e5cfaf08\" (UID: \"6398b432-f52f-4de5-b001-8252e5cfaf08\") " Feb 17 21:12:37 crc kubenswrapper[4793]: I0217 21:12:37.492522 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlbqq\" (UniqueName: \"kubernetes.io/projected/6398b432-f52f-4de5-b001-8252e5cfaf08-kube-api-access-zlbqq\") pod \"6398b432-f52f-4de5-b001-8252e5cfaf08\" (UID: \"6398b432-f52f-4de5-b001-8252e5cfaf08\") " Feb 17 21:12:37 crc kubenswrapper[4793]: I0217 21:12:37.492633 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6398b432-f52f-4de5-b001-8252e5cfaf08-ssh-key-openstack-edpm-ipam\") pod \"6398b432-f52f-4de5-b001-8252e5cfaf08\" (UID: 
\"6398b432-f52f-4de5-b001-8252e5cfaf08\") " Feb 17 21:12:37 crc kubenswrapper[4793]: I0217 21:12:37.503132 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6398b432-f52f-4de5-b001-8252e5cfaf08-kube-api-access-zlbqq" (OuterVolumeSpecName: "kube-api-access-zlbqq") pod "6398b432-f52f-4de5-b001-8252e5cfaf08" (UID: "6398b432-f52f-4de5-b001-8252e5cfaf08"). InnerVolumeSpecName "kube-api-access-zlbqq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 21:12:37 crc kubenswrapper[4793]: I0217 21:12:37.532161 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6398b432-f52f-4de5-b001-8252e5cfaf08-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6398b432-f52f-4de5-b001-8252e5cfaf08" (UID: "6398b432-f52f-4de5-b001-8252e5cfaf08"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 21:12:37 crc kubenswrapper[4793]: I0217 21:12:37.543860 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6398b432-f52f-4de5-b001-8252e5cfaf08-inventory" (OuterVolumeSpecName: "inventory") pod "6398b432-f52f-4de5-b001-8252e5cfaf08" (UID: "6398b432-f52f-4de5-b001-8252e5cfaf08"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 21:12:37 crc kubenswrapper[4793]: I0217 21:12:37.595624 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlbqq\" (UniqueName: \"kubernetes.io/projected/6398b432-f52f-4de5-b001-8252e5cfaf08-kube-api-access-zlbqq\") on node \"crc\" DevicePath \"\"" Feb 17 21:12:37 crc kubenswrapper[4793]: I0217 21:12:37.596023 4793 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6398b432-f52f-4de5-b001-8252e5cfaf08-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 21:12:37 crc kubenswrapper[4793]: I0217 21:12:37.596037 4793 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6398b432-f52f-4de5-b001-8252e5cfaf08-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 21:12:37 crc kubenswrapper[4793]: I0217 21:12:37.892640 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dh7jc" event={"ID":"6398b432-f52f-4de5-b001-8252e5cfaf08","Type":"ContainerDied","Data":"68caa5731a8551961cc828d42f24930d142fc2f75d7ad59b86b75dde4dae6afa"} Feb 17 21:12:37 crc kubenswrapper[4793]: I0217 21:12:37.892681 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68caa5731a8551961cc828d42f24930d142fc2f75d7ad59b86b75dde4dae6afa" Feb 17 21:12:37 crc kubenswrapper[4793]: I0217 21:12:37.892756 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dh7jc" Feb 17 21:12:38 crc kubenswrapper[4793]: I0217 21:12:38.004390 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sq6dq"] Feb 17 21:12:38 crc kubenswrapper[4793]: E0217 21:12:38.004820 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6398b432-f52f-4de5-b001-8252e5cfaf08" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 17 21:12:38 crc kubenswrapper[4793]: I0217 21:12:38.004837 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="6398b432-f52f-4de5-b001-8252e5cfaf08" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 17 21:12:38 crc kubenswrapper[4793]: I0217 21:12:38.005022 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="6398b432-f52f-4de5-b001-8252e5cfaf08" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 17 21:12:38 crc kubenswrapper[4793]: I0217 21:12:38.005662 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sq6dq" Feb 17 21:12:38 crc kubenswrapper[4793]: I0217 21:12:38.009126 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6zcmz" Feb 17 21:12:38 crc kubenswrapper[4793]: I0217 21:12:38.009778 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 21:12:38 crc kubenswrapper[4793]: I0217 21:12:38.010053 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 21:12:38 crc kubenswrapper[4793]: I0217 21:12:38.010429 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 21:12:38 crc kubenswrapper[4793]: I0217 21:12:38.028221 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sq6dq"] Feb 17 21:12:38 crc kubenswrapper[4793]: I0217 21:12:38.106172 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvf9h\" (UniqueName: \"kubernetes.io/projected/2b7973c3-0186-41cc-8641-69e4c0ad60b6-kube-api-access-nvf9h\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-sq6dq\" (UID: \"2b7973c3-0186-41cc-8641-69e4c0ad60b6\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sq6dq" Feb 17 21:12:38 crc kubenswrapper[4793]: I0217 21:12:38.106269 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b7973c3-0186-41cc-8641-69e4c0ad60b6-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-sq6dq\" (UID: \"2b7973c3-0186-41cc-8641-69e4c0ad60b6\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sq6dq" Feb 17 21:12:38 crc 
kubenswrapper[4793]: I0217 21:12:38.106316 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b7973c3-0186-41cc-8641-69e4c0ad60b6-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-sq6dq\" (UID: \"2b7973c3-0186-41cc-8641-69e4c0ad60b6\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sq6dq" Feb 17 21:12:38 crc kubenswrapper[4793]: I0217 21:12:38.208330 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvf9h\" (UniqueName: \"kubernetes.io/projected/2b7973c3-0186-41cc-8641-69e4c0ad60b6-kube-api-access-nvf9h\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-sq6dq\" (UID: \"2b7973c3-0186-41cc-8641-69e4c0ad60b6\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sq6dq" Feb 17 21:12:38 crc kubenswrapper[4793]: I0217 21:12:38.208441 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b7973c3-0186-41cc-8641-69e4c0ad60b6-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-sq6dq\" (UID: \"2b7973c3-0186-41cc-8641-69e4c0ad60b6\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sq6dq" Feb 17 21:12:38 crc kubenswrapper[4793]: I0217 21:12:38.208488 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b7973c3-0186-41cc-8641-69e4c0ad60b6-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-sq6dq\" (UID: \"2b7973c3-0186-41cc-8641-69e4c0ad60b6\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sq6dq" Feb 17 21:12:38 crc kubenswrapper[4793]: I0217 21:12:38.213340 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/2b7973c3-0186-41cc-8641-69e4c0ad60b6-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-sq6dq\" (UID: \"2b7973c3-0186-41cc-8641-69e4c0ad60b6\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sq6dq" Feb 17 21:12:38 crc kubenswrapper[4793]: I0217 21:12:38.221825 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b7973c3-0186-41cc-8641-69e4c0ad60b6-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-sq6dq\" (UID: \"2b7973c3-0186-41cc-8641-69e4c0ad60b6\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sq6dq" Feb 17 21:12:38 crc kubenswrapper[4793]: I0217 21:12:38.241323 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvf9h\" (UniqueName: \"kubernetes.io/projected/2b7973c3-0186-41cc-8641-69e4c0ad60b6-kube-api-access-nvf9h\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-sq6dq\" (UID: \"2b7973c3-0186-41cc-8641-69e4c0ad60b6\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sq6dq" Feb 17 21:12:38 crc kubenswrapper[4793]: I0217 21:12:38.322363 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sq6dq" Feb 17 21:12:38 crc kubenswrapper[4793]: I0217 21:12:38.964441 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sq6dq"] Feb 17 21:12:39 crc kubenswrapper[4793]: I0217 21:12:39.919447 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sq6dq" event={"ID":"2b7973c3-0186-41cc-8641-69e4c0ad60b6","Type":"ContainerStarted","Data":"8a553f7f22f4d1df64a712c106afb1a938a9aec73904c875e94121a9fe6e023a"} Feb 17 21:12:39 crc kubenswrapper[4793]: I0217 21:12:39.919847 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sq6dq" event={"ID":"2b7973c3-0186-41cc-8641-69e4c0ad60b6","Type":"ContainerStarted","Data":"0047580dddf9dcec721c9d453999cf3652eb32f7365d79989f2757ddf5dee97d"} Feb 17 21:12:39 crc kubenswrapper[4793]: I0217 21:12:39.959669 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sq6dq" podStartSLOduration=2.528823523 podStartE2EDuration="2.959638526s" podCreationTimestamp="2026-02-17 21:12:37 +0000 UTC" firstStartedPulling="2026-02-17 21:12:38.969384629 +0000 UTC m=+3834.261082940" lastFinishedPulling="2026-02-17 21:12:39.400199632 +0000 UTC m=+3834.691897943" observedRunningTime="2026-02-17 21:12:39.950993994 +0000 UTC m=+3835.242692335" watchObservedRunningTime="2026-02-17 21:12:39.959638526 +0000 UTC m=+3835.251336867" Feb 17 21:12:42 crc kubenswrapper[4793]: I0217 21:12:42.539128 4793 scope.go:117] "RemoveContainer" containerID="5fc3b8a993a7a358eb27c63823e446b64aff5986fcb5fa9f8e3e52d87809547e" Feb 17 21:12:42 crc kubenswrapper[4793]: E0217 21:12:42.539834 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:12:44 crc kubenswrapper[4793]: I0217 21:12:44.538603 4793 scope.go:117] "RemoveContainer" containerID="7796641c695371650a96e90298de39c311ff235164cc763f3489adc1bdb31328" Feb 17 21:12:44 crc kubenswrapper[4793]: E0217 21:12:44.539371 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:12:57 crc kubenswrapper[4793]: I0217 21:12:57.538633 4793 scope.go:117] "RemoveContainer" containerID="5fc3b8a993a7a358eb27c63823e446b64aff5986fcb5fa9f8e3e52d87809547e" Feb 17 21:12:57 crc kubenswrapper[4793]: E0217 21:12:57.539403 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:12:57 crc kubenswrapper[4793]: I0217 21:12:57.540193 4793 scope.go:117] "RemoveContainer" containerID="7796641c695371650a96e90298de39c311ff235164cc763f3489adc1bdb31328" Feb 17 21:12:58 crc kubenswrapper[4793]: I0217 21:12:58.112861 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerStarted","Data":"08bd50edbae004fbadbb4be27d654951e19a5a66ae5db214f1828f62d5108ea6"} Feb 
17 21:13:08 crc kubenswrapper[4793]: I0217 21:13:08.539865 4793 scope.go:117] "RemoveContainer" containerID="5fc3b8a993a7a358eb27c63823e446b64aff5986fcb5fa9f8e3e52d87809547e"
Feb 17 21:13:08 crc kubenswrapper[4793]: E0217 21:13:08.540899 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:13:23 crc kubenswrapper[4793]: I0217 21:13:23.540402 4793 scope.go:117] "RemoveContainer" containerID="5fc3b8a993a7a358eb27c63823e446b64aff5986fcb5fa9f8e3e52d87809547e"
Feb 17 21:13:23 crc kubenswrapper[4793]: E0217 21:13:23.541404 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:13:34 crc kubenswrapper[4793]: I0217 21:13:34.509841 4793 generic.go:334] "Generic (PLEG): container finished" podID="2b7973c3-0186-41cc-8641-69e4c0ad60b6" containerID="8a553f7f22f4d1df64a712c106afb1a938a9aec73904c875e94121a9fe6e023a" exitCode=0
Feb 17 21:13:34 crc kubenswrapper[4793]: I0217 21:13:34.509950 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sq6dq" event={"ID":"2b7973c3-0186-41cc-8641-69e4c0ad60b6","Type":"ContainerDied","Data":"8a553f7f22f4d1df64a712c106afb1a938a9aec73904c875e94121a9fe6e023a"}
Feb 17 21:13:34 crc kubenswrapper[4793]: I0217 21:13:34.539179 4793 scope.go:117] "RemoveContainer" containerID="5fc3b8a993a7a358eb27c63823e446b64aff5986fcb5fa9f8e3e52d87809547e"
Feb 17 21:13:34 crc kubenswrapper[4793]: E0217 21:13:34.539536 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:13:36 crc kubenswrapper[4793]: I0217 21:13:36.085991 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sq6dq"
Feb 17 21:13:36 crc kubenswrapper[4793]: I0217 21:13:36.172025 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b7973c3-0186-41cc-8641-69e4c0ad60b6-inventory\") pod \"2b7973c3-0186-41cc-8641-69e4c0ad60b6\" (UID: \"2b7973c3-0186-41cc-8641-69e4c0ad60b6\") "
Feb 17 21:13:36 crc kubenswrapper[4793]: I0217 21:13:36.172089 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b7973c3-0186-41cc-8641-69e4c0ad60b6-ssh-key-openstack-edpm-ipam\") pod \"2b7973c3-0186-41cc-8641-69e4c0ad60b6\" (UID: \"2b7973c3-0186-41cc-8641-69e4c0ad60b6\") "
Feb 17 21:13:36 crc kubenswrapper[4793]: I0217 21:13:36.172115 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvf9h\" (UniqueName: \"kubernetes.io/projected/2b7973c3-0186-41cc-8641-69e4c0ad60b6-kube-api-access-nvf9h\") pod \"2b7973c3-0186-41cc-8641-69e4c0ad60b6\" (UID: \"2b7973c3-0186-41cc-8641-69e4c0ad60b6\") "
Feb 17 21:13:36 crc kubenswrapper[4793]: I0217 21:13:36.178010 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b7973c3-0186-41cc-8641-69e4c0ad60b6-kube-api-access-nvf9h" (OuterVolumeSpecName: "kube-api-access-nvf9h") pod "2b7973c3-0186-41cc-8641-69e4c0ad60b6" (UID: "2b7973c3-0186-41cc-8641-69e4c0ad60b6"). InnerVolumeSpecName "kube-api-access-nvf9h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 21:13:36 crc kubenswrapper[4793]: E0217 21:13:36.197764 4793 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b7973c3-0186-41cc-8641-69e4c0ad60b6-inventory podName:2b7973c3-0186-41cc-8641-69e4c0ad60b6 nodeName:}" failed. No retries permitted until 2026-02-17 21:13:36.697679301 +0000 UTC m=+3891.989377672 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "inventory" (UniqueName: "kubernetes.io/secret/2b7973c3-0186-41cc-8641-69e4c0ad60b6-inventory") pod "2b7973c3-0186-41cc-8641-69e4c0ad60b6" (UID: "2b7973c3-0186-41cc-8641-69e4c0ad60b6") : error deleting /var/lib/kubelet/pods/2b7973c3-0186-41cc-8641-69e4c0ad60b6/volume-subpaths: remove /var/lib/kubelet/pods/2b7973c3-0186-41cc-8641-69e4c0ad60b6/volume-subpaths: no such file or directory
Feb 17 21:13:36 crc kubenswrapper[4793]: I0217 21:13:36.200980 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b7973c3-0186-41cc-8641-69e4c0ad60b6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2b7973c3-0186-41cc-8641-69e4c0ad60b6" (UID: "2b7973c3-0186-41cc-8641-69e4c0ad60b6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 21:13:36 crc kubenswrapper[4793]: I0217 21:13:36.276539 4793 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b7973c3-0186-41cc-8641-69e4c0ad60b6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 17 21:13:36 crc kubenswrapper[4793]: I0217 21:13:36.276602 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvf9h\" (UniqueName: \"kubernetes.io/projected/2b7973c3-0186-41cc-8641-69e4c0ad60b6-kube-api-access-nvf9h\") on node \"crc\" DevicePath \"\""
Feb 17 21:13:36 crc kubenswrapper[4793]: I0217 21:13:36.531412 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sq6dq" event={"ID":"2b7973c3-0186-41cc-8641-69e4c0ad60b6","Type":"ContainerDied","Data":"0047580dddf9dcec721c9d453999cf3652eb32f7365d79989f2757ddf5dee97d"}
Feb 17 21:13:36 crc kubenswrapper[4793]: I0217 21:13:36.532026 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0047580dddf9dcec721c9d453999cf3652eb32f7365d79989f2757ddf5dee97d"
Feb 17 21:13:36 crc kubenswrapper[4793]: I0217 21:13:36.531504 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sq6dq"
Feb 17 21:13:36 crc kubenswrapper[4793]: I0217 21:13:36.643584 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-j77qz"]
Feb 17 21:13:36 crc kubenswrapper[4793]: E0217 21:13:36.644174 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b7973c3-0186-41cc-8641-69e4c0ad60b6" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Feb 17 21:13:36 crc kubenswrapper[4793]: I0217 21:13:36.644197 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b7973c3-0186-41cc-8641-69e4c0ad60b6" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Feb 17 21:13:36 crc kubenswrapper[4793]: I0217 21:13:36.644473 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b7973c3-0186-41cc-8641-69e4c0ad60b6" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Feb 17 21:13:36 crc kubenswrapper[4793]: I0217 21:13:36.645334 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-j77qz"
Feb 17 21:13:36 crc kubenswrapper[4793]: I0217 21:13:36.653081 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-j77qz"]
Feb 17 21:13:36 crc kubenswrapper[4793]: I0217 21:13:36.689200 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b087a913-e45a-4627-a95b-7c2c6edc0f23-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-j77qz\" (UID: \"b087a913-e45a-4627-a95b-7c2c6edc0f23\") " pod="openstack/ssh-known-hosts-edpm-deployment-j77qz"
Feb 17 21:13:36 crc kubenswrapper[4793]: I0217 21:13:36.689287 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b087a913-e45a-4627-a95b-7c2c6edc0f23-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-j77qz\" (UID: \"b087a913-e45a-4627-a95b-7c2c6edc0f23\") " pod="openstack/ssh-known-hosts-edpm-deployment-j77qz"
Feb 17 21:13:36 crc kubenswrapper[4793]: I0217 21:13:36.689309 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb9sv\" (UniqueName: \"kubernetes.io/projected/b087a913-e45a-4627-a95b-7c2c6edc0f23-kube-api-access-bb9sv\") pod \"ssh-known-hosts-edpm-deployment-j77qz\" (UID: \"b087a913-e45a-4627-a95b-7c2c6edc0f23\") " pod="openstack/ssh-known-hosts-edpm-deployment-j77qz"
Feb 17 21:13:36 crc kubenswrapper[4793]: I0217 21:13:36.790973 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b7973c3-0186-41cc-8641-69e4c0ad60b6-inventory\") pod \"2b7973c3-0186-41cc-8641-69e4c0ad60b6\" (UID: \"2b7973c3-0186-41cc-8641-69e4c0ad60b6\") "
Feb 17 21:13:36 crc kubenswrapper[4793]: I0217 21:13:36.792220 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b087a913-e45a-4627-a95b-7c2c6edc0f23-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-j77qz\" (UID: \"b087a913-e45a-4627-a95b-7c2c6edc0f23\") " pod="openstack/ssh-known-hosts-edpm-deployment-j77qz"
Feb 17 21:13:36 crc kubenswrapper[4793]: I0217 21:13:36.792287 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b087a913-e45a-4627-a95b-7c2c6edc0f23-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-j77qz\" (UID: \"b087a913-e45a-4627-a95b-7c2c6edc0f23\") " pod="openstack/ssh-known-hosts-edpm-deployment-j77qz"
Feb 17 21:13:36 crc kubenswrapper[4793]: I0217 21:13:36.792313 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bb9sv\" (UniqueName: \"kubernetes.io/projected/b087a913-e45a-4627-a95b-7c2c6edc0f23-kube-api-access-bb9sv\") pod \"ssh-known-hosts-edpm-deployment-j77qz\" (UID: \"b087a913-e45a-4627-a95b-7c2c6edc0f23\") " pod="openstack/ssh-known-hosts-edpm-deployment-j77qz"
Feb 17 21:13:36 crc kubenswrapper[4793]: I0217 21:13:36.797783 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b7973c3-0186-41cc-8641-69e4c0ad60b6-inventory" (OuterVolumeSpecName: "inventory") pod "2b7973c3-0186-41cc-8641-69e4c0ad60b6" (UID: "2b7973c3-0186-41cc-8641-69e4c0ad60b6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 21:13:36 crc kubenswrapper[4793]: I0217 21:13:36.798965 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b087a913-e45a-4627-a95b-7c2c6edc0f23-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-j77qz\" (UID: \"b087a913-e45a-4627-a95b-7c2c6edc0f23\") " pod="openstack/ssh-known-hosts-edpm-deployment-j77qz"
Feb 17 21:13:36 crc kubenswrapper[4793]: I0217 21:13:36.799196 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b087a913-e45a-4627-a95b-7c2c6edc0f23-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-j77qz\" (UID: \"b087a913-e45a-4627-a95b-7c2c6edc0f23\") " pod="openstack/ssh-known-hosts-edpm-deployment-j77qz"
Feb 17 21:13:36 crc kubenswrapper[4793]: I0217 21:13:36.807696 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bb9sv\" (UniqueName: \"kubernetes.io/projected/b087a913-e45a-4627-a95b-7c2c6edc0f23-kube-api-access-bb9sv\") pod \"ssh-known-hosts-edpm-deployment-j77qz\" (UID: \"b087a913-e45a-4627-a95b-7c2c6edc0f23\") " pod="openstack/ssh-known-hosts-edpm-deployment-j77qz"
Feb 17 21:13:36 crc kubenswrapper[4793]: I0217 21:13:36.894737 4793 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b7973c3-0186-41cc-8641-69e4c0ad60b6-inventory\") on node \"crc\" DevicePath \"\""
Feb 17 21:13:36 crc kubenswrapper[4793]: I0217 21:13:36.960036 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-j77qz"
Feb 17 21:13:37 crc kubenswrapper[4793]: I0217 21:13:37.552783 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-j77qz"]
Feb 17 21:13:38 crc kubenswrapper[4793]: I0217 21:13:38.083409 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 17 21:13:38 crc kubenswrapper[4793]: I0217 21:13:38.562404 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-j77qz" event={"ID":"b087a913-e45a-4627-a95b-7c2c6edc0f23","Type":"ContainerStarted","Data":"71e96bb23092d522be118093265937e103067cda0b98eb3e2c4b303d41d312ae"}
Feb 17 21:13:39 crc kubenswrapper[4793]: I0217 21:13:39.577134 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-j77qz" event={"ID":"b087a913-e45a-4627-a95b-7c2c6edc0f23","Type":"ContainerStarted","Data":"c9901490c01348318cb88a06794107ed00b02434f94cbfe99d5abe1bef53cf13"}
Feb 17 21:13:46 crc kubenswrapper[4793]: I0217 21:13:46.640947 4793 generic.go:334] "Generic (PLEG): container finished" podID="b087a913-e45a-4627-a95b-7c2c6edc0f23" containerID="c9901490c01348318cb88a06794107ed00b02434f94cbfe99d5abe1bef53cf13" exitCode=0
Feb 17 21:13:46 crc kubenswrapper[4793]: I0217 21:13:46.641463 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-j77qz" event={"ID":"b087a913-e45a-4627-a95b-7c2c6edc0f23","Type":"ContainerDied","Data":"c9901490c01348318cb88a06794107ed00b02434f94cbfe99d5abe1bef53cf13"}
Feb 17 21:13:47 crc kubenswrapper[4793]: I0217 21:13:47.539382 4793 scope.go:117] "RemoveContainer" containerID="5fc3b8a993a7a358eb27c63823e446b64aff5986fcb5fa9f8e3e52d87809547e"
Feb 17 21:13:47 crc kubenswrapper[4793]: E0217 21:13:47.540353 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:13:48 crc kubenswrapper[4793]: I0217 21:13:48.120057 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-j77qz"
Feb 17 21:13:48 crc kubenswrapper[4793]: I0217 21:13:48.233635 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b087a913-e45a-4627-a95b-7c2c6edc0f23-inventory-0\") pod \"b087a913-e45a-4627-a95b-7c2c6edc0f23\" (UID: \"b087a913-e45a-4627-a95b-7c2c6edc0f23\") "
Feb 17 21:13:48 crc kubenswrapper[4793]: I0217 21:13:48.233682 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b087a913-e45a-4627-a95b-7c2c6edc0f23-ssh-key-openstack-edpm-ipam\") pod \"b087a913-e45a-4627-a95b-7c2c6edc0f23\" (UID: \"b087a913-e45a-4627-a95b-7c2c6edc0f23\") "
Feb 17 21:13:48 crc kubenswrapper[4793]: I0217 21:13:48.233785 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bb9sv\" (UniqueName: \"kubernetes.io/projected/b087a913-e45a-4627-a95b-7c2c6edc0f23-kube-api-access-bb9sv\") pod \"b087a913-e45a-4627-a95b-7c2c6edc0f23\" (UID: \"b087a913-e45a-4627-a95b-7c2c6edc0f23\") "
Feb 17 21:13:48 crc kubenswrapper[4793]: I0217 21:13:48.243043 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b087a913-e45a-4627-a95b-7c2c6edc0f23-kube-api-access-bb9sv" (OuterVolumeSpecName: "kube-api-access-bb9sv") pod "b087a913-e45a-4627-a95b-7c2c6edc0f23" (UID: "b087a913-e45a-4627-a95b-7c2c6edc0f23"). InnerVolumeSpecName "kube-api-access-bb9sv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 21:13:48 crc kubenswrapper[4793]: I0217 21:13:48.270734 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b087a913-e45a-4627-a95b-7c2c6edc0f23-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b087a913-e45a-4627-a95b-7c2c6edc0f23" (UID: "b087a913-e45a-4627-a95b-7c2c6edc0f23"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 21:13:48 crc kubenswrapper[4793]: I0217 21:13:48.271157 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b087a913-e45a-4627-a95b-7c2c6edc0f23-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "b087a913-e45a-4627-a95b-7c2c6edc0f23" (UID: "b087a913-e45a-4627-a95b-7c2c6edc0f23"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 21:13:48 crc kubenswrapper[4793]: I0217 21:13:48.335908 4793 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b087a913-e45a-4627-a95b-7c2c6edc0f23-inventory-0\") on node \"crc\" DevicePath \"\""
Feb 17 21:13:48 crc kubenswrapper[4793]: I0217 21:13:48.335946 4793 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b087a913-e45a-4627-a95b-7c2c6edc0f23-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 17 21:13:48 crc kubenswrapper[4793]: I0217 21:13:48.335956 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bb9sv\" (UniqueName: \"kubernetes.io/projected/b087a913-e45a-4627-a95b-7c2c6edc0f23-kube-api-access-bb9sv\") on node \"crc\" DevicePath \"\""
Feb 17 21:13:48 crc kubenswrapper[4793]: I0217 21:13:48.669053 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-j77qz" event={"ID":"b087a913-e45a-4627-a95b-7c2c6edc0f23","Type":"ContainerDied","Data":"71e96bb23092d522be118093265937e103067cda0b98eb3e2c4b303d41d312ae"}
Feb 17 21:13:48 crc kubenswrapper[4793]: I0217 21:13:48.669106 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71e96bb23092d522be118093265937e103067cda0b98eb3e2c4b303d41d312ae"
Feb 17 21:13:48 crc kubenswrapper[4793]: I0217 21:13:48.669125 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-j77qz"
Feb 17 21:13:48 crc kubenswrapper[4793]: I0217 21:13:48.767156 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-6n5b7"]
Feb 17 21:13:48 crc kubenswrapper[4793]: E0217 21:13:48.767878 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b087a913-e45a-4627-a95b-7c2c6edc0f23" containerName="ssh-known-hosts-edpm-deployment"
Feb 17 21:13:48 crc kubenswrapper[4793]: I0217 21:13:48.767901 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="b087a913-e45a-4627-a95b-7c2c6edc0f23" containerName="ssh-known-hosts-edpm-deployment"
Feb 17 21:13:48 crc kubenswrapper[4793]: I0217 21:13:48.768198 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="b087a913-e45a-4627-a95b-7c2c6edc0f23" containerName="ssh-known-hosts-edpm-deployment"
Feb 17 21:13:48 crc kubenswrapper[4793]: I0217 21:13:48.769650 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6n5b7"
Feb 17 21:13:48 crc kubenswrapper[4793]: I0217 21:13:48.780069 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-6n5b7"]
Feb 17 21:13:48 crc kubenswrapper[4793]: I0217 21:13:48.791051 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 17 21:13:48 crc kubenswrapper[4793]: I0217 21:13:48.791321 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6zcmz"
Feb 17 21:13:48 crc kubenswrapper[4793]: I0217 21:13:48.791453 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 17 21:13:48 crc kubenswrapper[4793]: I0217 21:13:48.791588 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 17 21:13:48 crc kubenswrapper[4793]: I0217 21:13:48.952306 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q77gp\" (UniqueName: \"kubernetes.io/projected/7ce29a8f-d2fd-4315-9264-ba243fba16ee-kube-api-access-q77gp\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6n5b7\" (UID: \"7ce29a8f-d2fd-4315-9264-ba243fba16ee\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6n5b7"
Feb 17 21:13:48 crc kubenswrapper[4793]: I0217 21:13:48.952570 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7ce29a8f-d2fd-4315-9264-ba243fba16ee-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6n5b7\" (UID: \"7ce29a8f-d2fd-4315-9264-ba243fba16ee\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6n5b7"
Feb 17 21:13:48 crc kubenswrapper[4793]: I0217 21:13:48.952611 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ce29a8f-d2fd-4315-9264-ba243fba16ee-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6n5b7\" (UID: \"7ce29a8f-d2fd-4315-9264-ba243fba16ee\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6n5b7"
Feb 17 21:13:49 crc kubenswrapper[4793]: I0217 21:13:49.054547 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q77gp\" (UniqueName: \"kubernetes.io/projected/7ce29a8f-d2fd-4315-9264-ba243fba16ee-kube-api-access-q77gp\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6n5b7\" (UID: \"7ce29a8f-d2fd-4315-9264-ba243fba16ee\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6n5b7"
Feb 17 21:13:49 crc kubenswrapper[4793]: I0217 21:13:49.054640 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7ce29a8f-d2fd-4315-9264-ba243fba16ee-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6n5b7\" (UID: \"7ce29a8f-d2fd-4315-9264-ba243fba16ee\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6n5b7"
Feb 17 21:13:49 crc kubenswrapper[4793]: I0217 21:13:49.054785 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ce29a8f-d2fd-4315-9264-ba243fba16ee-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6n5b7\" (UID: \"7ce29a8f-d2fd-4315-9264-ba243fba16ee\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6n5b7"
Feb 17 21:13:49 crc kubenswrapper[4793]: I0217 21:13:49.061622 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ce29a8f-d2fd-4315-9264-ba243fba16ee-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6n5b7\" (UID: \"7ce29a8f-d2fd-4315-9264-ba243fba16ee\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6n5b7"
Feb 17 21:13:49 crc kubenswrapper[4793]: I0217 21:13:49.068495 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7ce29a8f-d2fd-4315-9264-ba243fba16ee-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6n5b7\" (UID: \"7ce29a8f-d2fd-4315-9264-ba243fba16ee\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6n5b7"
Feb 17 21:13:49 crc kubenswrapper[4793]: I0217 21:13:49.671474 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q77gp\" (UniqueName: \"kubernetes.io/projected/7ce29a8f-d2fd-4315-9264-ba243fba16ee-kube-api-access-q77gp\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6n5b7\" (UID: \"7ce29a8f-d2fd-4315-9264-ba243fba16ee\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6n5b7"
Feb 17 21:13:49 crc kubenswrapper[4793]: I0217 21:13:49.708257 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6n5b7"
Feb 17 21:13:50 crc kubenswrapper[4793]: I0217 21:13:50.418210 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-6n5b7"]
Feb 17 21:13:50 crc kubenswrapper[4793]: I0217 21:13:50.700666 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6n5b7" event={"ID":"7ce29a8f-d2fd-4315-9264-ba243fba16ee","Type":"ContainerStarted","Data":"d4166b1be2709ca2c86d7ecce8c883d1da1a19749963adc6a0466996560fdbf1"}
Feb 17 21:13:51 crc kubenswrapper[4793]: I0217 21:13:51.714656 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6n5b7" event={"ID":"7ce29a8f-d2fd-4315-9264-ba243fba16ee","Type":"ContainerStarted","Data":"4a425907e31539569cc220998a4307478e0aa7cf1d74d4118c22b7efc90a3176"}
Feb 17 21:13:51 crc kubenswrapper[4793]: I0217 21:13:51.752384 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6n5b7" podStartSLOduration=3.285598717 podStartE2EDuration="3.752360643s" podCreationTimestamp="2026-02-17 21:13:48 +0000 UTC" firstStartedPulling="2026-02-17 21:13:50.423555564 +0000 UTC m=+3905.715253875" lastFinishedPulling="2026-02-17 21:13:50.89031749 +0000 UTC m=+3906.182015801" observedRunningTime="2026-02-17 21:13:51.745394832 +0000 UTC m=+3907.037093143" watchObservedRunningTime="2026-02-17 21:13:51.752360643 +0000 UTC m=+3907.044058954"
Feb 17 21:14:00 crc kubenswrapper[4793]: I0217 21:14:00.810438 4793 generic.go:334] "Generic (PLEG): container finished" podID="7ce29a8f-d2fd-4315-9264-ba243fba16ee" containerID="4a425907e31539569cc220998a4307478e0aa7cf1d74d4118c22b7efc90a3176" exitCode=0
Feb 17 21:14:00 crc kubenswrapper[4793]: I0217 21:14:00.810525 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6n5b7" event={"ID":"7ce29a8f-d2fd-4315-9264-ba243fba16ee","Type":"ContainerDied","Data":"4a425907e31539569cc220998a4307478e0aa7cf1d74d4118c22b7efc90a3176"}
Feb 17 21:14:01 crc kubenswrapper[4793]: I0217 21:14:01.538781 4793 scope.go:117] "RemoveContainer" containerID="5fc3b8a993a7a358eb27c63823e446b64aff5986fcb5fa9f8e3e52d87809547e"
Feb 17 21:14:01 crc kubenswrapper[4793]: E0217 21:14:01.539392 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:14:02 crc kubenswrapper[4793]: I0217 21:14:02.428107 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6n5b7"
Feb 17 21:14:02 crc kubenswrapper[4793]: I0217 21:14:02.544085 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q77gp\" (UniqueName: \"kubernetes.io/projected/7ce29a8f-d2fd-4315-9264-ba243fba16ee-kube-api-access-q77gp\") pod \"7ce29a8f-d2fd-4315-9264-ba243fba16ee\" (UID: \"7ce29a8f-d2fd-4315-9264-ba243fba16ee\") "
Feb 17 21:14:02 crc kubenswrapper[4793]: I0217 21:14:02.544205 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ce29a8f-d2fd-4315-9264-ba243fba16ee-inventory\") pod \"7ce29a8f-d2fd-4315-9264-ba243fba16ee\" (UID: \"7ce29a8f-d2fd-4315-9264-ba243fba16ee\") "
Feb 17 21:14:02 crc kubenswrapper[4793]: I0217 21:14:02.544342 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7ce29a8f-d2fd-4315-9264-ba243fba16ee-ssh-key-openstack-edpm-ipam\") pod \"7ce29a8f-d2fd-4315-9264-ba243fba16ee\" (UID: \"7ce29a8f-d2fd-4315-9264-ba243fba16ee\") "
Feb 17 21:14:02 crc kubenswrapper[4793]: I0217 21:14:02.550481 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ce29a8f-d2fd-4315-9264-ba243fba16ee-kube-api-access-q77gp" (OuterVolumeSpecName: "kube-api-access-q77gp") pod "7ce29a8f-d2fd-4315-9264-ba243fba16ee" (UID: "7ce29a8f-d2fd-4315-9264-ba243fba16ee"). InnerVolumeSpecName "kube-api-access-q77gp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 21:14:02 crc kubenswrapper[4793]: I0217 21:14:02.577483 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ce29a8f-d2fd-4315-9264-ba243fba16ee-inventory" (OuterVolumeSpecName: "inventory") pod "7ce29a8f-d2fd-4315-9264-ba243fba16ee" (UID: "7ce29a8f-d2fd-4315-9264-ba243fba16ee"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 21:14:02 crc kubenswrapper[4793]: I0217 21:14:02.585752 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ce29a8f-d2fd-4315-9264-ba243fba16ee-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7ce29a8f-d2fd-4315-9264-ba243fba16ee" (UID: "7ce29a8f-d2fd-4315-9264-ba243fba16ee"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 21:14:02 crc kubenswrapper[4793]: I0217 21:14:02.646507 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q77gp\" (UniqueName: \"kubernetes.io/projected/7ce29a8f-d2fd-4315-9264-ba243fba16ee-kube-api-access-q77gp\") on node \"crc\" DevicePath \"\""
Feb 17 21:14:02 crc kubenswrapper[4793]: I0217 21:14:02.646542 4793 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ce29a8f-d2fd-4315-9264-ba243fba16ee-inventory\") on node \"crc\" DevicePath \"\""
Feb 17 21:14:02 crc kubenswrapper[4793]: I0217 21:14:02.646560 4793 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7ce29a8f-d2fd-4315-9264-ba243fba16ee-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 17 21:14:02 crc kubenswrapper[4793]: I0217 21:14:02.832737 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6n5b7" event={"ID":"7ce29a8f-d2fd-4315-9264-ba243fba16ee","Type":"ContainerDied","Data":"d4166b1be2709ca2c86d7ecce8c883d1da1a19749963adc6a0466996560fdbf1"}
Feb 17 21:14:02 crc kubenswrapper[4793]: I0217 21:14:02.833214 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4166b1be2709ca2c86d7ecce8c883d1da1a19749963adc6a0466996560fdbf1"
Feb 17 21:14:02 crc kubenswrapper[4793]: I0217 21:14:02.832815 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6n5b7"
Feb 17 21:14:03 crc kubenswrapper[4793]: I0217 21:14:03.021431 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bs2m2"]
Feb 17 21:14:03 crc kubenswrapper[4793]: E0217 21:14:03.021961 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ce29a8f-d2fd-4315-9264-ba243fba16ee" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Feb 17 21:14:03 crc kubenswrapper[4793]: I0217 21:14:03.021984 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ce29a8f-d2fd-4315-9264-ba243fba16ee" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Feb 17 21:14:03 crc kubenswrapper[4793]: I0217 21:14:03.022283 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ce29a8f-d2fd-4315-9264-ba243fba16ee" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Feb 17 21:14:03 crc kubenswrapper[4793]: I0217 21:14:03.023143 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bs2m2"
Feb 17 21:14:03 crc kubenswrapper[4793]: I0217 21:14:03.025411 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 17 21:14:03 crc kubenswrapper[4793]: I0217 21:14:03.025728 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 17 21:14:03 crc kubenswrapper[4793]: I0217 21:14:03.026266 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6zcmz"
Feb 17 21:14:03 crc kubenswrapper[4793]: I0217 21:14:03.044891 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 17 21:14:03 crc kubenswrapper[4793]: I0217 21:14:03.048161 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bs2m2"]
Feb 17 21:14:03 crc kubenswrapper[4793]: I0217 21:14:03.156980 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba576b4a-10f9-4e61-bb10-0b776f724706-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bs2m2\" (UID: \"ba576b4a-10f9-4e61-bb10-0b776f724706\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bs2m2"
Feb 17 21:14:03 crc kubenswrapper[4793]: I0217 21:14:03.157288 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ba576b4a-10f9-4e61-bb10-0b776f724706-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bs2m2\" (UID: \"ba576b4a-10f9-4e61-bb10-0b776f724706\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bs2m2"
Feb 17 21:14:03 crc kubenswrapper[4793]: I0217 21:14:03.157500 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x4g8\" (UniqueName: \"kubernetes.io/projected/ba576b4a-10f9-4e61-bb10-0b776f724706-kube-api-access-9x4g8\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bs2m2\" (UID: \"ba576b4a-10f9-4e61-bb10-0b776f724706\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bs2m2"
Feb 17 21:14:03 crc kubenswrapper[4793]: I0217 21:14:03.259671 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ba576b4a-10f9-4e61-bb10-0b776f724706-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bs2m2\" (UID: \"ba576b4a-10f9-4e61-bb10-0b776f724706\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bs2m2"
Feb 17 21:14:03 crc kubenswrapper[4793]: I0217 21:14:03.260052 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x4g8\" (UniqueName: \"kubernetes.io/projected/ba576b4a-10f9-4e61-bb10-0b776f724706-kube-api-access-9x4g8\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bs2m2\" (UID: \"ba576b4a-10f9-4e61-bb10-0b776f724706\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bs2m2"
Feb 17 21:14:03 crc kubenswrapper[4793]: I0217 21:14:03.260295 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba576b4a-10f9-4e61-bb10-0b776f724706-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bs2m2\" (UID: \"ba576b4a-10f9-4e61-bb10-0b776f724706\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bs2m2"
Feb 17 21:14:03 crc kubenswrapper[4793]: I0217 21:14:03.264101 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ba576b4a-10f9-4e61-bb10-0b776f724706-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bs2m2\" (UID: \"ba576b4a-10f9-4e61-bb10-0b776f724706\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bs2m2"
Feb 17 21:14:03 crc kubenswrapper[4793]: I0217 21:14:03.266641 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba576b4a-10f9-4e61-bb10-0b776f724706-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bs2m2\" (UID: \"ba576b4a-10f9-4e61-bb10-0b776f724706\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bs2m2"
Feb 17 21:14:03 crc kubenswrapper[4793]: I0217 21:14:03.277846 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x4g8\" (UniqueName: \"kubernetes.io/projected/ba576b4a-10f9-4e61-bb10-0b776f724706-kube-api-access-9x4g8\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bs2m2\" (UID: \"ba576b4a-10f9-4e61-bb10-0b776f724706\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bs2m2"
Feb 17 21:14:03 crc kubenswrapper[4793]: I0217 21:14:03.348572 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bs2m2" Feb 17 21:14:03 crc kubenswrapper[4793]: I0217 21:14:03.748714 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bs2m2"] Feb 17 21:14:03 crc kubenswrapper[4793]: I0217 21:14:03.841124 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bs2m2" event={"ID":"ba576b4a-10f9-4e61-bb10-0b776f724706","Type":"ContainerStarted","Data":"c3dd061d6252c473c17ed61d95274c7f139e3319386756eb9156973716d60978"} Feb 17 21:14:04 crc kubenswrapper[4793]: I0217 21:14:04.855303 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bs2m2" event={"ID":"ba576b4a-10f9-4e61-bb10-0b776f724706","Type":"ContainerStarted","Data":"b64f334d7c177d524bb2b26e84384ccd022a3e8b8f6a29e872661a01626a2320"} Feb 17 21:14:04 crc kubenswrapper[4793]: I0217 21:14:04.879293 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bs2m2" podStartSLOduration=1.445560386 podStartE2EDuration="1.87927318s" podCreationTimestamp="2026-02-17 21:14:03 +0000 UTC" firstStartedPulling="2026-02-17 21:14:03.769549705 +0000 UTC m=+3919.061248016" lastFinishedPulling="2026-02-17 21:14:04.203262479 +0000 UTC m=+3919.494960810" observedRunningTime="2026-02-17 21:14:04.877910036 +0000 UTC m=+3920.169608377" watchObservedRunningTime="2026-02-17 21:14:04.87927318 +0000 UTC m=+3920.170971501" Feb 17 21:14:13 crc kubenswrapper[4793]: I0217 21:14:13.539066 4793 scope.go:117] "RemoveContainer" containerID="5fc3b8a993a7a358eb27c63823e446b64aff5986fcb5fa9f8e3e52d87809547e" Feb 17 21:14:13 crc kubenswrapper[4793]: E0217 21:14:13.540023 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:14:13 crc kubenswrapper[4793]: I0217 21:14:13.942523 4793 generic.go:334] "Generic (PLEG): container finished" podID="ba576b4a-10f9-4e61-bb10-0b776f724706" containerID="b64f334d7c177d524bb2b26e84384ccd022a3e8b8f6a29e872661a01626a2320" exitCode=0 Feb 17 21:14:13 crc kubenswrapper[4793]: I0217 21:14:13.942649 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bs2m2" event={"ID":"ba576b4a-10f9-4e61-bb10-0b776f724706","Type":"ContainerDied","Data":"b64f334d7c177d524bb2b26e84384ccd022a3e8b8f6a29e872661a01626a2320"} Feb 17 21:14:15 crc kubenswrapper[4793]: I0217 21:14:15.908802 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bs2m2" Feb 17 21:14:15 crc kubenswrapper[4793]: I0217 21:14:15.957704 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ba576b4a-10f9-4e61-bb10-0b776f724706-ssh-key-openstack-edpm-ipam\") pod \"ba576b4a-10f9-4e61-bb10-0b776f724706\" (UID: \"ba576b4a-10f9-4e61-bb10-0b776f724706\") " Feb 17 21:14:15 crc kubenswrapper[4793]: I0217 21:14:15.957740 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9x4g8\" (UniqueName: \"kubernetes.io/projected/ba576b4a-10f9-4e61-bb10-0b776f724706-kube-api-access-9x4g8\") pod \"ba576b4a-10f9-4e61-bb10-0b776f724706\" (UID: \"ba576b4a-10f9-4e61-bb10-0b776f724706\") " Feb 17 21:14:15 crc kubenswrapper[4793]: I0217 21:14:15.957887 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba576b4a-10f9-4e61-bb10-0b776f724706-inventory\") pod 
\"ba576b4a-10f9-4e61-bb10-0b776f724706\" (UID: \"ba576b4a-10f9-4e61-bb10-0b776f724706\") " Feb 17 21:14:15 crc kubenswrapper[4793]: I0217 21:14:15.967014 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba576b4a-10f9-4e61-bb10-0b776f724706-kube-api-access-9x4g8" (OuterVolumeSpecName: "kube-api-access-9x4g8") pod "ba576b4a-10f9-4e61-bb10-0b776f724706" (UID: "ba576b4a-10f9-4e61-bb10-0b776f724706"). InnerVolumeSpecName "kube-api-access-9x4g8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 21:14:15 crc kubenswrapper[4793]: I0217 21:14:15.983156 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bs2m2" Feb 17 21:14:15 crc kubenswrapper[4793]: I0217 21:14:15.983158 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bs2m2" event={"ID":"ba576b4a-10f9-4e61-bb10-0b776f724706","Type":"ContainerDied","Data":"c3dd061d6252c473c17ed61d95274c7f139e3319386756eb9156973716d60978"} Feb 17 21:14:15 crc kubenswrapper[4793]: I0217 21:14:15.983560 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3dd061d6252c473c17ed61d95274c7f139e3319386756eb9156973716d60978" Feb 17 21:14:16 crc kubenswrapper[4793]: I0217 21:14:16.010934 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba576b4a-10f9-4e61-bb10-0b776f724706-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ba576b4a-10f9-4e61-bb10-0b776f724706" (UID: "ba576b4a-10f9-4e61-bb10-0b776f724706"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 21:14:16 crc kubenswrapper[4793]: I0217 21:14:16.014606 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba576b4a-10f9-4e61-bb10-0b776f724706-inventory" (OuterVolumeSpecName: "inventory") pod "ba576b4a-10f9-4e61-bb10-0b776f724706" (UID: "ba576b4a-10f9-4e61-bb10-0b776f724706"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 21:14:16 crc kubenswrapper[4793]: I0217 21:14:16.060195 4793 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ba576b4a-10f9-4e61-bb10-0b776f724706-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 21:14:16 crc kubenswrapper[4793]: I0217 21:14:16.060366 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9x4g8\" (UniqueName: \"kubernetes.io/projected/ba576b4a-10f9-4e61-bb10-0b776f724706-kube-api-access-9x4g8\") on node \"crc\" DevicePath \"\"" Feb 17 21:14:16 crc kubenswrapper[4793]: I0217 21:14:16.060447 4793 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba576b4a-10f9-4e61-bb10-0b776f724706-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 21:14:16 crc kubenswrapper[4793]: I0217 21:14:16.118511 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jbck"] Feb 17 21:14:16 crc kubenswrapper[4793]: E0217 21:14:16.119416 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba576b4a-10f9-4e61-bb10-0b776f724706" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 17 21:14:16 crc kubenswrapper[4793]: I0217 21:14:16.119507 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba576b4a-10f9-4e61-bb10-0b776f724706" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 17 21:14:16 crc kubenswrapper[4793]: I0217 
21:14:16.119792 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba576b4a-10f9-4e61-bb10-0b776f724706" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 17 21:14:16 crc kubenswrapper[4793]: I0217 21:14:16.120475 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jbck" Feb 17 21:14:16 crc kubenswrapper[4793]: I0217 21:14:16.122837 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Feb 17 21:14:16 crc kubenswrapper[4793]: I0217 21:14:16.123281 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 17 21:14:16 crc kubenswrapper[4793]: I0217 21:14:16.123360 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 17 21:14:16 crc kubenswrapper[4793]: I0217 21:14:16.123934 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 17 21:14:16 crc kubenswrapper[4793]: I0217 21:14:16.144142 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jbck"] Feb 17 21:14:16 crc kubenswrapper[4793]: I0217 21:14:16.162190 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5515ee23-b3b8-4450-a64d-62636c818464-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jbck\" (UID: \"5515ee23-b3b8-4450-a64d-62636c818464\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jbck" Feb 17 21:14:16 crc kubenswrapper[4793]: I0217 21:14:16.162255 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5515ee23-b3b8-4450-a64d-62636c818464-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jbck\" (UID: \"5515ee23-b3b8-4450-a64d-62636c818464\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jbck" Feb 17 21:14:16 crc kubenswrapper[4793]: I0217 21:14:16.162279 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5515ee23-b3b8-4450-a64d-62636c818464-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jbck\" (UID: \"5515ee23-b3b8-4450-a64d-62636c818464\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jbck" Feb 17 21:14:16 crc kubenswrapper[4793]: I0217 21:14:16.162325 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5515ee23-b3b8-4450-a64d-62636c818464-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jbck\" (UID: \"5515ee23-b3b8-4450-a64d-62636c818464\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jbck" Feb 17 21:14:16 crc kubenswrapper[4793]: I0217 21:14:16.162407 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5515ee23-b3b8-4450-a64d-62636c818464-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jbck\" (UID: \"5515ee23-b3b8-4450-a64d-62636c818464\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jbck" Feb 17 21:14:16 crc kubenswrapper[4793]: I0217 21:14:16.162435 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/5515ee23-b3b8-4450-a64d-62636c818464-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jbck\" (UID: \"5515ee23-b3b8-4450-a64d-62636c818464\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jbck" Feb 17 21:14:16 crc kubenswrapper[4793]: I0217 21:14:16.162489 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5515ee23-b3b8-4450-a64d-62636c818464-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jbck\" (UID: \"5515ee23-b3b8-4450-a64d-62636c818464\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jbck" Feb 17 21:14:16 crc kubenswrapper[4793]: I0217 21:14:16.162564 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5515ee23-b3b8-4450-a64d-62636c818464-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jbck\" (UID: \"5515ee23-b3b8-4450-a64d-62636c818464\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jbck" Feb 17 21:14:16 crc kubenswrapper[4793]: I0217 21:14:16.162685 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5515ee23-b3b8-4450-a64d-62636c818464-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jbck\" (UID: \"5515ee23-b3b8-4450-a64d-62636c818464\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jbck" Feb 17 21:14:16 crc kubenswrapper[4793]: I0217 21:14:16.162964 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5515ee23-b3b8-4450-a64d-62636c818464-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jbck\" (UID: \"5515ee23-b3b8-4450-a64d-62636c818464\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jbck" Feb 17 21:14:16 crc kubenswrapper[4793]: I0217 21:14:16.163004 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5515ee23-b3b8-4450-a64d-62636c818464-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jbck\" (UID: \"5515ee23-b3b8-4450-a64d-62636c818464\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jbck" Feb 17 21:14:16 crc kubenswrapper[4793]: I0217 21:14:16.163038 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5515ee23-b3b8-4450-a64d-62636c818464-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jbck\" (UID: \"5515ee23-b3b8-4450-a64d-62636c818464\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jbck" Feb 17 21:14:16 crc kubenswrapper[4793]: I0217 21:14:16.163131 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5515ee23-b3b8-4450-a64d-62636c818464-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jbck\" (UID: \"5515ee23-b3b8-4450-a64d-62636c818464\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jbck" Feb 17 21:14:16 crc kubenswrapper[4793]: I0217 21:14:16.163217 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-rrxg8\" (UniqueName: \"kubernetes.io/projected/5515ee23-b3b8-4450-a64d-62636c818464-kube-api-access-rrxg8\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jbck\" (UID: \"5515ee23-b3b8-4450-a64d-62636c818464\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jbck" Feb 17 21:14:16 crc kubenswrapper[4793]: I0217 21:14:16.265415 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5515ee23-b3b8-4450-a64d-62636c818464-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jbck\" (UID: \"5515ee23-b3b8-4450-a64d-62636c818464\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jbck" Feb 17 21:14:16 crc kubenswrapper[4793]: I0217 21:14:16.265460 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5515ee23-b3b8-4450-a64d-62636c818464-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jbck\" (UID: \"5515ee23-b3b8-4450-a64d-62636c818464\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jbck" Feb 17 21:14:16 crc kubenswrapper[4793]: I0217 21:14:16.265485 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5515ee23-b3b8-4450-a64d-62636c818464-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jbck\" (UID: \"5515ee23-b3b8-4450-a64d-62636c818464\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jbck" Feb 17 21:14:16 crc kubenswrapper[4793]: I0217 21:14:16.265521 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5515ee23-b3b8-4450-a64d-62636c818464-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jbck\" (UID: \"5515ee23-b3b8-4450-a64d-62636c818464\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jbck" Feb 17 21:14:16 crc kubenswrapper[4793]: I0217 21:14:16.265561 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrxg8\" (UniqueName: \"kubernetes.io/projected/5515ee23-b3b8-4450-a64d-62636c818464-kube-api-access-rrxg8\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jbck\" (UID: \"5515ee23-b3b8-4450-a64d-62636c818464\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jbck" Feb 17 21:14:16 crc kubenswrapper[4793]: I0217 21:14:16.265588 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5515ee23-b3b8-4450-a64d-62636c818464-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jbck\" (UID: \"5515ee23-b3b8-4450-a64d-62636c818464\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jbck" Feb 17 21:14:16 crc kubenswrapper[4793]: I0217 21:14:16.265609 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5515ee23-b3b8-4450-a64d-62636c818464-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jbck\" (UID: \"5515ee23-b3b8-4450-a64d-62636c818464\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jbck" Feb 17 21:14:16 crc kubenswrapper[4793]: I0217 21:14:16.265630 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5515ee23-b3b8-4450-a64d-62636c818464-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jbck\" (UID: 
\"5515ee23-b3b8-4450-a64d-62636c818464\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jbck" Feb 17 21:14:16 crc kubenswrapper[4793]: I0217 21:14:16.265660 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5515ee23-b3b8-4450-a64d-62636c818464-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jbck\" (UID: \"5515ee23-b3b8-4450-a64d-62636c818464\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jbck" Feb 17 21:14:16 crc kubenswrapper[4793]: I0217 21:14:16.265715 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5515ee23-b3b8-4450-a64d-62636c818464-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jbck\" (UID: \"5515ee23-b3b8-4450-a64d-62636c818464\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jbck" Feb 17 21:14:16 crc kubenswrapper[4793]: I0217 21:14:16.265743 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5515ee23-b3b8-4450-a64d-62636c818464-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jbck\" (UID: \"5515ee23-b3b8-4450-a64d-62636c818464\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jbck" Feb 17 21:14:16 crc kubenswrapper[4793]: I0217 21:14:16.265795 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5515ee23-b3b8-4450-a64d-62636c818464-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jbck\" (UID: \"5515ee23-b3b8-4450-a64d-62636c818464\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jbck" Feb 17 21:14:16 crc kubenswrapper[4793]: I0217 21:14:16.265823 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5515ee23-b3b8-4450-a64d-62636c818464-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jbck\" (UID: \"5515ee23-b3b8-4450-a64d-62636c818464\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jbck" Feb 17 21:14:16 crc kubenswrapper[4793]: I0217 21:14:16.265853 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5515ee23-b3b8-4450-a64d-62636c818464-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jbck\" (UID: \"5515ee23-b3b8-4450-a64d-62636c818464\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jbck" Feb 17 21:14:16 crc kubenswrapper[4793]: I0217 21:14:16.270156 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5515ee23-b3b8-4450-a64d-62636c818464-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jbck\" (UID: \"5515ee23-b3b8-4450-a64d-62636c818464\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jbck" Feb 17 21:14:16 crc kubenswrapper[4793]: I0217 21:14:16.270338 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5515ee23-b3b8-4450-a64d-62636c818464-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jbck\" (UID: \"5515ee23-b3b8-4450-a64d-62636c818464\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jbck" 
Feb 17 21:14:16 crc kubenswrapper[4793]: I0217 21:14:16.271121 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5515ee23-b3b8-4450-a64d-62636c818464-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jbck\" (UID: \"5515ee23-b3b8-4450-a64d-62636c818464\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jbck" Feb 17 21:14:16 crc kubenswrapper[4793]: I0217 21:14:16.271368 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5515ee23-b3b8-4450-a64d-62636c818464-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jbck\" (UID: \"5515ee23-b3b8-4450-a64d-62636c818464\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jbck" Feb 17 21:14:16 crc kubenswrapper[4793]: I0217 21:14:16.271980 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5515ee23-b3b8-4450-a64d-62636c818464-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jbck\" (UID: \"5515ee23-b3b8-4450-a64d-62636c818464\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jbck" Feb 17 21:14:16 crc kubenswrapper[4793]: I0217 21:14:16.272294 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5515ee23-b3b8-4450-a64d-62636c818464-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jbck\" (UID: \"5515ee23-b3b8-4450-a64d-62636c818464\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jbck" Feb 17 21:14:16 crc kubenswrapper[4793]: I0217 21:14:16.272425 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5515ee23-b3b8-4450-a64d-62636c818464-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jbck\" (UID: \"5515ee23-b3b8-4450-a64d-62636c818464\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jbck" Feb 17 21:14:16 crc kubenswrapper[4793]: I0217 21:14:16.272810 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5515ee23-b3b8-4450-a64d-62636c818464-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jbck\" (UID: \"5515ee23-b3b8-4450-a64d-62636c818464\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jbck" Feb 17 21:14:16 crc kubenswrapper[4793]: I0217 21:14:16.272831 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5515ee23-b3b8-4450-a64d-62636c818464-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jbck\" (UID: \"5515ee23-b3b8-4450-a64d-62636c818464\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jbck" Feb 17 21:14:16 crc kubenswrapper[4793]: I0217 21:14:16.273087 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5515ee23-b3b8-4450-a64d-62636c818464-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jbck\" (UID: \"5515ee23-b3b8-4450-a64d-62636c818464\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jbck" Feb 17 21:14:16 crc kubenswrapper[4793]: I0217 21:14:16.273607 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5515ee23-b3b8-4450-a64d-62636c818464-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jbck\" (UID: \"5515ee23-b3b8-4450-a64d-62636c818464\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jbck" Feb 17 21:14:16 crc kubenswrapper[4793]: I0217 21:14:16.275033 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5515ee23-b3b8-4450-a64d-62636c818464-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jbck\" (UID: \"5515ee23-b3b8-4450-a64d-62636c818464\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jbck" Feb 17 21:14:16 crc kubenswrapper[4793]: I0217 21:14:16.282419 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5515ee23-b3b8-4450-a64d-62636c818464-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jbck\" (UID: \"5515ee23-b3b8-4450-a64d-62636c818464\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jbck" Feb 17 21:14:16 crc kubenswrapper[4793]: I0217 21:14:16.285740 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrxg8\" (UniqueName: \"kubernetes.io/projected/5515ee23-b3b8-4450-a64d-62636c818464-kube-api-access-rrxg8\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jbck\" (UID: \"5515ee23-b3b8-4450-a64d-62636c818464\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jbck" Feb 17 21:14:16 crc kubenswrapper[4793]: I0217 21:14:16.444305 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jbck" Feb 17 21:14:17 crc kubenswrapper[4793]: I0217 21:14:17.064765 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jbck"] Feb 17 21:14:18 crc kubenswrapper[4793]: I0217 21:14:18.002237 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jbck" event={"ID":"5515ee23-b3b8-4450-a64d-62636c818464","Type":"ContainerStarted","Data":"16aeba38ff30541d6eb38615033d34449f4eac79f874bf1fb51558d27ca9357c"} Feb 17 21:14:18 crc kubenswrapper[4793]: I0217 21:14:18.002901 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jbck" event={"ID":"5515ee23-b3b8-4450-a64d-62636c818464","Type":"ContainerStarted","Data":"a43487e79027464316ad65730594a222522b00d41bafafc9088e749c8eb2737b"} Feb 17 21:14:18 crc kubenswrapper[4793]: I0217 21:14:18.027188 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jbck" podStartSLOduration=1.6456164580000001 podStartE2EDuration="2.027164599s" podCreationTimestamp="2026-02-17 21:14:16 +0000 UTC" firstStartedPulling="2026-02-17 21:14:17.068221362 +0000 UTC m=+3932.359919673" lastFinishedPulling="2026-02-17 21:14:17.449769493 +0000 UTC m=+3932.741467814" observedRunningTime="2026-02-17 21:14:18.019763207 +0000 UTC m=+3933.311461518" watchObservedRunningTime="2026-02-17 21:14:18.027164599 +0000 UTC m=+3933.318862910" Feb 17 21:14:25 crc kubenswrapper[4793]: I0217 21:14:25.544482 4793 scope.go:117] "RemoveContainer" containerID="5fc3b8a993a7a358eb27c63823e446b64aff5986fcb5fa9f8e3e52d87809547e" Feb 17 21:14:25 crc kubenswrapper[4793]: E0217 21:14:25.546199 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:14:28 crc kubenswrapper[4793]: I0217 21:14:28.584329 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2bmpq"] Feb 17 21:14:28 crc kubenswrapper[4793]: I0217 21:14:28.587418 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2bmpq" Feb 17 21:14:28 crc kubenswrapper[4793]: I0217 21:14:28.597738 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2bmpq"] Feb 17 21:14:28 crc kubenswrapper[4793]: I0217 21:14:28.650950 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4abebd64-86e1-429c-99ba-42e5e971a641-utilities\") pod \"redhat-marketplace-2bmpq\" (UID: \"4abebd64-86e1-429c-99ba-42e5e971a641\") " pod="openshift-marketplace/redhat-marketplace-2bmpq" Feb 17 21:14:28 crc kubenswrapper[4793]: I0217 21:14:28.651011 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqx6n\" (UniqueName: \"kubernetes.io/projected/4abebd64-86e1-429c-99ba-42e5e971a641-kube-api-access-wqx6n\") pod \"redhat-marketplace-2bmpq\" (UID: \"4abebd64-86e1-429c-99ba-42e5e971a641\") " pod="openshift-marketplace/redhat-marketplace-2bmpq" Feb 17 21:14:28 crc kubenswrapper[4793]: I0217 21:14:28.651055 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4abebd64-86e1-429c-99ba-42e5e971a641-catalog-content\") pod \"redhat-marketplace-2bmpq\" (UID: \"4abebd64-86e1-429c-99ba-42e5e971a641\") " 
pod="openshift-marketplace/redhat-marketplace-2bmpq" Feb 17 21:14:28 crc kubenswrapper[4793]: I0217 21:14:28.752457 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4abebd64-86e1-429c-99ba-42e5e971a641-utilities\") pod \"redhat-marketplace-2bmpq\" (UID: \"4abebd64-86e1-429c-99ba-42e5e971a641\") " pod="openshift-marketplace/redhat-marketplace-2bmpq" Feb 17 21:14:28 crc kubenswrapper[4793]: I0217 21:14:28.752767 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqx6n\" (UniqueName: \"kubernetes.io/projected/4abebd64-86e1-429c-99ba-42e5e971a641-kube-api-access-wqx6n\") pod \"redhat-marketplace-2bmpq\" (UID: \"4abebd64-86e1-429c-99ba-42e5e971a641\") " pod="openshift-marketplace/redhat-marketplace-2bmpq" Feb 17 21:14:28 crc kubenswrapper[4793]: I0217 21:14:28.752805 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4abebd64-86e1-429c-99ba-42e5e971a641-catalog-content\") pod \"redhat-marketplace-2bmpq\" (UID: \"4abebd64-86e1-429c-99ba-42e5e971a641\") " pod="openshift-marketplace/redhat-marketplace-2bmpq" Feb 17 21:14:28 crc kubenswrapper[4793]: I0217 21:14:28.753107 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4abebd64-86e1-429c-99ba-42e5e971a641-utilities\") pod \"redhat-marketplace-2bmpq\" (UID: \"4abebd64-86e1-429c-99ba-42e5e971a641\") " pod="openshift-marketplace/redhat-marketplace-2bmpq" Feb 17 21:14:28 crc kubenswrapper[4793]: I0217 21:14:28.753175 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4abebd64-86e1-429c-99ba-42e5e971a641-catalog-content\") pod \"redhat-marketplace-2bmpq\" (UID: \"4abebd64-86e1-429c-99ba-42e5e971a641\") " pod="openshift-marketplace/redhat-marketplace-2bmpq" 
Feb 17 21:14:29 crc kubenswrapper[4793]: I0217 21:14:29.071752 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqx6n\" (UniqueName: \"kubernetes.io/projected/4abebd64-86e1-429c-99ba-42e5e971a641-kube-api-access-wqx6n\") pod \"redhat-marketplace-2bmpq\" (UID: \"4abebd64-86e1-429c-99ba-42e5e971a641\") " pod="openshift-marketplace/redhat-marketplace-2bmpq" Feb 17 21:14:29 crc kubenswrapper[4793]: I0217 21:14:29.220309 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2bmpq" Feb 17 21:14:29 crc kubenswrapper[4793]: I0217 21:14:29.701433 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2bmpq"] Feb 17 21:14:29 crc kubenswrapper[4793]: W0217 21:14:29.713498 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4abebd64_86e1_429c_99ba_42e5e971a641.slice/crio-81f0f281307a587de1415d982c3c1697b1f5c94b95360917dd8a3f7c1b4b51fa WatchSource:0}: Error finding container 81f0f281307a587de1415d982c3c1697b1f5c94b95360917dd8a3f7c1b4b51fa: Status 404 returned error can't find the container with id 81f0f281307a587de1415d982c3c1697b1f5c94b95360917dd8a3f7c1b4b51fa Feb 17 21:14:30 crc kubenswrapper[4793]: I0217 21:14:30.131793 4793 generic.go:334] "Generic (PLEG): container finished" podID="4abebd64-86e1-429c-99ba-42e5e971a641" containerID="849b23b4d1bc5151433b0ff760d557b0e352bcb81f996c55f33748c7873613d5" exitCode=0 Feb 17 21:14:30 crc kubenswrapper[4793]: I0217 21:14:30.131840 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2bmpq" event={"ID":"4abebd64-86e1-429c-99ba-42e5e971a641","Type":"ContainerDied","Data":"849b23b4d1bc5151433b0ff760d557b0e352bcb81f996c55f33748c7873613d5"} Feb 17 21:14:30 crc kubenswrapper[4793]: I0217 21:14:30.132115 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-2bmpq" event={"ID":"4abebd64-86e1-429c-99ba-42e5e971a641","Type":"ContainerStarted","Data":"81f0f281307a587de1415d982c3c1697b1f5c94b95360917dd8a3f7c1b4b51fa"} Feb 17 21:14:33 crc kubenswrapper[4793]: I0217 21:14:33.486319 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2bmpq" event={"ID":"4abebd64-86e1-429c-99ba-42e5e971a641","Type":"ContainerStarted","Data":"c7031614c7f65d306b2ece86502285770a8ee41bce5a596b42567b096cc20a13"} Feb 17 21:14:34 crc kubenswrapper[4793]: I0217 21:14:34.495292 4793 generic.go:334] "Generic (PLEG): container finished" podID="4abebd64-86e1-429c-99ba-42e5e971a641" containerID="c7031614c7f65d306b2ece86502285770a8ee41bce5a596b42567b096cc20a13" exitCode=0 Feb 17 21:14:34 crc kubenswrapper[4793]: I0217 21:14:34.495583 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2bmpq" event={"ID":"4abebd64-86e1-429c-99ba-42e5e971a641","Type":"ContainerDied","Data":"c7031614c7f65d306b2ece86502285770a8ee41bce5a596b42567b096cc20a13"} Feb 17 21:14:35 crc kubenswrapper[4793]: I0217 21:14:35.523046 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2bmpq" event={"ID":"4abebd64-86e1-429c-99ba-42e5e971a641","Type":"ContainerStarted","Data":"c941f6742306c98f4d38b2d52cfa005ca3f608309034e7352dd64d5cf3be1f54"} Feb 17 21:14:35 crc kubenswrapper[4793]: I0217 21:14:35.554919 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2bmpq" podStartSLOduration=2.815647754 podStartE2EDuration="7.554898227s" podCreationTimestamp="2026-02-17 21:14:28 +0000 UTC" firstStartedPulling="2026-02-17 21:14:30.134111696 +0000 UTC m=+3945.425810007" lastFinishedPulling="2026-02-17 21:14:34.873362169 +0000 UTC m=+3950.165060480" observedRunningTime="2026-02-17 21:14:35.549706259 +0000 UTC m=+3950.841404570" 
watchObservedRunningTime="2026-02-17 21:14:35.554898227 +0000 UTC m=+3950.846596538" Feb 17 21:14:36 crc kubenswrapper[4793]: I0217 21:14:36.538750 4793 scope.go:117] "RemoveContainer" containerID="5fc3b8a993a7a358eb27c63823e446b64aff5986fcb5fa9f8e3e52d87809547e" Feb 17 21:14:36 crc kubenswrapper[4793]: E0217 21:14:36.539013 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:14:39 crc kubenswrapper[4793]: I0217 21:14:39.220503 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2bmpq" Feb 17 21:14:39 crc kubenswrapper[4793]: I0217 21:14:39.221113 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2bmpq" Feb 17 21:14:39 crc kubenswrapper[4793]: I0217 21:14:39.298284 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2bmpq" Feb 17 21:14:47 crc kubenswrapper[4793]: I0217 21:14:47.538663 4793 scope.go:117] "RemoveContainer" containerID="5fc3b8a993a7a358eb27c63823e446b64aff5986fcb5fa9f8e3e52d87809547e" Feb 17 21:14:47 crc kubenswrapper[4793]: E0217 21:14:47.539637 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:14:49 crc kubenswrapper[4793]: I0217 21:14:49.303011 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-2bmpq" Feb 17 21:14:49 crc kubenswrapper[4793]: I0217 21:14:49.370473 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2bmpq"] Feb 17 21:14:49 crc kubenswrapper[4793]: I0217 21:14:49.679664 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2bmpq" podUID="4abebd64-86e1-429c-99ba-42e5e971a641" containerName="registry-server" containerID="cri-o://c941f6742306c98f4d38b2d52cfa005ca3f608309034e7352dd64d5cf3be1f54" gracePeriod=2 Feb 17 21:14:50 crc kubenswrapper[4793]: I0217 21:14:50.192740 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2bmpq" Feb 17 21:14:50 crc kubenswrapper[4793]: I0217 21:14:50.305649 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4abebd64-86e1-429c-99ba-42e5e971a641-catalog-content\") pod \"4abebd64-86e1-429c-99ba-42e5e971a641\" (UID: \"4abebd64-86e1-429c-99ba-42e5e971a641\") " Feb 17 21:14:50 crc kubenswrapper[4793]: I0217 21:14:50.305863 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4abebd64-86e1-429c-99ba-42e5e971a641-utilities\") pod \"4abebd64-86e1-429c-99ba-42e5e971a641\" (UID: \"4abebd64-86e1-429c-99ba-42e5e971a641\") " Feb 17 21:14:50 crc kubenswrapper[4793]: I0217 21:14:50.306105 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqx6n\" (UniqueName: \"kubernetes.io/projected/4abebd64-86e1-429c-99ba-42e5e971a641-kube-api-access-wqx6n\") pod \"4abebd64-86e1-429c-99ba-42e5e971a641\" (UID: \"4abebd64-86e1-429c-99ba-42e5e971a641\") " Feb 17 21:14:50 crc kubenswrapper[4793]: I0217 21:14:50.306620 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/4abebd64-86e1-429c-99ba-42e5e971a641-utilities" (OuterVolumeSpecName: "utilities") pod "4abebd64-86e1-429c-99ba-42e5e971a641" (UID: "4abebd64-86e1-429c-99ba-42e5e971a641"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 21:14:50 crc kubenswrapper[4793]: I0217 21:14:50.306949 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4abebd64-86e1-429c-99ba-42e5e971a641-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 21:14:50 crc kubenswrapper[4793]: I0217 21:14:50.315960 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4abebd64-86e1-429c-99ba-42e5e971a641-kube-api-access-wqx6n" (OuterVolumeSpecName: "kube-api-access-wqx6n") pod "4abebd64-86e1-429c-99ba-42e5e971a641" (UID: "4abebd64-86e1-429c-99ba-42e5e971a641"). InnerVolumeSpecName "kube-api-access-wqx6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 21:14:50 crc kubenswrapper[4793]: I0217 21:14:50.334988 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4abebd64-86e1-429c-99ba-42e5e971a641-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4abebd64-86e1-429c-99ba-42e5e971a641" (UID: "4abebd64-86e1-429c-99ba-42e5e971a641"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 21:14:50 crc kubenswrapper[4793]: I0217 21:14:50.408477 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqx6n\" (UniqueName: \"kubernetes.io/projected/4abebd64-86e1-429c-99ba-42e5e971a641-kube-api-access-wqx6n\") on node \"crc\" DevicePath \"\"" Feb 17 21:14:50 crc kubenswrapper[4793]: I0217 21:14:50.408534 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4abebd64-86e1-429c-99ba-42e5e971a641-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 21:14:50 crc kubenswrapper[4793]: I0217 21:14:50.694375 4793 generic.go:334] "Generic (PLEG): container finished" podID="4abebd64-86e1-429c-99ba-42e5e971a641" containerID="c941f6742306c98f4d38b2d52cfa005ca3f608309034e7352dd64d5cf3be1f54" exitCode=0 Feb 17 21:14:50 crc kubenswrapper[4793]: I0217 21:14:50.694439 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2bmpq" Feb 17 21:14:50 crc kubenswrapper[4793]: I0217 21:14:50.694475 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2bmpq" event={"ID":"4abebd64-86e1-429c-99ba-42e5e971a641","Type":"ContainerDied","Data":"c941f6742306c98f4d38b2d52cfa005ca3f608309034e7352dd64d5cf3be1f54"} Feb 17 21:14:50 crc kubenswrapper[4793]: I0217 21:14:50.695018 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2bmpq" event={"ID":"4abebd64-86e1-429c-99ba-42e5e971a641","Type":"ContainerDied","Data":"81f0f281307a587de1415d982c3c1697b1f5c94b95360917dd8a3f7c1b4b51fa"} Feb 17 21:14:50 crc kubenswrapper[4793]: I0217 21:14:50.695057 4793 scope.go:117] "RemoveContainer" containerID="c941f6742306c98f4d38b2d52cfa005ca3f608309034e7352dd64d5cf3be1f54" Feb 17 21:14:50 crc kubenswrapper[4793]: I0217 21:14:50.728677 4793 scope.go:117] "RemoveContainer" 
containerID="c7031614c7f65d306b2ece86502285770a8ee41bce5a596b42567b096cc20a13" Feb 17 21:14:50 crc kubenswrapper[4793]: I0217 21:14:50.749851 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2bmpq"] Feb 17 21:14:50 crc kubenswrapper[4793]: I0217 21:14:50.774213 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2bmpq"] Feb 17 21:14:50 crc kubenswrapper[4793]: I0217 21:14:50.784935 4793 scope.go:117] "RemoveContainer" containerID="849b23b4d1bc5151433b0ff760d557b0e352bcb81f996c55f33748c7873613d5" Feb 17 21:14:50 crc kubenswrapper[4793]: I0217 21:14:50.838471 4793 scope.go:117] "RemoveContainer" containerID="c941f6742306c98f4d38b2d52cfa005ca3f608309034e7352dd64d5cf3be1f54" Feb 17 21:14:50 crc kubenswrapper[4793]: E0217 21:14:50.839129 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c941f6742306c98f4d38b2d52cfa005ca3f608309034e7352dd64d5cf3be1f54\": container with ID starting with c941f6742306c98f4d38b2d52cfa005ca3f608309034e7352dd64d5cf3be1f54 not found: ID does not exist" containerID="c941f6742306c98f4d38b2d52cfa005ca3f608309034e7352dd64d5cf3be1f54" Feb 17 21:14:50 crc kubenswrapper[4793]: I0217 21:14:50.839170 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c941f6742306c98f4d38b2d52cfa005ca3f608309034e7352dd64d5cf3be1f54"} err="failed to get container status \"c941f6742306c98f4d38b2d52cfa005ca3f608309034e7352dd64d5cf3be1f54\": rpc error: code = NotFound desc = could not find container \"c941f6742306c98f4d38b2d52cfa005ca3f608309034e7352dd64d5cf3be1f54\": container with ID starting with c941f6742306c98f4d38b2d52cfa005ca3f608309034e7352dd64d5cf3be1f54 not found: ID does not exist" Feb 17 21:14:50 crc kubenswrapper[4793]: I0217 21:14:50.839199 4793 scope.go:117] "RemoveContainer" 
containerID="c7031614c7f65d306b2ece86502285770a8ee41bce5a596b42567b096cc20a13" Feb 17 21:14:50 crc kubenswrapper[4793]: E0217 21:14:50.839457 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7031614c7f65d306b2ece86502285770a8ee41bce5a596b42567b096cc20a13\": container with ID starting with c7031614c7f65d306b2ece86502285770a8ee41bce5a596b42567b096cc20a13 not found: ID does not exist" containerID="c7031614c7f65d306b2ece86502285770a8ee41bce5a596b42567b096cc20a13" Feb 17 21:14:50 crc kubenswrapper[4793]: I0217 21:14:50.839484 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7031614c7f65d306b2ece86502285770a8ee41bce5a596b42567b096cc20a13"} err="failed to get container status \"c7031614c7f65d306b2ece86502285770a8ee41bce5a596b42567b096cc20a13\": rpc error: code = NotFound desc = could not find container \"c7031614c7f65d306b2ece86502285770a8ee41bce5a596b42567b096cc20a13\": container with ID starting with c7031614c7f65d306b2ece86502285770a8ee41bce5a596b42567b096cc20a13 not found: ID does not exist" Feb 17 21:14:50 crc kubenswrapper[4793]: I0217 21:14:50.839508 4793 scope.go:117] "RemoveContainer" containerID="849b23b4d1bc5151433b0ff760d557b0e352bcb81f996c55f33748c7873613d5" Feb 17 21:14:50 crc kubenswrapper[4793]: E0217 21:14:50.839760 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"849b23b4d1bc5151433b0ff760d557b0e352bcb81f996c55f33748c7873613d5\": container with ID starting with 849b23b4d1bc5151433b0ff760d557b0e352bcb81f996c55f33748c7873613d5 not found: ID does not exist" containerID="849b23b4d1bc5151433b0ff760d557b0e352bcb81f996c55f33748c7873613d5" Feb 17 21:14:50 crc kubenswrapper[4793]: I0217 21:14:50.839790 4793 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"849b23b4d1bc5151433b0ff760d557b0e352bcb81f996c55f33748c7873613d5"} err="failed to get container status \"849b23b4d1bc5151433b0ff760d557b0e352bcb81f996c55f33748c7873613d5\": rpc error: code = NotFound desc = could not find container \"849b23b4d1bc5151433b0ff760d557b0e352bcb81f996c55f33748c7873613d5\": container with ID starting with 849b23b4d1bc5151433b0ff760d557b0e352bcb81f996c55f33748c7873613d5 not found: ID does not exist" Feb 17 21:14:51 crc kubenswrapper[4793]: I0217 21:14:51.561807 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4abebd64-86e1-429c-99ba-42e5e971a641" path="/var/lib/kubelet/pods/4abebd64-86e1-429c-99ba-42e5e971a641/volumes" Feb 17 21:15:01 crc kubenswrapper[4793]: I0217 21:15:01.007674 4793 scope.go:117] "RemoveContainer" containerID="5fc3b8a993a7a358eb27c63823e446b64aff5986fcb5fa9f8e3e52d87809547e" Feb 17 21:15:01 crc kubenswrapper[4793]: E0217 21:15:01.008745 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:15:01 crc kubenswrapper[4793]: I0217 21:15:01.084811 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522715-cr7c5"] Feb 17 21:15:01 crc kubenswrapper[4793]: E0217 21:15:01.088486 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4abebd64-86e1-429c-99ba-42e5e971a641" containerName="registry-server" Feb 17 21:15:01 crc kubenswrapper[4793]: I0217 21:15:01.088581 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4abebd64-86e1-429c-99ba-42e5e971a641" containerName="registry-server" Feb 17 21:15:01 crc kubenswrapper[4793]: E0217 21:15:01.088658 4793 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="4abebd64-86e1-429c-99ba-42e5e971a641" containerName="extract-content" Feb 17 21:15:01 crc kubenswrapper[4793]: I0217 21:15:01.088741 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4abebd64-86e1-429c-99ba-42e5e971a641" containerName="extract-content" Feb 17 21:15:01 crc kubenswrapper[4793]: E0217 21:15:01.088812 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4abebd64-86e1-429c-99ba-42e5e971a641" containerName="extract-utilities" Feb 17 21:15:01 crc kubenswrapper[4793]: I0217 21:15:01.088887 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4abebd64-86e1-429c-99ba-42e5e971a641" containerName="extract-utilities" Feb 17 21:15:01 crc kubenswrapper[4793]: I0217 21:15:01.089170 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="4abebd64-86e1-429c-99ba-42e5e971a641" containerName="registry-server" Feb 17 21:15:01 crc kubenswrapper[4793]: I0217 21:15:01.089854 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522715-cr7c5" Feb 17 21:15:01 crc kubenswrapper[4793]: I0217 21:15:01.091662 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 21:15:01 crc kubenswrapper[4793]: I0217 21:15:01.091679 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 21:15:01 crc kubenswrapper[4793]: I0217 21:15:01.103145 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522715-cr7c5"] Feb 17 21:15:01 crc kubenswrapper[4793]: I0217 21:15:01.127325 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e79511b9-ebbe-407a-8b6e-8d41a4930281-config-volume\") pod \"collect-profiles-29522715-cr7c5\" (UID: \"e79511b9-ebbe-407a-8b6e-8d41a4930281\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522715-cr7c5" Feb 17 21:15:01 crc kubenswrapper[4793]: I0217 21:15:01.127422 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e79511b9-ebbe-407a-8b6e-8d41a4930281-secret-volume\") pod \"collect-profiles-29522715-cr7c5\" (UID: \"e79511b9-ebbe-407a-8b6e-8d41a4930281\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522715-cr7c5" Feb 17 21:15:01 crc kubenswrapper[4793]: I0217 21:15:01.127519 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr9nd\" (UniqueName: \"kubernetes.io/projected/e79511b9-ebbe-407a-8b6e-8d41a4930281-kube-api-access-gr9nd\") pod \"collect-profiles-29522715-cr7c5\" (UID: \"e79511b9-ebbe-407a-8b6e-8d41a4930281\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29522715-cr7c5" Feb 17 21:15:01 crc kubenswrapper[4793]: I0217 21:15:01.228787 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e79511b9-ebbe-407a-8b6e-8d41a4930281-secret-volume\") pod \"collect-profiles-29522715-cr7c5\" (UID: \"e79511b9-ebbe-407a-8b6e-8d41a4930281\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522715-cr7c5" Feb 17 21:15:01 crc kubenswrapper[4793]: I0217 21:15:01.228907 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr9nd\" (UniqueName: \"kubernetes.io/projected/e79511b9-ebbe-407a-8b6e-8d41a4930281-kube-api-access-gr9nd\") pod \"collect-profiles-29522715-cr7c5\" (UID: \"e79511b9-ebbe-407a-8b6e-8d41a4930281\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522715-cr7c5" Feb 17 21:15:01 crc kubenswrapper[4793]: I0217 21:15:01.229012 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e79511b9-ebbe-407a-8b6e-8d41a4930281-config-volume\") pod \"collect-profiles-29522715-cr7c5\" (UID: \"e79511b9-ebbe-407a-8b6e-8d41a4930281\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522715-cr7c5" Feb 17 21:15:01 crc kubenswrapper[4793]: I0217 21:15:01.229901 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e79511b9-ebbe-407a-8b6e-8d41a4930281-config-volume\") pod \"collect-profiles-29522715-cr7c5\" (UID: \"e79511b9-ebbe-407a-8b6e-8d41a4930281\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522715-cr7c5" Feb 17 21:15:01 crc kubenswrapper[4793]: I0217 21:15:01.234611 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/e79511b9-ebbe-407a-8b6e-8d41a4930281-secret-volume\") pod \"collect-profiles-29522715-cr7c5\" (UID: \"e79511b9-ebbe-407a-8b6e-8d41a4930281\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522715-cr7c5" Feb 17 21:15:01 crc kubenswrapper[4793]: I0217 21:15:01.246465 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr9nd\" (UniqueName: \"kubernetes.io/projected/e79511b9-ebbe-407a-8b6e-8d41a4930281-kube-api-access-gr9nd\") pod \"collect-profiles-29522715-cr7c5\" (UID: \"e79511b9-ebbe-407a-8b6e-8d41a4930281\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522715-cr7c5" Feb 17 21:15:01 crc kubenswrapper[4793]: I0217 21:15:01.412634 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522715-cr7c5" Feb 17 21:15:01 crc kubenswrapper[4793]: I0217 21:15:01.886308 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522715-cr7c5"] Feb 17 21:15:01 crc kubenswrapper[4793]: I0217 21:15:01.940376 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522715-cr7c5" event={"ID":"e79511b9-ebbe-407a-8b6e-8d41a4930281","Type":"ContainerStarted","Data":"08706d72ce2ab365140ebd3a7f64678b214bb6f9577eeff36b3d491aec5fa9b0"} Feb 17 21:15:01 crc kubenswrapper[4793]: I0217 21:15:01.941904 4793 generic.go:334] "Generic (PLEG): container finished" podID="5515ee23-b3b8-4450-a64d-62636c818464" containerID="16aeba38ff30541d6eb38615033d34449f4eac79f874bf1fb51558d27ca9357c" exitCode=0 Feb 17 21:15:01 crc kubenswrapper[4793]: I0217 21:15:01.941964 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jbck" 
event={"ID":"5515ee23-b3b8-4450-a64d-62636c818464","Type":"ContainerDied","Data":"16aeba38ff30541d6eb38615033d34449f4eac79f874bf1fb51558d27ca9357c"} Feb 17 21:15:02 crc kubenswrapper[4793]: I0217 21:15:02.969802 4793 generic.go:334] "Generic (PLEG): container finished" podID="e79511b9-ebbe-407a-8b6e-8d41a4930281" containerID="5676a22a4bc57fe10ad4902cae7411b8376d6c364757e7c6550f907e7a6d0a63" exitCode=0 Feb 17 21:15:02 crc kubenswrapper[4793]: I0217 21:15:02.969976 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522715-cr7c5" event={"ID":"e79511b9-ebbe-407a-8b6e-8d41a4930281","Type":"ContainerDied","Data":"5676a22a4bc57fe10ad4902cae7411b8376d6c364757e7c6550f907e7a6d0a63"} Feb 17 21:15:03 crc kubenswrapper[4793]: I0217 21:15:03.543075 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jbck" Feb 17 21:15:03 crc kubenswrapper[4793]: I0217 21:15:03.580613 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5515ee23-b3b8-4450-a64d-62636c818464-ovn-combined-ca-bundle\") pod \"5515ee23-b3b8-4450-a64d-62636c818464\" (UID: \"5515ee23-b3b8-4450-a64d-62636c818464\") " Feb 17 21:15:03 crc kubenswrapper[4793]: I0217 21:15:03.580739 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrxg8\" (UniqueName: \"kubernetes.io/projected/5515ee23-b3b8-4450-a64d-62636c818464-kube-api-access-rrxg8\") pod \"5515ee23-b3b8-4450-a64d-62636c818464\" (UID: \"5515ee23-b3b8-4450-a64d-62636c818464\") " Feb 17 21:15:03 crc kubenswrapper[4793]: I0217 21:15:03.580774 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5515ee23-b3b8-4450-a64d-62636c818464-bootstrap-combined-ca-bundle\") pod 
\"5515ee23-b3b8-4450-a64d-62636c818464\" (UID: \"5515ee23-b3b8-4450-a64d-62636c818464\") " Feb 17 21:15:03 crc kubenswrapper[4793]: I0217 21:15:03.580815 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5515ee23-b3b8-4450-a64d-62636c818464-inventory\") pod \"5515ee23-b3b8-4450-a64d-62636c818464\" (UID: \"5515ee23-b3b8-4450-a64d-62636c818464\") " Feb 17 21:15:03 crc kubenswrapper[4793]: I0217 21:15:03.580840 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5515ee23-b3b8-4450-a64d-62636c818464-ssh-key-openstack-edpm-ipam\") pod \"5515ee23-b3b8-4450-a64d-62636c818464\" (UID: \"5515ee23-b3b8-4450-a64d-62636c818464\") " Feb 17 21:15:03 crc kubenswrapper[4793]: I0217 21:15:03.580864 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5515ee23-b3b8-4450-a64d-62636c818464-libvirt-combined-ca-bundle\") pod \"5515ee23-b3b8-4450-a64d-62636c818464\" (UID: \"5515ee23-b3b8-4450-a64d-62636c818464\") " Feb 17 21:15:03 crc kubenswrapper[4793]: I0217 21:15:03.580890 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5515ee23-b3b8-4450-a64d-62636c818464-repo-setup-combined-ca-bundle\") pod \"5515ee23-b3b8-4450-a64d-62636c818464\" (UID: \"5515ee23-b3b8-4450-a64d-62636c818464\") " Feb 17 21:15:03 crc kubenswrapper[4793]: I0217 21:15:03.580919 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5515ee23-b3b8-4450-a64d-62636c818464-telemetry-combined-ca-bundle\") pod \"5515ee23-b3b8-4450-a64d-62636c818464\" (UID: \"5515ee23-b3b8-4450-a64d-62636c818464\") " Feb 17 21:15:03 crc kubenswrapper[4793]: I0217 
21:15:03.580944 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5515ee23-b3b8-4450-a64d-62636c818464-openstack-edpm-ipam-ovn-default-certs-0\") pod \"5515ee23-b3b8-4450-a64d-62636c818464\" (UID: \"5515ee23-b3b8-4450-a64d-62636c818464\") " Feb 17 21:15:03 crc kubenswrapper[4793]: I0217 21:15:03.581002 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5515ee23-b3b8-4450-a64d-62636c818464-neutron-metadata-combined-ca-bundle\") pod \"5515ee23-b3b8-4450-a64d-62636c818464\" (UID: \"5515ee23-b3b8-4450-a64d-62636c818464\") " Feb 17 21:15:03 crc kubenswrapper[4793]: I0217 21:15:03.581057 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5515ee23-b3b8-4450-a64d-62636c818464-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"5515ee23-b3b8-4450-a64d-62636c818464\" (UID: \"5515ee23-b3b8-4450-a64d-62636c818464\") " Feb 17 21:15:03 crc kubenswrapper[4793]: I0217 21:15:03.581117 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5515ee23-b3b8-4450-a64d-62636c818464-nova-combined-ca-bundle\") pod \"5515ee23-b3b8-4450-a64d-62636c818464\" (UID: \"5515ee23-b3b8-4450-a64d-62636c818464\") " Feb 17 21:15:03 crc kubenswrapper[4793]: I0217 21:15:03.581145 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5515ee23-b3b8-4450-a64d-62636c818464-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"5515ee23-b3b8-4450-a64d-62636c818464\" (UID: \"5515ee23-b3b8-4450-a64d-62636c818464\") " Feb 17 21:15:03 crc kubenswrapper[4793]: I0217 
21:15:03.581180 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5515ee23-b3b8-4450-a64d-62636c818464-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"5515ee23-b3b8-4450-a64d-62636c818464\" (UID: \"5515ee23-b3b8-4450-a64d-62636c818464\") " Feb 17 21:15:03 crc kubenswrapper[4793]: I0217 21:15:03.588464 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5515ee23-b3b8-4450-a64d-62636c818464-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "5515ee23-b3b8-4450-a64d-62636c818464" (UID: "5515ee23-b3b8-4450-a64d-62636c818464"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 21:15:03 crc kubenswrapper[4793]: I0217 21:15:03.589011 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5515ee23-b3b8-4450-a64d-62636c818464-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "5515ee23-b3b8-4450-a64d-62636c818464" (UID: "5515ee23-b3b8-4450-a64d-62636c818464"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 21:15:03 crc kubenswrapper[4793]: I0217 21:15:03.589898 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5515ee23-b3b8-4450-a64d-62636c818464-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "5515ee23-b3b8-4450-a64d-62636c818464" (UID: "5515ee23-b3b8-4450-a64d-62636c818464"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 21:15:03 crc kubenswrapper[4793]: I0217 21:15:03.590281 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5515ee23-b3b8-4450-a64d-62636c818464-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "5515ee23-b3b8-4450-a64d-62636c818464" (UID: "5515ee23-b3b8-4450-a64d-62636c818464"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 21:15:03 crc kubenswrapper[4793]: I0217 21:15:03.590556 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5515ee23-b3b8-4450-a64d-62636c818464-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "5515ee23-b3b8-4450-a64d-62636c818464" (UID: "5515ee23-b3b8-4450-a64d-62636c818464"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 21:15:03 crc kubenswrapper[4793]: I0217 21:15:03.594375 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5515ee23-b3b8-4450-a64d-62636c818464-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "5515ee23-b3b8-4450-a64d-62636c818464" (UID: "5515ee23-b3b8-4450-a64d-62636c818464"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 21:15:03 crc kubenswrapper[4793]: I0217 21:15:03.594764 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5515ee23-b3b8-4450-a64d-62636c818464-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "5515ee23-b3b8-4450-a64d-62636c818464" (UID: "5515ee23-b3b8-4450-a64d-62636c818464"). 
InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 21:15:03 crc kubenswrapper[4793]: I0217 21:15:03.595008 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5515ee23-b3b8-4450-a64d-62636c818464-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "5515ee23-b3b8-4450-a64d-62636c818464" (UID: "5515ee23-b3b8-4450-a64d-62636c818464"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 21:15:03 crc kubenswrapper[4793]: I0217 21:15:03.594831 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5515ee23-b3b8-4450-a64d-62636c818464-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "5515ee23-b3b8-4450-a64d-62636c818464" (UID: "5515ee23-b3b8-4450-a64d-62636c818464"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 21:15:03 crc kubenswrapper[4793]: I0217 21:15:03.597261 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5515ee23-b3b8-4450-a64d-62636c818464-kube-api-access-rrxg8" (OuterVolumeSpecName: "kube-api-access-rrxg8") pod "5515ee23-b3b8-4450-a64d-62636c818464" (UID: "5515ee23-b3b8-4450-a64d-62636c818464"). InnerVolumeSpecName "kube-api-access-rrxg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 21:15:03 crc kubenswrapper[4793]: I0217 21:15:03.597352 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5515ee23-b3b8-4450-a64d-62636c818464-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "5515ee23-b3b8-4450-a64d-62636c818464" (UID: "5515ee23-b3b8-4450-a64d-62636c818464"). 
InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 21:15:03 crc kubenswrapper[4793]: I0217 21:15:03.602019 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5515ee23-b3b8-4450-a64d-62636c818464-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "5515ee23-b3b8-4450-a64d-62636c818464" (UID: "5515ee23-b3b8-4450-a64d-62636c818464"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 21:15:03 crc kubenswrapper[4793]: I0217 21:15:03.643983 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5515ee23-b3b8-4450-a64d-62636c818464-inventory" (OuterVolumeSpecName: "inventory") pod "5515ee23-b3b8-4450-a64d-62636c818464" (UID: "5515ee23-b3b8-4450-a64d-62636c818464"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 21:15:03 crc kubenswrapper[4793]: I0217 21:15:03.644508 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5515ee23-b3b8-4450-a64d-62636c818464-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5515ee23-b3b8-4450-a64d-62636c818464" (UID: "5515ee23-b3b8-4450-a64d-62636c818464"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 21:15:03 crc kubenswrapper[4793]: I0217 21:15:03.683871 4793 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5515ee23-b3b8-4450-a64d-62636c818464-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 21:15:03 crc kubenswrapper[4793]: I0217 21:15:03.683919 4793 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5515ee23-b3b8-4450-a64d-62636c818464-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 21:15:03 crc kubenswrapper[4793]: I0217 21:15:03.683931 4793 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5515ee23-b3b8-4450-a64d-62636c818464-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 21:15:03 crc kubenswrapper[4793]: I0217 21:15:03.683943 4793 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5515ee23-b3b8-4450-a64d-62636c818464-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 21:15:03 crc kubenswrapper[4793]: I0217 21:15:03.683958 4793 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5515ee23-b3b8-4450-a64d-62636c818464-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 17 21:15:03 crc kubenswrapper[4793]: I0217 21:15:03.683971 4793 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5515ee23-b3b8-4450-a64d-62636c818464-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 21:15:03 crc kubenswrapper[4793]: I0217 21:15:03.683984 4793 reconciler_common.go:293] "Volume detached for volume 
\"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5515ee23-b3b8-4450-a64d-62636c818464-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 17 21:15:03 crc kubenswrapper[4793]: I0217 21:15:03.683997 4793 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5515ee23-b3b8-4450-a64d-62636c818464-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 21:15:03 crc kubenswrapper[4793]: I0217 21:15:03.684011 4793 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5515ee23-b3b8-4450-a64d-62636c818464-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 17 21:15:03 crc kubenswrapper[4793]: I0217 21:15:03.684026 4793 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/5515ee23-b3b8-4450-a64d-62636c818464-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 17 21:15:03 crc kubenswrapper[4793]: I0217 21:15:03.684040 4793 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5515ee23-b3b8-4450-a64d-62636c818464-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 21:15:03 crc kubenswrapper[4793]: I0217 21:15:03.684055 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrxg8\" (UniqueName: \"kubernetes.io/projected/5515ee23-b3b8-4450-a64d-62636c818464-kube-api-access-rrxg8\") on node \"crc\" DevicePath \"\"" Feb 17 21:15:03 crc kubenswrapper[4793]: I0217 21:15:03.684065 4793 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5515ee23-b3b8-4450-a64d-62636c818464-bootstrap-combined-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Feb 17 21:15:03 crc kubenswrapper[4793]: I0217 21:15:03.684077 4793 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5515ee23-b3b8-4450-a64d-62636c818464-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 21:15:03 crc kubenswrapper[4793]: I0217 21:15:03.991677 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jbck" Feb 17 21:15:03 crc kubenswrapper[4793]: I0217 21:15:03.997081 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jbck" event={"ID":"5515ee23-b3b8-4450-a64d-62636c818464","Type":"ContainerDied","Data":"a43487e79027464316ad65730594a222522b00d41bafafc9088e749c8eb2737b"} Feb 17 21:15:03 crc kubenswrapper[4793]: I0217 21:15:03.997130 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a43487e79027464316ad65730594a222522b00d41bafafc9088e749c8eb2737b" Feb 17 21:15:04 crc kubenswrapper[4793]: E0217 21:15:04.084277 4793 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5515ee23_b3b8_4450_a64d_62636c818464.slice/crio-a43487e79027464316ad65730594a222522b00d41bafafc9088e749c8eb2737b\": RecentStats: unable to find data in memory cache]" Feb 17 21:15:04 crc kubenswrapper[4793]: I0217 21:15:04.103507 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-r2zvf"] Feb 17 21:15:04 crc kubenswrapper[4793]: E0217 21:15:04.103917 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5515ee23-b3b8-4450-a64d-62636c818464" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 17 21:15:04 crc kubenswrapper[4793]: I0217 21:15:04.103934 4793 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5515ee23-b3b8-4450-a64d-62636c818464" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 17 21:15:04 crc kubenswrapper[4793]: I0217 21:15:04.104130 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="5515ee23-b3b8-4450-a64d-62636c818464" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 17 21:15:04 crc kubenswrapper[4793]: I0217 21:15:04.104819 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-r2zvf" Feb 17 21:15:04 crc kubenswrapper[4793]: I0217 21:15:04.106931 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 21:15:04 crc kubenswrapper[4793]: I0217 21:15:04.110624 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 21:15:04 crc kubenswrapper[4793]: I0217 21:15:04.110955 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 17 21:15:04 crc kubenswrapper[4793]: I0217 21:15:04.111795 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 21:15:04 crc kubenswrapper[4793]: I0217 21:15:04.111153 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6zcmz" Feb 17 21:15:04 crc kubenswrapper[4793]: I0217 21:15:04.129093 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-r2zvf"] Feb 17 21:15:04 crc kubenswrapper[4793]: I0217 21:15:04.195509 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b16e893-10e7-4207-9ace-bf1d7bd735bf-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-r2zvf\" (UID: \"2b16e893-10e7-4207-9ace-bf1d7bd735bf\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-r2zvf" Feb 17 21:15:04 crc kubenswrapper[4793]: I0217 21:15:04.195747 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpgq6\" (UniqueName: \"kubernetes.io/projected/2b16e893-10e7-4207-9ace-bf1d7bd735bf-kube-api-access-dpgq6\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-r2zvf\" (UID: \"2b16e893-10e7-4207-9ace-bf1d7bd735bf\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-r2zvf" Feb 17 21:15:04 crc kubenswrapper[4793]: I0217 21:15:04.195816 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b16e893-10e7-4207-9ace-bf1d7bd735bf-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-r2zvf\" (UID: \"2b16e893-10e7-4207-9ace-bf1d7bd735bf\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-r2zvf" Feb 17 21:15:04 crc kubenswrapper[4793]: I0217 21:15:04.195892 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/2b16e893-10e7-4207-9ace-bf1d7bd735bf-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-r2zvf\" (UID: \"2b16e893-10e7-4207-9ace-bf1d7bd735bf\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-r2zvf" Feb 17 21:15:04 crc kubenswrapper[4793]: I0217 21:15:04.195950 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b16e893-10e7-4207-9ace-bf1d7bd735bf-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-r2zvf\" (UID: \"2b16e893-10e7-4207-9ace-bf1d7bd735bf\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-r2zvf" Feb 17 21:15:04 crc kubenswrapper[4793]: I0217 21:15:04.304833 4793 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/2b16e893-10e7-4207-9ace-bf1d7bd735bf-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-r2zvf\" (UID: \"2b16e893-10e7-4207-9ace-bf1d7bd735bf\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-r2zvf" Feb 17 21:15:04 crc kubenswrapper[4793]: I0217 21:15:04.304946 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b16e893-10e7-4207-9ace-bf1d7bd735bf-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-r2zvf\" (UID: \"2b16e893-10e7-4207-9ace-bf1d7bd735bf\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-r2zvf" Feb 17 21:15:04 crc kubenswrapper[4793]: I0217 21:15:04.305009 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b16e893-10e7-4207-9ace-bf1d7bd735bf-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-r2zvf\" (UID: \"2b16e893-10e7-4207-9ace-bf1d7bd735bf\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-r2zvf" Feb 17 21:15:04 crc kubenswrapper[4793]: I0217 21:15:04.305045 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpgq6\" (UniqueName: \"kubernetes.io/projected/2b16e893-10e7-4207-9ace-bf1d7bd735bf-kube-api-access-dpgq6\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-r2zvf\" (UID: \"2b16e893-10e7-4207-9ace-bf1d7bd735bf\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-r2zvf" Feb 17 21:15:04 crc kubenswrapper[4793]: I0217 21:15:04.305096 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b16e893-10e7-4207-9ace-bf1d7bd735bf-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-r2zvf\" (UID: 
\"2b16e893-10e7-4207-9ace-bf1d7bd735bf\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-r2zvf" Feb 17 21:15:04 crc kubenswrapper[4793]: I0217 21:15:04.305791 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/2b16e893-10e7-4207-9ace-bf1d7bd735bf-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-r2zvf\" (UID: \"2b16e893-10e7-4207-9ace-bf1d7bd735bf\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-r2zvf" Feb 17 21:15:04 crc kubenswrapper[4793]: I0217 21:15:04.312439 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b16e893-10e7-4207-9ace-bf1d7bd735bf-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-r2zvf\" (UID: \"2b16e893-10e7-4207-9ace-bf1d7bd735bf\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-r2zvf" Feb 17 21:15:04 crc kubenswrapper[4793]: I0217 21:15:04.312552 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b16e893-10e7-4207-9ace-bf1d7bd735bf-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-r2zvf\" (UID: \"2b16e893-10e7-4207-9ace-bf1d7bd735bf\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-r2zvf" Feb 17 21:15:04 crc kubenswrapper[4793]: I0217 21:15:04.312555 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b16e893-10e7-4207-9ace-bf1d7bd735bf-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-r2zvf\" (UID: \"2b16e893-10e7-4207-9ace-bf1d7bd735bf\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-r2zvf" Feb 17 21:15:04 crc kubenswrapper[4793]: I0217 21:15:04.320986 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpgq6\" (UniqueName: 
\"kubernetes.io/projected/2b16e893-10e7-4207-9ace-bf1d7bd735bf-kube-api-access-dpgq6\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-r2zvf\" (UID: \"2b16e893-10e7-4207-9ace-bf1d7bd735bf\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-r2zvf" Feb 17 21:15:04 crc kubenswrapper[4793]: I0217 21:15:04.362805 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522715-cr7c5" Feb 17 21:15:04 crc kubenswrapper[4793]: I0217 21:15:04.406083 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e79511b9-ebbe-407a-8b6e-8d41a4930281-secret-volume\") pod \"e79511b9-ebbe-407a-8b6e-8d41a4930281\" (UID: \"e79511b9-ebbe-407a-8b6e-8d41a4930281\") " Feb 17 21:15:04 crc kubenswrapper[4793]: I0217 21:15:04.406250 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e79511b9-ebbe-407a-8b6e-8d41a4930281-config-volume\") pod \"e79511b9-ebbe-407a-8b6e-8d41a4930281\" (UID: \"e79511b9-ebbe-407a-8b6e-8d41a4930281\") " Feb 17 21:15:04 crc kubenswrapper[4793]: I0217 21:15:04.406310 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gr9nd\" (UniqueName: \"kubernetes.io/projected/e79511b9-ebbe-407a-8b6e-8d41a4930281-kube-api-access-gr9nd\") pod \"e79511b9-ebbe-407a-8b6e-8d41a4930281\" (UID: \"e79511b9-ebbe-407a-8b6e-8d41a4930281\") " Feb 17 21:15:04 crc kubenswrapper[4793]: I0217 21:15:04.407209 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e79511b9-ebbe-407a-8b6e-8d41a4930281-config-volume" (OuterVolumeSpecName: "config-volume") pod "e79511b9-ebbe-407a-8b6e-8d41a4930281" (UID: "e79511b9-ebbe-407a-8b6e-8d41a4930281"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 21:15:04 crc kubenswrapper[4793]: I0217 21:15:04.409305 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e79511b9-ebbe-407a-8b6e-8d41a4930281-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e79511b9-ebbe-407a-8b6e-8d41a4930281" (UID: "e79511b9-ebbe-407a-8b6e-8d41a4930281"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 21:15:04 crc kubenswrapper[4793]: I0217 21:15:04.411025 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e79511b9-ebbe-407a-8b6e-8d41a4930281-kube-api-access-gr9nd" (OuterVolumeSpecName: "kube-api-access-gr9nd") pod "e79511b9-ebbe-407a-8b6e-8d41a4930281" (UID: "e79511b9-ebbe-407a-8b6e-8d41a4930281"). InnerVolumeSpecName "kube-api-access-gr9nd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 21:15:04 crc kubenswrapper[4793]: I0217 21:15:04.422972 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-r2zvf" Feb 17 21:15:04 crc kubenswrapper[4793]: I0217 21:15:04.508669 4793 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e79511b9-ebbe-407a-8b6e-8d41a4930281-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 21:15:04 crc kubenswrapper[4793]: I0217 21:15:04.508718 4793 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e79511b9-ebbe-407a-8b6e-8d41a4930281-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 21:15:04 crc kubenswrapper[4793]: I0217 21:15:04.508731 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gr9nd\" (UniqueName: \"kubernetes.io/projected/e79511b9-ebbe-407a-8b6e-8d41a4930281-kube-api-access-gr9nd\") on node \"crc\" DevicePath \"\"" Feb 17 21:15:05 crc kubenswrapper[4793]: I0217 21:15:04.978708 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-r2zvf"] Feb 17 21:15:05 crc kubenswrapper[4793]: I0217 21:15:05.006227 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-r2zvf" event={"ID":"2b16e893-10e7-4207-9ace-bf1d7bd735bf","Type":"ContainerStarted","Data":"1f254254491f6f0e235df4f2c50b4205fe6c9aed7dea0bdc3f86fb34a2d8db9d"} Feb 17 21:15:05 crc kubenswrapper[4793]: I0217 21:15:05.015073 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522715-cr7c5" event={"ID":"e79511b9-ebbe-407a-8b6e-8d41a4930281","Type":"ContainerDied","Data":"08706d72ce2ab365140ebd3a7f64678b214bb6f9577eeff36b3d491aec5fa9b0"} Feb 17 21:15:05 crc kubenswrapper[4793]: I0217 21:15:05.015132 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08706d72ce2ab365140ebd3a7f64678b214bb6f9577eeff36b3d491aec5fa9b0" Feb 17 21:15:05 crc 
kubenswrapper[4793]: I0217 21:15:05.015165 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522715-cr7c5" Feb 17 21:15:05 crc kubenswrapper[4793]: I0217 21:15:05.451966 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522670-qld8m"] Feb 17 21:15:05 crc kubenswrapper[4793]: I0217 21:15:05.460244 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522670-qld8m"] Feb 17 21:15:05 crc kubenswrapper[4793]: I0217 21:15:05.555341 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50b6d4ed-8f06-4ae1-aea1-772d90636d7e" path="/var/lib/kubelet/pods/50b6d4ed-8f06-4ae1-aea1-772d90636d7e/volumes" Feb 17 21:15:06 crc kubenswrapper[4793]: I0217 21:15:06.027555 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-r2zvf" event={"ID":"2b16e893-10e7-4207-9ace-bf1d7bd735bf","Type":"ContainerStarted","Data":"753c6bec33b412b3b396e2bfa5821c501284be89336384472e13c187dd501f1d"} Feb 17 21:15:06 crc kubenswrapper[4793]: I0217 21:15:06.059857 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-r2zvf" podStartSLOduration=1.574674597 podStartE2EDuration="2.059820482s" podCreationTimestamp="2026-02-17 21:15:04 +0000 UTC" firstStartedPulling="2026-02-17 21:15:04.98336902 +0000 UTC m=+3980.275067361" lastFinishedPulling="2026-02-17 21:15:05.468514935 +0000 UTC m=+3980.760213246" observedRunningTime="2026-02-17 21:15:06.042495526 +0000 UTC m=+3981.334193837" watchObservedRunningTime="2026-02-17 21:15:06.059820482 +0000 UTC m=+3981.351518933" Feb 17 21:15:12 crc kubenswrapper[4793]: I0217 21:15:12.539270 4793 scope.go:117] "RemoveContainer" containerID="5fc3b8a993a7a358eb27c63823e446b64aff5986fcb5fa9f8e3e52d87809547e" Feb 17 
21:15:12 crc kubenswrapper[4793]: E0217 21:15:12.540558 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:15:20 crc kubenswrapper[4793]: I0217 21:15:20.102039 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 21:15:20 crc kubenswrapper[4793]: I0217 21:15:20.102799 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 21:15:24 crc kubenswrapper[4793]: I0217 21:15:24.323815 4793 scope.go:117] "RemoveContainer" containerID="e8eca17f02b52d6d94c88aa39c4ea84b14fbb91595dc3be70ff5e16a1651f79c" Feb 17 21:15:25 crc kubenswrapper[4793]: I0217 21:15:25.553360 4793 scope.go:117] "RemoveContainer" containerID="5fc3b8a993a7a358eb27c63823e446b64aff5986fcb5fa9f8e3e52d87809547e" Feb 17 21:15:26 crc kubenswrapper[4793]: I0217 21:15:26.580024 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerStarted","Data":"f2476986fb1b0be3a6a5b01ca2a6b94940c3d4f63ac98acbe527af609a9dcde1"} Feb 17 21:15:28 crc kubenswrapper[4793]: I0217 21:15:28.603481 4793 generic.go:334] "Generic (PLEG): container finished" podID="02d26164-0fa4-4020-9224-b7760a490987" 
containerID="f2476986fb1b0be3a6a5b01ca2a6b94940c3d4f63ac98acbe527af609a9dcde1" exitCode=1 Feb 17 21:15:28 crc kubenswrapper[4793]: I0217 21:15:28.604802 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerDied","Data":"f2476986fb1b0be3a6a5b01ca2a6b94940c3d4f63ac98acbe527af609a9dcde1"} Feb 17 21:15:28 crc kubenswrapper[4793]: I0217 21:15:28.604889 4793 scope.go:117] "RemoveContainer" containerID="5fc3b8a993a7a358eb27c63823e446b64aff5986fcb5fa9f8e3e52d87809547e" Feb 17 21:15:28 crc kubenswrapper[4793]: I0217 21:15:28.605977 4793 scope.go:117] "RemoveContainer" containerID="f2476986fb1b0be3a6a5b01ca2a6b94940c3d4f63ac98acbe527af609a9dcde1" Feb 17 21:15:28 crc kubenswrapper[4793]: E0217 21:15:28.606525 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:15:30 crc kubenswrapper[4793]: I0217 21:15:30.596029 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 17 21:15:30 crc kubenswrapper[4793]: I0217 21:15:30.597423 4793 scope.go:117] "RemoveContainer" containerID="f2476986fb1b0be3a6a5b01ca2a6b94940c3d4f63ac98acbe527af609a9dcde1" Feb 17 21:15:30 crc kubenswrapper[4793]: E0217 21:15:30.597838 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:15:35 crc kubenswrapper[4793]: I0217 21:15:35.596802 4793 
kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 21:15:35 crc kubenswrapper[4793]: I0217 21:15:35.597444 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 21:15:35 crc kubenswrapper[4793]: I0217 21:15:35.597458 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 21:15:35 crc kubenswrapper[4793]: I0217 21:15:35.598286 4793 scope.go:117] "RemoveContainer" containerID="f2476986fb1b0be3a6a5b01ca2a6b94940c3d4f63ac98acbe527af609a9dcde1" Feb 17 21:15:35 crc kubenswrapper[4793]: E0217 21:15:35.598561 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:15:47 crc kubenswrapper[4793]: I0217 21:15:47.538530 4793 scope.go:117] "RemoveContainer" containerID="f2476986fb1b0be3a6a5b01ca2a6b94940c3d4f63ac98acbe527af609a9dcde1" Feb 17 21:15:47 crc kubenswrapper[4793]: E0217 21:15:47.539334 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:15:50 crc kubenswrapper[4793]: I0217 21:15:50.102349 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 
21:15:50 crc kubenswrapper[4793]: I0217 21:15:50.102777 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 21:16:01 crc kubenswrapper[4793]: I0217 21:16:01.539241 4793 scope.go:117] "RemoveContainer" containerID="f2476986fb1b0be3a6a5b01ca2a6b94940c3d4f63ac98acbe527af609a9dcde1" Feb 17 21:16:01 crc kubenswrapper[4793]: E0217 21:16:01.540748 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:16:14 crc kubenswrapper[4793]: I0217 21:16:14.539036 4793 scope.go:117] "RemoveContainer" containerID="f2476986fb1b0be3a6a5b01ca2a6b94940c3d4f63ac98acbe527af609a9dcde1" Feb 17 21:16:14 crc kubenswrapper[4793]: E0217 21:16:14.539778 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:16:20 crc kubenswrapper[4793]: I0217 21:16:20.102312 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 21:16:20 crc kubenswrapper[4793]: I0217 21:16:20.102856 4793 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 21:16:20 crc kubenswrapper[4793]: I0217 21:16:20.102899 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" Feb 17 21:16:20 crc kubenswrapper[4793]: I0217 21:16:20.103587 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"08bd50edbae004fbadbb4be27d654951e19a5a66ae5db214f1828f62d5108ea6"} pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 21:16:20 crc kubenswrapper[4793]: I0217 21:16:20.103648 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" containerID="cri-o://08bd50edbae004fbadbb4be27d654951e19a5a66ae5db214f1828f62d5108ea6" gracePeriod=600 Feb 17 21:16:21 crc kubenswrapper[4793]: I0217 21:16:21.120379 4793 generic.go:334] "Generic (PLEG): container finished" podID="2b16e893-10e7-4207-9ace-bf1d7bd735bf" containerID="753c6bec33b412b3b396e2bfa5821c501284be89336384472e13c187dd501f1d" exitCode=0 Feb 17 21:16:21 crc kubenswrapper[4793]: I0217 21:16:21.120465 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-r2zvf" event={"ID":"2b16e893-10e7-4207-9ace-bf1d7bd735bf","Type":"ContainerDied","Data":"753c6bec33b412b3b396e2bfa5821c501284be89336384472e13c187dd501f1d"} Feb 17 21:16:21 crc kubenswrapper[4793]: I0217 
21:16:21.124064 4793 generic.go:334] "Generic (PLEG): container finished" podID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerID="08bd50edbae004fbadbb4be27d654951e19a5a66ae5db214f1828f62d5108ea6" exitCode=0 Feb 17 21:16:21 crc kubenswrapper[4793]: I0217 21:16:21.124100 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerDied","Data":"08bd50edbae004fbadbb4be27d654951e19a5a66ae5db214f1828f62d5108ea6"} Feb 17 21:16:21 crc kubenswrapper[4793]: I0217 21:16:21.124160 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerStarted","Data":"ad3c8e19b434a9e463b7e18317ce98a8878350f79303f0feed946cfd828131f6"} Feb 17 21:16:21 crc kubenswrapper[4793]: I0217 21:16:21.124183 4793 scope.go:117] "RemoveContainer" containerID="7796641c695371650a96e90298de39c311ff235164cc763f3489adc1bdb31328" Feb 17 21:16:22 crc kubenswrapper[4793]: I0217 21:16:22.571991 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-r2zvf" Feb 17 21:16:22 crc kubenswrapper[4793]: I0217 21:16:22.637525 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b16e893-10e7-4207-9ace-bf1d7bd735bf-inventory\") pod \"2b16e893-10e7-4207-9ace-bf1d7bd735bf\" (UID: \"2b16e893-10e7-4207-9ace-bf1d7bd735bf\") " Feb 17 21:16:22 crc kubenswrapper[4793]: I0217 21:16:22.637574 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b16e893-10e7-4207-9ace-bf1d7bd735bf-ssh-key-openstack-edpm-ipam\") pod \"2b16e893-10e7-4207-9ace-bf1d7bd735bf\" (UID: \"2b16e893-10e7-4207-9ace-bf1d7bd735bf\") " Feb 17 21:16:22 crc kubenswrapper[4793]: I0217 21:16:22.637649 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b16e893-10e7-4207-9ace-bf1d7bd735bf-ovn-combined-ca-bundle\") pod \"2b16e893-10e7-4207-9ace-bf1d7bd735bf\" (UID: \"2b16e893-10e7-4207-9ace-bf1d7bd735bf\") " Feb 17 21:16:22 crc kubenswrapper[4793]: I0217 21:16:22.637727 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpgq6\" (UniqueName: \"kubernetes.io/projected/2b16e893-10e7-4207-9ace-bf1d7bd735bf-kube-api-access-dpgq6\") pod \"2b16e893-10e7-4207-9ace-bf1d7bd735bf\" (UID: \"2b16e893-10e7-4207-9ace-bf1d7bd735bf\") " Feb 17 21:16:22 crc kubenswrapper[4793]: I0217 21:16:22.637786 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/2b16e893-10e7-4207-9ace-bf1d7bd735bf-ovncontroller-config-0\") pod \"2b16e893-10e7-4207-9ace-bf1d7bd735bf\" (UID: \"2b16e893-10e7-4207-9ace-bf1d7bd735bf\") " Feb 17 21:16:22 crc kubenswrapper[4793]: I0217 21:16:22.646278 4793 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b16e893-10e7-4207-9ace-bf1d7bd735bf-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "2b16e893-10e7-4207-9ace-bf1d7bd735bf" (UID: "2b16e893-10e7-4207-9ace-bf1d7bd735bf"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 21:16:22 crc kubenswrapper[4793]: I0217 21:16:22.650922 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b16e893-10e7-4207-9ace-bf1d7bd735bf-kube-api-access-dpgq6" (OuterVolumeSpecName: "kube-api-access-dpgq6") pod "2b16e893-10e7-4207-9ace-bf1d7bd735bf" (UID: "2b16e893-10e7-4207-9ace-bf1d7bd735bf"). InnerVolumeSpecName "kube-api-access-dpgq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 21:16:22 crc kubenswrapper[4793]: I0217 21:16:22.668972 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b16e893-10e7-4207-9ace-bf1d7bd735bf-inventory" (OuterVolumeSpecName: "inventory") pod "2b16e893-10e7-4207-9ace-bf1d7bd735bf" (UID: "2b16e893-10e7-4207-9ace-bf1d7bd735bf"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 21:16:22 crc kubenswrapper[4793]: I0217 21:16:22.669914 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b16e893-10e7-4207-9ace-bf1d7bd735bf-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2b16e893-10e7-4207-9ace-bf1d7bd735bf" (UID: "2b16e893-10e7-4207-9ace-bf1d7bd735bf"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 21:16:22 crc kubenswrapper[4793]: I0217 21:16:22.671246 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b16e893-10e7-4207-9ace-bf1d7bd735bf-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "2b16e893-10e7-4207-9ace-bf1d7bd735bf" (UID: "2b16e893-10e7-4207-9ace-bf1d7bd735bf"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 21:16:22 crc kubenswrapper[4793]: I0217 21:16:22.741055 4793 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b16e893-10e7-4207-9ace-bf1d7bd735bf-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 21:16:22 crc kubenswrapper[4793]: I0217 21:16:22.741111 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpgq6\" (UniqueName: \"kubernetes.io/projected/2b16e893-10e7-4207-9ace-bf1d7bd735bf-kube-api-access-dpgq6\") on node \"crc\" DevicePath \"\"" Feb 17 21:16:22 crc kubenswrapper[4793]: I0217 21:16:22.741133 4793 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/2b16e893-10e7-4207-9ace-bf1d7bd735bf-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 17 21:16:22 crc kubenswrapper[4793]: I0217 21:16:22.741155 4793 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b16e893-10e7-4207-9ace-bf1d7bd735bf-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 21:16:22 crc kubenswrapper[4793]: I0217 21:16:22.741175 4793 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b16e893-10e7-4207-9ace-bf1d7bd735bf-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 21:16:23 crc kubenswrapper[4793]: I0217 21:16:23.153621 4793 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-r2zvf" event={"ID":"2b16e893-10e7-4207-9ace-bf1d7bd735bf","Type":"ContainerDied","Data":"1f254254491f6f0e235df4f2c50b4205fe6c9aed7dea0bdc3f86fb34a2d8db9d"} Feb 17 21:16:23 crc kubenswrapper[4793]: I0217 21:16:23.153663 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f254254491f6f0e235df4f2c50b4205fe6c9aed7dea0bdc3f86fb34a2d8db9d" Feb 17 21:16:23 crc kubenswrapper[4793]: I0217 21:16:23.153782 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-r2zvf" Feb 17 21:16:23 crc kubenswrapper[4793]: I0217 21:16:23.443811 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2vlg7"] Feb 17 21:16:23 crc kubenswrapper[4793]: E0217 21:16:23.444250 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e79511b9-ebbe-407a-8b6e-8d41a4930281" containerName="collect-profiles" Feb 17 21:16:23 crc kubenswrapper[4793]: I0217 21:16:23.444270 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e79511b9-ebbe-407a-8b6e-8d41a4930281" containerName="collect-profiles" Feb 17 21:16:23 crc kubenswrapper[4793]: E0217 21:16:23.444317 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b16e893-10e7-4207-9ace-bf1d7bd735bf" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 17 21:16:23 crc kubenswrapper[4793]: I0217 21:16:23.444328 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b16e893-10e7-4207-9ace-bf1d7bd735bf" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 17 21:16:23 crc kubenswrapper[4793]: I0217 21:16:23.444522 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="e79511b9-ebbe-407a-8b6e-8d41a4930281" containerName="collect-profiles" Feb 17 21:16:23 crc kubenswrapper[4793]: I0217 21:16:23.444545 4793 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="2b16e893-10e7-4207-9ace-bf1d7bd735bf" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 17 21:16:23 crc kubenswrapper[4793]: I0217 21:16:23.445285 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2vlg7" Feb 17 21:16:23 crc kubenswrapper[4793]: I0217 21:16:23.447912 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 21:16:23 crc kubenswrapper[4793]: I0217 21:16:23.447972 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 21:16:23 crc kubenswrapper[4793]: I0217 21:16:23.448257 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 21:16:23 crc kubenswrapper[4793]: I0217 21:16:23.451131 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6zcmz" Feb 17 21:16:23 crc kubenswrapper[4793]: I0217 21:16:23.455151 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 17 21:16:23 crc kubenswrapper[4793]: I0217 21:16:23.455354 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 17 21:16:23 crc kubenswrapper[4793]: I0217 21:16:23.457537 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14f3b5fc-f9bf-4997-8bfe-51df7585da2d-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2vlg7\" (UID: \"14f3b5fc-f9bf-4997-8bfe-51df7585da2d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2vlg7" Feb 17 21:16:23 crc kubenswrapper[4793]: I0217 21:16:23.457862 
4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/14f3b5fc-f9bf-4997-8bfe-51df7585da2d-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2vlg7\" (UID: \"14f3b5fc-f9bf-4997-8bfe-51df7585da2d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2vlg7" Feb 17 21:16:23 crc kubenswrapper[4793]: I0217 21:16:23.457904 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/14f3b5fc-f9bf-4997-8bfe-51df7585da2d-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2vlg7\" (UID: \"14f3b5fc-f9bf-4997-8bfe-51df7585da2d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2vlg7" Feb 17 21:16:23 crc kubenswrapper[4793]: I0217 21:16:23.458087 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/14f3b5fc-f9bf-4997-8bfe-51df7585da2d-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2vlg7\" (UID: \"14f3b5fc-f9bf-4997-8bfe-51df7585da2d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2vlg7" Feb 17 21:16:23 crc kubenswrapper[4793]: I0217 21:16:23.458194 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbbjn\" (UniqueName: \"kubernetes.io/projected/14f3b5fc-f9bf-4997-8bfe-51df7585da2d-kube-api-access-wbbjn\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2vlg7\" (UID: \"14f3b5fc-f9bf-4997-8bfe-51df7585da2d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2vlg7" Feb 17 21:16:23 crc kubenswrapper[4793]: I0217 21:16:23.458274 4793 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/14f3b5fc-f9bf-4997-8bfe-51df7585da2d-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2vlg7\" (UID: \"14f3b5fc-f9bf-4997-8bfe-51df7585da2d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2vlg7" Feb 17 21:16:23 crc kubenswrapper[4793]: I0217 21:16:23.458914 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2vlg7"] Feb 17 21:16:23 crc kubenswrapper[4793]: I0217 21:16:23.560028 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/14f3b5fc-f9bf-4997-8bfe-51df7585da2d-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2vlg7\" (UID: \"14f3b5fc-f9bf-4997-8bfe-51df7585da2d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2vlg7" Feb 17 21:16:23 crc kubenswrapper[4793]: I0217 21:16:23.560111 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbbjn\" (UniqueName: \"kubernetes.io/projected/14f3b5fc-f9bf-4997-8bfe-51df7585da2d-kube-api-access-wbbjn\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2vlg7\" (UID: \"14f3b5fc-f9bf-4997-8bfe-51df7585da2d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2vlg7" Feb 17 21:16:23 crc kubenswrapper[4793]: I0217 21:16:23.560151 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/14f3b5fc-f9bf-4997-8bfe-51df7585da2d-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2vlg7\" (UID: \"14f3b5fc-f9bf-4997-8bfe-51df7585da2d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2vlg7" Feb 17 21:16:23 
crc kubenswrapper[4793]: I0217 21:16:23.560240 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14f3b5fc-f9bf-4997-8bfe-51df7585da2d-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2vlg7\" (UID: \"14f3b5fc-f9bf-4997-8bfe-51df7585da2d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2vlg7" Feb 17 21:16:23 crc kubenswrapper[4793]: I0217 21:16:23.560290 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/14f3b5fc-f9bf-4997-8bfe-51df7585da2d-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2vlg7\" (UID: \"14f3b5fc-f9bf-4997-8bfe-51df7585da2d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2vlg7" Feb 17 21:16:23 crc kubenswrapper[4793]: I0217 21:16:23.560315 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/14f3b5fc-f9bf-4997-8bfe-51df7585da2d-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2vlg7\" (UID: \"14f3b5fc-f9bf-4997-8bfe-51df7585da2d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2vlg7" Feb 17 21:16:23 crc kubenswrapper[4793]: I0217 21:16:23.565832 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/14f3b5fc-f9bf-4997-8bfe-51df7585da2d-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2vlg7\" (UID: \"14f3b5fc-f9bf-4997-8bfe-51df7585da2d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2vlg7" Feb 17 21:16:23 crc kubenswrapper[4793]: I0217 21:16:23.566624 4793 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/14f3b5fc-f9bf-4997-8bfe-51df7585da2d-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2vlg7\" (UID: \"14f3b5fc-f9bf-4997-8bfe-51df7585da2d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2vlg7" Feb 17 21:16:23 crc kubenswrapper[4793]: I0217 21:16:23.567915 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14f3b5fc-f9bf-4997-8bfe-51df7585da2d-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2vlg7\" (UID: \"14f3b5fc-f9bf-4997-8bfe-51df7585da2d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2vlg7" Feb 17 21:16:23 crc kubenswrapper[4793]: I0217 21:16:23.567975 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/14f3b5fc-f9bf-4997-8bfe-51df7585da2d-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2vlg7\" (UID: \"14f3b5fc-f9bf-4997-8bfe-51df7585da2d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2vlg7" Feb 17 21:16:23 crc kubenswrapper[4793]: I0217 21:16:23.572633 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/14f3b5fc-f9bf-4997-8bfe-51df7585da2d-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2vlg7\" (UID: \"14f3b5fc-f9bf-4997-8bfe-51df7585da2d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2vlg7" Feb 17 21:16:23 crc kubenswrapper[4793]: I0217 21:16:23.581169 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbbjn\" (UniqueName: 
\"kubernetes.io/projected/14f3b5fc-f9bf-4997-8bfe-51df7585da2d-kube-api-access-wbbjn\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2vlg7\" (UID: \"14f3b5fc-f9bf-4997-8bfe-51df7585da2d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2vlg7" Feb 17 21:16:23 crc kubenswrapper[4793]: I0217 21:16:23.763088 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2vlg7" Feb 17 21:16:25 crc kubenswrapper[4793]: I0217 21:16:25.344240 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2vlg7"] Feb 17 21:16:25 crc kubenswrapper[4793]: W0217 21:16:25.352907 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14f3b5fc_f9bf_4997_8bfe_51df7585da2d.slice/crio-87de09c113a2993d1c1e146c33f239a6f66f99df9835bb55884c02b0eb0be056 WatchSource:0}: Error finding container 87de09c113a2993d1c1e146c33f239a6f66f99df9835bb55884c02b0eb0be056: Status 404 returned error can't find the container with id 87de09c113a2993d1c1e146c33f239a6f66f99df9835bb55884c02b0eb0be056 Feb 17 21:16:26 crc kubenswrapper[4793]: I0217 21:16:26.185393 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2vlg7" event={"ID":"14f3b5fc-f9bf-4997-8bfe-51df7585da2d","Type":"ContainerStarted","Data":"3cf936517a3b9c038174ad56c7a786900db7fab800dd0a2f89e799699c89ee75"} Feb 17 21:16:26 crc kubenswrapper[4793]: I0217 21:16:26.186227 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2vlg7" event={"ID":"14f3b5fc-f9bf-4997-8bfe-51df7585da2d","Type":"ContainerStarted","Data":"87de09c113a2993d1c1e146c33f239a6f66f99df9835bb55884c02b0eb0be056"} Feb 17 21:16:26 crc kubenswrapper[4793]: I0217 21:16:26.226438 4793 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2vlg7" podStartSLOduration=2.7410981039999998 podStartE2EDuration="3.226417432s" podCreationTimestamp="2026-02-17 21:16:23 +0000 UTC" firstStartedPulling="2026-02-17 21:16:25.356892062 +0000 UTC m=+4060.648590373" lastFinishedPulling="2026-02-17 21:16:25.84221136 +0000 UTC m=+4061.133909701" observedRunningTime="2026-02-17 21:16:26.224653489 +0000 UTC m=+4061.516351840" watchObservedRunningTime="2026-02-17 21:16:26.226417432 +0000 UTC m=+4061.518115743"
Feb 17 21:16:29 crc kubenswrapper[4793]: I0217 21:16:29.539233 4793 scope.go:117] "RemoveContainer" containerID="f2476986fb1b0be3a6a5b01ca2a6b94940c3d4f63ac98acbe527af609a9dcde1"
Feb 17 21:16:29 crc kubenswrapper[4793]: E0217 21:16:29.540071 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:16:42 crc kubenswrapper[4793]: I0217 21:16:42.539575 4793 scope.go:117] "RemoveContainer" containerID="f2476986fb1b0be3a6a5b01ca2a6b94940c3d4f63ac98acbe527af609a9dcde1"
Feb 17 21:16:42 crc kubenswrapper[4793]: E0217 21:16:42.540647 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:16:56 crc kubenswrapper[4793]: I0217 21:16:56.539359 4793 scope.go:117] "RemoveContainer" containerID="f2476986fb1b0be3a6a5b01ca2a6b94940c3d4f63ac98acbe527af609a9dcde1"
Feb 17 21:16:56 crc kubenswrapper[4793]: E0217 21:16:56.540517 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:17:10 crc kubenswrapper[4793]: I0217 21:17:10.538268 4793 scope.go:117] "RemoveContainer" containerID="f2476986fb1b0be3a6a5b01ca2a6b94940c3d4f63ac98acbe527af609a9dcde1"
Feb 17 21:17:10 crc kubenswrapper[4793]: E0217 21:17:10.539033 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:17:23 crc kubenswrapper[4793]: I0217 21:17:23.541000 4793 scope.go:117] "RemoveContainer" containerID="f2476986fb1b0be3a6a5b01ca2a6b94940c3d4f63ac98acbe527af609a9dcde1"
Feb 17 21:17:23 crc kubenswrapper[4793]: E0217 21:17:23.542455 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:17:23 crc kubenswrapper[4793]: I0217 21:17:23.846181 4793 generic.go:334] "Generic (PLEG): container finished" podID="14f3b5fc-f9bf-4997-8bfe-51df7585da2d" containerID="3cf936517a3b9c038174ad56c7a786900db7fab800dd0a2f89e799699c89ee75" exitCode=0
Feb 17 21:17:23 crc kubenswrapper[4793]: I0217 21:17:23.846282 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2vlg7" event={"ID":"14f3b5fc-f9bf-4997-8bfe-51df7585da2d","Type":"ContainerDied","Data":"3cf936517a3b9c038174ad56c7a786900db7fab800dd0a2f89e799699c89ee75"}
Feb 17 21:17:25 crc kubenswrapper[4793]: I0217 21:17:25.442715 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2vlg7"
Feb 17 21:17:25 crc kubenswrapper[4793]: I0217 21:17:25.597815 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14f3b5fc-f9bf-4997-8bfe-51df7585da2d-neutron-metadata-combined-ca-bundle\") pod \"14f3b5fc-f9bf-4997-8bfe-51df7585da2d\" (UID: \"14f3b5fc-f9bf-4997-8bfe-51df7585da2d\") "
Feb 17 21:17:25 crc kubenswrapper[4793]: I0217 21:17:25.597989 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbbjn\" (UniqueName: \"kubernetes.io/projected/14f3b5fc-f9bf-4997-8bfe-51df7585da2d-kube-api-access-wbbjn\") pod \"14f3b5fc-f9bf-4997-8bfe-51df7585da2d\" (UID: \"14f3b5fc-f9bf-4997-8bfe-51df7585da2d\") "
Feb 17 21:17:25 crc kubenswrapper[4793]: I0217 21:17:25.598045 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/14f3b5fc-f9bf-4997-8bfe-51df7585da2d-nova-metadata-neutron-config-0\") pod \"14f3b5fc-f9bf-4997-8bfe-51df7585da2d\" (UID: \"14f3b5fc-f9bf-4997-8bfe-51df7585da2d\") "
Feb 17 21:17:25 crc kubenswrapper[4793]: I0217 21:17:25.598085 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/14f3b5fc-f9bf-4997-8bfe-51df7585da2d-neutron-ovn-metadata-agent-neutron-config-0\") pod \"14f3b5fc-f9bf-4997-8bfe-51df7585da2d\" (UID: \"14f3b5fc-f9bf-4997-8bfe-51df7585da2d\") "
Feb 17 21:17:25 crc kubenswrapper[4793]: I0217 21:17:25.598266 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/14f3b5fc-f9bf-4997-8bfe-51df7585da2d-inventory\") pod \"14f3b5fc-f9bf-4997-8bfe-51df7585da2d\" (UID: \"14f3b5fc-f9bf-4997-8bfe-51df7585da2d\") "
Feb 17 21:17:25 crc kubenswrapper[4793]: I0217 21:17:25.598301 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/14f3b5fc-f9bf-4997-8bfe-51df7585da2d-ssh-key-openstack-edpm-ipam\") pod \"14f3b5fc-f9bf-4997-8bfe-51df7585da2d\" (UID: \"14f3b5fc-f9bf-4997-8bfe-51df7585da2d\") "
Feb 17 21:17:25 crc kubenswrapper[4793]: I0217 21:17:25.603629 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14f3b5fc-f9bf-4997-8bfe-51df7585da2d-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "14f3b5fc-f9bf-4997-8bfe-51df7585da2d" (UID: "14f3b5fc-f9bf-4997-8bfe-51df7585da2d"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 21:17:25 crc kubenswrapper[4793]: I0217 21:17:25.604051 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14f3b5fc-f9bf-4997-8bfe-51df7585da2d-kube-api-access-wbbjn" (OuterVolumeSpecName: "kube-api-access-wbbjn") pod "14f3b5fc-f9bf-4997-8bfe-51df7585da2d" (UID: "14f3b5fc-f9bf-4997-8bfe-51df7585da2d"). InnerVolumeSpecName "kube-api-access-wbbjn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 21:17:25 crc kubenswrapper[4793]: I0217 21:17:25.632448 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14f3b5fc-f9bf-4997-8bfe-51df7585da2d-inventory" (OuterVolumeSpecName: "inventory") pod "14f3b5fc-f9bf-4997-8bfe-51df7585da2d" (UID: "14f3b5fc-f9bf-4997-8bfe-51df7585da2d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 21:17:25 crc kubenswrapper[4793]: I0217 21:17:25.632504 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14f3b5fc-f9bf-4997-8bfe-51df7585da2d-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "14f3b5fc-f9bf-4997-8bfe-51df7585da2d" (UID: "14f3b5fc-f9bf-4997-8bfe-51df7585da2d"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 21:17:25 crc kubenswrapper[4793]: I0217 21:17:25.632954 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14f3b5fc-f9bf-4997-8bfe-51df7585da2d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "14f3b5fc-f9bf-4997-8bfe-51df7585da2d" (UID: "14f3b5fc-f9bf-4997-8bfe-51df7585da2d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 21:17:25 crc kubenswrapper[4793]: I0217 21:17:25.651493 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14f3b5fc-f9bf-4997-8bfe-51df7585da2d-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "14f3b5fc-f9bf-4997-8bfe-51df7585da2d" (UID: "14f3b5fc-f9bf-4997-8bfe-51df7585da2d"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 21:17:25 crc kubenswrapper[4793]: I0217 21:17:25.700267 4793 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/14f3b5fc-f9bf-4997-8bfe-51df7585da2d-inventory\") on node \"crc\" DevicePath \"\""
Feb 17 21:17:25 crc kubenswrapper[4793]: I0217 21:17:25.700306 4793 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/14f3b5fc-f9bf-4997-8bfe-51df7585da2d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 17 21:17:25 crc kubenswrapper[4793]: I0217 21:17:25.700322 4793 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14f3b5fc-f9bf-4997-8bfe-51df7585da2d-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 21:17:25 crc kubenswrapper[4793]: I0217 21:17:25.700338 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbbjn\" (UniqueName: \"kubernetes.io/projected/14f3b5fc-f9bf-4997-8bfe-51df7585da2d-kube-api-access-wbbjn\") on node \"crc\" DevicePath \"\""
Feb 17 21:17:25 crc kubenswrapper[4793]: I0217 21:17:25.700357 4793 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/14f3b5fc-f9bf-4997-8bfe-51df7585da2d-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\""
Feb 17 21:17:25 crc kubenswrapper[4793]: I0217 21:17:25.700374 4793 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/14f3b5fc-f9bf-4997-8bfe-51df7585da2d-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\""
Feb 17 21:17:25 crc kubenswrapper[4793]: I0217 21:17:25.874665 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2vlg7" event={"ID":"14f3b5fc-f9bf-4997-8bfe-51df7585da2d","Type":"ContainerDied","Data":"87de09c113a2993d1c1e146c33f239a6f66f99df9835bb55884c02b0eb0be056"}
Feb 17 21:17:25 crc kubenswrapper[4793]: I0217 21:17:25.875124 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87de09c113a2993d1c1e146c33f239a6f66f99df9835bb55884c02b0eb0be056"
Feb 17 21:17:25 crc kubenswrapper[4793]: I0217 21:17:25.874826 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2vlg7"
Feb 17 21:17:26 crc kubenswrapper[4793]: I0217 21:17:26.024413 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-l9shm"]
Feb 17 21:17:26 crc kubenswrapper[4793]: E0217 21:17:26.025351 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14f3b5fc-f9bf-4997-8bfe-51df7585da2d" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Feb 17 21:17:26 crc kubenswrapper[4793]: I0217 21:17:26.025404 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="14f3b5fc-f9bf-4997-8bfe-51df7585da2d" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Feb 17 21:17:26 crc kubenswrapper[4793]: I0217 21:17:26.025959 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="14f3b5fc-f9bf-4997-8bfe-51df7585da2d" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Feb 17 21:17:26 crc kubenswrapper[4793]: I0217 21:17:26.027560 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-l9shm"
Feb 17 21:17:26 crc kubenswrapper[4793]: I0217 21:17:26.030485 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret"
Feb 17 21:17:26 crc kubenswrapper[4793]: I0217 21:17:26.032772 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 17 21:17:26 crc kubenswrapper[4793]: I0217 21:17:26.032893 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 17 21:17:26 crc kubenswrapper[4793]: I0217 21:17:26.035411 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6zcmz"
Feb 17 21:17:26 crc kubenswrapper[4793]: I0217 21:17:26.035571 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 17 21:17:26 crc kubenswrapper[4793]: I0217 21:17:26.042556 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-l9shm"]
Feb 17 21:17:26 crc kubenswrapper[4793]: I0217 21:17:26.212772 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8jx4\" (UniqueName: \"kubernetes.io/projected/46d312cc-dda9-4d5e-bea8-1559405ca6b9-kube-api-access-c8jx4\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-l9shm\" (UID: \"46d312cc-dda9-4d5e-bea8-1559405ca6b9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-l9shm"
Feb 17 21:17:26 crc kubenswrapper[4793]: I0217 21:17:26.212996 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/46d312cc-dda9-4d5e-bea8-1559405ca6b9-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-l9shm\" (UID: \"46d312cc-dda9-4d5e-bea8-1559405ca6b9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-l9shm"
Feb 17 21:17:26 crc kubenswrapper[4793]: I0217 21:17:26.213081 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/46d312cc-dda9-4d5e-bea8-1559405ca6b9-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-l9shm\" (UID: \"46d312cc-dda9-4d5e-bea8-1559405ca6b9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-l9shm"
Feb 17 21:17:26 crc kubenswrapper[4793]: I0217 21:17:26.213317 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46d312cc-dda9-4d5e-bea8-1559405ca6b9-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-l9shm\" (UID: \"46d312cc-dda9-4d5e-bea8-1559405ca6b9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-l9shm"
Feb 17 21:17:26 crc kubenswrapper[4793]: I0217 21:17:26.213569 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46d312cc-dda9-4d5e-bea8-1559405ca6b9-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-l9shm\" (UID: \"46d312cc-dda9-4d5e-bea8-1559405ca6b9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-l9shm"
Feb 17 21:17:26 crc kubenswrapper[4793]: I0217 21:17:26.315901 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8jx4\" (UniqueName: \"kubernetes.io/projected/46d312cc-dda9-4d5e-bea8-1559405ca6b9-kube-api-access-c8jx4\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-l9shm\" (UID: \"46d312cc-dda9-4d5e-bea8-1559405ca6b9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-l9shm"
Feb 17 21:17:26 crc kubenswrapper[4793]: I0217 21:17:26.316360 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/46d312cc-dda9-4d5e-bea8-1559405ca6b9-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-l9shm\" (UID: \"46d312cc-dda9-4d5e-bea8-1559405ca6b9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-l9shm"
Feb 17 21:17:26 crc kubenswrapper[4793]: I0217 21:17:26.316405 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/46d312cc-dda9-4d5e-bea8-1559405ca6b9-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-l9shm\" (UID: \"46d312cc-dda9-4d5e-bea8-1559405ca6b9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-l9shm"
Feb 17 21:17:26 crc kubenswrapper[4793]: I0217 21:17:26.316515 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46d312cc-dda9-4d5e-bea8-1559405ca6b9-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-l9shm\" (UID: \"46d312cc-dda9-4d5e-bea8-1559405ca6b9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-l9shm"
Feb 17 21:17:26 crc kubenswrapper[4793]: I0217 21:17:26.316684 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46d312cc-dda9-4d5e-bea8-1559405ca6b9-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-l9shm\" (UID: \"46d312cc-dda9-4d5e-bea8-1559405ca6b9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-l9shm"
Feb 17 21:17:26 crc kubenswrapper[4793]: I0217 21:17:26.322807 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46d312cc-dda9-4d5e-bea8-1559405ca6b9-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-l9shm\" (UID: \"46d312cc-dda9-4d5e-bea8-1559405ca6b9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-l9shm"
Feb 17 21:17:26 crc kubenswrapper[4793]: I0217 21:17:26.323231 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46d312cc-dda9-4d5e-bea8-1559405ca6b9-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-l9shm\" (UID: \"46d312cc-dda9-4d5e-bea8-1559405ca6b9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-l9shm"
Feb 17 21:17:26 crc kubenswrapper[4793]: I0217 21:17:26.323464 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/46d312cc-dda9-4d5e-bea8-1559405ca6b9-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-l9shm\" (UID: \"46d312cc-dda9-4d5e-bea8-1559405ca6b9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-l9shm"
Feb 17 21:17:26 crc kubenswrapper[4793]: I0217 21:17:26.329499 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/46d312cc-dda9-4d5e-bea8-1559405ca6b9-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-l9shm\" (UID: \"46d312cc-dda9-4d5e-bea8-1559405ca6b9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-l9shm"
Feb 17 21:17:26 crc kubenswrapper[4793]: I0217 21:17:26.348154 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8jx4\" (UniqueName: \"kubernetes.io/projected/46d312cc-dda9-4d5e-bea8-1559405ca6b9-kube-api-access-c8jx4\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-l9shm\" (UID: \"46d312cc-dda9-4d5e-bea8-1559405ca6b9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-l9shm"
Feb 17 21:17:26 crc kubenswrapper[4793]: I0217 21:17:26.366145 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-l9shm"
Feb 17 21:17:26 crc kubenswrapper[4793]: I0217 21:17:26.776546 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-l9shm"]
Feb 17 21:17:26 crc kubenswrapper[4793]: W0217 21:17:26.777610 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46d312cc_dda9_4d5e_bea8_1559405ca6b9.slice/crio-e009e3361cd9c1a39f5a4cbb9b3268fcdbaeceb12d04fa53164155fad7eb996f WatchSource:0}: Error finding container e009e3361cd9c1a39f5a4cbb9b3268fcdbaeceb12d04fa53164155fad7eb996f: Status 404 returned error can't find the container with id e009e3361cd9c1a39f5a4cbb9b3268fcdbaeceb12d04fa53164155fad7eb996f
Feb 17 21:17:26 crc kubenswrapper[4793]: I0217 21:17:26.889657 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-l9shm" event={"ID":"46d312cc-dda9-4d5e-bea8-1559405ca6b9","Type":"ContainerStarted","Data":"e009e3361cd9c1a39f5a4cbb9b3268fcdbaeceb12d04fa53164155fad7eb996f"}
Feb 17 21:17:27 crc kubenswrapper[4793]: I0217 21:17:27.902424 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-l9shm" event={"ID":"46d312cc-dda9-4d5e-bea8-1559405ca6b9","Type":"ContainerStarted","Data":"9be37e419ba47ae9dd2c095d5fd0e73da02be05f61b9f2dba3e81d965064595f"}
Feb 17 21:17:27 crc kubenswrapper[4793]: I0217 21:17:27.925772 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-l9shm" podStartSLOduration=2.478878763 podStartE2EDuration="2.925750577s" podCreationTimestamp="2026-02-17 21:17:25 +0000 UTC" firstStartedPulling="2026-02-17 21:17:26.780791399 +0000 UTC m=+4122.072489760" lastFinishedPulling="2026-02-17 21:17:27.227663223 +0000 UTC m=+4122.519361574" observedRunningTime="2026-02-17 21:17:27.918805636 +0000 UTC m=+4123.210503957" watchObservedRunningTime="2026-02-17 21:17:27.925750577 +0000 UTC m=+4123.217448898"
Feb 17 21:17:36 crc kubenswrapper[4793]: I0217 21:17:36.538393 4793 scope.go:117] "RemoveContainer" containerID="f2476986fb1b0be3a6a5b01ca2a6b94940c3d4f63ac98acbe527af609a9dcde1"
Feb 17 21:17:36 crc kubenswrapper[4793]: E0217 21:17:36.539256 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:17:50 crc kubenswrapper[4793]: I0217 21:17:50.539417 4793 scope.go:117] "RemoveContainer" containerID="f2476986fb1b0be3a6a5b01ca2a6b94940c3d4f63ac98acbe527af609a9dcde1"
Feb 17 21:17:50 crc kubenswrapper[4793]: E0217 21:17:50.540635 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:18:02 crc kubenswrapper[4793]: I0217 21:18:02.539780 4793 scope.go:117] "RemoveContainer" containerID="f2476986fb1b0be3a6a5b01ca2a6b94940c3d4f63ac98acbe527af609a9dcde1"
Feb 17 21:18:02 crc kubenswrapper[4793]: E0217 21:18:02.541398 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:18:15 crc kubenswrapper[4793]: I0217 21:18:15.545024 4793 scope.go:117] "RemoveContainer" containerID="f2476986fb1b0be3a6a5b01ca2a6b94940c3d4f63ac98acbe527af609a9dcde1"
Feb 17 21:18:15 crc kubenswrapper[4793]: E0217 21:18:15.545880 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:18:20 crc kubenswrapper[4793]: I0217 21:18:20.102309 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 21:18:20 crc kubenswrapper[4793]: I0217 21:18:20.103236 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 21:18:30 crc kubenswrapper[4793]: I0217 21:18:30.539028 4793 scope.go:117] "RemoveContainer" containerID="f2476986fb1b0be3a6a5b01ca2a6b94940c3d4f63ac98acbe527af609a9dcde1"
Feb 17 21:18:30 crc kubenswrapper[4793]: E0217 21:18:30.540050 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:18:40 crc kubenswrapper[4793]: I0217 21:18:40.455851 4793 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-jqgbz container/perses-operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.39:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 17 21:18:40 crc kubenswrapper[4793]: I0217 21:18:40.456741 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/perses-operator-5bf474d74f-jqgbz" podUID="b19f5d08-b87f-4168-b29b-b28619987367" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.39:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 17 21:18:40 crc kubenswrapper[4793]: I0217 21:18:40.476933 4793 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-jqgbz container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.39:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 17 21:18:40 crc kubenswrapper[4793]: I0217 21:18:40.476988 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-jqgbz" podUID="b19f5d08-b87f-4168-b29b-b28619987367" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.39:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 17 21:18:43 crc kubenswrapper[4793]: I0217 21:18:43.538935 4793 scope.go:117] "RemoveContainer" containerID="f2476986fb1b0be3a6a5b01ca2a6b94940c3d4f63ac98acbe527af609a9dcde1"
Feb 17 21:18:43 crc kubenswrapper[4793]: E0217 21:18:43.540074 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:18:50 crc kubenswrapper[4793]: I0217 21:18:50.102056 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 21:18:50 crc kubenswrapper[4793]: I0217 21:18:50.102775 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 21:18:57 crc kubenswrapper[4793]: I0217 21:18:57.538923 4793 scope.go:117] "RemoveContainer" containerID="f2476986fb1b0be3a6a5b01ca2a6b94940c3d4f63ac98acbe527af609a9dcde1"
Feb 17 21:18:57 crc kubenswrapper[4793]: E0217 21:18:57.541970 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:19:10 crc kubenswrapper[4793]: I0217 21:19:10.538975 4793 scope.go:117] "RemoveContainer" containerID="f2476986fb1b0be3a6a5b01ca2a6b94940c3d4f63ac98acbe527af609a9dcde1"
Feb 17 21:19:10 crc kubenswrapper[4793]: E0217 21:19:10.548204 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:19:20 crc kubenswrapper[4793]: I0217 21:19:20.102393 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 21:19:20 crc kubenswrapper[4793]: I0217 21:19:20.103054 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 21:19:20 crc kubenswrapper[4793]: I0217 21:19:20.103117 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf"
Feb 17 21:19:20 crc kubenswrapper[4793]: I0217 21:19:20.104189 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ad3c8e19b434a9e463b7e18317ce98a8878350f79303f0feed946cfd828131f6"} pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 17 21:19:20 crc kubenswrapper[4793]: I0217 21:19:20.104278 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" containerID="cri-o://ad3c8e19b434a9e463b7e18317ce98a8878350f79303f0feed946cfd828131f6" gracePeriod=600
Feb 17 21:19:20 crc kubenswrapper[4793]: E0217 21:19:20.794501 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 21:19:20 crc kubenswrapper[4793]: I0217 21:19:20.924787 4793 generic.go:334] "Generic (PLEG): container finished" podID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerID="ad3c8e19b434a9e463b7e18317ce98a8878350f79303f0feed946cfd828131f6" exitCode=0
Feb 17 21:19:20 crc kubenswrapper[4793]: I0217 21:19:20.924828 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerDied","Data":"ad3c8e19b434a9e463b7e18317ce98a8878350f79303f0feed946cfd828131f6"}
Feb 17 21:19:20 crc kubenswrapper[4793]: I0217 21:19:20.924882 4793 scope.go:117] "RemoveContainer" containerID="08bd50edbae004fbadbb4be27d654951e19a5a66ae5db214f1828f62d5108ea6"
Feb 17 21:19:20 crc kubenswrapper[4793]: I0217 21:19:20.926071 4793 scope.go:117] "RemoveContainer" containerID="ad3c8e19b434a9e463b7e18317ce98a8878350f79303f0feed946cfd828131f6"
Feb 17 21:19:20 crc kubenswrapper[4793]: E0217 21:19:20.929712 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 21:19:24 crc kubenswrapper[4793]: I0217 21:19:24.539386 4793 scope.go:117] "RemoveContainer" containerID="f2476986fb1b0be3a6a5b01ca2a6b94940c3d4f63ac98acbe527af609a9dcde1"
Feb 17 21:19:24 crc kubenswrapper[4793]: E0217 21:19:24.540188 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:19:35 crc kubenswrapper[4793]: I0217 21:19:35.996323 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-n75ll"]
Feb 17 21:19:35 crc kubenswrapper[4793]: I0217 21:19:35.999601 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n75ll"
Feb 17 21:19:36 crc kubenswrapper[4793]: I0217 21:19:36.026348 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n75ll"]
Feb 17 21:19:36 crc kubenswrapper[4793]: I0217 21:19:36.169878 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/335081b9-61db-4f50-b301-a587007c72e0-utilities\") pod \"community-operators-n75ll\" (UID: \"335081b9-61db-4f50-b301-a587007c72e0\") " pod="openshift-marketplace/community-operators-n75ll"
Feb 17 21:19:36 crc kubenswrapper[4793]: I0217 21:19:36.169991 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb4k8\" (UniqueName: \"kubernetes.io/projected/335081b9-61db-4f50-b301-a587007c72e0-kube-api-access-vb4k8\") pod \"community-operators-n75ll\" (UID: \"335081b9-61db-4f50-b301-a587007c72e0\") " pod="openshift-marketplace/community-operators-n75ll"
Feb 17 21:19:36 crc kubenswrapper[4793]: I0217 21:19:36.170100 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/335081b9-61db-4f50-b301-a587007c72e0-catalog-content\") pod \"community-operators-n75ll\" (UID: \"335081b9-61db-4f50-b301-a587007c72e0\") " pod="openshift-marketplace/community-operators-n75ll"
Feb 17 21:19:36 crc kubenswrapper[4793]: I0217 21:19:36.194757 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vpsbb"]
Feb 17 21:19:36 crc kubenswrapper[4793]: I0217 21:19:36.196646 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vpsbb"
Feb 17 21:19:36 crc kubenswrapper[4793]: I0217 21:19:36.212908 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vpsbb"]
Feb 17 21:19:36 crc kubenswrapper[4793]: I0217 21:19:36.271657 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bd129d2-f974-48ac-b469-d89e78cfe154-catalog-content\") pod \"redhat-operators-vpsbb\" (UID: \"2bd129d2-f974-48ac-b469-d89e78cfe154\") " pod="openshift-marketplace/redhat-operators-vpsbb"
Feb 17 21:19:36 crc kubenswrapper[4793]: I0217 21:19:36.271722 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/335081b9-61db-4f50-b301-a587007c72e0-utilities\") pod \"community-operators-n75ll\" (UID: \"335081b9-61db-4f50-b301-a587007c72e0\") " pod="openshift-marketplace/community-operators-n75ll"
Feb 17 21:19:36 crc kubenswrapper[4793]: I0217 21:19:36.271935 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb4k8\" (UniqueName: \"kubernetes.io/projected/335081b9-61db-4f50-b301-a587007c72e0-kube-api-access-vb4k8\") pod \"community-operators-n75ll\" (UID: \"335081b9-61db-4f50-b301-a587007c72e0\") " pod="openshift-marketplace/community-operators-n75ll"
Feb 17 21:19:36 crc kubenswrapper[4793]: I0217 21:19:36.272091 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/335081b9-61db-4f50-b301-a587007c72e0-utilities\") pod \"community-operators-n75ll\" (UID: \"335081b9-61db-4f50-b301-a587007c72e0\") " pod="openshift-marketplace/community-operators-n75ll"
Feb 17 21:19:36 crc kubenswrapper[4793]: I0217 21:19:36.272094 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pmjs\" (UniqueName: \"kubernetes.io/projected/2bd129d2-f974-48ac-b469-d89e78cfe154-kube-api-access-2pmjs\") pod \"redhat-operators-vpsbb\" (UID: \"2bd129d2-f974-48ac-b469-d89e78cfe154\") " pod="openshift-marketplace/redhat-operators-vpsbb"
Feb 17 21:19:36 crc kubenswrapper[4793]: I0217 21:19:36.272182 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/335081b9-61db-4f50-b301-a587007c72e0-catalog-content\") pod \"community-operators-n75ll\" (UID: \"335081b9-61db-4f50-b301-a587007c72e0\") " pod="openshift-marketplace/community-operators-n75ll"
Feb 17 21:19:36 crc kubenswrapper[4793]: I0217 21:19:36.272232 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bd129d2-f974-48ac-b469-d89e78cfe154-utilities\") pod \"redhat-operators-vpsbb\" (UID: \"2bd129d2-f974-48ac-b469-d89e78cfe154\") " pod="openshift-marketplace/redhat-operators-vpsbb"
Feb 17 21:19:36 crc kubenswrapper[4793]: I0217 21:19:36.272679 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/335081b9-61db-4f50-b301-a587007c72e0-catalog-content\") pod \"community-operators-n75ll\" (UID: \"335081b9-61db-4f50-b301-a587007c72e0\") " pod="openshift-marketplace/community-operators-n75ll"
Feb 17 21:19:36 crc kubenswrapper[4793]: I0217 21:19:36.305271 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb4k8\" (UniqueName:
\"kubernetes.io/projected/335081b9-61db-4f50-b301-a587007c72e0-kube-api-access-vb4k8\") pod \"community-operators-n75ll\" (UID: \"335081b9-61db-4f50-b301-a587007c72e0\") " pod="openshift-marketplace/community-operators-n75ll" Feb 17 21:19:36 crc kubenswrapper[4793]: I0217 21:19:36.319983 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n75ll" Feb 17 21:19:36 crc kubenswrapper[4793]: I0217 21:19:36.373558 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pmjs\" (UniqueName: \"kubernetes.io/projected/2bd129d2-f974-48ac-b469-d89e78cfe154-kube-api-access-2pmjs\") pod \"redhat-operators-vpsbb\" (UID: \"2bd129d2-f974-48ac-b469-d89e78cfe154\") " pod="openshift-marketplace/redhat-operators-vpsbb" Feb 17 21:19:36 crc kubenswrapper[4793]: I0217 21:19:36.373920 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bd129d2-f974-48ac-b469-d89e78cfe154-utilities\") pod \"redhat-operators-vpsbb\" (UID: \"2bd129d2-f974-48ac-b469-d89e78cfe154\") " pod="openshift-marketplace/redhat-operators-vpsbb" Feb 17 21:19:36 crc kubenswrapper[4793]: I0217 21:19:36.373974 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bd129d2-f974-48ac-b469-d89e78cfe154-catalog-content\") pod \"redhat-operators-vpsbb\" (UID: \"2bd129d2-f974-48ac-b469-d89e78cfe154\") " pod="openshift-marketplace/redhat-operators-vpsbb" Feb 17 21:19:36 crc kubenswrapper[4793]: I0217 21:19:36.374402 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bd129d2-f974-48ac-b469-d89e78cfe154-catalog-content\") pod \"redhat-operators-vpsbb\" (UID: \"2bd129d2-f974-48ac-b469-d89e78cfe154\") " pod="openshift-marketplace/redhat-operators-vpsbb" Feb 17 21:19:36 crc 
kubenswrapper[4793]: I0217 21:19:36.374875 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bd129d2-f974-48ac-b469-d89e78cfe154-utilities\") pod \"redhat-operators-vpsbb\" (UID: \"2bd129d2-f974-48ac-b469-d89e78cfe154\") " pod="openshift-marketplace/redhat-operators-vpsbb" Feb 17 21:19:36 crc kubenswrapper[4793]: I0217 21:19:36.394618 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pmjs\" (UniqueName: \"kubernetes.io/projected/2bd129d2-f974-48ac-b469-d89e78cfe154-kube-api-access-2pmjs\") pod \"redhat-operators-vpsbb\" (UID: \"2bd129d2-f974-48ac-b469-d89e78cfe154\") " pod="openshift-marketplace/redhat-operators-vpsbb" Feb 17 21:19:36 crc kubenswrapper[4793]: I0217 21:19:36.540454 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vpsbb" Feb 17 21:19:36 crc kubenswrapper[4793]: I0217 21:19:36.541391 4793 scope.go:117] "RemoveContainer" containerID="ad3c8e19b434a9e463b7e18317ce98a8878350f79303f0feed946cfd828131f6" Feb 17 21:19:36 crc kubenswrapper[4793]: E0217 21:19:36.541587 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:19:36 crc kubenswrapper[4793]: I0217 21:19:36.903555 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n75ll"] Feb 17 21:19:36 crc kubenswrapper[4793]: W0217 21:19:36.905618 4793 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod335081b9_61db_4f50_b301_a587007c72e0.slice/crio-dff80539154235f69bab4b30d28740cd4b9af4bdc7fbf7b01c0d39e2e4685fff WatchSource:0}: Error finding container dff80539154235f69bab4b30d28740cd4b9af4bdc7fbf7b01c0d39e2e4685fff: Status 404 returned error can't find the container with id dff80539154235f69bab4b30d28740cd4b9af4bdc7fbf7b01c0d39e2e4685fff Feb 17 21:19:37 crc kubenswrapper[4793]: I0217 21:19:37.073784 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vpsbb"] Feb 17 21:19:37 crc kubenswrapper[4793]: W0217 21:19:37.085767 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2bd129d2_f974_48ac_b469_d89e78cfe154.slice/crio-ce1b5dc3bf9e0debced9a8c5d76c0285d1358db1fd06ac6b4ab50a4e80784360 WatchSource:0}: Error finding container ce1b5dc3bf9e0debced9a8c5d76c0285d1358db1fd06ac6b4ab50a4e80784360: Status 404 returned error can't find the container with id ce1b5dc3bf9e0debced9a8c5d76c0285d1358db1fd06ac6b4ab50a4e80784360 Feb 17 21:19:37 crc kubenswrapper[4793]: I0217 21:19:37.111861 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vpsbb" event={"ID":"2bd129d2-f974-48ac-b469-d89e78cfe154","Type":"ContainerStarted","Data":"ce1b5dc3bf9e0debced9a8c5d76c0285d1358db1fd06ac6b4ab50a4e80784360"} Feb 17 21:19:37 crc kubenswrapper[4793]: I0217 21:19:37.113601 4793 generic.go:334] "Generic (PLEG): container finished" podID="335081b9-61db-4f50-b301-a587007c72e0" containerID="8be6cbd81b37b6e69917f7adf1433f5737e21569b454258b593a8b858e4580fe" exitCode=0 Feb 17 21:19:37 crc kubenswrapper[4793]: I0217 21:19:37.113653 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n75ll" 
event={"ID":"335081b9-61db-4f50-b301-a587007c72e0","Type":"ContainerDied","Data":"8be6cbd81b37b6e69917f7adf1433f5737e21569b454258b593a8b858e4580fe"} Feb 17 21:19:37 crc kubenswrapper[4793]: I0217 21:19:37.113713 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n75ll" event={"ID":"335081b9-61db-4f50-b301-a587007c72e0","Type":"ContainerStarted","Data":"dff80539154235f69bab4b30d28740cd4b9af4bdc7fbf7b01c0d39e2e4685fff"} Feb 17 21:19:37 crc kubenswrapper[4793]: I0217 21:19:37.115937 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 21:19:38 crc kubenswrapper[4793]: I0217 21:19:38.132318 4793 generic.go:334] "Generic (PLEG): container finished" podID="2bd129d2-f974-48ac-b469-d89e78cfe154" containerID="d71f314e5ced6fe478f28f246a37ff8561e41e770e162ead6868ffcf79fd0ed6" exitCode=0 Feb 17 21:19:38 crc kubenswrapper[4793]: I0217 21:19:38.132762 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vpsbb" event={"ID":"2bd129d2-f974-48ac-b469-d89e78cfe154","Type":"ContainerDied","Data":"d71f314e5ced6fe478f28f246a37ff8561e41e770e162ead6868ffcf79fd0ed6"} Feb 17 21:19:38 crc kubenswrapper[4793]: I0217 21:19:38.137591 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n75ll" event={"ID":"335081b9-61db-4f50-b301-a587007c72e0","Type":"ContainerStarted","Data":"fe7fdf53dc9a4bdae0a3776dfae0a93a076bc6198bd4c5222c99ec7b0a896541"} Feb 17 21:19:38 crc kubenswrapper[4793]: I0217 21:19:38.538962 4793 scope.go:117] "RemoveContainer" containerID="f2476986fb1b0be3a6a5b01ca2a6b94940c3d4f63ac98acbe527af609a9dcde1" Feb 17 21:19:38 crc kubenswrapper[4793]: E0217 21:19:38.539264 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier 
pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:19:39 crc kubenswrapper[4793]: I0217 21:19:39.147863 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vpsbb" event={"ID":"2bd129d2-f974-48ac-b469-d89e78cfe154","Type":"ContainerStarted","Data":"11703e7609fdc0be2cd69aa0022816307a4f76e89b5cfec65f7900f875a071c4"} Feb 17 21:19:39 crc kubenswrapper[4793]: I0217 21:19:39.149861 4793 generic.go:334] "Generic (PLEG): container finished" podID="335081b9-61db-4f50-b301-a587007c72e0" containerID="fe7fdf53dc9a4bdae0a3776dfae0a93a076bc6198bd4c5222c99ec7b0a896541" exitCode=0 Feb 17 21:19:39 crc kubenswrapper[4793]: I0217 21:19:39.149903 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n75ll" event={"ID":"335081b9-61db-4f50-b301-a587007c72e0","Type":"ContainerDied","Data":"fe7fdf53dc9a4bdae0a3776dfae0a93a076bc6198bd4c5222c99ec7b0a896541"} Feb 17 21:19:39 crc kubenswrapper[4793]: I0217 21:19:39.196119 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5mqmr"] Feb 17 21:19:39 crc kubenswrapper[4793]: I0217 21:19:39.198841 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5mqmr" Feb 17 21:19:39 crc kubenswrapper[4793]: I0217 21:19:39.228595 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5mqmr"] Feb 17 21:19:39 crc kubenswrapper[4793]: I0217 21:19:39.330931 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8062b712-7bb0-4d33-9b1d-ca342eb7971f-utilities\") pod \"certified-operators-5mqmr\" (UID: \"8062b712-7bb0-4d33-9b1d-ca342eb7971f\") " pod="openshift-marketplace/certified-operators-5mqmr" Feb 17 21:19:39 crc kubenswrapper[4793]: I0217 21:19:39.331093 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8062b712-7bb0-4d33-9b1d-ca342eb7971f-catalog-content\") pod \"certified-operators-5mqmr\" (UID: \"8062b712-7bb0-4d33-9b1d-ca342eb7971f\") " pod="openshift-marketplace/certified-operators-5mqmr" Feb 17 21:19:39 crc kubenswrapper[4793]: I0217 21:19:39.331135 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn8mm\" (UniqueName: \"kubernetes.io/projected/8062b712-7bb0-4d33-9b1d-ca342eb7971f-kube-api-access-bn8mm\") pod \"certified-operators-5mqmr\" (UID: \"8062b712-7bb0-4d33-9b1d-ca342eb7971f\") " pod="openshift-marketplace/certified-operators-5mqmr" Feb 17 21:19:39 crc kubenswrapper[4793]: I0217 21:19:39.432958 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8062b712-7bb0-4d33-9b1d-ca342eb7971f-catalog-content\") pod \"certified-operators-5mqmr\" (UID: \"8062b712-7bb0-4d33-9b1d-ca342eb7971f\") " pod="openshift-marketplace/certified-operators-5mqmr" Feb 17 21:19:39 crc kubenswrapper[4793]: I0217 21:19:39.433008 4793 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-bn8mm\" (UniqueName: \"kubernetes.io/projected/8062b712-7bb0-4d33-9b1d-ca342eb7971f-kube-api-access-bn8mm\") pod \"certified-operators-5mqmr\" (UID: \"8062b712-7bb0-4d33-9b1d-ca342eb7971f\") " pod="openshift-marketplace/certified-operators-5mqmr" Feb 17 21:19:39 crc kubenswrapper[4793]: I0217 21:19:39.433111 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8062b712-7bb0-4d33-9b1d-ca342eb7971f-utilities\") pod \"certified-operators-5mqmr\" (UID: \"8062b712-7bb0-4d33-9b1d-ca342eb7971f\") " pod="openshift-marketplace/certified-operators-5mqmr" Feb 17 21:19:39 crc kubenswrapper[4793]: I0217 21:19:39.433598 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8062b712-7bb0-4d33-9b1d-ca342eb7971f-catalog-content\") pod \"certified-operators-5mqmr\" (UID: \"8062b712-7bb0-4d33-9b1d-ca342eb7971f\") " pod="openshift-marketplace/certified-operators-5mqmr" Feb 17 21:19:39 crc kubenswrapper[4793]: I0217 21:19:39.434823 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8062b712-7bb0-4d33-9b1d-ca342eb7971f-utilities\") pod \"certified-operators-5mqmr\" (UID: \"8062b712-7bb0-4d33-9b1d-ca342eb7971f\") " pod="openshift-marketplace/certified-operators-5mqmr" Feb 17 21:19:39 crc kubenswrapper[4793]: I0217 21:19:39.452346 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bn8mm\" (UniqueName: \"kubernetes.io/projected/8062b712-7bb0-4d33-9b1d-ca342eb7971f-kube-api-access-bn8mm\") pod \"certified-operators-5mqmr\" (UID: \"8062b712-7bb0-4d33-9b1d-ca342eb7971f\") " pod="openshift-marketplace/certified-operators-5mqmr" Feb 17 21:19:39 crc kubenswrapper[4793]: I0217 21:19:39.522411 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5mqmr" Feb 17 21:19:40 crc kubenswrapper[4793]: I0217 21:19:40.061728 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5mqmr"] Feb 17 21:19:40 crc kubenswrapper[4793]: I0217 21:19:40.159087 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5mqmr" event={"ID":"8062b712-7bb0-4d33-9b1d-ca342eb7971f","Type":"ContainerStarted","Data":"c854543e901ba29e96420fdf75f669d8a8e9c1c394478c54bcb20d5d9e48dfcf"} Feb 17 21:19:40 crc kubenswrapper[4793]: I0217 21:19:40.162028 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n75ll" event={"ID":"335081b9-61db-4f50-b301-a587007c72e0","Type":"ContainerStarted","Data":"5f16c8f184c06b45460636a27b54e42ab4d8f63cfecede7d739935ec66cd7d9b"} Feb 17 21:19:40 crc kubenswrapper[4793]: I0217 21:19:40.190105 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-n75ll" podStartSLOduration=2.7227188780000002 podStartE2EDuration="5.190081846s" podCreationTimestamp="2026-02-17 21:19:35 +0000 UTC" firstStartedPulling="2026-02-17 21:19:37.115625482 +0000 UTC m=+4252.407323803" lastFinishedPulling="2026-02-17 21:19:39.58298846 +0000 UTC m=+4254.874686771" observedRunningTime="2026-02-17 21:19:40.176495921 +0000 UTC m=+4255.468194242" watchObservedRunningTime="2026-02-17 21:19:40.190081846 +0000 UTC m=+4255.481780157" Feb 17 21:19:41 crc kubenswrapper[4793]: I0217 21:19:41.174159 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5mqmr" event={"ID":"8062b712-7bb0-4d33-9b1d-ca342eb7971f","Type":"ContainerStarted","Data":"0f47b7fd57e94e9084b153fff3fbfe5d2be9426bcedc09c9f68bef06e02512e4"} Feb 17 21:19:42 crc kubenswrapper[4793]: I0217 21:19:42.187799 4793 generic.go:334] "Generic (PLEG): container finished" 
podID="8062b712-7bb0-4d33-9b1d-ca342eb7971f" containerID="0f47b7fd57e94e9084b153fff3fbfe5d2be9426bcedc09c9f68bef06e02512e4" exitCode=0 Feb 17 21:19:42 crc kubenswrapper[4793]: I0217 21:19:42.187874 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5mqmr" event={"ID":"8062b712-7bb0-4d33-9b1d-ca342eb7971f","Type":"ContainerDied","Data":"0f47b7fd57e94e9084b153fff3fbfe5d2be9426bcedc09c9f68bef06e02512e4"} Feb 17 21:19:44 crc kubenswrapper[4793]: I0217 21:19:44.212404 4793 generic.go:334] "Generic (PLEG): container finished" podID="2bd129d2-f974-48ac-b469-d89e78cfe154" containerID="11703e7609fdc0be2cd69aa0022816307a4f76e89b5cfec65f7900f875a071c4" exitCode=0 Feb 17 21:19:44 crc kubenswrapper[4793]: I0217 21:19:44.212514 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vpsbb" event={"ID":"2bd129d2-f974-48ac-b469-d89e78cfe154","Type":"ContainerDied","Data":"11703e7609fdc0be2cd69aa0022816307a4f76e89b5cfec65f7900f875a071c4"} Feb 17 21:19:45 crc kubenswrapper[4793]: I0217 21:19:45.227196 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vpsbb" event={"ID":"2bd129d2-f974-48ac-b469-d89e78cfe154","Type":"ContainerStarted","Data":"93ce56797b5467c19088036541326104bd3b8cbda2b0a167e504cdc066464af0"} Feb 17 21:19:45 crc kubenswrapper[4793]: I0217 21:19:45.260323 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vpsbb" podStartSLOduration=2.783874902 podStartE2EDuration="9.260301037s" podCreationTimestamp="2026-02-17 21:19:36 +0000 UTC" firstStartedPulling="2026-02-17 21:19:38.135063031 +0000 UTC m=+4253.426761342" lastFinishedPulling="2026-02-17 21:19:44.611489166 +0000 UTC m=+4259.903187477" observedRunningTime="2026-02-17 21:19:45.256952675 +0000 UTC m=+4260.548650996" watchObservedRunningTime="2026-02-17 21:19:45.260301037 +0000 UTC m=+4260.551999348" Feb 17 21:19:46 
crc kubenswrapper[4793]: I0217 21:19:46.320465 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-n75ll" Feb 17 21:19:46 crc kubenswrapper[4793]: I0217 21:19:46.320761 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-n75ll" Feb 17 21:19:46 crc kubenswrapper[4793]: I0217 21:19:46.375252 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-n75ll" Feb 17 21:19:46 crc kubenswrapper[4793]: I0217 21:19:46.541009 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vpsbb" Feb 17 21:19:46 crc kubenswrapper[4793]: I0217 21:19:46.541063 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vpsbb" Feb 17 21:19:47 crc kubenswrapper[4793]: I0217 21:19:47.300082 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-n75ll" Feb 17 21:19:47 crc kubenswrapper[4793]: I0217 21:19:47.598845 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vpsbb" podUID="2bd129d2-f974-48ac-b469-d89e78cfe154" containerName="registry-server" probeResult="failure" output=< Feb 17 21:19:47 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Feb 17 21:19:47 crc kubenswrapper[4793]: > Feb 17 21:19:47 crc kubenswrapper[4793]: I0217 21:19:47.984451 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n75ll"] Feb 17 21:19:48 crc kubenswrapper[4793]: I0217 21:19:48.258201 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5mqmr" 
event={"ID":"8062b712-7bb0-4d33-9b1d-ca342eb7971f","Type":"ContainerStarted","Data":"552f95a265ceb7da4d4f9957b3fe2cf2ab4754c7f266ea78ef31120ac70bff65"} Feb 17 21:19:49 crc kubenswrapper[4793]: I0217 21:19:49.267939 4793 generic.go:334] "Generic (PLEG): container finished" podID="8062b712-7bb0-4d33-9b1d-ca342eb7971f" containerID="552f95a265ceb7da4d4f9957b3fe2cf2ab4754c7f266ea78ef31120ac70bff65" exitCode=0 Feb 17 21:19:49 crc kubenswrapper[4793]: I0217 21:19:49.268048 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5mqmr" event={"ID":"8062b712-7bb0-4d33-9b1d-ca342eb7971f","Type":"ContainerDied","Data":"552f95a265ceb7da4d4f9957b3fe2cf2ab4754c7f266ea78ef31120ac70bff65"} Feb 17 21:19:49 crc kubenswrapper[4793]: I0217 21:19:49.269638 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-n75ll" podUID="335081b9-61db-4f50-b301-a587007c72e0" containerName="registry-server" containerID="cri-o://5f16c8f184c06b45460636a27b54e42ab4d8f63cfecede7d739935ec66cd7d9b" gracePeriod=2 Feb 17 21:19:49 crc kubenswrapper[4793]: I0217 21:19:49.859839 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n75ll" Feb 17 21:19:49 crc kubenswrapper[4793]: I0217 21:19:49.948460 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vb4k8\" (UniqueName: \"kubernetes.io/projected/335081b9-61db-4f50-b301-a587007c72e0-kube-api-access-vb4k8\") pod \"335081b9-61db-4f50-b301-a587007c72e0\" (UID: \"335081b9-61db-4f50-b301-a587007c72e0\") " Feb 17 21:19:49 crc kubenswrapper[4793]: I0217 21:19:49.948614 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/335081b9-61db-4f50-b301-a587007c72e0-catalog-content\") pod \"335081b9-61db-4f50-b301-a587007c72e0\" (UID: \"335081b9-61db-4f50-b301-a587007c72e0\") " Feb 17 21:19:49 crc kubenswrapper[4793]: I0217 21:19:49.948774 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/335081b9-61db-4f50-b301-a587007c72e0-utilities\") pod \"335081b9-61db-4f50-b301-a587007c72e0\" (UID: \"335081b9-61db-4f50-b301-a587007c72e0\") " Feb 17 21:19:49 crc kubenswrapper[4793]: I0217 21:19:49.949702 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/335081b9-61db-4f50-b301-a587007c72e0-utilities" (OuterVolumeSpecName: "utilities") pod "335081b9-61db-4f50-b301-a587007c72e0" (UID: "335081b9-61db-4f50-b301-a587007c72e0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 21:19:49 crc kubenswrapper[4793]: I0217 21:19:49.970069 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/335081b9-61db-4f50-b301-a587007c72e0-kube-api-access-vb4k8" (OuterVolumeSpecName: "kube-api-access-vb4k8") pod "335081b9-61db-4f50-b301-a587007c72e0" (UID: "335081b9-61db-4f50-b301-a587007c72e0"). InnerVolumeSpecName "kube-api-access-vb4k8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 21:19:50 crc kubenswrapper[4793]: I0217 21:19:50.020599 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/335081b9-61db-4f50-b301-a587007c72e0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "335081b9-61db-4f50-b301-a587007c72e0" (UID: "335081b9-61db-4f50-b301-a587007c72e0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 21:19:50 crc kubenswrapper[4793]: I0217 21:19:50.051837 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vb4k8\" (UniqueName: \"kubernetes.io/projected/335081b9-61db-4f50-b301-a587007c72e0-kube-api-access-vb4k8\") on node \"crc\" DevicePath \"\"" Feb 17 21:19:50 crc kubenswrapper[4793]: I0217 21:19:50.051869 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/335081b9-61db-4f50-b301-a587007c72e0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 21:19:50 crc kubenswrapper[4793]: I0217 21:19:50.051878 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/335081b9-61db-4f50-b301-a587007c72e0-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 21:19:50 crc kubenswrapper[4793]: I0217 21:19:50.280872 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5mqmr" event={"ID":"8062b712-7bb0-4d33-9b1d-ca342eb7971f","Type":"ContainerStarted","Data":"09ce9b40511a4176b5e7fe5ec944806e22ac4ffba0a3844e68f86502d668c831"} Feb 17 21:19:50 crc kubenswrapper[4793]: I0217 21:19:50.283518 4793 generic.go:334] "Generic (PLEG): container finished" podID="335081b9-61db-4f50-b301-a587007c72e0" containerID="5f16c8f184c06b45460636a27b54e42ab4d8f63cfecede7d739935ec66cd7d9b" exitCode=0 Feb 17 21:19:50 crc kubenswrapper[4793]: I0217 21:19:50.283578 4793 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-n75ll" event={"ID":"335081b9-61db-4f50-b301-a587007c72e0","Type":"ContainerDied","Data":"5f16c8f184c06b45460636a27b54e42ab4d8f63cfecede7d739935ec66cd7d9b"}
Feb 17 21:19:50 crc kubenswrapper[4793]: I0217 21:19:50.283631 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n75ll" event={"ID":"335081b9-61db-4f50-b301-a587007c72e0","Type":"ContainerDied","Data":"dff80539154235f69bab4b30d28740cd4b9af4bdc7fbf7b01c0d39e2e4685fff"}
Feb 17 21:19:50 crc kubenswrapper[4793]: I0217 21:19:50.283652 4793 scope.go:117] "RemoveContainer" containerID="5f16c8f184c06b45460636a27b54e42ab4d8f63cfecede7d739935ec66cd7d9b"
Feb 17 21:19:50 crc kubenswrapper[4793]: I0217 21:19:50.284250 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n75ll"
Feb 17 21:19:50 crc kubenswrapper[4793]: I0217 21:19:50.314872 4793 scope.go:117] "RemoveContainer" containerID="fe7fdf53dc9a4bdae0a3776dfae0a93a076bc6198bd4c5222c99ec7b0a896541"
Feb 17 21:19:50 crc kubenswrapper[4793]: I0217 21:19:50.336127 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5mqmr" podStartSLOduration=3.812396257 podStartE2EDuration="11.336099665s" podCreationTimestamp="2026-02-17 21:19:39 +0000 UTC" firstStartedPulling="2026-02-17 21:19:42.190122409 +0000 UTC m=+4257.481820720" lastFinishedPulling="2026-02-17 21:19:49.713825817 +0000 UTC m=+4265.005524128" observedRunningTime="2026-02-17 21:19:50.300974751 +0000 UTC m=+4265.592673152" watchObservedRunningTime="2026-02-17 21:19:50.336099665 +0000 UTC m=+4265.627798016"
Feb 17 21:19:50 crc kubenswrapper[4793]: I0217 21:19:50.349045 4793 scope.go:117] "RemoveContainer" containerID="8be6cbd81b37b6e69917f7adf1433f5737e21569b454258b593a8b858e4580fe"
Feb 17 21:19:50 crc kubenswrapper[4793]: I0217 21:19:50.349201 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n75ll"]
Feb 17 21:19:50 crc kubenswrapper[4793]: I0217 21:19:50.359249 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-n75ll"]
Feb 17 21:19:50 crc kubenswrapper[4793]: I0217 21:19:50.386958 4793 scope.go:117] "RemoveContainer" containerID="5f16c8f184c06b45460636a27b54e42ab4d8f63cfecede7d739935ec66cd7d9b"
Feb 17 21:19:50 crc kubenswrapper[4793]: E0217 21:19:50.387398 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f16c8f184c06b45460636a27b54e42ab4d8f63cfecede7d739935ec66cd7d9b\": container with ID starting with 5f16c8f184c06b45460636a27b54e42ab4d8f63cfecede7d739935ec66cd7d9b not found: ID does not exist" containerID="5f16c8f184c06b45460636a27b54e42ab4d8f63cfecede7d739935ec66cd7d9b"
Feb 17 21:19:50 crc kubenswrapper[4793]: I0217 21:19:50.387441 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f16c8f184c06b45460636a27b54e42ab4d8f63cfecede7d739935ec66cd7d9b"} err="failed to get container status \"5f16c8f184c06b45460636a27b54e42ab4d8f63cfecede7d739935ec66cd7d9b\": rpc error: code = NotFound desc = could not find container \"5f16c8f184c06b45460636a27b54e42ab4d8f63cfecede7d739935ec66cd7d9b\": container with ID starting with 5f16c8f184c06b45460636a27b54e42ab4d8f63cfecede7d739935ec66cd7d9b not found: ID does not exist"
Feb 17 21:19:50 crc kubenswrapper[4793]: I0217 21:19:50.387467 4793 scope.go:117] "RemoveContainer" containerID="fe7fdf53dc9a4bdae0a3776dfae0a93a076bc6198bd4c5222c99ec7b0a896541"
Feb 17 21:19:50 crc kubenswrapper[4793]: E0217 21:19:50.387800 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe7fdf53dc9a4bdae0a3776dfae0a93a076bc6198bd4c5222c99ec7b0a896541\": container with ID starting with fe7fdf53dc9a4bdae0a3776dfae0a93a076bc6198bd4c5222c99ec7b0a896541 not found: ID does not exist" containerID="fe7fdf53dc9a4bdae0a3776dfae0a93a076bc6198bd4c5222c99ec7b0a896541"
Feb 17 21:19:50 crc kubenswrapper[4793]: I0217 21:19:50.387834 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe7fdf53dc9a4bdae0a3776dfae0a93a076bc6198bd4c5222c99ec7b0a896541"} err="failed to get container status \"fe7fdf53dc9a4bdae0a3776dfae0a93a076bc6198bd4c5222c99ec7b0a896541\": rpc error: code = NotFound desc = could not find container \"fe7fdf53dc9a4bdae0a3776dfae0a93a076bc6198bd4c5222c99ec7b0a896541\": container with ID starting with fe7fdf53dc9a4bdae0a3776dfae0a93a076bc6198bd4c5222c99ec7b0a896541 not found: ID does not exist"
Feb 17 21:19:50 crc kubenswrapper[4793]: I0217 21:19:50.387860 4793 scope.go:117] "RemoveContainer" containerID="8be6cbd81b37b6e69917f7adf1433f5737e21569b454258b593a8b858e4580fe"
Feb 17 21:19:50 crc kubenswrapper[4793]: E0217 21:19:50.388194 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8be6cbd81b37b6e69917f7adf1433f5737e21569b454258b593a8b858e4580fe\": container with ID starting with 8be6cbd81b37b6e69917f7adf1433f5737e21569b454258b593a8b858e4580fe not found: ID does not exist" containerID="8be6cbd81b37b6e69917f7adf1433f5737e21569b454258b593a8b858e4580fe"
Feb 17 21:19:50 crc kubenswrapper[4793]: I0217 21:19:50.388218 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8be6cbd81b37b6e69917f7adf1433f5737e21569b454258b593a8b858e4580fe"} err="failed to get container status \"8be6cbd81b37b6e69917f7adf1433f5737e21569b454258b593a8b858e4580fe\": rpc error: code = NotFound desc = could not find container \"8be6cbd81b37b6e69917f7adf1433f5737e21569b454258b593a8b858e4580fe\": container with ID starting with 8be6cbd81b37b6e69917f7adf1433f5737e21569b454258b593a8b858e4580fe not found: ID does not exist"
Feb 17 21:19:51 crc kubenswrapper[4793]: I0217 21:19:51.538654 4793 scope.go:117] "RemoveContainer" containerID="ad3c8e19b434a9e463b7e18317ce98a8878350f79303f0feed946cfd828131f6"
Feb 17 21:19:51 crc kubenswrapper[4793]: E0217 21:19:51.539285 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 21:19:51 crc kubenswrapper[4793]: I0217 21:19:51.549037 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="335081b9-61db-4f50-b301-a587007c72e0" path="/var/lib/kubelet/pods/335081b9-61db-4f50-b301-a587007c72e0/volumes"
Feb 17 21:19:52 crc kubenswrapper[4793]: I0217 21:19:52.538779 4793 scope.go:117] "RemoveContainer" containerID="f2476986fb1b0be3a6a5b01ca2a6b94940c3d4f63ac98acbe527af609a9dcde1"
Feb 17 21:19:52 crc kubenswrapper[4793]: E0217 21:19:52.539149 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:19:57 crc kubenswrapper[4793]: I0217 21:19:57.586476 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vpsbb" podUID="2bd129d2-f974-48ac-b469-d89e78cfe154" containerName="registry-server" probeResult="failure" output=<
Feb 17 21:19:57 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s
Feb 17 21:19:57 crc kubenswrapper[4793]: >
Feb 17 21:19:59 crc kubenswrapper[4793]: I0217 21:19:59.523350 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5mqmr"
Feb 17 21:19:59 crc kubenswrapper[4793]: I0217 21:19:59.523733 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5mqmr"
Feb 17 21:19:59 crc kubenswrapper[4793]: I0217 21:19:59.579849 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5mqmr"
Feb 17 21:20:00 crc kubenswrapper[4793]: I0217 21:20:00.452515 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5mqmr"
Feb 17 21:20:01 crc kubenswrapper[4793]: I0217 21:20:01.266380 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5mqmr"]
Feb 17 21:20:01 crc kubenswrapper[4793]: I0217 21:20:01.615855 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8w8zr"]
Feb 17 21:20:01 crc kubenswrapper[4793]: I0217 21:20:01.616147 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8w8zr" podUID="bd88ef10-9c55-455d-80a9-1d578498f602" containerName="registry-server" containerID="cri-o://2d2f6bddccd22289062d880ca26966eade8916f4369944d20267b90170b5935b" gracePeriod=2
Feb 17 21:20:02 crc kubenswrapper[4793]: I0217 21:20:02.082549 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8w8zr"
Feb 17 21:20:02 crc kubenswrapper[4793]: I0217 21:20:02.102261 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd88ef10-9c55-455d-80a9-1d578498f602-catalog-content\") pod \"bd88ef10-9c55-455d-80a9-1d578498f602\" (UID: \"bd88ef10-9c55-455d-80a9-1d578498f602\") "
Feb 17 21:20:02 crc kubenswrapper[4793]: I0217 21:20:02.102477 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lp9lf\" (UniqueName: \"kubernetes.io/projected/bd88ef10-9c55-455d-80a9-1d578498f602-kube-api-access-lp9lf\") pod \"bd88ef10-9c55-455d-80a9-1d578498f602\" (UID: \"bd88ef10-9c55-455d-80a9-1d578498f602\") "
Feb 17 21:20:02 crc kubenswrapper[4793]: I0217 21:20:02.102589 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd88ef10-9c55-455d-80a9-1d578498f602-utilities\") pod \"bd88ef10-9c55-455d-80a9-1d578498f602\" (UID: \"bd88ef10-9c55-455d-80a9-1d578498f602\") "
Feb 17 21:20:02 crc kubenswrapper[4793]: I0217 21:20:02.103173 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd88ef10-9c55-455d-80a9-1d578498f602-utilities" (OuterVolumeSpecName: "utilities") pod "bd88ef10-9c55-455d-80a9-1d578498f602" (UID: "bd88ef10-9c55-455d-80a9-1d578498f602"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 21:20:02 crc kubenswrapper[4793]: I0217 21:20:02.112005 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd88ef10-9c55-455d-80a9-1d578498f602-kube-api-access-lp9lf" (OuterVolumeSpecName: "kube-api-access-lp9lf") pod "bd88ef10-9c55-455d-80a9-1d578498f602" (UID: "bd88ef10-9c55-455d-80a9-1d578498f602"). InnerVolumeSpecName "kube-api-access-lp9lf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 21:20:02 crc kubenswrapper[4793]: I0217 21:20:02.168073 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd88ef10-9c55-455d-80a9-1d578498f602-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bd88ef10-9c55-455d-80a9-1d578498f602" (UID: "bd88ef10-9c55-455d-80a9-1d578498f602"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 21:20:02 crc kubenswrapper[4793]: I0217 21:20:02.205112 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd88ef10-9c55-455d-80a9-1d578498f602-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 21:20:02 crc kubenswrapper[4793]: I0217 21:20:02.205149 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd88ef10-9c55-455d-80a9-1d578498f602-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 21:20:02 crc kubenswrapper[4793]: I0217 21:20:02.205162 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lp9lf\" (UniqueName: \"kubernetes.io/projected/bd88ef10-9c55-455d-80a9-1d578498f602-kube-api-access-lp9lf\") on node \"crc\" DevicePath \"\""
Feb 17 21:20:02 crc kubenswrapper[4793]: I0217 21:20:02.427545 4793 generic.go:334] "Generic (PLEG): container finished" podID="bd88ef10-9c55-455d-80a9-1d578498f602" containerID="2d2f6bddccd22289062d880ca26966eade8916f4369944d20267b90170b5935b" exitCode=0
Feb 17 21:20:02 crc kubenswrapper[4793]: I0217 21:20:02.427599 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8w8zr"
Feb 17 21:20:02 crc kubenswrapper[4793]: I0217 21:20:02.427618 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8w8zr" event={"ID":"bd88ef10-9c55-455d-80a9-1d578498f602","Type":"ContainerDied","Data":"2d2f6bddccd22289062d880ca26966eade8916f4369944d20267b90170b5935b"}
Feb 17 21:20:02 crc kubenswrapper[4793]: I0217 21:20:02.428538 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8w8zr" event={"ID":"bd88ef10-9c55-455d-80a9-1d578498f602","Type":"ContainerDied","Data":"1311cf0cd24de19d27743363a34122fc5afaa1825f461758278c2f4568b79933"}
Feb 17 21:20:02 crc kubenswrapper[4793]: I0217 21:20:02.428581 4793 scope.go:117] "RemoveContainer" containerID="2d2f6bddccd22289062d880ca26966eade8916f4369944d20267b90170b5935b"
Feb 17 21:20:02 crc kubenswrapper[4793]: I0217 21:20:02.449075 4793 scope.go:117] "RemoveContainer" containerID="bcaa34595893ccba8832e223a2459a9d97bb746210b339db06debd162f4c7fbc"
Feb 17 21:20:02 crc kubenswrapper[4793]: I0217 21:20:02.469621 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8w8zr"]
Feb 17 21:20:02 crc kubenswrapper[4793]: I0217 21:20:02.478739 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8w8zr"]
Feb 17 21:20:02 crc kubenswrapper[4793]: I0217 21:20:02.493499 4793 scope.go:117] "RemoveContainer" containerID="230f571b08523220cb850f5eeecc796b50e0bf0571eedf32d4e84f9723038e6e"
Feb 17 21:20:02 crc kubenswrapper[4793]: I0217 21:20:02.514952 4793 scope.go:117] "RemoveContainer" containerID="2d2f6bddccd22289062d880ca26966eade8916f4369944d20267b90170b5935b"
Feb 17 21:20:02 crc kubenswrapper[4793]: E0217 21:20:02.515420 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d2f6bddccd22289062d880ca26966eade8916f4369944d20267b90170b5935b\": container with ID starting with 2d2f6bddccd22289062d880ca26966eade8916f4369944d20267b90170b5935b not found: ID does not exist" containerID="2d2f6bddccd22289062d880ca26966eade8916f4369944d20267b90170b5935b"
Feb 17 21:20:02 crc kubenswrapper[4793]: I0217 21:20:02.515463 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d2f6bddccd22289062d880ca26966eade8916f4369944d20267b90170b5935b"} err="failed to get container status \"2d2f6bddccd22289062d880ca26966eade8916f4369944d20267b90170b5935b\": rpc error: code = NotFound desc = could not find container \"2d2f6bddccd22289062d880ca26966eade8916f4369944d20267b90170b5935b\": container with ID starting with 2d2f6bddccd22289062d880ca26966eade8916f4369944d20267b90170b5935b not found: ID does not exist"
Feb 17 21:20:02 crc kubenswrapper[4793]: I0217 21:20:02.515493 4793 scope.go:117] "RemoveContainer" containerID="bcaa34595893ccba8832e223a2459a9d97bb746210b339db06debd162f4c7fbc"
Feb 17 21:20:02 crc kubenswrapper[4793]: E0217 21:20:02.515842 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcaa34595893ccba8832e223a2459a9d97bb746210b339db06debd162f4c7fbc\": container with ID starting with bcaa34595893ccba8832e223a2459a9d97bb746210b339db06debd162f4c7fbc not found: ID does not exist" containerID="bcaa34595893ccba8832e223a2459a9d97bb746210b339db06debd162f4c7fbc"
Feb 17 21:20:02 crc kubenswrapper[4793]: I0217 21:20:02.515868 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcaa34595893ccba8832e223a2459a9d97bb746210b339db06debd162f4c7fbc"} err="failed to get container status \"bcaa34595893ccba8832e223a2459a9d97bb746210b339db06debd162f4c7fbc\": rpc error: code = NotFound desc = could not find container \"bcaa34595893ccba8832e223a2459a9d97bb746210b339db06debd162f4c7fbc\": container with ID starting with bcaa34595893ccba8832e223a2459a9d97bb746210b339db06debd162f4c7fbc not found: ID does not exist"
Feb 17 21:20:02 crc kubenswrapper[4793]: I0217 21:20:02.515881 4793 scope.go:117] "RemoveContainer" containerID="230f571b08523220cb850f5eeecc796b50e0bf0571eedf32d4e84f9723038e6e"
Feb 17 21:20:02 crc kubenswrapper[4793]: E0217 21:20:02.516126 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"230f571b08523220cb850f5eeecc796b50e0bf0571eedf32d4e84f9723038e6e\": container with ID starting with 230f571b08523220cb850f5eeecc796b50e0bf0571eedf32d4e84f9723038e6e not found: ID does not exist" containerID="230f571b08523220cb850f5eeecc796b50e0bf0571eedf32d4e84f9723038e6e"
Feb 17 21:20:02 crc kubenswrapper[4793]: I0217 21:20:02.516171 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"230f571b08523220cb850f5eeecc796b50e0bf0571eedf32d4e84f9723038e6e"} err="failed to get container status \"230f571b08523220cb850f5eeecc796b50e0bf0571eedf32d4e84f9723038e6e\": rpc error: code = NotFound desc = could not find container \"230f571b08523220cb850f5eeecc796b50e0bf0571eedf32d4e84f9723038e6e\": container with ID starting with 230f571b08523220cb850f5eeecc796b50e0bf0571eedf32d4e84f9723038e6e not found: ID does not exist"
Feb 17 21:20:03 crc kubenswrapper[4793]: I0217 21:20:03.549792 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd88ef10-9c55-455d-80a9-1d578498f602" path="/var/lib/kubelet/pods/bd88ef10-9c55-455d-80a9-1d578498f602/volumes"
Feb 17 21:20:05 crc kubenswrapper[4793]: I0217 21:20:05.551464 4793 scope.go:117] "RemoveContainer" containerID="ad3c8e19b434a9e463b7e18317ce98a8878350f79303f0feed946cfd828131f6"
Feb 17 21:20:05 crc kubenswrapper[4793]: I0217 21:20:05.551576 4793 scope.go:117] "RemoveContainer" containerID="f2476986fb1b0be3a6a5b01ca2a6b94940c3d4f63ac98acbe527af609a9dcde1"
Feb 17 21:20:05 crc kubenswrapper[4793]: E0217 21:20:05.552034 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:20:05 crc kubenswrapper[4793]: E0217 21:20:05.552046 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 21:20:07 crc kubenswrapper[4793]: I0217 21:20:07.624080 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vpsbb" podUID="2bd129d2-f974-48ac-b469-d89e78cfe154" containerName="registry-server" probeResult="failure" output=<
Feb 17 21:20:07 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s
Feb 17 21:20:07 crc kubenswrapper[4793]: >
Feb 17 21:20:18 crc kubenswrapper[4793]: I0217 21:20:18.123094 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vpsbb" podUID="2bd129d2-f974-48ac-b469-d89e78cfe154" containerName="registry-server" probeResult="failure" output=<
Feb 17 21:20:18 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s
Feb 17 21:20:18 crc kubenswrapper[4793]: >
Feb 17 21:20:19 crc kubenswrapper[4793]: I0217 21:20:19.543876 4793 scope.go:117] "RemoveContainer" containerID="ad3c8e19b434a9e463b7e18317ce98a8878350f79303f0feed946cfd828131f6"
Feb 17 21:20:19 crc kubenswrapper[4793]: E0217 21:20:19.544400 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 21:20:20 crc kubenswrapper[4793]: I0217 21:20:20.538640 4793 scope.go:117] "RemoveContainer" containerID="f2476986fb1b0be3a6a5b01ca2a6b94940c3d4f63ac98acbe527af609a9dcde1"
Feb 17 21:20:20 crc kubenswrapper[4793]: E0217 21:20:20.538951 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:20:26 crc kubenswrapper[4793]: I0217 21:20:26.595676 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vpsbb"
Feb 17 21:20:26 crc kubenswrapper[4793]: I0217 21:20:26.653007 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vpsbb"
Feb 17 21:20:28 crc kubenswrapper[4793]: I0217 21:20:28.109786 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vpsbb"]
Feb 17 21:20:28 crc kubenswrapper[4793]: I0217 21:20:28.111451 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vpsbb" podUID="2bd129d2-f974-48ac-b469-d89e78cfe154" containerName="registry-server" containerID="cri-o://93ce56797b5467c19088036541326104bd3b8cbda2b0a167e504cdc066464af0" gracePeriod=2
Feb 17 21:20:28 crc kubenswrapper[4793]: I0217 21:20:28.716808 4793 generic.go:334] "Generic (PLEG): container finished" podID="2bd129d2-f974-48ac-b469-d89e78cfe154" containerID="93ce56797b5467c19088036541326104bd3b8cbda2b0a167e504cdc066464af0" exitCode=0
Feb 17 21:20:28 crc kubenswrapper[4793]: I0217 21:20:28.716857 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vpsbb" event={"ID":"2bd129d2-f974-48ac-b469-d89e78cfe154","Type":"ContainerDied","Data":"93ce56797b5467c19088036541326104bd3b8cbda2b0a167e504cdc066464af0"}
Feb 17 21:20:29 crc kubenswrapper[4793]: I0217 21:20:29.309371 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vpsbb"
Feb 17 21:20:29 crc kubenswrapper[4793]: I0217 21:20:29.313330 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bd129d2-f974-48ac-b469-d89e78cfe154-utilities\") pod \"2bd129d2-f974-48ac-b469-d89e78cfe154\" (UID: \"2bd129d2-f974-48ac-b469-d89e78cfe154\") "
Feb 17 21:20:29 crc kubenswrapper[4793]: I0217 21:20:29.313375 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pmjs\" (UniqueName: \"kubernetes.io/projected/2bd129d2-f974-48ac-b469-d89e78cfe154-kube-api-access-2pmjs\") pod \"2bd129d2-f974-48ac-b469-d89e78cfe154\" (UID: \"2bd129d2-f974-48ac-b469-d89e78cfe154\") "
Feb 17 21:20:29 crc kubenswrapper[4793]: I0217 21:20:29.313419 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bd129d2-f974-48ac-b469-d89e78cfe154-catalog-content\") pod \"2bd129d2-f974-48ac-b469-d89e78cfe154\" (UID: \"2bd129d2-f974-48ac-b469-d89e78cfe154\") "
Feb 17 21:20:29 crc kubenswrapper[4793]: I0217 21:20:29.314120 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2bd129d2-f974-48ac-b469-d89e78cfe154-utilities" (OuterVolumeSpecName: "utilities") pod "2bd129d2-f974-48ac-b469-d89e78cfe154" (UID: "2bd129d2-f974-48ac-b469-d89e78cfe154"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 21:20:29 crc kubenswrapper[4793]: I0217 21:20:29.320978 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bd129d2-f974-48ac-b469-d89e78cfe154-kube-api-access-2pmjs" (OuterVolumeSpecName: "kube-api-access-2pmjs") pod "2bd129d2-f974-48ac-b469-d89e78cfe154" (UID: "2bd129d2-f974-48ac-b469-d89e78cfe154"). InnerVolumeSpecName "kube-api-access-2pmjs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 21:20:29 crc kubenswrapper[4793]: I0217 21:20:29.416537 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bd129d2-f974-48ac-b469-d89e78cfe154-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 21:20:29 crc kubenswrapper[4793]: I0217 21:20:29.416580 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pmjs\" (UniqueName: \"kubernetes.io/projected/2bd129d2-f974-48ac-b469-d89e78cfe154-kube-api-access-2pmjs\") on node \"crc\" DevicePath \"\""
Feb 17 21:20:29 crc kubenswrapper[4793]: I0217 21:20:29.429250 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2bd129d2-f974-48ac-b469-d89e78cfe154-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2bd129d2-f974-48ac-b469-d89e78cfe154" (UID: "2bd129d2-f974-48ac-b469-d89e78cfe154"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 21:20:29 crc kubenswrapper[4793]: I0217 21:20:29.518168 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bd129d2-f974-48ac-b469-d89e78cfe154-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 21:20:29 crc kubenswrapper[4793]: I0217 21:20:29.728676 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vpsbb" event={"ID":"2bd129d2-f974-48ac-b469-d89e78cfe154","Type":"ContainerDied","Data":"ce1b5dc3bf9e0debced9a8c5d76c0285d1358db1fd06ac6b4ab50a4e80784360"}
Feb 17 21:20:29 crc kubenswrapper[4793]: I0217 21:20:29.728959 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vpsbb"
Feb 17 21:20:29 crc kubenswrapper[4793]: I0217 21:20:29.729061 4793 scope.go:117] "RemoveContainer" containerID="93ce56797b5467c19088036541326104bd3b8cbda2b0a167e504cdc066464af0"
Feb 17 21:20:29 crc kubenswrapper[4793]: I0217 21:20:29.749875 4793 scope.go:117] "RemoveContainer" containerID="11703e7609fdc0be2cd69aa0022816307a4f76e89b5cfec65f7900f875a071c4"
Feb 17 21:20:29 crc kubenswrapper[4793]: I0217 21:20:29.751845 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vpsbb"]
Feb 17 21:20:29 crc kubenswrapper[4793]: I0217 21:20:29.781165 4793 scope.go:117] "RemoveContainer" containerID="d71f314e5ced6fe478f28f246a37ff8561e41e770e162ead6868ffcf79fd0ed6"
Feb 17 21:20:29 crc kubenswrapper[4793]: I0217 21:20:29.782573 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vpsbb"]
Feb 17 21:20:31 crc kubenswrapper[4793]: I0217 21:20:31.559515 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bd129d2-f974-48ac-b469-d89e78cfe154" path="/var/lib/kubelet/pods/2bd129d2-f974-48ac-b469-d89e78cfe154/volumes"
Feb 17 21:20:33 crc kubenswrapper[4793]: I0217 21:20:33.538725 4793 scope.go:117] "RemoveContainer" containerID="ad3c8e19b434a9e463b7e18317ce98a8878350f79303f0feed946cfd828131f6"
Feb 17 21:20:33 crc kubenswrapper[4793]: I0217 21:20:33.539175 4793 scope.go:117] "RemoveContainer" containerID="f2476986fb1b0be3a6a5b01ca2a6b94940c3d4f63ac98acbe527af609a9dcde1"
Feb 17 21:20:33 crc kubenswrapper[4793]: E0217 21:20:33.539437 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 21:20:34 crc kubenswrapper[4793]: I0217 21:20:34.791936 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerStarted","Data":"a69a650f1b8cf6079bf6e43895158c3026f78098b1ff4d27c6bc0eaf7476f3e3"}
Feb 17 21:20:35 crc kubenswrapper[4793]: I0217 21:20:35.596750 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0"
Feb 17 21:20:35 crc kubenswrapper[4793]: I0217 21:20:35.597102 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0"
Feb 17 21:20:35 crc kubenswrapper[4793]: I0217 21:20:35.625071 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0"
Feb 17 21:20:35 crc kubenswrapper[4793]: I0217 21:20:35.841346 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0"
Feb 17 21:20:37 crc kubenswrapper[4793]: I0217 21:20:37.828271 4793 generic.go:334] "Generic (PLEG): container finished" podID="02d26164-0fa4-4020-9224-b7760a490987" containerID="a69a650f1b8cf6079bf6e43895158c3026f78098b1ff4d27c6bc0eaf7476f3e3" exitCode=1
Feb 17 21:20:37 crc kubenswrapper[4793]: I0217 21:20:37.828348 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerDied","Data":"a69a650f1b8cf6079bf6e43895158c3026f78098b1ff4d27c6bc0eaf7476f3e3"}
Feb 17 21:20:37 crc kubenswrapper[4793]: I0217 21:20:37.828850 4793 scope.go:117] "RemoveContainer" containerID="f2476986fb1b0be3a6a5b01ca2a6b94940c3d4f63ac98acbe527af609a9dcde1"
Feb 17 21:20:37 crc kubenswrapper[4793]: I0217 21:20:37.829368 4793 scope.go:117] "RemoveContainer" containerID="a69a650f1b8cf6079bf6e43895158c3026f78098b1ff4d27c6bc0eaf7476f3e3"
Feb 17 21:20:37 crc kubenswrapper[4793]: E0217 21:20:37.830004 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:20:38 crc kubenswrapper[4793]: I0217 21:20:38.845240 4793 scope.go:117] "RemoveContainer" containerID="a69a650f1b8cf6079bf6e43895158c3026f78098b1ff4d27c6bc0eaf7476f3e3"
Feb 17 21:20:38 crc kubenswrapper[4793]: E0217 21:20:38.846333 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:20:40 crc kubenswrapper[4793]: I0217 21:20:40.595965 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0"
Feb 17 21:20:40 crc kubenswrapper[4793]: I0217 21:20:40.597263 4793 scope.go:117] "RemoveContainer" containerID="a69a650f1b8cf6079bf6e43895158c3026f78098b1ff4d27c6bc0eaf7476f3e3"
Feb 17 21:20:40 crc kubenswrapper[4793]: E0217 21:20:40.597679 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:20:45 crc kubenswrapper[4793]: I0217 21:20:45.595778 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0"
Feb 17 21:20:45 crc kubenswrapper[4793]: I0217 21:20:45.596422 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-applier-0"
Feb 17 21:20:45 crc kubenswrapper[4793]: I0217 21:20:45.597939 4793 scope.go:117] "RemoveContainer" containerID="a69a650f1b8cf6079bf6e43895158c3026f78098b1ff4d27c6bc0eaf7476f3e3"
Feb 17 21:20:45 crc kubenswrapper[4793]: E0217 21:20:45.598262 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:20:46 crc kubenswrapper[4793]: I0217 21:20:46.539119 4793 scope.go:117] "RemoveContainer" containerID="ad3c8e19b434a9e463b7e18317ce98a8878350f79303f0feed946cfd828131f6"
Feb 17 21:20:46 crc kubenswrapper[4793]: E0217 21:20:46.539661 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 21:21:00 crc kubenswrapper[4793]: I0217 21:21:00.538761 4793 scope.go:117] "RemoveContainer" containerID="a69a650f1b8cf6079bf6e43895158c3026f78098b1ff4d27c6bc0eaf7476f3e3"
Feb 17 21:21:00 crc kubenswrapper[4793]: E0217 21:21:00.539751 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:21:01 crc kubenswrapper[4793]: I0217 21:21:01.539238 4793 scope.go:117] "RemoveContainer" containerID="ad3c8e19b434a9e463b7e18317ce98a8878350f79303f0feed946cfd828131f6"
Feb 17 21:21:01 crc kubenswrapper[4793]: E0217 21:21:01.540256 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 21:21:15 crc kubenswrapper[4793]: I0217 21:21:15.553438 4793 scope.go:117] "RemoveContainer" containerID="ad3c8e19b434a9e463b7e18317ce98a8878350f79303f0feed946cfd828131f6"
Feb 17 21:21:15 crc kubenswrapper[4793]: I0217 21:21:15.554443 4793 scope.go:117] "RemoveContainer" containerID="a69a650f1b8cf6079bf6e43895158c3026f78098b1ff4d27c6bc0eaf7476f3e3"
Feb 17 21:21:15 crc kubenswrapper[4793]: E0217 21:21:15.554587 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 21:21:15 crc kubenswrapper[4793]: E0217 21:21:15.555108 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:21:26 crc kubenswrapper[4793]: I0217 21:21:26.539347 4793 scope.go:117] "RemoveContainer" containerID="a69a650f1b8cf6079bf6e43895158c3026f78098b1ff4d27c6bc0eaf7476f3e3"
Feb 17 21:21:26 crc kubenswrapper[4793]: E0217 21:21:26.540085 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:21:26 crc kubenswrapper[4793]: I0217 21:21:26.540427 4793 scope.go:117] "RemoveContainer" containerID="ad3c8e19b434a9e463b7e18317ce98a8878350f79303f0feed946cfd828131f6"
Feb 17 21:21:26 crc kubenswrapper[4793]: E0217 21:21:26.540621 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 21:21:37 crc kubenswrapper[4793]: I0217 21:21:37.462745 4793 generic.go:334] "Generic (PLEG): container finished" podID="46d312cc-dda9-4d5e-bea8-1559405ca6b9" containerID="9be37e419ba47ae9dd2c095d5fd0e73da02be05f61b9f2dba3e81d965064595f" exitCode=0
Feb 17 21:21:37 crc kubenswrapper[4793]: I0217 21:21:37.462846 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-l9shm" event={"ID":"46d312cc-dda9-4d5e-bea8-1559405ca6b9","Type":"ContainerDied","Data":"9be37e419ba47ae9dd2c095d5fd0e73da02be05f61b9f2dba3e81d965064595f"}
Feb 17 21:21:37 crc kubenswrapper[4793]: I0217 21:21:37.539660 4793 scope.go:117] "RemoveContainer" containerID="a69a650f1b8cf6079bf6e43895158c3026f78098b1ff4d27c6bc0eaf7476f3e3"
Feb 17 21:21:37 crc kubenswrapper[4793]: E0217 21:21:37.540221 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:21:38 crc kubenswrapper[4793]: I0217 21:21:38.539555 4793 scope.go:117] "RemoveContainer" containerID="ad3c8e19b434a9e463b7e18317ce98a8878350f79303f0feed946cfd828131f6"
Feb 17 21:21:38 crc kubenswrapper[4793]: E0217 21:21:38.540552 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 21:21:38 crc kubenswrapper[4793]: I0217 21:21:38.960385 4793 util.go:48] "No ready sandbox
for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-l9shm" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.093170 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/46d312cc-dda9-4d5e-bea8-1559405ca6b9-libvirt-secret-0\") pod \"46d312cc-dda9-4d5e-bea8-1559405ca6b9\" (UID: \"46d312cc-dda9-4d5e-bea8-1559405ca6b9\") " Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.093295 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46d312cc-dda9-4d5e-bea8-1559405ca6b9-libvirt-combined-ca-bundle\") pod \"46d312cc-dda9-4d5e-bea8-1559405ca6b9\" (UID: \"46d312cc-dda9-4d5e-bea8-1559405ca6b9\") " Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.093431 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46d312cc-dda9-4d5e-bea8-1559405ca6b9-inventory\") pod \"46d312cc-dda9-4d5e-bea8-1559405ca6b9\" (UID: \"46d312cc-dda9-4d5e-bea8-1559405ca6b9\") " Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.093477 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/46d312cc-dda9-4d5e-bea8-1559405ca6b9-ssh-key-openstack-edpm-ipam\") pod \"46d312cc-dda9-4d5e-bea8-1559405ca6b9\" (UID: \"46d312cc-dda9-4d5e-bea8-1559405ca6b9\") " Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.093531 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8jx4\" (UniqueName: \"kubernetes.io/projected/46d312cc-dda9-4d5e-bea8-1559405ca6b9-kube-api-access-c8jx4\") pod \"46d312cc-dda9-4d5e-bea8-1559405ca6b9\" (UID: \"46d312cc-dda9-4d5e-bea8-1559405ca6b9\") " Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.106040 4793 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46d312cc-dda9-4d5e-bea8-1559405ca6b9-kube-api-access-c8jx4" (OuterVolumeSpecName: "kube-api-access-c8jx4") pod "46d312cc-dda9-4d5e-bea8-1559405ca6b9" (UID: "46d312cc-dda9-4d5e-bea8-1559405ca6b9"). InnerVolumeSpecName "kube-api-access-c8jx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.113269 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46d312cc-dda9-4d5e-bea8-1559405ca6b9-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "46d312cc-dda9-4d5e-bea8-1559405ca6b9" (UID: "46d312cc-dda9-4d5e-bea8-1559405ca6b9"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.122766 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46d312cc-dda9-4d5e-bea8-1559405ca6b9-inventory" (OuterVolumeSpecName: "inventory") pod "46d312cc-dda9-4d5e-bea8-1559405ca6b9" (UID: "46d312cc-dda9-4d5e-bea8-1559405ca6b9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.134134 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46d312cc-dda9-4d5e-bea8-1559405ca6b9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "46d312cc-dda9-4d5e-bea8-1559405ca6b9" (UID: "46d312cc-dda9-4d5e-bea8-1559405ca6b9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.136076 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46d312cc-dda9-4d5e-bea8-1559405ca6b9-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "46d312cc-dda9-4d5e-bea8-1559405ca6b9" (UID: "46d312cc-dda9-4d5e-bea8-1559405ca6b9"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.197222 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8jx4\" (UniqueName: \"kubernetes.io/projected/46d312cc-dda9-4d5e-bea8-1559405ca6b9-kube-api-access-c8jx4\") on node \"crc\" DevicePath \"\"" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.197263 4793 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/46d312cc-dda9-4d5e-bea8-1559405ca6b9-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.197277 4793 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46d312cc-dda9-4d5e-bea8-1559405ca6b9-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.197291 4793 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46d312cc-dda9-4d5e-bea8-1559405ca6b9-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.197307 4793 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/46d312cc-dda9-4d5e-bea8-1559405ca6b9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.492133 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-l9shm" event={"ID":"46d312cc-dda9-4d5e-bea8-1559405ca6b9","Type":"ContainerDied","Data":"e009e3361cd9c1a39f5a4cbb9b3268fcdbaeceb12d04fa53164155fad7eb996f"} Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.492243 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e009e3361cd9c1a39f5a4cbb9b3268fcdbaeceb12d04fa53164155fad7eb996f" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.492357 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-l9shm" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.620275 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-gdrlm"] Feb 17 21:21:39 crc kubenswrapper[4793]: E0217 21:21:39.620742 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46d312cc-dda9-4d5e-bea8-1559405ca6b9" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.620759 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="46d312cc-dda9-4d5e-bea8-1559405ca6b9" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 17 21:21:39 crc kubenswrapper[4793]: E0217 21:21:39.620774 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd88ef10-9c55-455d-80a9-1d578498f602" containerName="extract-content" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.620784 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd88ef10-9c55-455d-80a9-1d578498f602" containerName="extract-content" Feb 17 21:21:39 crc kubenswrapper[4793]: E0217 21:21:39.620808 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bd129d2-f974-48ac-b469-d89e78cfe154" containerName="extract-utilities" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.620817 4793 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2bd129d2-f974-48ac-b469-d89e78cfe154" containerName="extract-utilities" Feb 17 21:21:39 crc kubenswrapper[4793]: E0217 21:21:39.620832 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="335081b9-61db-4f50-b301-a587007c72e0" containerName="extract-content" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.620842 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="335081b9-61db-4f50-b301-a587007c72e0" containerName="extract-content" Feb 17 21:21:39 crc kubenswrapper[4793]: E0217 21:21:39.620867 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bd129d2-f974-48ac-b469-d89e78cfe154" containerName="registry-server" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.620878 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bd129d2-f974-48ac-b469-d89e78cfe154" containerName="registry-server" Feb 17 21:21:39 crc kubenswrapper[4793]: E0217 21:21:39.620899 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bd129d2-f974-48ac-b469-d89e78cfe154" containerName="extract-content" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.620910 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bd129d2-f974-48ac-b469-d89e78cfe154" containerName="extract-content" Feb 17 21:21:39 crc kubenswrapper[4793]: E0217 21:21:39.620929 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="335081b9-61db-4f50-b301-a587007c72e0" containerName="extract-utilities" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.620938 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="335081b9-61db-4f50-b301-a587007c72e0" containerName="extract-utilities" Feb 17 21:21:39 crc kubenswrapper[4793]: E0217 21:21:39.620962 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="335081b9-61db-4f50-b301-a587007c72e0" containerName="registry-server" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.620971 4793 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="335081b9-61db-4f50-b301-a587007c72e0" containerName="registry-server" Feb 17 21:21:39 crc kubenswrapper[4793]: E0217 21:21:39.620993 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd88ef10-9c55-455d-80a9-1d578498f602" containerName="extract-utilities" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.621002 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd88ef10-9c55-455d-80a9-1d578498f602" containerName="extract-utilities" Feb 17 21:21:39 crc kubenswrapper[4793]: E0217 21:21:39.621016 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd88ef10-9c55-455d-80a9-1d578498f602" containerName="registry-server" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.621024 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd88ef10-9c55-455d-80a9-1d578498f602" containerName="registry-server" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.621266 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="46d312cc-dda9-4d5e-bea8-1559405ca6b9" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.621301 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="335081b9-61db-4f50-b301-a587007c72e0" containerName="registry-server" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.621326 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bd129d2-f974-48ac-b469-d89e78cfe154" containerName="registry-server" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.621345 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd88ef10-9c55-455d-80a9-1d578498f602" containerName="registry-server" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.622392 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gdrlm" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.624638 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.625145 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.625845 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.626118 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6zcmz" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.626321 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.627076 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.627259 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.630576 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-gdrlm"] Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.725279 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/31f287e9-abcf-47b4-b249-98613eabec98-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gdrlm\" (UID: \"31f287e9-abcf-47b4-b249-98613eabec98\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gdrlm" Feb 17 21:21:39 crc kubenswrapper[4793]: 
I0217 21:21:39.725332 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkj7c\" (UniqueName: \"kubernetes.io/projected/31f287e9-abcf-47b4-b249-98613eabec98-kube-api-access-wkj7c\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gdrlm\" (UID: \"31f287e9-abcf-47b4-b249-98613eabec98\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gdrlm" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.725353 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/31f287e9-abcf-47b4-b249-98613eabec98-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gdrlm\" (UID: \"31f287e9-abcf-47b4-b249-98613eabec98\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gdrlm" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.725375 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/31f287e9-abcf-47b4-b249-98613eabec98-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gdrlm\" (UID: \"31f287e9-abcf-47b4-b249-98613eabec98\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gdrlm" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.725450 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/31f287e9-abcf-47b4-b249-98613eabec98-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gdrlm\" (UID: \"31f287e9-abcf-47b4-b249-98613eabec98\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gdrlm" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.725535 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: 
\"kubernetes.io/secret/31f287e9-abcf-47b4-b249-98613eabec98-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gdrlm\" (UID: \"31f287e9-abcf-47b4-b249-98613eabec98\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gdrlm" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.725987 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/31f287e9-abcf-47b4-b249-98613eabec98-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gdrlm\" (UID: \"31f287e9-abcf-47b4-b249-98613eabec98\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gdrlm" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.726071 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/31f287e9-abcf-47b4-b249-98613eabec98-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gdrlm\" (UID: \"31f287e9-abcf-47b4-b249-98613eabec98\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gdrlm" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.726219 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/31f287e9-abcf-47b4-b249-98613eabec98-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gdrlm\" (UID: \"31f287e9-abcf-47b4-b249-98613eabec98\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gdrlm" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.726360 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/31f287e9-abcf-47b4-b249-98613eabec98-nova-cell1-compute-config-2\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-gdrlm\" (UID: \"31f287e9-abcf-47b4-b249-98613eabec98\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gdrlm" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.726522 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31f287e9-abcf-47b4-b249-98613eabec98-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gdrlm\" (UID: \"31f287e9-abcf-47b4-b249-98613eabec98\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gdrlm" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.829960 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/31f287e9-abcf-47b4-b249-98613eabec98-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gdrlm\" (UID: \"31f287e9-abcf-47b4-b249-98613eabec98\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gdrlm" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.830042 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/31f287e9-abcf-47b4-b249-98613eabec98-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gdrlm\" (UID: \"31f287e9-abcf-47b4-b249-98613eabec98\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gdrlm" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.830151 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/31f287e9-abcf-47b4-b249-98613eabec98-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gdrlm\" (UID: \"31f287e9-abcf-47b4-b249-98613eabec98\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gdrlm" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 
21:21:39.830177 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/31f287e9-abcf-47b4-b249-98613eabec98-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gdrlm\" (UID: \"31f287e9-abcf-47b4-b249-98613eabec98\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gdrlm" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.830224 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/31f287e9-abcf-47b4-b249-98613eabec98-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gdrlm\" (UID: \"31f287e9-abcf-47b4-b249-98613eabec98\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gdrlm" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.830936 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/31f287e9-abcf-47b4-b249-98613eabec98-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gdrlm\" (UID: \"31f287e9-abcf-47b4-b249-98613eabec98\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gdrlm" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.831011 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31f287e9-abcf-47b4-b249-98613eabec98-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gdrlm\" (UID: \"31f287e9-abcf-47b4-b249-98613eabec98\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gdrlm" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.831078 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/31f287e9-abcf-47b4-b249-98613eabec98-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gdrlm\" (UID: \"31f287e9-abcf-47b4-b249-98613eabec98\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gdrlm" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.831137 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/31f287e9-abcf-47b4-b249-98613eabec98-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gdrlm\" (UID: \"31f287e9-abcf-47b4-b249-98613eabec98\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gdrlm" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.831163 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkj7c\" (UniqueName: \"kubernetes.io/projected/31f287e9-abcf-47b4-b249-98613eabec98-kube-api-access-wkj7c\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gdrlm\" (UID: \"31f287e9-abcf-47b4-b249-98613eabec98\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gdrlm" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.831246 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/31f287e9-abcf-47b4-b249-98613eabec98-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gdrlm\" (UID: \"31f287e9-abcf-47b4-b249-98613eabec98\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gdrlm" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.834024 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/31f287e9-abcf-47b4-b249-98613eabec98-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gdrlm\" (UID: \"31f287e9-abcf-47b4-b249-98613eabec98\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gdrlm" Feb 17 
21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.836157 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/31f287e9-abcf-47b4-b249-98613eabec98-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gdrlm\" (UID: \"31f287e9-abcf-47b4-b249-98613eabec98\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gdrlm" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.836183 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/31f287e9-abcf-47b4-b249-98613eabec98-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gdrlm\" (UID: \"31f287e9-abcf-47b4-b249-98613eabec98\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gdrlm" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.836265 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/31f287e9-abcf-47b4-b249-98613eabec98-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gdrlm\" (UID: \"31f287e9-abcf-47b4-b249-98613eabec98\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gdrlm" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.836487 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31f287e9-abcf-47b4-b249-98613eabec98-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gdrlm\" (UID: \"31f287e9-abcf-47b4-b249-98613eabec98\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gdrlm" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.836715 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: 
\"kubernetes.io/secret/31f287e9-abcf-47b4-b249-98613eabec98-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gdrlm\" (UID: \"31f287e9-abcf-47b4-b249-98613eabec98\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gdrlm" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.836923 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/31f287e9-abcf-47b4-b249-98613eabec98-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gdrlm\" (UID: \"31f287e9-abcf-47b4-b249-98613eabec98\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gdrlm" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.838375 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/31f287e9-abcf-47b4-b249-98613eabec98-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gdrlm\" (UID: \"31f287e9-abcf-47b4-b249-98613eabec98\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gdrlm" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.838597 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/31f287e9-abcf-47b4-b249-98613eabec98-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gdrlm\" (UID: \"31f287e9-abcf-47b4-b249-98613eabec98\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gdrlm" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.848453 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/31f287e9-abcf-47b4-b249-98613eabec98-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gdrlm\" (UID: \"31f287e9-abcf-47b4-b249-98613eabec98\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gdrlm" Feb 17 21:21:39 crc 
kubenswrapper[4793]: I0217 21:21:39.858564 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkj7c\" (UniqueName: \"kubernetes.io/projected/31f287e9-abcf-47b4-b249-98613eabec98-kube-api-access-wkj7c\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gdrlm\" (UID: \"31f287e9-abcf-47b4-b249-98613eabec98\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gdrlm" Feb 17 21:21:39 crc kubenswrapper[4793]: I0217 21:21:39.941449 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gdrlm" Feb 17 21:21:40 crc kubenswrapper[4793]: I0217 21:21:40.509808 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-gdrlm"] Feb 17 21:21:41 crc kubenswrapper[4793]: I0217 21:21:41.529135 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gdrlm" event={"ID":"31f287e9-abcf-47b4-b249-98613eabec98","Type":"ContainerStarted","Data":"292c62d933bf5b927493a6ac7be21a667f2c53705d5a18e80380d0b47c8c5fee"} Feb 17 21:21:42 crc kubenswrapper[4793]: I0217 21:21:42.539473 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gdrlm" event={"ID":"31f287e9-abcf-47b4-b249-98613eabec98","Type":"ContainerStarted","Data":"cd18430519865044f503986fb501dcf08698652f3926999f7d4e33e5422bc5d1"} Feb 17 21:21:42 crc kubenswrapper[4793]: I0217 21:21:42.557089 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gdrlm" podStartSLOduration=3.045146096 podStartE2EDuration="3.557066759s" podCreationTimestamp="2026-02-17 21:21:39 +0000 UTC" firstStartedPulling="2026-02-17 21:21:40.512483941 +0000 UTC m=+4375.804182252" lastFinishedPulling="2026-02-17 21:21:41.024404564 +0000 UTC m=+4376.316102915" observedRunningTime="2026-02-17 21:21:42.555044949 +0000 
UTC m=+4377.846743270" watchObservedRunningTime="2026-02-17 21:21:42.557066759 +0000 UTC m=+4377.848765070" Feb 17 21:21:48 crc kubenswrapper[4793]: I0217 21:21:48.538598 4793 scope.go:117] "RemoveContainer" containerID="a69a650f1b8cf6079bf6e43895158c3026f78098b1ff4d27c6bc0eaf7476f3e3" Feb 17 21:21:48 crc kubenswrapper[4793]: E0217 21:21:48.539774 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:21:50 crc kubenswrapper[4793]: I0217 21:21:50.539650 4793 scope.go:117] "RemoveContainer" containerID="ad3c8e19b434a9e463b7e18317ce98a8878350f79303f0feed946cfd828131f6" Feb 17 21:21:50 crc kubenswrapper[4793]: E0217 21:21:50.540564 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:22:00 crc kubenswrapper[4793]: I0217 21:22:00.539441 4793 scope.go:117] "RemoveContainer" containerID="a69a650f1b8cf6079bf6e43895158c3026f78098b1ff4d27c6bc0eaf7476f3e3" Feb 17 21:22:00 crc kubenswrapper[4793]: E0217 21:22:00.542384 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:22:02 crc 
kubenswrapper[4793]: I0217 21:22:02.539815 4793 scope.go:117] "RemoveContainer" containerID="ad3c8e19b434a9e463b7e18317ce98a8878350f79303f0feed946cfd828131f6" Feb 17 21:22:02 crc kubenswrapper[4793]: E0217 21:22:02.540505 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:22:12 crc kubenswrapper[4793]: I0217 21:22:12.540411 4793 scope.go:117] "RemoveContainer" containerID="a69a650f1b8cf6079bf6e43895158c3026f78098b1ff4d27c6bc0eaf7476f3e3" Feb 17 21:22:12 crc kubenswrapper[4793]: E0217 21:22:12.542366 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:22:13 crc kubenswrapper[4793]: I0217 21:22:13.538652 4793 scope.go:117] "RemoveContainer" containerID="ad3c8e19b434a9e463b7e18317ce98a8878350f79303f0feed946cfd828131f6" Feb 17 21:22:13 crc kubenswrapper[4793]: E0217 21:22:13.539031 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:22:24 crc kubenswrapper[4793]: I0217 21:22:24.539466 4793 scope.go:117] 
"RemoveContainer" containerID="ad3c8e19b434a9e463b7e18317ce98a8878350f79303f0feed946cfd828131f6" Feb 17 21:22:24 crc kubenswrapper[4793]: E0217 21:22:24.540542 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:22:25 crc kubenswrapper[4793]: I0217 21:22:25.547498 4793 scope.go:117] "RemoveContainer" containerID="a69a650f1b8cf6079bf6e43895158c3026f78098b1ff4d27c6bc0eaf7476f3e3" Feb 17 21:22:25 crc kubenswrapper[4793]: E0217 21:22:25.548075 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:22:36 crc kubenswrapper[4793]: I0217 21:22:36.539847 4793 scope.go:117] "RemoveContainer" containerID="a69a650f1b8cf6079bf6e43895158c3026f78098b1ff4d27c6bc0eaf7476f3e3" Feb 17 21:22:36 crc kubenswrapper[4793]: E0217 21:22:36.541027 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:22:38 crc kubenswrapper[4793]: I0217 21:22:38.539901 4793 scope.go:117] "RemoveContainer" containerID="ad3c8e19b434a9e463b7e18317ce98a8878350f79303f0feed946cfd828131f6" Feb 17 21:22:38 crc kubenswrapper[4793]: E0217 
21:22:38.540833 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:22:49 crc kubenswrapper[4793]: I0217 21:22:49.539128 4793 scope.go:117] "RemoveContainer" containerID="a69a650f1b8cf6079bf6e43895158c3026f78098b1ff4d27c6bc0eaf7476f3e3" Feb 17 21:22:49 crc kubenswrapper[4793]: E0217 21:22:49.539952 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:22:52 crc kubenswrapper[4793]: I0217 21:22:52.539990 4793 scope.go:117] "RemoveContainer" containerID="ad3c8e19b434a9e463b7e18317ce98a8878350f79303f0feed946cfd828131f6" Feb 17 21:22:52 crc kubenswrapper[4793]: E0217 21:22:52.540803 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:23:01 crc kubenswrapper[4793]: I0217 21:23:01.540158 4793 scope.go:117] "RemoveContainer" containerID="a69a650f1b8cf6079bf6e43895158c3026f78098b1ff4d27c6bc0eaf7476f3e3" Feb 17 21:23:01 crc kubenswrapper[4793]: E0217 21:23:01.541061 4793 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:23:05 crc kubenswrapper[4793]: I0217 21:23:05.551209 4793 scope.go:117] "RemoveContainer" containerID="ad3c8e19b434a9e463b7e18317ce98a8878350f79303f0feed946cfd828131f6" Feb 17 21:23:05 crc kubenswrapper[4793]: E0217 21:23:05.552171 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:23:15 crc kubenswrapper[4793]: I0217 21:23:15.546891 4793 scope.go:117] "RemoveContainer" containerID="a69a650f1b8cf6079bf6e43895158c3026f78098b1ff4d27c6bc0eaf7476f3e3" Feb 17 21:23:15 crc kubenswrapper[4793]: E0217 21:23:15.547882 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:23:17 crc kubenswrapper[4793]: I0217 21:23:17.538421 4793 scope.go:117] "RemoveContainer" containerID="ad3c8e19b434a9e463b7e18317ce98a8878350f79303f0feed946cfd828131f6" Feb 17 21:23:17 crc kubenswrapper[4793]: E0217 21:23:17.539128 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:23:27 crc kubenswrapper[4793]: I0217 21:23:27.539561 4793 scope.go:117] "RemoveContainer" containerID="a69a650f1b8cf6079bf6e43895158c3026f78098b1ff4d27c6bc0eaf7476f3e3" Feb 17 21:23:27 crc kubenswrapper[4793]: E0217 21:23:27.540481 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:23:32 crc kubenswrapper[4793]: I0217 21:23:32.539317 4793 scope.go:117] "RemoveContainer" containerID="ad3c8e19b434a9e463b7e18317ce98a8878350f79303f0feed946cfd828131f6" Feb 17 21:23:32 crc kubenswrapper[4793]: E0217 21:23:32.540190 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:23:42 crc kubenswrapper[4793]: I0217 21:23:42.538747 4793 scope.go:117] "RemoveContainer" containerID="a69a650f1b8cf6079bf6e43895158c3026f78098b1ff4d27c6bc0eaf7476f3e3" Feb 17 21:23:42 crc kubenswrapper[4793]: E0217 21:23:42.539425 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier 
pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:23:45 crc kubenswrapper[4793]: I0217 21:23:45.549570 4793 scope.go:117] "RemoveContainer" containerID="ad3c8e19b434a9e463b7e18317ce98a8878350f79303f0feed946cfd828131f6" Feb 17 21:23:45 crc kubenswrapper[4793]: E0217 21:23:45.550322 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:23:53 crc kubenswrapper[4793]: I0217 21:23:53.539389 4793 scope.go:117] "RemoveContainer" containerID="a69a650f1b8cf6079bf6e43895158c3026f78098b1ff4d27c6bc0eaf7476f3e3" Feb 17 21:23:53 crc kubenswrapper[4793]: E0217 21:23:53.540191 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:23:56 crc kubenswrapper[4793]: I0217 21:23:56.539150 4793 scope.go:117] "RemoveContainer" containerID="ad3c8e19b434a9e463b7e18317ce98a8878350f79303f0feed946cfd828131f6" Feb 17 21:23:56 crc kubenswrapper[4793]: E0217 21:23:56.540013 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:24:08 crc kubenswrapper[4793]: I0217 21:24:08.538426 4793 scope.go:117] "RemoveContainer" containerID="a69a650f1b8cf6079bf6e43895158c3026f78098b1ff4d27c6bc0eaf7476f3e3" Feb 17 21:24:08 crc kubenswrapper[4793]: E0217 21:24:08.539131 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:24:11 crc kubenswrapper[4793]: I0217 21:24:11.540489 4793 scope.go:117] "RemoveContainer" containerID="ad3c8e19b434a9e463b7e18317ce98a8878350f79303f0feed946cfd828131f6" Feb 17 21:24:11 crc kubenswrapper[4793]: E0217 21:24:11.541873 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:24:22 crc kubenswrapper[4793]: I0217 21:24:22.539642 4793 scope.go:117] "RemoveContainer" containerID="a69a650f1b8cf6079bf6e43895158c3026f78098b1ff4d27c6bc0eaf7476f3e3" Feb 17 21:24:22 crc kubenswrapper[4793]: E0217 21:24:22.540299 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:24:26 crc 
kubenswrapper[4793]: I0217 21:24:26.427766 4793 generic.go:334] "Generic (PLEG): container finished" podID="31f287e9-abcf-47b4-b249-98613eabec98" containerID="cd18430519865044f503986fb501dcf08698652f3926999f7d4e33e5422bc5d1" exitCode=0 Feb 17 21:24:26 crc kubenswrapper[4793]: I0217 21:24:26.427909 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gdrlm" event={"ID":"31f287e9-abcf-47b4-b249-98613eabec98","Type":"ContainerDied","Data":"cd18430519865044f503986fb501dcf08698652f3926999f7d4e33e5422bc5d1"} Feb 17 21:24:26 crc kubenswrapper[4793]: I0217 21:24:26.540059 4793 scope.go:117] "RemoveContainer" containerID="ad3c8e19b434a9e463b7e18317ce98a8878350f79303f0feed946cfd828131f6" Feb 17 21:24:27 crc kubenswrapper[4793]: I0217 21:24:27.454545 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerStarted","Data":"35d167af5fbd2098becad4dda365d78b9584628a998e38b902b940db6917fb20"} Feb 17 21:24:27 crc kubenswrapper[4793]: I0217 21:24:27.999258 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gdrlm" Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.123343 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/31f287e9-abcf-47b4-b249-98613eabec98-nova-extra-config-0\") pod \"31f287e9-abcf-47b4-b249-98613eabec98\" (UID: \"31f287e9-abcf-47b4-b249-98613eabec98\") " Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.123421 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/31f287e9-abcf-47b4-b249-98613eabec98-ssh-key-openstack-edpm-ipam\") pod \"31f287e9-abcf-47b4-b249-98613eabec98\" (UID: \"31f287e9-abcf-47b4-b249-98613eabec98\") " Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.123457 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/31f287e9-abcf-47b4-b249-98613eabec98-inventory\") pod \"31f287e9-abcf-47b4-b249-98613eabec98\" (UID: \"31f287e9-abcf-47b4-b249-98613eabec98\") " Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.123576 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/31f287e9-abcf-47b4-b249-98613eabec98-nova-cell1-compute-config-1\") pod \"31f287e9-abcf-47b4-b249-98613eabec98\" (UID: \"31f287e9-abcf-47b4-b249-98613eabec98\") " Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.123644 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/31f287e9-abcf-47b4-b249-98613eabec98-nova-cell1-compute-config-2\") pod \"31f287e9-abcf-47b4-b249-98613eabec98\" (UID: \"31f287e9-abcf-47b4-b249-98613eabec98\") " Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.123667 4793 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/31f287e9-abcf-47b4-b249-98613eabec98-nova-cell1-compute-config-3\") pod \"31f287e9-abcf-47b4-b249-98613eabec98\" (UID: \"31f287e9-abcf-47b4-b249-98613eabec98\") " Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.123705 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/31f287e9-abcf-47b4-b249-98613eabec98-nova-migration-ssh-key-1\") pod \"31f287e9-abcf-47b4-b249-98613eabec98\" (UID: \"31f287e9-abcf-47b4-b249-98613eabec98\") " Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.123737 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/31f287e9-abcf-47b4-b249-98613eabec98-nova-migration-ssh-key-0\") pod \"31f287e9-abcf-47b4-b249-98613eabec98\" (UID: \"31f287e9-abcf-47b4-b249-98613eabec98\") " Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.123775 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/31f287e9-abcf-47b4-b249-98613eabec98-nova-cell1-compute-config-0\") pod \"31f287e9-abcf-47b4-b249-98613eabec98\" (UID: \"31f287e9-abcf-47b4-b249-98613eabec98\") " Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.123811 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31f287e9-abcf-47b4-b249-98613eabec98-nova-combined-ca-bundle\") pod \"31f287e9-abcf-47b4-b249-98613eabec98\" (UID: \"31f287e9-abcf-47b4-b249-98613eabec98\") " Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.123845 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkj7c\" (UniqueName: 
\"kubernetes.io/projected/31f287e9-abcf-47b4-b249-98613eabec98-kube-api-access-wkj7c\") pod \"31f287e9-abcf-47b4-b249-98613eabec98\" (UID: \"31f287e9-abcf-47b4-b249-98613eabec98\") " Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.144586 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31f287e9-abcf-47b4-b249-98613eabec98-kube-api-access-wkj7c" (OuterVolumeSpecName: "kube-api-access-wkj7c") pod "31f287e9-abcf-47b4-b249-98613eabec98" (UID: "31f287e9-abcf-47b4-b249-98613eabec98"). InnerVolumeSpecName "kube-api-access-wkj7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.202932 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31f287e9-abcf-47b4-b249-98613eabec98-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "31f287e9-abcf-47b4-b249-98613eabec98" (UID: "31f287e9-abcf-47b4-b249-98613eabec98"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.227706 4793 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31f287e9-abcf-47b4-b249-98613eabec98-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.227732 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkj7c\" (UniqueName: \"kubernetes.io/projected/31f287e9-abcf-47b4-b249-98613eabec98-kube-api-access-wkj7c\") on node \"crc\" DevicePath \"\"" Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.235209 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31f287e9-abcf-47b4-b249-98613eabec98-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "31f287e9-abcf-47b4-b249-98613eabec98" (UID: "31f287e9-abcf-47b4-b249-98613eabec98"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.238917 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31f287e9-abcf-47b4-b249-98613eabec98-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "31f287e9-abcf-47b4-b249-98613eabec98" (UID: "31f287e9-abcf-47b4-b249-98613eabec98"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.239902 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31f287e9-abcf-47b4-b249-98613eabec98-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "31f287e9-abcf-47b4-b249-98613eabec98" (UID: "31f287e9-abcf-47b4-b249-98613eabec98"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.255854 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31f287e9-abcf-47b4-b249-98613eabec98-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "31f287e9-abcf-47b4-b249-98613eabec98" (UID: "31f287e9-abcf-47b4-b249-98613eabec98"). InnerVolumeSpecName "nova-cell1-compute-config-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.263806 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31f287e9-abcf-47b4-b249-98613eabec98-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "31f287e9-abcf-47b4-b249-98613eabec98" (UID: "31f287e9-abcf-47b4-b249-98613eabec98"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.282303 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31f287e9-abcf-47b4-b249-98613eabec98-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "31f287e9-abcf-47b4-b249-98613eabec98" (UID: "31f287e9-abcf-47b4-b249-98613eabec98"). InnerVolumeSpecName "nova-cell1-compute-config-3". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.284932 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31f287e9-abcf-47b4-b249-98613eabec98-inventory" (OuterVolumeSpecName: "inventory") pod "31f287e9-abcf-47b4-b249-98613eabec98" (UID: "31f287e9-abcf-47b4-b249-98613eabec98"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.298438 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31f287e9-abcf-47b4-b249-98613eabec98-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "31f287e9-abcf-47b4-b249-98613eabec98" (UID: "31f287e9-abcf-47b4-b249-98613eabec98"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.305857 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31f287e9-abcf-47b4-b249-98613eabec98-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "31f287e9-abcf-47b4-b249-98613eabec98" (UID: "31f287e9-abcf-47b4-b249-98613eabec98"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.329453 4793 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/31f287e9-abcf-47b4-b249-98613eabec98-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.329481 4793 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/31f287e9-abcf-47b4-b249-98613eabec98-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.329492 4793 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/31f287e9-abcf-47b4-b249-98613eabec98-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.329500 4793 reconciler_common.go:293] "Volume detached for volume 
\"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/31f287e9-abcf-47b4-b249-98613eabec98-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.329508 4793 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/31f287e9-abcf-47b4-b249-98613eabec98-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.329541 4793 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/31f287e9-abcf-47b4-b249-98613eabec98-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.329551 4793 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/31f287e9-abcf-47b4-b249-98613eabec98-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.329560 4793 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/31f287e9-abcf-47b4-b249-98613eabec98-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.329578 4793 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/31f287e9-abcf-47b4-b249-98613eabec98-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.466106 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gdrlm" event={"ID":"31f287e9-abcf-47b4-b249-98613eabec98","Type":"ContainerDied","Data":"292c62d933bf5b927493a6ac7be21a667f2c53705d5a18e80380d0b47c8c5fee"} Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.466427 4793 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="292c62d933bf5b927493a6ac7be21a667f2c53705d5a18e80380d0b47c8c5fee" Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.466237 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gdrlm" Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.595006 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qflkd"] Feb 17 21:24:28 crc kubenswrapper[4793]: E0217 21:24:28.595657 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31f287e9-abcf-47b4-b249-98613eabec98" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.595682 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="31f287e9-abcf-47b4-b249-98613eabec98" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.596099 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="31f287e9-abcf-47b4-b249-98613eabec98" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.597849 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qflkd" Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.600651 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.600834 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.601055 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6zcmz" Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.604908 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.605226 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.615672 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qflkd"] Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.635907 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qflkd\" (UID: \"35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qflkd" Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.635982 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d-ceilometer-compute-config-data-0\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-qflkd\" (UID: \"35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qflkd" Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.636058 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qflkd\" (UID: \"35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qflkd" Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.636121 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qflkd\" (UID: \"35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qflkd" Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.636252 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qflkd\" (UID: \"35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qflkd" Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.636287 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9spqh\" (UniqueName: \"kubernetes.io/projected/35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d-kube-api-access-9spqh\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qflkd\" (UID: \"35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qflkd" Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.637459 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qflkd\" (UID: \"35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qflkd" Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.739768 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qflkd\" (UID: \"35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qflkd" Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.739886 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qflkd\" (UID: \"35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qflkd" Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.739985 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qflkd\" (UID: \"35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qflkd" Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.740028 4793 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-9spqh\" (UniqueName: \"kubernetes.io/projected/35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d-kube-api-access-9spqh\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qflkd\" (UID: \"35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qflkd" Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.740063 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qflkd\" (UID: \"35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qflkd" Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.740263 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qflkd\" (UID: \"35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qflkd" Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.740330 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qflkd\" (UID: \"35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qflkd" Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.744405 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d-inventory\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-qflkd\" (UID: \"35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qflkd" Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.744894 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qflkd\" (UID: \"35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qflkd" Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.745826 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qflkd\" (UID: \"35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qflkd" Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.746581 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qflkd\" (UID: \"35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qflkd" Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.746650 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qflkd\" (UID: \"35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qflkd" Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.747489 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qflkd\" (UID: \"35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qflkd" Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.776061 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9spqh\" (UniqueName: \"kubernetes.io/projected/35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d-kube-api-access-9spqh\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qflkd\" (UID: \"35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qflkd" Feb 17 21:24:28 crc kubenswrapper[4793]: I0217 21:24:28.932501 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qflkd" Feb 17 21:24:29 crc kubenswrapper[4793]: I0217 21:24:29.540014 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qflkd"] Feb 17 21:24:30 crc kubenswrapper[4793]: I0217 21:24:30.495649 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qflkd" event={"ID":"35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d","Type":"ContainerStarted","Data":"e7081e29a1e5d136b426f5fd70307b3bc8a9a4fca427fa323ac65d5abf604936"} Feb 17 21:24:31 crc kubenswrapper[4793]: I0217 21:24:31.511515 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qflkd" event={"ID":"35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d","Type":"ContainerStarted","Data":"85dff4e388917a6e8ae7b1c324a5da3c1855e082a6498800b3181a8c25f76ea8"} Feb 17 21:24:31 crc kubenswrapper[4793]: I0217 21:24:31.538225 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qflkd" podStartSLOduration=2.821638394 podStartE2EDuration="3.538189497s" podCreationTimestamp="2026-02-17 21:24:28 +0000 UTC" firstStartedPulling="2026-02-17 21:24:29.561532633 +0000 UTC m=+4544.853230944" lastFinishedPulling="2026-02-17 21:24:30.278083736 +0000 UTC m=+4545.569782047" observedRunningTime="2026-02-17 21:24:31.534497087 +0000 UTC m=+4546.826195448" watchObservedRunningTime="2026-02-17 21:24:31.538189497 +0000 UTC m=+4546.829887858" Feb 17 21:24:35 crc kubenswrapper[4793]: I0217 21:24:35.547373 4793 scope.go:117] "RemoveContainer" containerID="a69a650f1b8cf6079bf6e43895158c3026f78098b1ff4d27c6bc0eaf7476f3e3" Feb 17 21:24:35 crc kubenswrapper[4793]: E0217 21:24:35.548324 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:24:49 crc kubenswrapper[4793]: I0217 21:24:49.540382 4793 scope.go:117] "RemoveContainer" containerID="a69a650f1b8cf6079bf6e43895158c3026f78098b1ff4d27c6bc0eaf7476f3e3" Feb 17 21:24:49 crc kubenswrapper[4793]: E0217 21:24:49.541506 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:25:00 crc kubenswrapper[4793]: I0217 21:25:00.538457 4793 scope.go:117] "RemoveContainer" containerID="a69a650f1b8cf6079bf6e43895158c3026f78098b1ff4d27c6bc0eaf7476f3e3" Feb 17 21:25:00 crc kubenswrapper[4793]: E0217 21:25:00.539440 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:25:11 crc kubenswrapper[4793]: I0217 21:25:11.540559 4793 scope.go:117] "RemoveContainer" containerID="a69a650f1b8cf6079bf6e43895158c3026f78098b1ff4d27c6bc0eaf7476f3e3" Feb 17 21:25:11 crc kubenswrapper[4793]: E0217 21:25:11.541907 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:25:25 crc 
kubenswrapper[4793]: I0217 21:25:25.544872 4793 scope.go:117] "RemoveContainer" containerID="a69a650f1b8cf6079bf6e43895158c3026f78098b1ff4d27c6bc0eaf7476f3e3" Feb 17 21:25:25 crc kubenswrapper[4793]: E0217 21:25:25.545628 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:25:37 crc kubenswrapper[4793]: I0217 21:25:37.540710 4793 scope.go:117] "RemoveContainer" containerID="a69a650f1b8cf6079bf6e43895158c3026f78098b1ff4d27c6bc0eaf7476f3e3" Feb 17 21:25:38 crc kubenswrapper[4793]: I0217 21:25:38.299270 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerStarted","Data":"3cc34bf417065de597276980b5273c10395d15f1c4b098242c69813cc8e01653"} Feb 17 21:25:39 crc kubenswrapper[4793]: I0217 21:25:39.709448 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gxg64"] Feb 17 21:25:39 crc kubenswrapper[4793]: I0217 21:25:39.713846 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gxg64" Feb 17 21:25:39 crc kubenswrapper[4793]: I0217 21:25:39.722678 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gxg64"] Feb 17 21:25:39 crc kubenswrapper[4793]: I0217 21:25:39.752308 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d87w8\" (UniqueName: \"kubernetes.io/projected/a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e-kube-api-access-d87w8\") pod \"redhat-marketplace-gxg64\" (UID: \"a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e\") " pod="openshift-marketplace/redhat-marketplace-gxg64" Feb 17 21:25:39 crc kubenswrapper[4793]: I0217 21:25:39.752372 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e-utilities\") pod \"redhat-marketplace-gxg64\" (UID: \"a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e\") " pod="openshift-marketplace/redhat-marketplace-gxg64" Feb 17 21:25:39 crc kubenswrapper[4793]: I0217 21:25:39.752461 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e-catalog-content\") pod \"redhat-marketplace-gxg64\" (UID: \"a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e\") " pod="openshift-marketplace/redhat-marketplace-gxg64" Feb 17 21:25:39 crc kubenswrapper[4793]: I0217 21:25:39.853937 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d87w8\" (UniqueName: \"kubernetes.io/projected/a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e-kube-api-access-d87w8\") pod \"redhat-marketplace-gxg64\" (UID: \"a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e\") " pod="openshift-marketplace/redhat-marketplace-gxg64" Feb 17 21:25:39 crc kubenswrapper[4793]: I0217 21:25:39.854204 4793 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e-utilities\") pod \"redhat-marketplace-gxg64\" (UID: \"a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e\") " pod="openshift-marketplace/redhat-marketplace-gxg64" Feb 17 21:25:39 crc kubenswrapper[4793]: I0217 21:25:39.854332 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e-catalog-content\") pod \"redhat-marketplace-gxg64\" (UID: \"a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e\") " pod="openshift-marketplace/redhat-marketplace-gxg64" Feb 17 21:25:39 crc kubenswrapper[4793]: I0217 21:25:39.854631 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e-utilities\") pod \"redhat-marketplace-gxg64\" (UID: \"a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e\") " pod="openshift-marketplace/redhat-marketplace-gxg64" Feb 17 21:25:39 crc kubenswrapper[4793]: I0217 21:25:39.854762 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e-catalog-content\") pod \"redhat-marketplace-gxg64\" (UID: \"a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e\") " pod="openshift-marketplace/redhat-marketplace-gxg64" Feb 17 21:25:39 crc kubenswrapper[4793]: I0217 21:25:39.879210 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d87w8\" (UniqueName: \"kubernetes.io/projected/a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e-kube-api-access-d87w8\") pod \"redhat-marketplace-gxg64\" (UID: \"a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e\") " pod="openshift-marketplace/redhat-marketplace-gxg64" Feb 17 21:25:40 crc kubenswrapper[4793]: I0217 21:25:40.047286 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gxg64" Feb 17 21:25:40 crc kubenswrapper[4793]: I0217 21:25:40.322728 4793 generic.go:334] "Generic (PLEG): container finished" podID="02d26164-0fa4-4020-9224-b7760a490987" containerID="3cc34bf417065de597276980b5273c10395d15f1c4b098242c69813cc8e01653" exitCode=1 Feb 17 21:25:40 crc kubenswrapper[4793]: I0217 21:25:40.322775 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerDied","Data":"3cc34bf417065de597276980b5273c10395d15f1c4b098242c69813cc8e01653"} Feb 17 21:25:40 crc kubenswrapper[4793]: I0217 21:25:40.322812 4793 scope.go:117] "RemoveContainer" containerID="a69a650f1b8cf6079bf6e43895158c3026f78098b1ff4d27c6bc0eaf7476f3e3" Feb 17 21:25:40 crc kubenswrapper[4793]: I0217 21:25:40.323488 4793 scope.go:117] "RemoveContainer" containerID="3cc34bf417065de597276980b5273c10395d15f1c4b098242c69813cc8e01653" Feb 17 21:25:40 crc kubenswrapper[4793]: E0217 21:25:40.323835 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:25:40 crc kubenswrapper[4793]: I0217 21:25:40.567385 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gxg64"] Feb 17 21:25:40 crc kubenswrapper[4793]: I0217 21:25:40.596395 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 17 21:25:41 crc kubenswrapper[4793]: I0217 21:25:41.333804 4793 generic.go:334] "Generic (PLEG): container finished" podID="a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e" containerID="9a4c61dd398d10227c99b438ff02a062fa6588610ca15b83bc65a4b9239b60d1" exitCode=0 
Feb 17 21:25:41 crc kubenswrapper[4793]: I0217 21:25:41.333904 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxg64" event={"ID":"a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e","Type":"ContainerDied","Data":"9a4c61dd398d10227c99b438ff02a062fa6588610ca15b83bc65a4b9239b60d1"} Feb 17 21:25:41 crc kubenswrapper[4793]: I0217 21:25:41.333938 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxg64" event={"ID":"a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e","Type":"ContainerStarted","Data":"79cb0b97405ed199a960a13a8f44dc592f3d93b799290357e528d09b3d1b35f4"} Feb 17 21:25:41 crc kubenswrapper[4793]: I0217 21:25:41.336107 4793 scope.go:117] "RemoveContainer" containerID="3cc34bf417065de597276980b5273c10395d15f1c4b098242c69813cc8e01653" Feb 17 21:25:41 crc kubenswrapper[4793]: I0217 21:25:41.336345 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 21:25:41 crc kubenswrapper[4793]: E0217 21:25:41.336402 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:25:42 crc kubenswrapper[4793]: I0217 21:25:42.348784 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxg64" event={"ID":"a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e","Type":"ContainerStarted","Data":"b78e29fbc62c1b684c7740a58903f13e5d81ef82435bb29f5728f08a3294802c"} Feb 17 21:25:43 crc kubenswrapper[4793]: I0217 21:25:43.362031 4793 generic.go:334] "Generic (PLEG): container finished" podID="a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e" containerID="b78e29fbc62c1b684c7740a58903f13e5d81ef82435bb29f5728f08a3294802c" exitCode=0 Feb 17 21:25:43 crc 
kubenswrapper[4793]: I0217 21:25:43.362112 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxg64" event={"ID":"a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e","Type":"ContainerDied","Data":"b78e29fbc62c1b684c7740a58903f13e5d81ef82435bb29f5728f08a3294802c"} Feb 17 21:25:44 crc kubenswrapper[4793]: I0217 21:25:44.374762 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxg64" event={"ID":"a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e","Type":"ContainerStarted","Data":"85724df0d9f687926c514f301d64f935eea510bf1983b4954e09a71fda731c28"} Feb 17 21:25:44 crc kubenswrapper[4793]: I0217 21:25:44.396419 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gxg64" podStartSLOduration=2.955426623 podStartE2EDuration="5.396399617s" podCreationTimestamp="2026-02-17 21:25:39 +0000 UTC" firstStartedPulling="2026-02-17 21:25:41.336121362 +0000 UTC m=+4616.627819673" lastFinishedPulling="2026-02-17 21:25:43.777094346 +0000 UTC m=+4619.068792667" observedRunningTime="2026-02-17 21:25:44.391317952 +0000 UTC m=+4619.683016273" watchObservedRunningTime="2026-02-17 21:25:44.396399617 +0000 UTC m=+4619.688097938" Feb 17 21:25:45 crc kubenswrapper[4793]: I0217 21:25:45.596661 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 21:25:45 crc kubenswrapper[4793]: I0217 21:25:45.596800 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 21:25:45 crc kubenswrapper[4793]: I0217 21:25:45.596814 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 21:25:45 crc kubenswrapper[4793]: I0217 21:25:45.597592 4793 scope.go:117] "RemoveContainer" containerID="3cc34bf417065de597276980b5273c10395d15f1c4b098242c69813cc8e01653" Feb 17 21:25:45 crc kubenswrapper[4793]: 
E0217 21:25:45.597868 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:25:50 crc kubenswrapper[4793]: I0217 21:25:50.047723 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gxg64" Feb 17 21:25:50 crc kubenswrapper[4793]: I0217 21:25:50.048289 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gxg64" Feb 17 21:25:50 crc kubenswrapper[4793]: I0217 21:25:50.109276 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gxg64" Feb 17 21:25:50 crc kubenswrapper[4793]: I0217 21:25:50.497107 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gxg64" Feb 17 21:25:50 crc kubenswrapper[4793]: I0217 21:25:50.557268 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gxg64"] Feb 17 21:25:52 crc kubenswrapper[4793]: I0217 21:25:52.464468 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gxg64" podUID="a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e" containerName="registry-server" containerID="cri-o://85724df0d9f687926c514f301d64f935eea510bf1983b4954e09a71fda731c28" gracePeriod=2 Feb 17 21:25:52 crc kubenswrapper[4793]: I0217 21:25:52.911146 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gxg64" Feb 17 21:25:53 crc kubenswrapper[4793]: I0217 21:25:53.043576 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e-utilities\") pod \"a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e\" (UID: \"a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e\") " Feb 17 21:25:53 crc kubenswrapper[4793]: I0217 21:25:53.043703 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d87w8\" (UniqueName: \"kubernetes.io/projected/a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e-kube-api-access-d87w8\") pod \"a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e\" (UID: \"a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e\") " Feb 17 21:25:53 crc kubenswrapper[4793]: I0217 21:25:53.043754 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e-catalog-content\") pod \"a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e\" (UID: \"a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e\") " Feb 17 21:25:53 crc kubenswrapper[4793]: I0217 21:25:53.044371 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e-utilities" (OuterVolumeSpecName: "utilities") pod "a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e" (UID: "a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 21:25:53 crc kubenswrapper[4793]: I0217 21:25:53.050025 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e-kube-api-access-d87w8" (OuterVolumeSpecName: "kube-api-access-d87w8") pod "a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e" (UID: "a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e"). InnerVolumeSpecName "kube-api-access-d87w8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 21:25:53 crc kubenswrapper[4793]: I0217 21:25:53.079907 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e" (UID: "a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 21:25:53 crc kubenswrapper[4793]: I0217 21:25:53.147124 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 21:25:53 crc kubenswrapper[4793]: I0217 21:25:53.147182 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 21:25:53 crc kubenswrapper[4793]: I0217 21:25:53.147203 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d87w8\" (UniqueName: \"kubernetes.io/projected/a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e-kube-api-access-d87w8\") on node \"crc\" DevicePath \"\"" Feb 17 21:25:53 crc kubenswrapper[4793]: I0217 21:25:53.485031 4793 generic.go:334] "Generic (PLEG): container finished" podID="a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e" containerID="85724df0d9f687926c514f301d64f935eea510bf1983b4954e09a71fda731c28" exitCode=0 Feb 17 21:25:53 crc kubenswrapper[4793]: I0217 21:25:53.485092 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxg64" event={"ID":"a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e","Type":"ContainerDied","Data":"85724df0d9f687926c514f301d64f935eea510bf1983b4954e09a71fda731c28"} Feb 17 21:25:53 crc kubenswrapper[4793]: I0217 21:25:53.485424 4793 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-gxg64" event={"ID":"a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e","Type":"ContainerDied","Data":"79cb0b97405ed199a960a13a8f44dc592f3d93b799290357e528d09b3d1b35f4"} Feb 17 21:25:53 crc kubenswrapper[4793]: I0217 21:25:53.485154 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gxg64" Feb 17 21:25:53 crc kubenswrapper[4793]: I0217 21:25:53.485517 4793 scope.go:117] "RemoveContainer" containerID="85724df0d9f687926c514f301d64f935eea510bf1983b4954e09a71fda731c28" Feb 17 21:25:53 crc kubenswrapper[4793]: I0217 21:25:53.504265 4793 scope.go:117] "RemoveContainer" containerID="b78e29fbc62c1b684c7740a58903f13e5d81ef82435bb29f5728f08a3294802c" Feb 17 21:25:53 crc kubenswrapper[4793]: I0217 21:25:53.523489 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gxg64"] Feb 17 21:25:53 crc kubenswrapper[4793]: I0217 21:25:53.541447 4793 scope.go:117] "RemoveContainer" containerID="9a4c61dd398d10227c99b438ff02a062fa6588610ca15b83bc65a4b9239b60d1" Feb 17 21:25:53 crc kubenswrapper[4793]: I0217 21:25:53.586282 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gxg64"] Feb 17 21:25:54 crc kubenswrapper[4793]: I0217 21:25:54.094371 4793 scope.go:117] "RemoveContainer" containerID="85724df0d9f687926c514f301d64f935eea510bf1983b4954e09a71fda731c28" Feb 17 21:25:54 crc kubenswrapper[4793]: E0217 21:25:54.095369 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85724df0d9f687926c514f301d64f935eea510bf1983b4954e09a71fda731c28\": container with ID starting with 85724df0d9f687926c514f301d64f935eea510bf1983b4954e09a71fda731c28 not found: ID does not exist" containerID="85724df0d9f687926c514f301d64f935eea510bf1983b4954e09a71fda731c28" Feb 17 21:25:54 crc kubenswrapper[4793]: I0217 21:25:54.095426 4793 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85724df0d9f687926c514f301d64f935eea510bf1983b4954e09a71fda731c28"} err="failed to get container status \"85724df0d9f687926c514f301d64f935eea510bf1983b4954e09a71fda731c28\": rpc error: code = NotFound desc = could not find container \"85724df0d9f687926c514f301d64f935eea510bf1983b4954e09a71fda731c28\": container with ID starting with 85724df0d9f687926c514f301d64f935eea510bf1983b4954e09a71fda731c28 not found: ID does not exist" Feb 17 21:25:54 crc kubenswrapper[4793]: I0217 21:25:54.095460 4793 scope.go:117] "RemoveContainer" containerID="b78e29fbc62c1b684c7740a58903f13e5d81ef82435bb29f5728f08a3294802c" Feb 17 21:25:54 crc kubenswrapper[4793]: E0217 21:25:54.096259 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b78e29fbc62c1b684c7740a58903f13e5d81ef82435bb29f5728f08a3294802c\": container with ID starting with b78e29fbc62c1b684c7740a58903f13e5d81ef82435bb29f5728f08a3294802c not found: ID does not exist" containerID="b78e29fbc62c1b684c7740a58903f13e5d81ef82435bb29f5728f08a3294802c" Feb 17 21:25:54 crc kubenswrapper[4793]: I0217 21:25:54.096308 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b78e29fbc62c1b684c7740a58903f13e5d81ef82435bb29f5728f08a3294802c"} err="failed to get container status \"b78e29fbc62c1b684c7740a58903f13e5d81ef82435bb29f5728f08a3294802c\": rpc error: code = NotFound desc = could not find container \"b78e29fbc62c1b684c7740a58903f13e5d81ef82435bb29f5728f08a3294802c\": container with ID starting with b78e29fbc62c1b684c7740a58903f13e5d81ef82435bb29f5728f08a3294802c not found: ID does not exist" Feb 17 21:25:54 crc kubenswrapper[4793]: I0217 21:25:54.096335 4793 scope.go:117] "RemoveContainer" containerID="9a4c61dd398d10227c99b438ff02a062fa6588610ca15b83bc65a4b9239b60d1" Feb 17 21:25:54 crc kubenswrapper[4793]: E0217 
21:25:54.097115 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a4c61dd398d10227c99b438ff02a062fa6588610ca15b83bc65a4b9239b60d1\": container with ID starting with 9a4c61dd398d10227c99b438ff02a062fa6588610ca15b83bc65a4b9239b60d1 not found: ID does not exist" containerID="9a4c61dd398d10227c99b438ff02a062fa6588610ca15b83bc65a4b9239b60d1" Feb 17 21:25:54 crc kubenswrapper[4793]: I0217 21:25:54.097161 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a4c61dd398d10227c99b438ff02a062fa6588610ca15b83bc65a4b9239b60d1"} err="failed to get container status \"9a4c61dd398d10227c99b438ff02a062fa6588610ca15b83bc65a4b9239b60d1\": rpc error: code = NotFound desc = could not find container \"9a4c61dd398d10227c99b438ff02a062fa6588610ca15b83bc65a4b9239b60d1\": container with ID starting with 9a4c61dd398d10227c99b438ff02a062fa6588610ca15b83bc65a4b9239b60d1 not found: ID does not exist" Feb 17 21:25:55 crc kubenswrapper[4793]: I0217 21:25:55.556942 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e" path="/var/lib/kubelet/pods/a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e/volumes" Feb 17 21:25:57 crc kubenswrapper[4793]: I0217 21:25:57.539023 4793 scope.go:117] "RemoveContainer" containerID="3cc34bf417065de597276980b5273c10395d15f1c4b098242c69813cc8e01653" Feb 17 21:25:57 crc kubenswrapper[4793]: E0217 21:25:57.539784 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:26:09 crc kubenswrapper[4793]: I0217 21:26:09.540550 4793 scope.go:117] "RemoveContainer" 
containerID="3cc34bf417065de597276980b5273c10395d15f1c4b098242c69813cc8e01653" Feb 17 21:26:09 crc kubenswrapper[4793]: E0217 21:26:09.541705 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:26:21 crc kubenswrapper[4793]: I0217 21:26:21.540272 4793 scope.go:117] "RemoveContainer" containerID="3cc34bf417065de597276980b5273c10395d15f1c4b098242c69813cc8e01653" Feb 17 21:26:21 crc kubenswrapper[4793]: E0217 21:26:21.541324 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:26:32 crc kubenswrapper[4793]: I0217 21:26:32.539833 4793 scope.go:117] "RemoveContainer" containerID="3cc34bf417065de597276980b5273c10395d15f1c4b098242c69813cc8e01653" Feb 17 21:26:32 crc kubenswrapper[4793]: E0217 21:26:32.541722 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:26:44 crc kubenswrapper[4793]: I0217 21:26:44.077482 4793 generic.go:334] "Generic (PLEG): container finished" podID="35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d" containerID="85dff4e388917a6e8ae7b1c324a5da3c1855e082a6498800b3181a8c25f76ea8" exitCode=0 Feb 17 21:26:44 crc kubenswrapper[4793]: I0217 21:26:44.077612 4793 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qflkd" event={"ID":"35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d","Type":"ContainerDied","Data":"85dff4e388917a6e8ae7b1c324a5da3c1855e082a6498800b3181a8c25f76ea8"} Feb 17 21:26:45 crc kubenswrapper[4793]: I0217 21:26:45.576441 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qflkd" Feb 17 21:26:45 crc kubenswrapper[4793]: I0217 21:26:45.686321 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d-telemetry-combined-ca-bundle\") pod \"35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d\" (UID: \"35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d\") " Feb 17 21:26:45 crc kubenswrapper[4793]: I0217 21:26:45.686408 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d-ceilometer-compute-config-data-0\") pod \"35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d\" (UID: \"35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d\") " Feb 17 21:26:45 crc kubenswrapper[4793]: I0217 21:26:45.686478 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d-ceilometer-compute-config-data-2\") pod \"35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d\" (UID: \"35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d\") " Feb 17 21:26:45 crc kubenswrapper[4793]: I0217 21:26:45.686527 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d-ceilometer-compute-config-data-1\") pod \"35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d\" (UID: 
\"35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d\") " Feb 17 21:26:45 crc kubenswrapper[4793]: I0217 21:26:45.686653 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9spqh\" (UniqueName: \"kubernetes.io/projected/35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d-kube-api-access-9spqh\") pod \"35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d\" (UID: \"35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d\") " Feb 17 21:26:45 crc kubenswrapper[4793]: I0217 21:26:45.686814 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d-ssh-key-openstack-edpm-ipam\") pod \"35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d\" (UID: \"35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d\") " Feb 17 21:26:45 crc kubenswrapper[4793]: I0217 21:26:45.686864 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d-inventory\") pod \"35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d\" (UID: \"35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d\") " Feb 17 21:26:45 crc kubenswrapper[4793]: I0217 21:26:45.694404 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d-kube-api-access-9spqh" (OuterVolumeSpecName: "kube-api-access-9spqh") pod "35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d" (UID: "35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d"). InnerVolumeSpecName "kube-api-access-9spqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 21:26:45 crc kubenswrapper[4793]: I0217 21:26:45.698010 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d" (UID: "35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d"). 
InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 21:26:45 crc kubenswrapper[4793]: I0217 21:26:45.726161 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d-inventory" (OuterVolumeSpecName: "inventory") pod "35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d" (UID: "35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 21:26:45 crc kubenswrapper[4793]: I0217 21:26:45.752671 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d" (UID: "35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 21:26:45 crc kubenswrapper[4793]: I0217 21:26:45.754038 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d" (UID: "35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 21:26:45 crc kubenswrapper[4793]: I0217 21:26:45.755542 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d" (UID: "35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d"). InnerVolumeSpecName "ceilometer-compute-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 21:26:45 crc kubenswrapper[4793]: I0217 21:26:45.758997 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d" (UID: "35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 21:26:45 crc kubenswrapper[4793]: I0217 21:26:45.789713 4793 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 21:26:45 crc kubenswrapper[4793]: I0217 21:26:45.789741 4793 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 17 21:26:45 crc kubenswrapper[4793]: I0217 21:26:45.789752 4793 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 17 21:26:45 crc kubenswrapper[4793]: I0217 21:26:45.789761 4793 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 17 21:26:45 crc kubenswrapper[4793]: I0217 21:26:45.789769 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9spqh\" (UniqueName: \"kubernetes.io/projected/35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d-kube-api-access-9spqh\") on node 
\"crc\" DevicePath \"\"" Feb 17 21:26:45 crc kubenswrapper[4793]: I0217 21:26:45.789777 4793 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 21:26:45 crc kubenswrapper[4793]: I0217 21:26:45.789786 4793 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 21:26:46 crc kubenswrapper[4793]: I0217 21:26:46.104348 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qflkd" event={"ID":"35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d","Type":"ContainerDied","Data":"e7081e29a1e5d136b426f5fd70307b3bc8a9a4fca427fa323ac65d5abf604936"} Feb 17 21:26:46 crc kubenswrapper[4793]: I0217 21:26:46.104440 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qflkd" Feb 17 21:26:46 crc kubenswrapper[4793]: I0217 21:26:46.104452 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7081e29a1e5d136b426f5fd70307b3bc8a9a4fca427fa323ac65d5abf604936" Feb 17 21:26:46 crc kubenswrapper[4793]: I0217 21:26:46.538945 4793 scope.go:117] "RemoveContainer" containerID="3cc34bf417065de597276980b5273c10395d15f1c4b098242c69813cc8e01653" Feb 17 21:26:46 crc kubenswrapper[4793]: E0217 21:26:46.539457 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:26:50 crc kubenswrapper[4793]: I0217 21:26:50.101906 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 21:26:50 crc kubenswrapper[4793]: I0217 21:26:50.102448 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 21:27:01 crc kubenswrapper[4793]: I0217 21:27:01.540759 4793 scope.go:117] "RemoveContainer" containerID="3cc34bf417065de597276980b5273c10395d15f1c4b098242c69813cc8e01653" Feb 17 21:27:01 crc kubenswrapper[4793]: E0217 21:27:01.562410 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:27:16 crc kubenswrapper[4793]: I0217 21:27:16.540008 4793 scope.go:117] "RemoveContainer" containerID="3cc34bf417065de597276980b5273c10395d15f1c4b098242c69813cc8e01653" Feb 17 21:27:16 crc kubenswrapper[4793]: E0217 21:27:16.541147 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.101852 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.102924 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.286291 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Feb 17 21:27:20 crc kubenswrapper[4793]: E0217 21:27:20.286781 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e" containerName="extract-utilities" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 
21:27:20.286804 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e" containerName="extract-utilities" Feb 17 21:27:20 crc kubenswrapper[4793]: E0217 21:27:20.286827 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e" containerName="registry-server" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.286835 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e" containerName="registry-server" Feb 17 21:27:20 crc kubenswrapper[4793]: E0217 21:27:20.286881 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.286890 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 17 21:27:20 crc kubenswrapper[4793]: E0217 21:27:20.286905 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e" containerName="extract-content" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.286912 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e" containerName="extract-content" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.287122 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6562fc9-ad8e-46ec-99d0-071dd4d0ca8e" containerName="registry-server" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.287156 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.288446 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.291404 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.313949 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.368955 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-nfs-0"] Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.371079 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.376161 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-nfs-config-data" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.379350 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-0"] Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.435786 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.437389 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-nfs-2-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.441045 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-nfs-2-config-data" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.449604 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.468454 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f8921b7f-c527-4806-9d5f-16f01ddad8ef-run\") pod \"cinder-backup-0\" (UID: \"f8921b7f-c527-4806-9d5f-16f01ddad8ef\") " pod="openstack/cinder-backup-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.468501 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f8921b7f-c527-4806-9d5f-16f01ddad8ef-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"f8921b7f-c527-4806-9d5f-16f01ddad8ef\") " pod="openstack/cinder-backup-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.468654 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8921b7f-c527-4806-9d5f-16f01ddad8ef-config-data\") pod \"cinder-backup-0\" (UID: \"f8921b7f-c527-4806-9d5f-16f01ddad8ef\") " pod="openstack/cinder-backup-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.468674 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f8921b7f-c527-4806-9d5f-16f01ddad8ef-etc-nvme\") pod \"cinder-backup-0\" (UID: \"f8921b7f-c527-4806-9d5f-16f01ddad8ef\") " pod="openstack/cinder-backup-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.468701 4793 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f8921b7f-c527-4806-9d5f-16f01ddad8ef-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"f8921b7f-c527-4806-9d5f-16f01ddad8ef\") " pod="openstack/cinder-backup-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.468735 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8921b7f-c527-4806-9d5f-16f01ddad8ef-scripts\") pod \"cinder-backup-0\" (UID: \"f8921b7f-c527-4806-9d5f-16f01ddad8ef\") " pod="openstack/cinder-backup-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.468751 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/f8921b7f-c527-4806-9d5f-16f01ddad8ef-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"f8921b7f-c527-4806-9d5f-16f01ddad8ef\") " pod="openstack/cinder-backup-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.468773 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d6098983-9a93-4433-9f60-80c300c88a3e-sys\") pod \"cinder-volume-nfs-0\" (UID: \"d6098983-9a93-4433-9f60-80c300c88a3e\") " pod="openstack/cinder-volume-nfs-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.468788 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d6098983-9a93-4433-9f60-80c300c88a3e-dev\") pod \"cinder-volume-nfs-0\" (UID: \"d6098983-9a93-4433-9f60-80c300c88a3e\") " pod="openstack/cinder-volume-nfs-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.468833 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f8921b7f-c527-4806-9d5f-16f01ddad8ef-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"f8921b7f-c527-4806-9d5f-16f01ddad8ef\") " pod="openstack/cinder-backup-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.468849 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f8921b7f-c527-4806-9d5f-16f01ddad8ef-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"f8921b7f-c527-4806-9d5f-16f01ddad8ef\") " pod="openstack/cinder-backup-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.468870 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f8921b7f-c527-4806-9d5f-16f01ddad8ef-sys\") pod \"cinder-backup-0\" (UID: \"f8921b7f-c527-4806-9d5f-16f01ddad8ef\") " pod="openstack/cinder-backup-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.468898 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6098983-9a93-4433-9f60-80c300c88a3e-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"d6098983-9a93-4433-9f60-80c300c88a3e\") " pod="openstack/cinder-volume-nfs-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.468912 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d6098983-9a93-4433-9f60-80c300c88a3e-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"d6098983-9a93-4433-9f60-80c300c88a3e\") " pod="openstack/cinder-volume-nfs-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.468926 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6098983-9a93-4433-9f60-80c300c88a3e-config-data\") pod \"cinder-volume-nfs-0\" (UID: 
\"d6098983-9a93-4433-9f60-80c300c88a3e\") " pod="openstack/cinder-volume-nfs-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.468940 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/d6098983-9a93-4433-9f60-80c300c88a3e-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"d6098983-9a93-4433-9f60-80c300c88a3e\") " pod="openstack/cinder-volume-nfs-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.468958 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/f8921b7f-c527-4806-9d5f-16f01ddad8ef-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"f8921b7f-c527-4806-9d5f-16f01ddad8ef\") " pod="openstack/cinder-backup-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.468978 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d6098983-9a93-4433-9f60-80c300c88a3e-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"d6098983-9a93-4433-9f60-80c300c88a3e\") " pod="openstack/cinder-volume-nfs-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.468995 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d6098983-9a93-4433-9f60-80c300c88a3e-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"d6098983-9a93-4433-9f60-80c300c88a3e\") " pod="openstack/cinder-volume-nfs-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.469011 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d6098983-9a93-4433-9f60-80c300c88a3e-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"d6098983-9a93-4433-9f60-80c300c88a3e\") " 
pod="openstack/cinder-volume-nfs-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.469025 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f8921b7f-c527-4806-9d5f-16f01ddad8ef-lib-modules\") pod \"cinder-backup-0\" (UID: \"f8921b7f-c527-4806-9d5f-16f01ddad8ef\") " pod="openstack/cinder-backup-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.469054 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/d6098983-9a93-4433-9f60-80c300c88a3e-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"d6098983-9a93-4433-9f60-80c300c88a3e\") " pod="openstack/cinder-volume-nfs-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.469071 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6098983-9a93-4433-9f60-80c300c88a3e-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"d6098983-9a93-4433-9f60-80c300c88a3e\") " pod="openstack/cinder-volume-nfs-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.469095 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d6098983-9a93-4433-9f60-80c300c88a3e-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"d6098983-9a93-4433-9f60-80c300c88a3e\") " pod="openstack/cinder-volume-nfs-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.469114 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f8921b7f-c527-4806-9d5f-16f01ddad8ef-dev\") pod \"cinder-backup-0\" (UID: \"f8921b7f-c527-4806-9d5f-16f01ddad8ef\") " pod="openstack/cinder-backup-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 
21:27:20.469128 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d6098983-9a93-4433-9f60-80c300c88a3e-run\") pod \"cinder-volume-nfs-0\" (UID: \"d6098983-9a93-4433-9f60-80c300c88a3e\") " pod="openstack/cinder-volume-nfs-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.469147 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgpzq\" (UniqueName: \"kubernetes.io/projected/f8921b7f-c527-4806-9d5f-16f01ddad8ef-kube-api-access-tgpzq\") pod \"cinder-backup-0\" (UID: \"f8921b7f-c527-4806-9d5f-16f01ddad8ef\") " pod="openstack/cinder-backup-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.469166 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f8921b7f-c527-4806-9d5f-16f01ddad8ef-config-data-custom\") pod \"cinder-backup-0\" (UID: \"f8921b7f-c527-4806-9d5f-16f01ddad8ef\") " pod="openstack/cinder-backup-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.469187 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d6098983-9a93-4433-9f60-80c300c88a3e-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"d6098983-9a93-4433-9f60-80c300c88a3e\") " pod="openstack/cinder-volume-nfs-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.469214 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv54g\" (UniqueName: \"kubernetes.io/projected/d6098983-9a93-4433-9f60-80c300c88a3e-kube-api-access-sv54g\") pod \"cinder-volume-nfs-0\" (UID: \"d6098983-9a93-4433-9f60-80c300c88a3e\") " pod="openstack/cinder-volume-nfs-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.570932 4793 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f8921b7f-c527-4806-9d5f-16f01ddad8ef-sys\") pod \"cinder-backup-0\" (UID: \"f8921b7f-c527-4806-9d5f-16f01ddad8ef\") " pod="openstack/cinder-backup-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.571015 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8a91b201-489b-443b-b3d3-3b435bed899a-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"8a91b201-489b-443b-b3d3-3b435bed899a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.571046 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a91b201-489b-443b-b3d3-3b435bed899a-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"8a91b201-489b-443b-b3d3-3b435bed899a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.571076 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f8921b7f-c527-4806-9d5f-16f01ddad8ef-sys\") pod \"cinder-backup-0\" (UID: \"f8921b7f-c527-4806-9d5f-16f01ddad8ef\") " pod="openstack/cinder-backup-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.571100 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8a91b201-489b-443b-b3d3-3b435bed899a-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"8a91b201-489b-443b-b3d3-3b435bed899a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.571193 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6098983-9a93-4433-9f60-80c300c88a3e-scripts\") pod \"cinder-volume-nfs-0\" 
(UID: \"d6098983-9a93-4433-9f60-80c300c88a3e\") " pod="openstack/cinder-volume-nfs-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.571254 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d6098983-9a93-4433-9f60-80c300c88a3e-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"d6098983-9a93-4433-9f60-80c300c88a3e\") " pod="openstack/cinder-volume-nfs-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.571382 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d6098983-9a93-4433-9f60-80c300c88a3e-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"d6098983-9a93-4433-9f60-80c300c88a3e\") " pod="openstack/cinder-volume-nfs-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.571397 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6098983-9a93-4433-9f60-80c300c88a3e-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"d6098983-9a93-4433-9f60-80c300c88a3e\") " pod="openstack/cinder-volume-nfs-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.571434 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/d6098983-9a93-4433-9f60-80c300c88a3e-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"d6098983-9a93-4433-9f60-80c300c88a3e\") " pod="openstack/cinder-volume-nfs-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.571472 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/f8921b7f-c527-4806-9d5f-16f01ddad8ef-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"f8921b7f-c527-4806-9d5f-16f01ddad8ef\") " pod="openstack/cinder-backup-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.571522 4793 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d6098983-9a93-4433-9f60-80c300c88a3e-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"d6098983-9a93-4433-9f60-80c300c88a3e\") " pod="openstack/cinder-volume-nfs-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.571553 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d6098983-9a93-4433-9f60-80c300c88a3e-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"d6098983-9a93-4433-9f60-80c300c88a3e\") " pod="openstack/cinder-volume-nfs-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.571587 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d6098983-9a93-4433-9f60-80c300c88a3e-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"d6098983-9a93-4433-9f60-80c300c88a3e\") " pod="openstack/cinder-volume-nfs-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.571621 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f8921b7f-c527-4806-9d5f-16f01ddad8ef-lib-modules\") pod \"cinder-backup-0\" (UID: \"f8921b7f-c527-4806-9d5f-16f01ddad8ef\") " pod="openstack/cinder-backup-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.571682 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8a91b201-489b-443b-b3d3-3b435bed899a-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"8a91b201-489b-443b-b3d3-3b435bed899a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.571733 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: 
\"kubernetes.io/host-path/8a91b201-489b-443b-b3d3-3b435bed899a-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"8a91b201-489b-443b-b3d3-3b435bed899a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.571755 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a91b201-489b-443b-b3d3-3b435bed899a-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"8a91b201-489b-443b-b3d3-3b435bed899a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.571753 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/f8921b7f-c527-4806-9d5f-16f01ddad8ef-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"f8921b7f-c527-4806-9d5f-16f01ddad8ef\") " pod="openstack/cinder-backup-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.571778 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/d6098983-9a93-4433-9f60-80c300c88a3e-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"d6098983-9a93-4433-9f60-80c300c88a3e\") " pod="openstack/cinder-volume-nfs-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.571824 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6098983-9a93-4433-9f60-80c300c88a3e-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"d6098983-9a93-4433-9f60-80c300c88a3e\") " pod="openstack/cinder-volume-nfs-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.571831 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d6098983-9a93-4433-9f60-80c300c88a3e-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: 
\"d6098983-9a93-4433-9f60-80c300c88a3e\") " pod="openstack/cinder-volume-nfs-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.571862 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f8921b7f-c527-4806-9d5f-16f01ddad8ef-lib-modules\") pod \"cinder-backup-0\" (UID: \"f8921b7f-c527-4806-9d5f-16f01ddad8ef\") " pod="openstack/cinder-backup-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.571873 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/8a91b201-489b-443b-b3d3-3b435bed899a-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"8a91b201-489b-443b-b3d3-3b435bed899a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.571915 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/d6098983-9a93-4433-9f60-80c300c88a3e-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"d6098983-9a93-4433-9f60-80c300c88a3e\") " pod="openstack/cinder-volume-nfs-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.572012 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d6098983-9a93-4433-9f60-80c300c88a3e-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"d6098983-9a93-4433-9f60-80c300c88a3e\") " pod="openstack/cinder-volume-nfs-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.572040 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f8921b7f-c527-4806-9d5f-16f01ddad8ef-dev\") pod \"cinder-backup-0\" (UID: \"f8921b7f-c527-4806-9d5f-16f01ddad8ef\") " pod="openstack/cinder-backup-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.572061 4793 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d6098983-9a93-4433-9f60-80c300c88a3e-run\") pod \"cinder-volume-nfs-0\" (UID: \"d6098983-9a93-4433-9f60-80c300c88a3e\") " pod="openstack/cinder-volume-nfs-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.572086 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgpzq\" (UniqueName: \"kubernetes.io/projected/f8921b7f-c527-4806-9d5f-16f01ddad8ef-kube-api-access-tgpzq\") pod \"cinder-backup-0\" (UID: \"f8921b7f-c527-4806-9d5f-16f01ddad8ef\") " pod="openstack/cinder-backup-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.572115 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f8921b7f-c527-4806-9d5f-16f01ddad8ef-config-data-custom\") pod \"cinder-backup-0\" (UID: \"f8921b7f-c527-4806-9d5f-16f01ddad8ef\") " pod="openstack/cinder-backup-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.572146 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d6098983-9a93-4433-9f60-80c300c88a3e-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"d6098983-9a93-4433-9f60-80c300c88a3e\") " pod="openstack/cinder-volume-nfs-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.572170 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d97ht\" (UniqueName: \"kubernetes.io/projected/8a91b201-489b-443b-b3d3-3b435bed899a-kube-api-access-d97ht\") pod \"cinder-volume-nfs-2-0\" (UID: \"8a91b201-489b-443b-b3d3-3b435bed899a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.572187 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: 
\"kubernetes.io/host-path/d6098983-9a93-4433-9f60-80c300c88a3e-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"d6098983-9a93-4433-9f60-80c300c88a3e\") " pod="openstack/cinder-volume-nfs-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.572197 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a91b201-489b-443b-b3d3-3b435bed899a-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"8a91b201-489b-443b-b3d3-3b435bed899a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.572228 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/d6098983-9a93-4433-9f60-80c300c88a3e-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"d6098983-9a93-4433-9f60-80c300c88a3e\") " pod="openstack/cinder-volume-nfs-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.572246 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f8921b7f-c527-4806-9d5f-16f01ddad8ef-dev\") pod \"cinder-backup-0\" (UID: \"f8921b7f-c527-4806-9d5f-16f01ddad8ef\") " pod="openstack/cinder-backup-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.572267 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sv54g\" (UniqueName: \"kubernetes.io/projected/d6098983-9a93-4433-9f60-80c300c88a3e-kube-api-access-sv54g\") pod \"cinder-volume-nfs-0\" (UID: \"d6098983-9a93-4433-9f60-80c300c88a3e\") " pod="openstack/cinder-volume-nfs-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.572307 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f8921b7f-c527-4806-9d5f-16f01ddad8ef-run\") pod \"cinder-backup-0\" (UID: \"f8921b7f-c527-4806-9d5f-16f01ddad8ef\") " pod="openstack/cinder-backup-0" Feb 17 
21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.572334 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f8921b7f-c527-4806-9d5f-16f01ddad8ef-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"f8921b7f-c527-4806-9d5f-16f01ddad8ef\") " pod="openstack/cinder-backup-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.572354 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/8a91b201-489b-443b-b3d3-3b435bed899a-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"8a91b201-489b-443b-b3d3-3b435bed899a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.572356 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d6098983-9a93-4433-9f60-80c300c88a3e-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"d6098983-9a93-4433-9f60-80c300c88a3e\") " pod="openstack/cinder-volume-nfs-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.572226 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d6098983-9a93-4433-9f60-80c300c88a3e-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"d6098983-9a93-4433-9f60-80c300c88a3e\") " pod="openstack/cinder-volume-nfs-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.572488 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f8921b7f-c527-4806-9d5f-16f01ddad8ef-run\") pod \"cinder-backup-0\" (UID: \"f8921b7f-c527-4806-9d5f-16f01ddad8ef\") " pod="openstack/cinder-backup-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.572505 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/f8921b7f-c527-4806-9d5f-16f01ddad8ef-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"f8921b7f-c527-4806-9d5f-16f01ddad8ef\") " pod="openstack/cinder-backup-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.572509 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d6098983-9a93-4433-9f60-80c300c88a3e-run\") pod \"cinder-volume-nfs-0\" (UID: \"d6098983-9a93-4433-9f60-80c300c88a3e\") " pod="openstack/cinder-volume-nfs-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.572625 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a91b201-489b-443b-b3d3-3b435bed899a-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"8a91b201-489b-443b-b3d3-3b435bed899a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.572653 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/8a91b201-489b-443b-b3d3-3b435bed899a-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"8a91b201-489b-443b-b3d3-3b435bed899a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.572730 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8921b7f-c527-4806-9d5f-16f01ddad8ef-config-data\") pod \"cinder-backup-0\" (UID: \"f8921b7f-c527-4806-9d5f-16f01ddad8ef\") " pod="openstack/cinder-backup-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.572753 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f8921b7f-c527-4806-9d5f-16f01ddad8ef-etc-nvme\") pod \"cinder-backup-0\" (UID: \"f8921b7f-c527-4806-9d5f-16f01ddad8ef\") " 
pod="openstack/cinder-backup-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.572774 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f8921b7f-c527-4806-9d5f-16f01ddad8ef-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"f8921b7f-c527-4806-9d5f-16f01ddad8ef\") " pod="openstack/cinder-backup-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.572817 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8a91b201-489b-443b-b3d3-3b435bed899a-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"8a91b201-489b-443b-b3d3-3b435bed899a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.572857 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8921b7f-c527-4806-9d5f-16f01ddad8ef-scripts\") pod \"cinder-backup-0\" (UID: \"f8921b7f-c527-4806-9d5f-16f01ddad8ef\") " pod="openstack/cinder-backup-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.572878 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/f8921b7f-c527-4806-9d5f-16f01ddad8ef-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"f8921b7f-c527-4806-9d5f-16f01ddad8ef\") " pod="openstack/cinder-backup-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.572913 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d6098983-9a93-4433-9f60-80c300c88a3e-sys\") pod \"cinder-volume-nfs-0\" (UID: \"d6098983-9a93-4433-9f60-80c300c88a3e\") " pod="openstack/cinder-volume-nfs-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.572935 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: 
\"kubernetes.io/host-path/d6098983-9a93-4433-9f60-80c300c88a3e-dev\") pod \"cinder-volume-nfs-0\" (UID: \"d6098983-9a93-4433-9f60-80c300c88a3e\") " pod="openstack/cinder-volume-nfs-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.572961 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/8a91b201-489b-443b-b3d3-3b435bed899a-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"8a91b201-489b-443b-b3d3-3b435bed899a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.573014 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/8a91b201-489b-443b-b3d3-3b435bed899a-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"8a91b201-489b-443b-b3d3-3b435bed899a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.573061 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8921b7f-c527-4806-9d5f-16f01ddad8ef-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"f8921b7f-c527-4806-9d5f-16f01ddad8ef\") " pod="openstack/cinder-backup-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.573086 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f8921b7f-c527-4806-9d5f-16f01ddad8ef-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"f8921b7f-c527-4806-9d5f-16f01ddad8ef\") " pod="openstack/cinder-backup-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.573206 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f8921b7f-c527-4806-9d5f-16f01ddad8ef-var-locks-brick\") pod \"cinder-backup-0\" (UID: 
\"f8921b7f-c527-4806-9d5f-16f01ddad8ef\") " pod="openstack/cinder-backup-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.573249 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f8921b7f-c527-4806-9d5f-16f01ddad8ef-etc-nvme\") pod \"cinder-backup-0\" (UID: \"f8921b7f-c527-4806-9d5f-16f01ddad8ef\") " pod="openstack/cinder-backup-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.573276 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f8921b7f-c527-4806-9d5f-16f01ddad8ef-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"f8921b7f-c527-4806-9d5f-16f01ddad8ef\") " pod="openstack/cinder-backup-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.574786 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d6098983-9a93-4433-9f60-80c300c88a3e-dev\") pod \"cinder-volume-nfs-0\" (UID: \"d6098983-9a93-4433-9f60-80c300c88a3e\") " pod="openstack/cinder-volume-nfs-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.574860 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/f8921b7f-c527-4806-9d5f-16f01ddad8ef-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"f8921b7f-c527-4806-9d5f-16f01ddad8ef\") " pod="openstack/cinder-backup-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.574899 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d6098983-9a93-4433-9f60-80c300c88a3e-sys\") pod \"cinder-volume-nfs-0\" (UID: \"d6098983-9a93-4433-9f60-80c300c88a3e\") " pod="openstack/cinder-volume-nfs-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.578960 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d6098983-9a93-4433-9f60-80c300c88a3e-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"d6098983-9a93-4433-9f60-80c300c88a3e\") " pod="openstack/cinder-volume-nfs-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.578995 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6098983-9a93-4433-9f60-80c300c88a3e-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"d6098983-9a93-4433-9f60-80c300c88a3e\") " pod="openstack/cinder-volume-nfs-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.582713 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d6098983-9a93-4433-9f60-80c300c88a3e-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"d6098983-9a93-4433-9f60-80c300c88a3e\") " pod="openstack/cinder-volume-nfs-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.584487 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8921b7f-c527-4806-9d5f-16f01ddad8ef-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"f8921b7f-c527-4806-9d5f-16f01ddad8ef\") " pod="openstack/cinder-backup-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.584571 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f8921b7f-c527-4806-9d5f-16f01ddad8ef-config-data-custom\") pod \"cinder-backup-0\" (UID: \"f8921b7f-c527-4806-9d5f-16f01ddad8ef\") " pod="openstack/cinder-backup-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.588364 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8921b7f-c527-4806-9d5f-16f01ddad8ef-config-data\") pod \"cinder-backup-0\" (UID: \"f8921b7f-c527-4806-9d5f-16f01ddad8ef\") " pod="openstack/cinder-backup-0" Feb 17 21:27:20 crc 
kubenswrapper[4793]: I0217 21:27:20.592734 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgpzq\" (UniqueName: \"kubernetes.io/projected/f8921b7f-c527-4806-9d5f-16f01ddad8ef-kube-api-access-tgpzq\") pod \"cinder-backup-0\" (UID: \"f8921b7f-c527-4806-9d5f-16f01ddad8ef\") " pod="openstack/cinder-backup-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.592803 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sv54g\" (UniqueName: \"kubernetes.io/projected/d6098983-9a93-4433-9f60-80c300c88a3e-kube-api-access-sv54g\") pod \"cinder-volume-nfs-0\" (UID: \"d6098983-9a93-4433-9f60-80c300c88a3e\") " pod="openstack/cinder-volume-nfs-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.593817 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6098983-9a93-4433-9f60-80c300c88a3e-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"d6098983-9a93-4433-9f60-80c300c88a3e\") " pod="openstack/cinder-volume-nfs-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.593917 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8921b7f-c527-4806-9d5f-16f01ddad8ef-scripts\") pod \"cinder-backup-0\" (UID: \"f8921b7f-c527-4806-9d5f-16f01ddad8ef\") " pod="openstack/cinder-backup-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.611263 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.676138 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8a91b201-489b-443b-b3d3-3b435bed899a-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"8a91b201-489b-443b-b3d3-3b435bed899a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.676203 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a91b201-489b-443b-b3d3-3b435bed899a-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"8a91b201-489b-443b-b3d3-3b435bed899a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.676238 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8a91b201-489b-443b-b3d3-3b435bed899a-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"8a91b201-489b-443b-b3d3-3b435bed899a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.676332 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8a91b201-489b-443b-b3d3-3b435bed899a-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"8a91b201-489b-443b-b3d3-3b435bed899a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.676376 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/8a91b201-489b-443b-b3d3-3b435bed899a-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"8a91b201-489b-443b-b3d3-3b435bed899a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.676401 4793 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a91b201-489b-443b-b3d3-3b435bed899a-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"8a91b201-489b-443b-b3d3-3b435bed899a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.676457 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/8a91b201-489b-443b-b3d3-3b435bed899a-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"8a91b201-489b-443b-b3d3-3b435bed899a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.676520 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d97ht\" (UniqueName: \"kubernetes.io/projected/8a91b201-489b-443b-b3d3-3b435bed899a-kube-api-access-d97ht\") pod \"cinder-volume-nfs-2-0\" (UID: \"8a91b201-489b-443b-b3d3-3b435bed899a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.676552 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a91b201-489b-443b-b3d3-3b435bed899a-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"8a91b201-489b-443b-b3d3-3b435bed899a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.676605 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/8a91b201-489b-443b-b3d3-3b435bed899a-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"8a91b201-489b-443b-b3d3-3b435bed899a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.676635 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8a91b201-489b-443b-b3d3-3b435bed899a-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"8a91b201-489b-443b-b3d3-3b435bed899a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.676656 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/8a91b201-489b-443b-b3d3-3b435bed899a-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"8a91b201-489b-443b-b3d3-3b435bed899a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.676726 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8a91b201-489b-443b-b3d3-3b435bed899a-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"8a91b201-489b-443b-b3d3-3b435bed899a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.676797 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/8a91b201-489b-443b-b3d3-3b435bed899a-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"8a91b201-489b-443b-b3d3-3b435bed899a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.676887 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/8a91b201-489b-443b-b3d3-3b435bed899a-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"8a91b201-489b-443b-b3d3-3b435bed899a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.677097 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/8a91b201-489b-443b-b3d3-3b435bed899a-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"8a91b201-489b-443b-b3d3-3b435bed899a\") " 
pod="openstack/cinder-volume-nfs-2-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.677147 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8a91b201-489b-443b-b3d3-3b435bed899a-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"8a91b201-489b-443b-b3d3-3b435bed899a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.677455 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8a91b201-489b-443b-b3d3-3b435bed899a-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"8a91b201-489b-443b-b3d3-3b435bed899a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.677894 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8a91b201-489b-443b-b3d3-3b435bed899a-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"8a91b201-489b-443b-b3d3-3b435bed899a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.677933 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/8a91b201-489b-443b-b3d3-3b435bed899a-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"8a91b201-489b-443b-b3d3-3b435bed899a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.678250 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/8a91b201-489b-443b-b3d3-3b435bed899a-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"8a91b201-489b-443b-b3d3-3b435bed899a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.679042 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: 
\"kubernetes.io/host-path/8a91b201-489b-443b-b3d3-3b435bed899a-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"8a91b201-489b-443b-b3d3-3b435bed899a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.679653 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8a91b201-489b-443b-b3d3-3b435bed899a-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"8a91b201-489b-443b-b3d3-3b435bed899a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.679751 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/8a91b201-489b-443b-b3d3-3b435bed899a-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"8a91b201-489b-443b-b3d3-3b435bed899a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.680248 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/8a91b201-489b-443b-b3d3-3b435bed899a-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"8a91b201-489b-443b-b3d3-3b435bed899a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.681452 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a91b201-489b-443b-b3d3-3b435bed899a-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"8a91b201-489b-443b-b3d3-3b435bed899a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.681997 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a91b201-489b-443b-b3d3-3b435bed899a-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"8a91b201-489b-443b-b3d3-3b435bed899a\") " pod="openstack/cinder-volume-nfs-2-0" 
Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.682826 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a91b201-489b-443b-b3d3-3b435bed899a-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"8a91b201-489b-443b-b3d3-3b435bed899a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.683848 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a91b201-489b-443b-b3d3-3b435bed899a-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"8a91b201-489b-443b-b3d3-3b435bed899a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.695834 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d97ht\" (UniqueName: \"kubernetes.io/projected/8a91b201-489b-443b-b3d3-3b435bed899a-kube-api-access-d97ht\") pod \"cinder-volume-nfs-2-0\" (UID: \"8a91b201-489b-443b-b3d3-3b435bed899a\") " pod="openstack/cinder-volume-nfs-2-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.698261 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-0" Feb 17 21:27:20 crc kubenswrapper[4793]: I0217 21:27:20.761446 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-nfs-2-0" Feb 17 21:27:21 crc kubenswrapper[4793]: I0217 21:27:21.216339 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Feb 17 21:27:21 crc kubenswrapper[4793]: I0217 21:27:21.311358 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-0"] Feb 17 21:27:21 crc kubenswrapper[4793]: W0217 21:27:21.367756 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6098983_9a93_4433_9f60_80c300c88a3e.slice/crio-6d2e5aee11c993d7dd6a2a8a952c61a1ebbec64a215a5eeef2d3aa0886b230d3 WatchSource:0}: Error finding container 6d2e5aee11c993d7dd6a2a8a952c61a1ebbec64a215a5eeef2d3aa0886b230d3: Status 404 returned error can't find the container with id 6d2e5aee11c993d7dd6a2a8a952c61a1ebbec64a215a5eeef2d3aa0886b230d3 Feb 17 21:27:21 crc kubenswrapper[4793]: I0217 21:27:21.403210 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Feb 17 21:27:21 crc kubenswrapper[4793]: W0217 21:27:21.407751 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a91b201_489b_443b_b3d3_3b435bed899a.slice/crio-dd393943e552b50196d1207c5e5906b17adc477dc507f1b54c1c5cdb07523e6e WatchSource:0}: Error finding container dd393943e552b50196d1207c5e5906b17adc477dc507f1b54c1c5cdb07523e6e: Status 404 returned error can't find the container with id dd393943e552b50196d1207c5e5906b17adc477dc507f1b54c1c5cdb07523e6e Feb 17 21:27:21 crc kubenswrapper[4793]: I0217 21:27:21.535857 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"8a91b201-489b-443b-b3d3-3b435bed899a","Type":"ContainerStarted","Data":"dd393943e552b50196d1207c5e5906b17adc477dc507f1b54c1c5cdb07523e6e"} Feb 17 21:27:21 crc kubenswrapper[4793]: I0217 21:27:21.554186 4793 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"f8921b7f-c527-4806-9d5f-16f01ddad8ef","Type":"ContainerStarted","Data":"3ce278e933d0e930ad9de1cae4f91450f370c2042ef544826815480f3bbe8ffd"} Feb 17 21:27:21 crc kubenswrapper[4793]: I0217 21:27:21.554237 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"d6098983-9a93-4433-9f60-80c300c88a3e","Type":"ContainerStarted","Data":"6d2e5aee11c993d7dd6a2a8a952c61a1ebbec64a215a5eeef2d3aa0886b230d3"} Feb 17 21:27:22 crc kubenswrapper[4793]: I0217 21:27:22.552506 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"f8921b7f-c527-4806-9d5f-16f01ddad8ef","Type":"ContainerStarted","Data":"94e347908b682854fa5d61951d8b49f4a59d9c6e50136ff7b7cb2112208a95c3"} Feb 17 21:27:22 crc kubenswrapper[4793]: I0217 21:27:22.553042 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"f8921b7f-c527-4806-9d5f-16f01ddad8ef","Type":"ContainerStarted","Data":"2174c8a90561f7cfd95d7eeab3302f030a925fdaff1bb9c453c63c8fc58b80eb"} Feb 17 21:27:22 crc kubenswrapper[4793]: I0217 21:27:22.556433 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"d6098983-9a93-4433-9f60-80c300c88a3e","Type":"ContainerStarted","Data":"12f6d5f75dbec4f4d506ce8dd2f8859be0058a8bfcffbaa3c5447434e0bf603c"} Feb 17 21:27:22 crc kubenswrapper[4793]: I0217 21:27:22.556459 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"d6098983-9a93-4433-9f60-80c300c88a3e","Type":"ContainerStarted","Data":"e2fcf90b1a497e01158ff78ff08aa9301aeac54271c6ba579924605ac78bc4c5"} Feb 17 21:27:22 crc kubenswrapper[4793]: I0217 21:27:22.558847 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" 
event={"ID":"8a91b201-489b-443b-b3d3-3b435bed899a","Type":"ContainerStarted","Data":"8107c304cf14fa9180cb3cf295242fe7b0c0ef5f61589bf69a1482b8085aacfd"} Feb 17 21:27:22 crc kubenswrapper[4793]: I0217 21:27:22.558880 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"8a91b201-489b-443b-b3d3-3b435bed899a","Type":"ContainerStarted","Data":"340da91d3aa69c302be941fd8e514286436d4000a045ee9d318fedc07a320f12"} Feb 17 21:27:22 crc kubenswrapper[4793]: I0217 21:27:22.608210 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=2.41366755 podStartE2EDuration="2.608184094s" podCreationTimestamp="2026-02-17 21:27:20 +0000 UTC" firstStartedPulling="2026-02-17 21:27:21.219024069 +0000 UTC m=+4716.510722380" lastFinishedPulling="2026-02-17 21:27:21.413540623 +0000 UTC m=+4716.705238924" observedRunningTime="2026-02-17 21:27:22.578580366 +0000 UTC m=+4717.870278677" watchObservedRunningTime="2026-02-17 21:27:22.608184094 +0000 UTC m=+4717.899882395" Feb 17 21:27:22 crc kubenswrapper[4793]: I0217 21:27:22.631164 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-nfs-2-0" podStartSLOduration=2.403791907 podStartE2EDuration="2.631138059s" podCreationTimestamp="2026-02-17 21:27:20 +0000 UTC" firstStartedPulling="2026-02-17 21:27:21.41060276 +0000 UTC m=+4716.702301071" lastFinishedPulling="2026-02-17 21:27:21.637948912 +0000 UTC m=+4716.929647223" observedRunningTime="2026-02-17 21:27:22.60191387 +0000 UTC m=+4717.893612221" watchObservedRunningTime="2026-02-17 21:27:22.631138059 +0000 UTC m=+4717.922836370" Feb 17 21:27:22 crc kubenswrapper[4793]: I0217 21:27:22.640272 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-nfs-0" podStartSLOduration=2.370372366 podStartE2EDuration="2.640254703s" podCreationTimestamp="2026-02-17 21:27:20 +0000 UTC" 
firstStartedPulling="2026-02-17 21:27:21.36992346 +0000 UTC m=+4716.661621771" lastFinishedPulling="2026-02-17 21:27:21.639805797 +0000 UTC m=+4716.931504108" observedRunningTime="2026-02-17 21:27:22.639228348 +0000 UTC m=+4717.930926659" watchObservedRunningTime="2026-02-17 21:27:22.640254703 +0000 UTC m=+4717.931953014" Feb 17 21:27:25 crc kubenswrapper[4793]: I0217 21:27:25.612263 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Feb 17 21:27:25 crc kubenswrapper[4793]: I0217 21:27:25.699212 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-nfs-0" Feb 17 21:27:25 crc kubenswrapper[4793]: I0217 21:27:25.762877 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-nfs-2-0" Feb 17 21:27:30 crc kubenswrapper[4793]: I0217 21:27:30.804514 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Feb 17 21:27:30 crc kubenswrapper[4793]: I0217 21:27:30.923159 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-nfs-0" Feb 17 21:27:31 crc kubenswrapper[4793]: I0217 21:27:31.009241 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-nfs-2-0" Feb 17 21:27:31 crc kubenswrapper[4793]: I0217 21:27:31.539126 4793 scope.go:117] "RemoveContainer" containerID="3cc34bf417065de597276980b5273c10395d15f1c4b098242c69813cc8e01653" Feb 17 21:27:31 crc kubenswrapper[4793]: E0217 21:27:31.539566 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:27:46 crc kubenswrapper[4793]: 
I0217 21:27:46.539332 4793 scope.go:117] "RemoveContainer" containerID="3cc34bf417065de597276980b5273c10395d15f1c4b098242c69813cc8e01653" Feb 17 21:27:46 crc kubenswrapper[4793]: E0217 21:27:46.540141 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:27:50 crc kubenswrapper[4793]: I0217 21:27:50.102764 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 21:27:50 crc kubenswrapper[4793]: I0217 21:27:50.103358 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 21:27:50 crc kubenswrapper[4793]: I0217 21:27:50.103401 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" Feb 17 21:27:50 crc kubenswrapper[4793]: I0217 21:27:50.104124 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"35d167af5fbd2098becad4dda365d78b9584628a998e38b902b940db6917fb20"} pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 21:27:50 crc kubenswrapper[4793]: I0217 
21:27:50.104175 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" containerID="cri-o://35d167af5fbd2098becad4dda365d78b9584628a998e38b902b940db6917fb20" gracePeriod=600 Feb 17 21:27:50 crc kubenswrapper[4793]: I0217 21:27:50.916927 4793 generic.go:334] "Generic (PLEG): container finished" podID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerID="35d167af5fbd2098becad4dda365d78b9584628a998e38b902b940db6917fb20" exitCode=0 Feb 17 21:27:50 crc kubenswrapper[4793]: I0217 21:27:50.917055 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerDied","Data":"35d167af5fbd2098becad4dda365d78b9584628a998e38b902b940db6917fb20"} Feb 17 21:27:50 crc kubenswrapper[4793]: I0217 21:27:50.917561 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerStarted","Data":"d4a52e1bde66a4cc3d896b629de1a100b583d1e1171b97b092613253d1f07611"} Feb 17 21:27:50 crc kubenswrapper[4793]: I0217 21:27:50.917601 4793 scope.go:117] "RemoveContainer" containerID="ad3c8e19b434a9e463b7e18317ce98a8878350f79303f0feed946cfd828131f6" Feb 17 21:27:59 crc kubenswrapper[4793]: I0217 21:27:59.539780 4793 scope.go:117] "RemoveContainer" containerID="3cc34bf417065de597276980b5273c10395d15f1c4b098242c69813cc8e01653" Feb 17 21:27:59 crc kubenswrapper[4793]: E0217 21:27:59.541214 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" 
podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:28:14 crc kubenswrapper[4793]: I0217 21:28:14.539648 4793 scope.go:117] "RemoveContainer" containerID="3cc34bf417065de597276980b5273c10395d15f1c4b098242c69813cc8e01653" Feb 17 21:28:14 crc kubenswrapper[4793]: E0217 21:28:14.540451 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:28:27 crc kubenswrapper[4793]: I0217 21:28:27.539864 4793 scope.go:117] "RemoveContainer" containerID="3cc34bf417065de597276980b5273c10395d15f1c4b098242c69813cc8e01653" Feb 17 21:28:27 crc kubenswrapper[4793]: E0217 21:28:27.541347 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:28:38 crc kubenswrapper[4793]: I0217 21:28:38.539102 4793 scope.go:117] "RemoveContainer" containerID="3cc34bf417065de597276980b5273c10395d15f1c4b098242c69813cc8e01653" Feb 17 21:28:38 crc kubenswrapper[4793]: E0217 21:28:38.539833 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:28:49 crc kubenswrapper[4793]: I0217 21:28:49.539398 4793 scope.go:117] "RemoveContainer" 
containerID="3cc34bf417065de597276980b5273c10395d15f1c4b098242c69813cc8e01653" Feb 17 21:28:49 crc kubenswrapper[4793]: E0217 21:28:49.540693 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:29:03 crc kubenswrapper[4793]: I0217 21:29:03.539728 4793 scope.go:117] "RemoveContainer" containerID="3cc34bf417065de597276980b5273c10395d15f1c4b098242c69813cc8e01653" Feb 17 21:29:03 crc kubenswrapper[4793]: E0217 21:29:03.541577 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:29:18 crc kubenswrapper[4793]: I0217 21:29:18.539527 4793 scope.go:117] "RemoveContainer" containerID="3cc34bf417065de597276980b5273c10395d15f1c4b098242c69813cc8e01653" Feb 17 21:29:18 crc kubenswrapper[4793]: E0217 21:29:18.540919 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:29:29 crc kubenswrapper[4793]: I0217 21:29:29.538863 4793 scope.go:117] "RemoveContainer" containerID="3cc34bf417065de597276980b5273c10395d15f1c4b098242c69813cc8e01653" Feb 17 21:29:29 crc kubenswrapper[4793]: E0217 21:29:29.539774 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:29:43 crc kubenswrapper[4793]: I0217 21:29:43.539733 4793 scope.go:117] "RemoveContainer" containerID="3cc34bf417065de597276980b5273c10395d15f1c4b098242c69813cc8e01653" Feb 17 21:29:43 crc kubenswrapper[4793]: E0217 21:29:43.540482 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:29:50 crc kubenswrapper[4793]: I0217 21:29:50.102143 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 21:29:50 crc kubenswrapper[4793]: I0217 21:29:50.102789 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 21:29:56 crc kubenswrapper[4793]: I0217 21:29:56.539634 4793 scope.go:117] "RemoveContainer" containerID="3cc34bf417065de597276980b5273c10395d15f1c4b098242c69813cc8e01653" Feb 17 21:29:56 crc kubenswrapper[4793]: E0217 21:29:56.540601 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:30:00 crc kubenswrapper[4793]: I0217 21:30:00.175350 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522730-kc8k6"] Feb 17 21:30:00 crc kubenswrapper[4793]: I0217 21:30:00.178048 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522730-kc8k6" Feb 17 21:30:00 crc kubenswrapper[4793]: I0217 21:30:00.192436 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522730-kc8k6"] Feb 17 21:30:00 crc kubenswrapper[4793]: I0217 21:30:00.227814 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 21:30:00 crc kubenswrapper[4793]: I0217 21:30:00.228247 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 21:30:00 crc kubenswrapper[4793]: I0217 21:30:00.328928 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1968df33-d74f-4502-9da2-ecfc097040fb-secret-volume\") pod \"collect-profiles-29522730-kc8k6\" (UID: \"1968df33-d74f-4502-9da2-ecfc097040fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522730-kc8k6" Feb 17 21:30:00 crc kubenswrapper[4793]: I0217 21:30:00.328966 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g284d\" (UniqueName: \"kubernetes.io/projected/1968df33-d74f-4502-9da2-ecfc097040fb-kube-api-access-g284d\") pod \"collect-profiles-29522730-kc8k6\" (UID: 
\"1968df33-d74f-4502-9da2-ecfc097040fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522730-kc8k6" Feb 17 21:30:00 crc kubenswrapper[4793]: I0217 21:30:00.329415 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1968df33-d74f-4502-9da2-ecfc097040fb-config-volume\") pod \"collect-profiles-29522730-kc8k6\" (UID: \"1968df33-d74f-4502-9da2-ecfc097040fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522730-kc8k6" Feb 17 21:30:00 crc kubenswrapper[4793]: I0217 21:30:00.431613 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1968df33-d74f-4502-9da2-ecfc097040fb-config-volume\") pod \"collect-profiles-29522730-kc8k6\" (UID: \"1968df33-d74f-4502-9da2-ecfc097040fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522730-kc8k6" Feb 17 21:30:00 crc kubenswrapper[4793]: I0217 21:30:00.431757 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1968df33-d74f-4502-9da2-ecfc097040fb-secret-volume\") pod \"collect-profiles-29522730-kc8k6\" (UID: \"1968df33-d74f-4502-9da2-ecfc097040fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522730-kc8k6" Feb 17 21:30:00 crc kubenswrapper[4793]: I0217 21:30:00.431781 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g284d\" (UniqueName: \"kubernetes.io/projected/1968df33-d74f-4502-9da2-ecfc097040fb-kube-api-access-g284d\") pod \"collect-profiles-29522730-kc8k6\" (UID: \"1968df33-d74f-4502-9da2-ecfc097040fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522730-kc8k6" Feb 17 21:30:00 crc kubenswrapper[4793]: I0217 21:30:00.433412 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/1968df33-d74f-4502-9da2-ecfc097040fb-config-volume\") pod \"collect-profiles-29522730-kc8k6\" (UID: \"1968df33-d74f-4502-9da2-ecfc097040fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522730-kc8k6" Feb 17 21:30:00 crc kubenswrapper[4793]: I0217 21:30:00.444536 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1968df33-d74f-4502-9da2-ecfc097040fb-secret-volume\") pod \"collect-profiles-29522730-kc8k6\" (UID: \"1968df33-d74f-4502-9da2-ecfc097040fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522730-kc8k6" Feb 17 21:30:00 crc kubenswrapper[4793]: I0217 21:30:00.450960 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g284d\" (UniqueName: \"kubernetes.io/projected/1968df33-d74f-4502-9da2-ecfc097040fb-kube-api-access-g284d\") pod \"collect-profiles-29522730-kc8k6\" (UID: \"1968df33-d74f-4502-9da2-ecfc097040fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522730-kc8k6" Feb 17 21:30:00 crc kubenswrapper[4793]: I0217 21:30:00.551661 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522730-kc8k6" Feb 17 21:30:01 crc kubenswrapper[4793]: I0217 21:30:01.071046 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522730-kc8k6"] Feb 17 21:30:01 crc kubenswrapper[4793]: I0217 21:30:01.340535 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522730-kc8k6" event={"ID":"1968df33-d74f-4502-9da2-ecfc097040fb","Type":"ContainerStarted","Data":"4d4027c0976f858d7670fb080f5586c92d9f3402b088880c3176fa05cb511a5b"} Feb 17 21:30:01 crc kubenswrapper[4793]: I0217 21:30:01.340583 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522730-kc8k6" event={"ID":"1968df33-d74f-4502-9da2-ecfc097040fb","Type":"ContainerStarted","Data":"f6aee118e0e33d24e028c29bf1679d4ad6ae6179decb0fa4597fc9d6a67fbbb3"} Feb 17 21:30:01 crc kubenswrapper[4793]: I0217 21:30:01.357978 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29522730-kc8k6" podStartSLOduration=1.357960782 podStartE2EDuration="1.357960782s" podCreationTimestamp="2026-02-17 21:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 21:30:01.353589465 +0000 UTC m=+4876.645287786" watchObservedRunningTime="2026-02-17 21:30:01.357960782 +0000 UTC m=+4876.649659093" Feb 17 21:30:02 crc kubenswrapper[4793]: I0217 21:30:02.358472 4793 generic.go:334] "Generic (PLEG): container finished" podID="1968df33-d74f-4502-9da2-ecfc097040fb" containerID="4d4027c0976f858d7670fb080f5586c92d9f3402b088880c3176fa05cb511a5b" exitCode=0 Feb 17 21:30:02 crc kubenswrapper[4793]: I0217 21:30:02.358588 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29522730-kc8k6" event={"ID":"1968df33-d74f-4502-9da2-ecfc097040fb","Type":"ContainerDied","Data":"4d4027c0976f858d7670fb080f5586c92d9f3402b088880c3176fa05cb511a5b"} Feb 17 21:30:03 crc kubenswrapper[4793]: I0217 21:30:03.846202 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522730-kc8k6" Feb 17 21:30:03 crc kubenswrapper[4793]: I0217 21:30:03.911044 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1968df33-d74f-4502-9da2-ecfc097040fb-secret-volume\") pod \"1968df33-d74f-4502-9da2-ecfc097040fb\" (UID: \"1968df33-d74f-4502-9da2-ecfc097040fb\") " Feb 17 21:30:03 crc kubenswrapper[4793]: I0217 21:30:03.911301 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g284d\" (UniqueName: \"kubernetes.io/projected/1968df33-d74f-4502-9da2-ecfc097040fb-kube-api-access-g284d\") pod \"1968df33-d74f-4502-9da2-ecfc097040fb\" (UID: \"1968df33-d74f-4502-9da2-ecfc097040fb\") " Feb 17 21:30:03 crc kubenswrapper[4793]: I0217 21:30:03.911420 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1968df33-d74f-4502-9da2-ecfc097040fb-config-volume\") pod \"1968df33-d74f-4502-9da2-ecfc097040fb\" (UID: \"1968df33-d74f-4502-9da2-ecfc097040fb\") " Feb 17 21:30:03 crc kubenswrapper[4793]: I0217 21:30:03.912668 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1968df33-d74f-4502-9da2-ecfc097040fb-config-volume" (OuterVolumeSpecName: "config-volume") pod "1968df33-d74f-4502-9da2-ecfc097040fb" (UID: "1968df33-d74f-4502-9da2-ecfc097040fb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 21:30:03 crc kubenswrapper[4793]: I0217 21:30:03.921862 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1968df33-d74f-4502-9da2-ecfc097040fb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1968df33-d74f-4502-9da2-ecfc097040fb" (UID: "1968df33-d74f-4502-9da2-ecfc097040fb"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 21:30:03 crc kubenswrapper[4793]: I0217 21:30:03.922950 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1968df33-d74f-4502-9da2-ecfc097040fb-kube-api-access-g284d" (OuterVolumeSpecName: "kube-api-access-g284d") pod "1968df33-d74f-4502-9da2-ecfc097040fb" (UID: "1968df33-d74f-4502-9da2-ecfc097040fb"). InnerVolumeSpecName "kube-api-access-g284d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 21:30:04 crc kubenswrapper[4793]: I0217 21:30:04.014064 4793 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1968df33-d74f-4502-9da2-ecfc097040fb-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 21:30:04 crc kubenswrapper[4793]: I0217 21:30:04.014104 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g284d\" (UniqueName: \"kubernetes.io/projected/1968df33-d74f-4502-9da2-ecfc097040fb-kube-api-access-g284d\") on node \"crc\" DevicePath \"\"" Feb 17 21:30:04 crc kubenswrapper[4793]: I0217 21:30:04.014116 4793 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1968df33-d74f-4502-9da2-ecfc097040fb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 21:30:04 crc kubenswrapper[4793]: I0217 21:30:04.387834 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522730-kc8k6" 
event={"ID":"1968df33-d74f-4502-9da2-ecfc097040fb","Type":"ContainerDied","Data":"f6aee118e0e33d24e028c29bf1679d4ad6ae6179decb0fa4597fc9d6a67fbbb3"} Feb 17 21:30:04 crc kubenswrapper[4793]: I0217 21:30:04.387917 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6aee118e0e33d24e028c29bf1679d4ad6ae6179decb0fa4597fc9d6a67fbbb3" Feb 17 21:30:04 crc kubenswrapper[4793]: I0217 21:30:04.387929 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522730-kc8k6" Feb 17 21:30:04 crc kubenswrapper[4793]: I0217 21:30:04.469672 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522685-vlsbr"] Feb 17 21:30:04 crc kubenswrapper[4793]: I0217 21:30:04.482367 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522685-vlsbr"] Feb 17 21:30:05 crc kubenswrapper[4793]: I0217 21:30:05.577536 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="834a07b0-aa02-4f0a-a22a-45e8e70b3c61" path="/var/lib/kubelet/pods/834a07b0-aa02-4f0a-a22a-45e8e70b3c61/volumes" Feb 17 21:30:08 crc kubenswrapper[4793]: I0217 21:30:08.538817 4793 scope.go:117] "RemoveContainer" containerID="3cc34bf417065de597276980b5273c10395d15f1c4b098242c69813cc8e01653" Feb 17 21:30:08 crc kubenswrapper[4793]: E0217 21:30:08.539523 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:30:11 crc kubenswrapper[4793]: I0217 21:30:11.333469 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qrl59"] Feb 17 
21:30:11 crc kubenswrapper[4793]: E0217 21:30:11.335409 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1968df33-d74f-4502-9da2-ecfc097040fb" containerName="collect-profiles" Feb 17 21:30:11 crc kubenswrapper[4793]: I0217 21:30:11.335444 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="1968df33-d74f-4502-9da2-ecfc097040fb" containerName="collect-profiles" Feb 17 21:30:11 crc kubenswrapper[4793]: I0217 21:30:11.335971 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="1968df33-d74f-4502-9da2-ecfc097040fb" containerName="collect-profiles" Feb 17 21:30:11 crc kubenswrapper[4793]: I0217 21:30:11.339636 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qrl59" Feb 17 21:30:11 crc kubenswrapper[4793]: I0217 21:30:11.347093 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qrl59"] Feb 17 21:30:11 crc kubenswrapper[4793]: I0217 21:30:11.513583 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e-utilities\") pod \"certified-operators-qrl59\" (UID: \"d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e\") " pod="openshift-marketplace/certified-operators-qrl59" Feb 17 21:30:11 crc kubenswrapper[4793]: I0217 21:30:11.513943 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5xvb\" (UniqueName: \"kubernetes.io/projected/d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e-kube-api-access-s5xvb\") pod \"certified-operators-qrl59\" (UID: \"d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e\") " pod="openshift-marketplace/certified-operators-qrl59" Feb 17 21:30:11 crc kubenswrapper[4793]: I0217 21:30:11.514099 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e-catalog-content\") pod \"certified-operators-qrl59\" (UID: \"d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e\") " pod="openshift-marketplace/certified-operators-qrl59" Feb 17 21:30:11 crc kubenswrapper[4793]: I0217 21:30:11.616553 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e-catalog-content\") pod \"certified-operators-qrl59\" (UID: \"d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e\") " pod="openshift-marketplace/certified-operators-qrl59" Feb 17 21:30:11 crc kubenswrapper[4793]: I0217 21:30:11.616729 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e-utilities\") pod \"certified-operators-qrl59\" (UID: \"d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e\") " pod="openshift-marketplace/certified-operators-qrl59" Feb 17 21:30:11 crc kubenswrapper[4793]: I0217 21:30:11.616789 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5xvb\" (UniqueName: \"kubernetes.io/projected/d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e-kube-api-access-s5xvb\") pod \"certified-operators-qrl59\" (UID: \"d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e\") " pod="openshift-marketplace/certified-operators-qrl59" Feb 17 21:30:11 crc kubenswrapper[4793]: I0217 21:30:11.617393 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e-utilities\") pod \"certified-operators-qrl59\" (UID: \"d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e\") " pod="openshift-marketplace/certified-operators-qrl59" Feb 17 21:30:11 crc kubenswrapper[4793]: I0217 21:30:11.617545 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e-catalog-content\") pod \"certified-operators-qrl59\" (UID: \"d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e\") " pod="openshift-marketplace/certified-operators-qrl59" Feb 17 21:30:11 crc kubenswrapper[4793]: I0217 21:30:11.641230 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5xvb\" (UniqueName: \"kubernetes.io/projected/d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e-kube-api-access-s5xvb\") pod \"certified-operators-qrl59\" (UID: \"d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e\") " pod="openshift-marketplace/certified-operators-qrl59" Feb 17 21:30:11 crc kubenswrapper[4793]: I0217 21:30:11.714268 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qrl59" Feb 17 21:30:12 crc kubenswrapper[4793]: I0217 21:30:12.210915 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qrl59"] Feb 17 21:30:12 crc kubenswrapper[4793]: W0217 21:30:12.218970 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2d23dbf_7e58_4dcb_a535_b7aa5ce7889e.slice/crio-56787b04ad5f895e38b00f04c9a2d6272158fa39928fb8c635f3a89e51efa9cc WatchSource:0}: Error finding container 56787b04ad5f895e38b00f04c9a2d6272158fa39928fb8c635f3a89e51efa9cc: Status 404 returned error can't find the container with id 56787b04ad5f895e38b00f04c9a2d6272158fa39928fb8c635f3a89e51efa9cc Feb 17 21:30:12 crc kubenswrapper[4793]: I0217 21:30:12.483653 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-n86m9"] Feb 17 21:30:12 crc kubenswrapper[4793]: I0217 21:30:12.487165 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n86m9" Feb 17 21:30:12 crc kubenswrapper[4793]: I0217 21:30:12.501659 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n86m9"] Feb 17 21:30:12 crc kubenswrapper[4793]: I0217 21:30:12.543777 4793 generic.go:334] "Generic (PLEG): container finished" podID="d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e" containerID="03d7182b3051340f52ed153f5e837dfb73bc57981b992f44ece303d0905a3a03" exitCode=0 Feb 17 21:30:12 crc kubenswrapper[4793]: I0217 21:30:12.543824 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qrl59" event={"ID":"d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e","Type":"ContainerDied","Data":"03d7182b3051340f52ed153f5e837dfb73bc57981b992f44ece303d0905a3a03"} Feb 17 21:30:12 crc kubenswrapper[4793]: I0217 21:30:12.543851 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qrl59" event={"ID":"d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e","Type":"ContainerStarted","Data":"56787b04ad5f895e38b00f04c9a2d6272158fa39928fb8c635f3a89e51efa9cc"} Feb 17 21:30:12 crc kubenswrapper[4793]: I0217 21:30:12.634878 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04ec483e-c420-407c-8b77-d20160aac145-catalog-content\") pod \"community-operators-n86m9\" (UID: \"04ec483e-c420-407c-8b77-d20160aac145\") " pod="openshift-marketplace/community-operators-n86m9" Feb 17 21:30:12 crc kubenswrapper[4793]: I0217 21:30:12.634930 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6fjw\" (UniqueName: \"kubernetes.io/projected/04ec483e-c420-407c-8b77-d20160aac145-kube-api-access-t6fjw\") pod \"community-operators-n86m9\" (UID: \"04ec483e-c420-407c-8b77-d20160aac145\") " pod="openshift-marketplace/community-operators-n86m9" 
Feb 17 21:30:12 crc kubenswrapper[4793]: I0217 21:30:12.635012 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04ec483e-c420-407c-8b77-d20160aac145-utilities\") pod \"community-operators-n86m9\" (UID: \"04ec483e-c420-407c-8b77-d20160aac145\") " pod="openshift-marketplace/community-operators-n86m9" Feb 17 21:30:12 crc kubenswrapper[4793]: I0217 21:30:12.736819 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04ec483e-c420-407c-8b77-d20160aac145-catalog-content\") pod \"community-operators-n86m9\" (UID: \"04ec483e-c420-407c-8b77-d20160aac145\") " pod="openshift-marketplace/community-operators-n86m9" Feb 17 21:30:12 crc kubenswrapper[4793]: I0217 21:30:12.736941 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6fjw\" (UniqueName: \"kubernetes.io/projected/04ec483e-c420-407c-8b77-d20160aac145-kube-api-access-t6fjw\") pod \"community-operators-n86m9\" (UID: \"04ec483e-c420-407c-8b77-d20160aac145\") " pod="openshift-marketplace/community-operators-n86m9" Feb 17 21:30:12 crc kubenswrapper[4793]: I0217 21:30:12.737029 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04ec483e-c420-407c-8b77-d20160aac145-utilities\") pod \"community-operators-n86m9\" (UID: \"04ec483e-c420-407c-8b77-d20160aac145\") " pod="openshift-marketplace/community-operators-n86m9" Feb 17 21:30:12 crc kubenswrapper[4793]: I0217 21:30:12.737593 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04ec483e-c420-407c-8b77-d20160aac145-utilities\") pod \"community-operators-n86m9\" (UID: \"04ec483e-c420-407c-8b77-d20160aac145\") " pod="openshift-marketplace/community-operators-n86m9" Feb 17 21:30:12 crc 
kubenswrapper[4793]: I0217 21:30:12.738091 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04ec483e-c420-407c-8b77-d20160aac145-catalog-content\") pod \"community-operators-n86m9\" (UID: \"04ec483e-c420-407c-8b77-d20160aac145\") " pod="openshift-marketplace/community-operators-n86m9" Feb 17 21:30:12 crc kubenswrapper[4793]: I0217 21:30:12.773433 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6fjw\" (UniqueName: \"kubernetes.io/projected/04ec483e-c420-407c-8b77-d20160aac145-kube-api-access-t6fjw\") pod \"community-operators-n86m9\" (UID: \"04ec483e-c420-407c-8b77-d20160aac145\") " pod="openshift-marketplace/community-operators-n86m9" Feb 17 21:30:12 crc kubenswrapper[4793]: I0217 21:30:12.840146 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n86m9" Feb 17 21:30:13 crc kubenswrapper[4793]: I0217 21:30:13.389568 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n86m9"] Feb 17 21:30:13 crc kubenswrapper[4793]: I0217 21:30:13.563474 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qrl59" event={"ID":"d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e","Type":"ContainerStarted","Data":"c89ce00dd3987fe7f019bd1a43e05d588293c143f3e7a18b8f81b032db259fe1"} Feb 17 21:30:13 crc kubenswrapper[4793]: I0217 21:30:13.564822 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n86m9" event={"ID":"04ec483e-c420-407c-8b77-d20160aac145","Type":"ContainerStarted","Data":"27d50cc18c41992584981898b6ead286dcba4a87948f543b29873cbfa766306e"} Feb 17 21:30:14 crc kubenswrapper[4793]: I0217 21:30:14.582534 4793 generic.go:334] "Generic (PLEG): container finished" podID="d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e" 
containerID="c89ce00dd3987fe7f019bd1a43e05d588293c143f3e7a18b8f81b032db259fe1" exitCode=0 Feb 17 21:30:14 crc kubenswrapper[4793]: I0217 21:30:14.582655 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qrl59" event={"ID":"d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e","Type":"ContainerDied","Data":"c89ce00dd3987fe7f019bd1a43e05d588293c143f3e7a18b8f81b032db259fe1"} Feb 17 21:30:14 crc kubenswrapper[4793]: I0217 21:30:14.585300 4793 generic.go:334] "Generic (PLEG): container finished" podID="04ec483e-c420-407c-8b77-d20160aac145" containerID="0dcda997338071462aaef48c557d4ba1d6b039b6b0a5451f1d54b00495ed035f" exitCode=0 Feb 17 21:30:14 crc kubenswrapper[4793]: I0217 21:30:14.585354 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n86m9" event={"ID":"04ec483e-c420-407c-8b77-d20160aac145","Type":"ContainerDied","Data":"0dcda997338071462aaef48c557d4ba1d6b039b6b0a5451f1d54b00495ed035f"} Feb 17 21:30:15 crc kubenswrapper[4793]: I0217 21:30:15.599152 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qrl59" event={"ID":"d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e","Type":"ContainerStarted","Data":"ee4edca59cc55f30bf9d406c5bd58c7d5a275bb6969fbcd98a9da640102ddd96"} Feb 17 21:30:15 crc kubenswrapper[4793]: I0217 21:30:15.632821 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qrl59" podStartSLOduration=2.2022970060000002 podStartE2EDuration="4.632792882s" podCreationTimestamp="2026-02-17 21:30:11 +0000 UTC" firstStartedPulling="2026-02-17 21:30:12.545496492 +0000 UTC m=+4887.837194803" lastFinishedPulling="2026-02-17 21:30:14.975992328 +0000 UTC m=+4890.267690679" observedRunningTime="2026-02-17 21:30:15.61687958 +0000 UTC m=+4890.908577911" watchObservedRunningTime="2026-02-17 21:30:15.632792882 +0000 UTC m=+4890.924491193" Feb 17 21:30:19 crc kubenswrapper[4793]: I0217 
21:30:19.539123 4793 scope.go:117] "RemoveContainer" containerID="3cc34bf417065de597276980b5273c10395d15f1c4b098242c69813cc8e01653" Feb 17 21:30:19 crc kubenswrapper[4793]: E0217 21:30:19.540152 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:30:20 crc kubenswrapper[4793]: I0217 21:30:20.102355 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 21:30:20 crc kubenswrapper[4793]: I0217 21:30:20.102447 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 21:30:21 crc kubenswrapper[4793]: I0217 21:30:21.714489 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qrl59" Feb 17 21:30:21 crc kubenswrapper[4793]: I0217 21:30:21.715101 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qrl59" Feb 17 21:30:21 crc kubenswrapper[4793]: I0217 21:30:21.804742 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qrl59" Feb 17 21:30:22 crc kubenswrapper[4793]: I0217 21:30:22.769681 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-qrl59" Feb 17 21:30:22 crc kubenswrapper[4793]: I0217 21:30:22.853060 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qrl59"] Feb 17 21:30:23 crc kubenswrapper[4793]: I0217 21:30:23.706818 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n86m9" event={"ID":"04ec483e-c420-407c-8b77-d20160aac145","Type":"ContainerStarted","Data":"c78ee70b9fa9b4336823c0564651e25b32bad166d5c919986a47cf97f013058d"} Feb 17 21:30:24 crc kubenswrapper[4793]: I0217 21:30:24.719087 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qrl59" podUID="d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e" containerName="registry-server" containerID="cri-o://ee4edca59cc55f30bf9d406c5bd58c7d5a275bb6969fbcd98a9da640102ddd96" gracePeriod=2 Feb 17 21:30:24 crc kubenswrapper[4793]: I0217 21:30:24.799309 4793 scope.go:117] "RemoveContainer" containerID="262313c8d9ebefd12e57e663586dc3f404e418b30974760bdbbc2272a6570d13" Feb 17 21:30:25 crc kubenswrapper[4793]: I0217 21:30:25.731993 4793 generic.go:334] "Generic (PLEG): container finished" podID="d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e" containerID="ee4edca59cc55f30bf9d406c5bd58c7d5a275bb6969fbcd98a9da640102ddd96" exitCode=0 Feb 17 21:30:25 crc kubenswrapper[4793]: I0217 21:30:25.732553 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qrl59" event={"ID":"d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e","Type":"ContainerDied","Data":"ee4edca59cc55f30bf9d406c5bd58c7d5a275bb6969fbcd98a9da640102ddd96"} Feb 17 21:30:25 crc kubenswrapper[4793]: I0217 21:30:25.732586 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qrl59" event={"ID":"d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e","Type":"ContainerDied","Data":"56787b04ad5f895e38b00f04c9a2d6272158fa39928fb8c635f3a89e51efa9cc"} 
Feb 17 21:30:25 crc kubenswrapper[4793]: I0217 21:30:25.732601 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56787b04ad5f895e38b00f04c9a2d6272158fa39928fb8c635f3a89e51efa9cc" Feb 17 21:30:25 crc kubenswrapper[4793]: I0217 21:30:25.735030 4793 generic.go:334] "Generic (PLEG): container finished" podID="04ec483e-c420-407c-8b77-d20160aac145" containerID="c78ee70b9fa9b4336823c0564651e25b32bad166d5c919986a47cf97f013058d" exitCode=0 Feb 17 21:30:25 crc kubenswrapper[4793]: I0217 21:30:25.735063 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n86m9" event={"ID":"04ec483e-c420-407c-8b77-d20160aac145","Type":"ContainerDied","Data":"c78ee70b9fa9b4336823c0564651e25b32bad166d5c919986a47cf97f013058d"} Feb 17 21:30:25 crc kubenswrapper[4793]: I0217 21:30:25.779908 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qrl59" Feb 17 21:30:25 crc kubenswrapper[4793]: I0217 21:30:25.875123 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5xvb\" (UniqueName: \"kubernetes.io/projected/d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e-kube-api-access-s5xvb\") pod \"d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e\" (UID: \"d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e\") " Feb 17 21:30:25 crc kubenswrapper[4793]: I0217 21:30:25.875338 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e-utilities\") pod \"d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e\" (UID: \"d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e\") " Feb 17 21:30:25 crc kubenswrapper[4793]: I0217 21:30:25.875499 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e-catalog-content\") pod 
\"d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e\" (UID: \"d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e\") " Feb 17 21:30:25 crc kubenswrapper[4793]: I0217 21:30:25.877078 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e-utilities" (OuterVolumeSpecName: "utilities") pod "d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e" (UID: "d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 21:30:25 crc kubenswrapper[4793]: I0217 21:30:25.880897 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e-kube-api-access-s5xvb" (OuterVolumeSpecName: "kube-api-access-s5xvb") pod "d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e" (UID: "d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e"). InnerVolumeSpecName "kube-api-access-s5xvb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 21:30:25 crc kubenswrapper[4793]: I0217 21:30:25.935504 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e" (UID: "d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 21:30:25 crc kubenswrapper[4793]: I0217 21:30:25.978799 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5xvb\" (UniqueName: \"kubernetes.io/projected/d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e-kube-api-access-s5xvb\") on node \"crc\" DevicePath \"\"" Feb 17 21:30:25 crc kubenswrapper[4793]: I0217 21:30:25.978843 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 21:30:25 crc kubenswrapper[4793]: I0217 21:30:25.978858 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 21:30:26 crc kubenswrapper[4793]: I0217 21:30:26.749588 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qrl59" Feb 17 21:30:26 crc kubenswrapper[4793]: I0217 21:30:26.749581 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n86m9" event={"ID":"04ec483e-c420-407c-8b77-d20160aac145","Type":"ContainerStarted","Data":"0125371afb28fa8ff0a4b20bee62163fc1df0346f596b704ed031fa9a8ad5f49"} Feb 17 21:30:26 crc kubenswrapper[4793]: I0217 21:30:26.796053 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-n86m9" podStartSLOduration=3.122096491 podStartE2EDuration="14.79601591s" podCreationTimestamp="2026-02-17 21:30:12 +0000 UTC" firstStartedPulling="2026-02-17 21:30:14.587278088 +0000 UTC m=+4889.878976409" lastFinishedPulling="2026-02-17 21:30:26.261197487 +0000 UTC m=+4901.552895828" observedRunningTime="2026-02-17 21:30:26.783340628 +0000 UTC m=+4902.075038929" watchObservedRunningTime="2026-02-17 21:30:26.79601591 +0000 
UTC m=+4902.087714261" Feb 17 21:30:26 crc kubenswrapper[4793]: I0217 21:30:26.813896 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qrl59"] Feb 17 21:30:26 crc kubenswrapper[4793]: I0217 21:30:26.823715 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qrl59"] Feb 17 21:30:27 crc kubenswrapper[4793]: I0217 21:30:27.555905 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e" path="/var/lib/kubelet/pods/d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e/volumes" Feb 17 21:30:32 crc kubenswrapper[4793]: I0217 21:30:32.841220 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-n86m9" Feb 17 21:30:32 crc kubenswrapper[4793]: I0217 21:30:32.842218 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-n86m9" Feb 17 21:30:32 crc kubenswrapper[4793]: I0217 21:30:32.931490 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-n86m9" Feb 17 21:30:33 crc kubenswrapper[4793]: I0217 21:30:33.539500 4793 scope.go:117] "RemoveContainer" containerID="3cc34bf417065de597276980b5273c10395d15f1c4b098242c69813cc8e01653" Feb 17 21:30:33 crc kubenswrapper[4793]: E0217 21:30:33.540144 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:30:33 crc kubenswrapper[4793]: I0217 21:30:33.897706 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-n86m9" Feb 17 21:30:33 crc 
kubenswrapper[4793]: I0217 21:30:33.970387 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n86m9"] Feb 17 21:30:35 crc kubenswrapper[4793]: I0217 21:30:35.849728 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-n86m9" podUID="04ec483e-c420-407c-8b77-d20160aac145" containerName="registry-server" containerID="cri-o://0125371afb28fa8ff0a4b20bee62163fc1df0346f596b704ed031fa9a8ad5f49" gracePeriod=2 Feb 17 21:30:36 crc kubenswrapper[4793]: I0217 21:30:36.428193 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n86m9" Feb 17 21:30:36 crc kubenswrapper[4793]: I0217 21:30:36.523072 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6fjw\" (UniqueName: \"kubernetes.io/projected/04ec483e-c420-407c-8b77-d20160aac145-kube-api-access-t6fjw\") pod \"04ec483e-c420-407c-8b77-d20160aac145\" (UID: \"04ec483e-c420-407c-8b77-d20160aac145\") " Feb 17 21:30:36 crc kubenswrapper[4793]: I0217 21:30:36.525239 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04ec483e-c420-407c-8b77-d20160aac145-catalog-content\") pod \"04ec483e-c420-407c-8b77-d20160aac145\" (UID: \"04ec483e-c420-407c-8b77-d20160aac145\") " Feb 17 21:30:36 crc kubenswrapper[4793]: I0217 21:30:36.525397 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04ec483e-c420-407c-8b77-d20160aac145-utilities\") pod \"04ec483e-c420-407c-8b77-d20160aac145\" (UID: \"04ec483e-c420-407c-8b77-d20160aac145\") " Feb 17 21:30:36 crc kubenswrapper[4793]: I0217 21:30:36.526127 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/04ec483e-c420-407c-8b77-d20160aac145-utilities" (OuterVolumeSpecName: "utilities") pod "04ec483e-c420-407c-8b77-d20160aac145" (UID: "04ec483e-c420-407c-8b77-d20160aac145"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 21:30:36 crc kubenswrapper[4793]: I0217 21:30:36.526310 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04ec483e-c420-407c-8b77-d20160aac145-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 21:30:36 crc kubenswrapper[4793]: I0217 21:30:36.534162 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04ec483e-c420-407c-8b77-d20160aac145-kube-api-access-t6fjw" (OuterVolumeSpecName: "kube-api-access-t6fjw") pod "04ec483e-c420-407c-8b77-d20160aac145" (UID: "04ec483e-c420-407c-8b77-d20160aac145"). InnerVolumeSpecName "kube-api-access-t6fjw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 21:30:36 crc kubenswrapper[4793]: I0217 21:30:36.576811 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04ec483e-c420-407c-8b77-d20160aac145-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "04ec483e-c420-407c-8b77-d20160aac145" (UID: "04ec483e-c420-407c-8b77-d20160aac145"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 21:30:36 crc kubenswrapper[4793]: I0217 21:30:36.627821 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6fjw\" (UniqueName: \"kubernetes.io/projected/04ec483e-c420-407c-8b77-d20160aac145-kube-api-access-t6fjw\") on node \"crc\" DevicePath \"\"" Feb 17 21:30:36 crc kubenswrapper[4793]: I0217 21:30:36.627859 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04ec483e-c420-407c-8b77-d20160aac145-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 21:30:36 crc kubenswrapper[4793]: I0217 21:30:36.865680 4793 generic.go:334] "Generic (PLEG): container finished" podID="04ec483e-c420-407c-8b77-d20160aac145" containerID="0125371afb28fa8ff0a4b20bee62163fc1df0346f596b704ed031fa9a8ad5f49" exitCode=0 Feb 17 21:30:36 crc kubenswrapper[4793]: I0217 21:30:36.865794 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n86m9" Feb 17 21:30:36 crc kubenswrapper[4793]: I0217 21:30:36.865787 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n86m9" event={"ID":"04ec483e-c420-407c-8b77-d20160aac145","Type":"ContainerDied","Data":"0125371afb28fa8ff0a4b20bee62163fc1df0346f596b704ed031fa9a8ad5f49"} Feb 17 21:30:36 crc kubenswrapper[4793]: I0217 21:30:36.865964 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n86m9" event={"ID":"04ec483e-c420-407c-8b77-d20160aac145","Type":"ContainerDied","Data":"27d50cc18c41992584981898b6ead286dcba4a87948f543b29873cbfa766306e"} Feb 17 21:30:36 crc kubenswrapper[4793]: I0217 21:30:36.865996 4793 scope.go:117] "RemoveContainer" containerID="0125371afb28fa8ff0a4b20bee62163fc1df0346f596b704ed031fa9a8ad5f49" Feb 17 21:30:36 crc kubenswrapper[4793]: I0217 21:30:36.924345 4793 scope.go:117] "RemoveContainer" 
containerID="c78ee70b9fa9b4336823c0564651e25b32bad166d5c919986a47cf97f013058d" Feb 17 21:30:36 crc kubenswrapper[4793]: I0217 21:30:36.933767 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n86m9"] Feb 17 21:30:36 crc kubenswrapper[4793]: I0217 21:30:36.952658 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-n86m9"] Feb 17 21:30:36 crc kubenswrapper[4793]: I0217 21:30:36.964630 4793 scope.go:117] "RemoveContainer" containerID="0dcda997338071462aaef48c557d4ba1d6b039b6b0a5451f1d54b00495ed035f" Feb 17 21:30:37 crc kubenswrapper[4793]: I0217 21:30:37.027062 4793 scope.go:117] "RemoveContainer" containerID="0125371afb28fa8ff0a4b20bee62163fc1df0346f596b704ed031fa9a8ad5f49" Feb 17 21:30:37 crc kubenswrapper[4793]: E0217 21:30:37.027496 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0125371afb28fa8ff0a4b20bee62163fc1df0346f596b704ed031fa9a8ad5f49\": container with ID starting with 0125371afb28fa8ff0a4b20bee62163fc1df0346f596b704ed031fa9a8ad5f49 not found: ID does not exist" containerID="0125371afb28fa8ff0a4b20bee62163fc1df0346f596b704ed031fa9a8ad5f49" Feb 17 21:30:37 crc kubenswrapper[4793]: I0217 21:30:37.027536 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0125371afb28fa8ff0a4b20bee62163fc1df0346f596b704ed031fa9a8ad5f49"} err="failed to get container status \"0125371afb28fa8ff0a4b20bee62163fc1df0346f596b704ed031fa9a8ad5f49\": rpc error: code = NotFound desc = could not find container \"0125371afb28fa8ff0a4b20bee62163fc1df0346f596b704ed031fa9a8ad5f49\": container with ID starting with 0125371afb28fa8ff0a4b20bee62163fc1df0346f596b704ed031fa9a8ad5f49 not found: ID does not exist" Feb 17 21:30:37 crc kubenswrapper[4793]: I0217 21:30:37.027561 4793 scope.go:117] "RemoveContainer" 
containerID="c78ee70b9fa9b4336823c0564651e25b32bad166d5c919986a47cf97f013058d" Feb 17 21:30:37 crc kubenswrapper[4793]: E0217 21:30:37.028016 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c78ee70b9fa9b4336823c0564651e25b32bad166d5c919986a47cf97f013058d\": container with ID starting with c78ee70b9fa9b4336823c0564651e25b32bad166d5c919986a47cf97f013058d not found: ID does not exist" containerID="c78ee70b9fa9b4336823c0564651e25b32bad166d5c919986a47cf97f013058d" Feb 17 21:30:37 crc kubenswrapper[4793]: I0217 21:30:37.028065 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c78ee70b9fa9b4336823c0564651e25b32bad166d5c919986a47cf97f013058d"} err="failed to get container status \"c78ee70b9fa9b4336823c0564651e25b32bad166d5c919986a47cf97f013058d\": rpc error: code = NotFound desc = could not find container \"c78ee70b9fa9b4336823c0564651e25b32bad166d5c919986a47cf97f013058d\": container with ID starting with c78ee70b9fa9b4336823c0564651e25b32bad166d5c919986a47cf97f013058d not found: ID does not exist" Feb 17 21:30:37 crc kubenswrapper[4793]: I0217 21:30:37.028104 4793 scope.go:117] "RemoveContainer" containerID="0dcda997338071462aaef48c557d4ba1d6b039b6b0a5451f1d54b00495ed035f" Feb 17 21:30:37 crc kubenswrapper[4793]: E0217 21:30:37.028436 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0dcda997338071462aaef48c557d4ba1d6b039b6b0a5451f1d54b00495ed035f\": container with ID starting with 0dcda997338071462aaef48c557d4ba1d6b039b6b0a5451f1d54b00495ed035f not found: ID does not exist" containerID="0dcda997338071462aaef48c557d4ba1d6b039b6b0a5451f1d54b00495ed035f" Feb 17 21:30:37 crc kubenswrapper[4793]: I0217 21:30:37.028469 4793 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0dcda997338071462aaef48c557d4ba1d6b039b6b0a5451f1d54b00495ed035f"} err="failed to get container status \"0dcda997338071462aaef48c557d4ba1d6b039b6b0a5451f1d54b00495ed035f\": rpc error: code = NotFound desc = could not find container \"0dcda997338071462aaef48c557d4ba1d6b039b6b0a5451f1d54b00495ed035f\": container with ID starting with 0dcda997338071462aaef48c557d4ba1d6b039b6b0a5451f1d54b00495ed035f not found: ID does not exist" Feb 17 21:30:37 crc kubenswrapper[4793]: I0217 21:30:37.560683 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04ec483e-c420-407c-8b77-d20160aac145" path="/var/lib/kubelet/pods/04ec483e-c420-407c-8b77-d20160aac145/volumes" Feb 17 21:30:45 crc kubenswrapper[4793]: I0217 21:30:45.553289 4793 scope.go:117] "RemoveContainer" containerID="3cc34bf417065de597276980b5273c10395d15f1c4b098242c69813cc8e01653" Feb 17 21:30:45 crc kubenswrapper[4793]: I0217 21:30:45.980112 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerStarted","Data":"f5e452fb56f22266ed0f1888f1a309f231131b9275d7b84f2168957c12cd7903"} Feb 17 21:30:49 crc kubenswrapper[4793]: I0217 21:30:49.024013 4793 generic.go:334] "Generic (PLEG): container finished" podID="02d26164-0fa4-4020-9224-b7760a490987" containerID="f5e452fb56f22266ed0f1888f1a309f231131b9275d7b84f2168957c12cd7903" exitCode=1 Feb 17 21:30:49 crc kubenswrapper[4793]: I0217 21:30:49.024142 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerDied","Data":"f5e452fb56f22266ed0f1888f1a309f231131b9275d7b84f2168957c12cd7903"} Feb 17 21:30:49 crc kubenswrapper[4793]: I0217 21:30:49.024866 4793 scope.go:117] "RemoveContainer" containerID="3cc34bf417065de597276980b5273c10395d15f1c4b098242c69813cc8e01653" Feb 17 21:30:49 crc kubenswrapper[4793]: I0217 21:30:49.025529 4793 
scope.go:117] "RemoveContainer" containerID="f5e452fb56f22266ed0f1888f1a309f231131b9275d7b84f2168957c12cd7903" Feb 17 21:30:49 crc kubenswrapper[4793]: E0217 21:30:49.026158 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:30:50 crc kubenswrapper[4793]: I0217 21:30:50.101571 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 21:30:50 crc kubenswrapper[4793]: I0217 21:30:50.102005 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 21:30:50 crc kubenswrapper[4793]: I0217 21:30:50.102070 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" Feb 17 21:30:50 crc kubenswrapper[4793]: I0217 21:30:50.103121 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d4a52e1bde66a4cc3d896b629de1a100b583d1e1171b97b092613253d1f07611"} pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 21:30:50 crc kubenswrapper[4793]: I0217 21:30:50.103220 4793 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" containerID="cri-o://d4a52e1bde66a4cc3d896b629de1a100b583d1e1171b97b092613253d1f07611" gracePeriod=600 Feb 17 21:30:50 crc kubenswrapper[4793]: E0217 21:30:50.229738 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:30:50 crc kubenswrapper[4793]: E0217 21:30:50.322676 4793 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a786034_a3c6_4693_965a_3bd39bce6caa.slice/crio-d4a52e1bde66a4cc3d896b629de1a100b583d1e1171b97b092613253d1f07611.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a786034_a3c6_4693_965a_3bd39bce6caa.slice/crio-conmon-d4a52e1bde66a4cc3d896b629de1a100b583d1e1171b97b092613253d1f07611.scope\": RecentStats: unable to find data in memory cache]" Feb 17 21:30:50 crc kubenswrapper[4793]: I0217 21:30:50.595877 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 17 21:30:50 crc kubenswrapper[4793]: I0217 21:30:50.597336 4793 scope.go:117] "RemoveContainer" containerID="f5e452fb56f22266ed0f1888f1a309f231131b9275d7b84f2168957c12cd7903" Feb 17 21:30:50 crc kubenswrapper[4793]: E0217 21:30:50.597809 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:30:51 crc kubenswrapper[4793]: I0217 21:30:51.058441 4793 generic.go:334] "Generic (PLEG): container finished" podID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerID="d4a52e1bde66a4cc3d896b629de1a100b583d1e1171b97b092613253d1f07611" exitCode=0 Feb 17 21:30:51 crc kubenswrapper[4793]: I0217 21:30:51.058563 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerDied","Data":"d4a52e1bde66a4cc3d896b629de1a100b583d1e1171b97b092613253d1f07611"} Feb 17 21:30:51 crc kubenswrapper[4793]: I0217 21:30:51.059199 4793 scope.go:117] "RemoveContainer" containerID="35d167af5fbd2098becad4dda365d78b9584628a998e38b902b940db6917fb20" Feb 17 21:30:51 crc kubenswrapper[4793]: I0217 21:30:51.060094 4793 scope.go:117] "RemoveContainer" containerID="d4a52e1bde66a4cc3d896b629de1a100b583d1e1171b97b092613253d1f07611" Feb 17 21:30:51 crc kubenswrapper[4793]: E0217 21:30:51.060599 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:30:55 crc kubenswrapper[4793]: I0217 21:30:55.596488 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 21:30:55 crc kubenswrapper[4793]: I0217 21:30:55.597148 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/watcher-applier-0" Feb 17 21:30:55 crc kubenswrapper[4793]: I0217 21:30:55.597160 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 21:30:55 crc kubenswrapper[4793]: I0217 21:30:55.597987 4793 scope.go:117] "RemoveContainer" containerID="f5e452fb56f22266ed0f1888f1a309f231131b9275d7b84f2168957c12cd7903" Feb 17 21:30:55 crc kubenswrapper[4793]: E0217 21:30:55.598217 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:31:03 crc kubenswrapper[4793]: I0217 21:31:03.539551 4793 scope.go:117] "RemoveContainer" containerID="d4a52e1bde66a4cc3d896b629de1a100b583d1e1171b97b092613253d1f07611" Feb 17 21:31:03 crc kubenswrapper[4793]: E0217 21:31:03.540798 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:31:06 crc kubenswrapper[4793]: I0217 21:31:06.539421 4793 scope.go:117] "RemoveContainer" containerID="f5e452fb56f22266ed0f1888f1a309f231131b9275d7b84f2168957c12cd7903" Feb 17 21:31:06 crc kubenswrapper[4793]: E0217 21:31:06.540374 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" 
pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:31:06 crc kubenswrapper[4793]: I0217 21:31:06.742665 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ncbt6"] Feb 17 21:31:06 crc kubenswrapper[4793]: E0217 21:31:06.743364 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04ec483e-c420-407c-8b77-d20160aac145" containerName="extract-utilities" Feb 17 21:31:06 crc kubenswrapper[4793]: I0217 21:31:06.743391 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="04ec483e-c420-407c-8b77-d20160aac145" containerName="extract-utilities" Feb 17 21:31:06 crc kubenswrapper[4793]: E0217 21:31:06.743405 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04ec483e-c420-407c-8b77-d20160aac145" containerName="extract-content" Feb 17 21:31:06 crc kubenswrapper[4793]: I0217 21:31:06.743413 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="04ec483e-c420-407c-8b77-d20160aac145" containerName="extract-content" Feb 17 21:31:06 crc kubenswrapper[4793]: E0217 21:31:06.743431 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04ec483e-c420-407c-8b77-d20160aac145" containerName="registry-server" Feb 17 21:31:06 crc kubenswrapper[4793]: I0217 21:31:06.743438 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="04ec483e-c420-407c-8b77-d20160aac145" containerName="registry-server" Feb 17 21:31:06 crc kubenswrapper[4793]: E0217 21:31:06.743450 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e" containerName="registry-server" Feb 17 21:31:06 crc kubenswrapper[4793]: I0217 21:31:06.743458 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e" containerName="registry-server" Feb 17 21:31:06 crc kubenswrapper[4793]: E0217 21:31:06.743477 4793 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e" containerName="extract-content" Feb 17 21:31:06 crc kubenswrapper[4793]: I0217 21:31:06.743483 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e" containerName="extract-content" Feb 17 21:31:06 crc kubenswrapper[4793]: E0217 21:31:06.743509 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e" containerName="extract-utilities" Feb 17 21:31:06 crc kubenswrapper[4793]: I0217 21:31:06.743516 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e" containerName="extract-utilities" Feb 17 21:31:06 crc kubenswrapper[4793]: I0217 21:31:06.743768 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="04ec483e-c420-407c-8b77-d20160aac145" containerName="registry-server" Feb 17 21:31:06 crc kubenswrapper[4793]: I0217 21:31:06.743800 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2d23dbf-7e58-4dcb-a535-b7aa5ce7889e" containerName="registry-server" Feb 17 21:31:06 crc kubenswrapper[4793]: I0217 21:31:06.746577 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ncbt6" Feb 17 21:31:06 crc kubenswrapper[4793]: I0217 21:31:06.778493 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ncbt6"] Feb 17 21:31:06 crc kubenswrapper[4793]: I0217 21:31:06.796539 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/532b2978-ad49-4ede-b633-309c5ddf6fc0-catalog-content\") pod \"redhat-operators-ncbt6\" (UID: \"532b2978-ad49-4ede-b633-309c5ddf6fc0\") " pod="openshift-marketplace/redhat-operators-ncbt6" Feb 17 21:31:06 crc kubenswrapper[4793]: I0217 21:31:06.796615 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf2hx\" (UniqueName: \"kubernetes.io/projected/532b2978-ad49-4ede-b633-309c5ddf6fc0-kube-api-access-mf2hx\") pod \"redhat-operators-ncbt6\" (UID: \"532b2978-ad49-4ede-b633-309c5ddf6fc0\") " pod="openshift-marketplace/redhat-operators-ncbt6" Feb 17 21:31:06 crc kubenswrapper[4793]: I0217 21:31:06.796740 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/532b2978-ad49-4ede-b633-309c5ddf6fc0-utilities\") pod \"redhat-operators-ncbt6\" (UID: \"532b2978-ad49-4ede-b633-309c5ddf6fc0\") " pod="openshift-marketplace/redhat-operators-ncbt6" Feb 17 21:31:06 crc kubenswrapper[4793]: I0217 21:31:06.898488 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/532b2978-ad49-4ede-b633-309c5ddf6fc0-catalog-content\") pod \"redhat-operators-ncbt6\" (UID: \"532b2978-ad49-4ede-b633-309c5ddf6fc0\") " pod="openshift-marketplace/redhat-operators-ncbt6" Feb 17 21:31:06 crc kubenswrapper[4793]: I0217 21:31:06.898553 4793 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-mf2hx\" (UniqueName: \"kubernetes.io/projected/532b2978-ad49-4ede-b633-309c5ddf6fc0-kube-api-access-mf2hx\") pod \"redhat-operators-ncbt6\" (UID: \"532b2978-ad49-4ede-b633-309c5ddf6fc0\") " pod="openshift-marketplace/redhat-operators-ncbt6" Feb 17 21:31:06 crc kubenswrapper[4793]: I0217 21:31:06.898673 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/532b2978-ad49-4ede-b633-309c5ddf6fc0-utilities\") pod \"redhat-operators-ncbt6\" (UID: \"532b2978-ad49-4ede-b633-309c5ddf6fc0\") " pod="openshift-marketplace/redhat-operators-ncbt6" Feb 17 21:31:06 crc kubenswrapper[4793]: I0217 21:31:06.899162 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/532b2978-ad49-4ede-b633-309c5ddf6fc0-utilities\") pod \"redhat-operators-ncbt6\" (UID: \"532b2978-ad49-4ede-b633-309c5ddf6fc0\") " pod="openshift-marketplace/redhat-operators-ncbt6" Feb 17 21:31:06 crc kubenswrapper[4793]: I0217 21:31:06.899375 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/532b2978-ad49-4ede-b633-309c5ddf6fc0-catalog-content\") pod \"redhat-operators-ncbt6\" (UID: \"532b2978-ad49-4ede-b633-309c5ddf6fc0\") " pod="openshift-marketplace/redhat-operators-ncbt6" Feb 17 21:31:06 crc kubenswrapper[4793]: I0217 21:31:06.928468 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mf2hx\" (UniqueName: \"kubernetes.io/projected/532b2978-ad49-4ede-b633-309c5ddf6fc0-kube-api-access-mf2hx\") pod \"redhat-operators-ncbt6\" (UID: \"532b2978-ad49-4ede-b633-309c5ddf6fc0\") " pod="openshift-marketplace/redhat-operators-ncbt6" Feb 17 21:31:07 crc kubenswrapper[4793]: I0217 21:31:07.104469 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ncbt6" Feb 17 21:31:07 crc kubenswrapper[4793]: I0217 21:31:07.597575 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ncbt6"] Feb 17 21:31:08 crc kubenswrapper[4793]: I0217 21:31:08.505245 4793 generic.go:334] "Generic (PLEG): container finished" podID="532b2978-ad49-4ede-b633-309c5ddf6fc0" containerID="733f286b1da862b55ee807775c5a586aa7c23c07ac7363d8f0fc14edc0c2be4a" exitCode=0 Feb 17 21:31:08 crc kubenswrapper[4793]: I0217 21:31:08.505358 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ncbt6" event={"ID":"532b2978-ad49-4ede-b633-309c5ddf6fc0","Type":"ContainerDied","Data":"733f286b1da862b55ee807775c5a586aa7c23c07ac7363d8f0fc14edc0c2be4a"} Feb 17 21:31:08 crc kubenswrapper[4793]: I0217 21:31:08.505647 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ncbt6" event={"ID":"532b2978-ad49-4ede-b633-309c5ddf6fc0","Type":"ContainerStarted","Data":"d16ff7198d6b71a0b8ee4bdbcfcfe0665586bd3c11b7bb10b2f3146b431461ce"} Feb 17 21:31:08 crc kubenswrapper[4793]: I0217 21:31:08.509264 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 21:31:10 crc kubenswrapper[4793]: I0217 21:31:10.531015 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ncbt6" event={"ID":"532b2978-ad49-4ede-b633-309c5ddf6fc0","Type":"ContainerStarted","Data":"827829ffd7869fc5a4ccb00e74fd10ae75c6238c000e6584a6dd895d58306475"} Feb 17 21:31:13 crc kubenswrapper[4793]: I0217 21:31:13.569477 4793 generic.go:334] "Generic (PLEG): container finished" podID="532b2978-ad49-4ede-b633-309c5ddf6fc0" containerID="827829ffd7869fc5a4ccb00e74fd10ae75c6238c000e6584a6dd895d58306475" exitCode=0 Feb 17 21:31:13 crc kubenswrapper[4793]: I0217 21:31:13.570242 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-ncbt6" event={"ID":"532b2978-ad49-4ede-b633-309c5ddf6fc0","Type":"ContainerDied","Data":"827829ffd7869fc5a4ccb00e74fd10ae75c6238c000e6584a6dd895d58306475"} Feb 17 21:31:14 crc kubenswrapper[4793]: I0217 21:31:14.585395 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ncbt6" event={"ID":"532b2978-ad49-4ede-b633-309c5ddf6fc0","Type":"ContainerStarted","Data":"0f9bb3e11725437e8aef394ec60f05e279653b1094fd88949656d222684a22b6"} Feb 17 21:31:14 crc kubenswrapper[4793]: I0217 21:31:14.611088 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ncbt6" podStartSLOduration=3.143721034 podStartE2EDuration="8.611073278s" podCreationTimestamp="2026-02-17 21:31:06 +0000 UTC" firstStartedPulling="2026-02-17 21:31:08.508873771 +0000 UTC m=+4943.800572122" lastFinishedPulling="2026-02-17 21:31:13.976226015 +0000 UTC m=+4949.267924366" observedRunningTime="2026-02-17 21:31:14.605174533 +0000 UTC m=+4949.896872834" watchObservedRunningTime="2026-02-17 21:31:14.611073278 +0000 UTC m=+4949.902771589" Feb 17 21:31:17 crc kubenswrapper[4793]: I0217 21:31:17.105102 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ncbt6" Feb 17 21:31:17 crc kubenswrapper[4793]: I0217 21:31:17.105455 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ncbt6" Feb 17 21:31:17 crc kubenswrapper[4793]: I0217 21:31:17.539987 4793 scope.go:117] "RemoveContainer" containerID="f5e452fb56f22266ed0f1888f1a309f231131b9275d7b84f2168957c12cd7903" Feb 17 21:31:17 crc kubenswrapper[4793]: E0217 21:31:17.540714 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier 
pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:31:18 crc kubenswrapper[4793]: I0217 21:31:18.160834 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ncbt6" podUID="532b2978-ad49-4ede-b633-309c5ddf6fc0" containerName="registry-server" probeResult="failure" output=< Feb 17 21:31:18 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Feb 17 21:31:18 crc kubenswrapper[4793]: > Feb 17 21:31:18 crc kubenswrapper[4793]: I0217 21:31:18.540086 4793 scope.go:117] "RemoveContainer" containerID="d4a52e1bde66a4cc3d896b629de1a100b583d1e1171b97b092613253d1f07611" Feb 17 21:31:18 crc kubenswrapper[4793]: E0217 21:31:18.540628 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:31:27 crc kubenswrapper[4793]: I0217 21:31:27.178707 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ncbt6" Feb 17 21:31:27 crc kubenswrapper[4793]: I0217 21:31:27.242491 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ncbt6" Feb 17 21:31:27 crc kubenswrapper[4793]: I0217 21:31:27.430209 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ncbt6"] Feb 17 21:31:28 crc kubenswrapper[4793]: I0217 21:31:28.791861 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ncbt6" 
podUID="532b2978-ad49-4ede-b633-309c5ddf6fc0" containerName="registry-server" containerID="cri-o://0f9bb3e11725437e8aef394ec60f05e279653b1094fd88949656d222684a22b6" gracePeriod=2 Feb 17 21:31:29 crc kubenswrapper[4793]: I0217 21:31:29.419917 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ncbt6" Feb 17 21:31:29 crc kubenswrapper[4793]: I0217 21:31:29.479495 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mf2hx\" (UniqueName: \"kubernetes.io/projected/532b2978-ad49-4ede-b633-309c5ddf6fc0-kube-api-access-mf2hx\") pod \"532b2978-ad49-4ede-b633-309c5ddf6fc0\" (UID: \"532b2978-ad49-4ede-b633-309c5ddf6fc0\") " Feb 17 21:31:29 crc kubenswrapper[4793]: I0217 21:31:29.480120 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/532b2978-ad49-4ede-b633-309c5ddf6fc0-utilities\") pod \"532b2978-ad49-4ede-b633-309c5ddf6fc0\" (UID: \"532b2978-ad49-4ede-b633-309c5ddf6fc0\") " Feb 17 21:31:29 crc kubenswrapper[4793]: I0217 21:31:29.480318 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/532b2978-ad49-4ede-b633-309c5ddf6fc0-catalog-content\") pod \"532b2978-ad49-4ede-b633-309c5ddf6fc0\" (UID: \"532b2978-ad49-4ede-b633-309c5ddf6fc0\") " Feb 17 21:31:29 crc kubenswrapper[4793]: I0217 21:31:29.483059 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/532b2978-ad49-4ede-b633-309c5ddf6fc0-utilities" (OuterVolumeSpecName: "utilities") pod "532b2978-ad49-4ede-b633-309c5ddf6fc0" (UID: "532b2978-ad49-4ede-b633-309c5ddf6fc0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 21:31:29 crc kubenswrapper[4793]: I0217 21:31:29.507959 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/532b2978-ad49-4ede-b633-309c5ddf6fc0-kube-api-access-mf2hx" (OuterVolumeSpecName: "kube-api-access-mf2hx") pod "532b2978-ad49-4ede-b633-309c5ddf6fc0" (UID: "532b2978-ad49-4ede-b633-309c5ddf6fc0"). InnerVolumeSpecName "kube-api-access-mf2hx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 21:31:29 crc kubenswrapper[4793]: I0217 21:31:29.585367 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mf2hx\" (UniqueName: \"kubernetes.io/projected/532b2978-ad49-4ede-b633-309c5ddf6fc0-kube-api-access-mf2hx\") on node \"crc\" DevicePath \"\"" Feb 17 21:31:29 crc kubenswrapper[4793]: I0217 21:31:29.585713 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/532b2978-ad49-4ede-b633-309c5ddf6fc0-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 21:31:29 crc kubenswrapper[4793]: I0217 21:31:29.620424 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/532b2978-ad49-4ede-b633-309c5ddf6fc0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "532b2978-ad49-4ede-b633-309c5ddf6fc0" (UID: "532b2978-ad49-4ede-b633-309c5ddf6fc0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 21:31:29 crc kubenswrapper[4793]: I0217 21:31:29.689426 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/532b2978-ad49-4ede-b633-309c5ddf6fc0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 21:31:29 crc kubenswrapper[4793]: I0217 21:31:29.805580 4793 generic.go:334] "Generic (PLEG): container finished" podID="532b2978-ad49-4ede-b633-309c5ddf6fc0" containerID="0f9bb3e11725437e8aef394ec60f05e279653b1094fd88949656d222684a22b6" exitCode=0 Feb 17 21:31:29 crc kubenswrapper[4793]: I0217 21:31:29.805701 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ncbt6" event={"ID":"532b2978-ad49-4ede-b633-309c5ddf6fc0","Type":"ContainerDied","Data":"0f9bb3e11725437e8aef394ec60f05e279653b1094fd88949656d222684a22b6"} Feb 17 21:31:29 crc kubenswrapper[4793]: I0217 21:31:29.806834 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ncbt6" event={"ID":"532b2978-ad49-4ede-b633-309c5ddf6fc0","Type":"ContainerDied","Data":"d16ff7198d6b71a0b8ee4bdbcfcfe0665586bd3c11b7bb10b2f3146b431461ce"} Feb 17 21:31:29 crc kubenswrapper[4793]: I0217 21:31:29.806868 4793 scope.go:117] "RemoveContainer" containerID="0f9bb3e11725437e8aef394ec60f05e279653b1094fd88949656d222684a22b6" Feb 17 21:31:29 crc kubenswrapper[4793]: I0217 21:31:29.805743 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ncbt6" Feb 17 21:31:29 crc kubenswrapper[4793]: I0217 21:31:29.850719 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ncbt6"] Feb 17 21:31:29 crc kubenswrapper[4793]: I0217 21:31:29.851342 4793 scope.go:117] "RemoveContainer" containerID="827829ffd7869fc5a4ccb00e74fd10ae75c6238c000e6584a6dd895d58306475" Feb 17 21:31:29 crc kubenswrapper[4793]: I0217 21:31:29.859699 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ncbt6"] Feb 17 21:31:29 crc kubenswrapper[4793]: I0217 21:31:29.872508 4793 scope.go:117] "RemoveContainer" containerID="733f286b1da862b55ee807775c5a586aa7c23c07ac7363d8f0fc14edc0c2be4a" Feb 17 21:31:29 crc kubenswrapper[4793]: I0217 21:31:29.955940 4793 scope.go:117] "RemoveContainer" containerID="0f9bb3e11725437e8aef394ec60f05e279653b1094fd88949656d222684a22b6" Feb 17 21:31:29 crc kubenswrapper[4793]: E0217 21:31:29.956315 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f9bb3e11725437e8aef394ec60f05e279653b1094fd88949656d222684a22b6\": container with ID starting with 0f9bb3e11725437e8aef394ec60f05e279653b1094fd88949656d222684a22b6 not found: ID does not exist" containerID="0f9bb3e11725437e8aef394ec60f05e279653b1094fd88949656d222684a22b6" Feb 17 21:31:29 crc kubenswrapper[4793]: I0217 21:31:29.956355 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f9bb3e11725437e8aef394ec60f05e279653b1094fd88949656d222684a22b6"} err="failed to get container status \"0f9bb3e11725437e8aef394ec60f05e279653b1094fd88949656d222684a22b6\": rpc error: code = NotFound desc = could not find container \"0f9bb3e11725437e8aef394ec60f05e279653b1094fd88949656d222684a22b6\": container with ID starting with 0f9bb3e11725437e8aef394ec60f05e279653b1094fd88949656d222684a22b6 not found: ID does 
not exist" Feb 17 21:31:29 crc kubenswrapper[4793]: I0217 21:31:29.956384 4793 scope.go:117] "RemoveContainer" containerID="827829ffd7869fc5a4ccb00e74fd10ae75c6238c000e6584a6dd895d58306475" Feb 17 21:31:29 crc kubenswrapper[4793]: E0217 21:31:29.956639 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"827829ffd7869fc5a4ccb00e74fd10ae75c6238c000e6584a6dd895d58306475\": container with ID starting with 827829ffd7869fc5a4ccb00e74fd10ae75c6238c000e6584a6dd895d58306475 not found: ID does not exist" containerID="827829ffd7869fc5a4ccb00e74fd10ae75c6238c000e6584a6dd895d58306475" Feb 17 21:31:29 crc kubenswrapper[4793]: I0217 21:31:29.956666 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"827829ffd7869fc5a4ccb00e74fd10ae75c6238c000e6584a6dd895d58306475"} err="failed to get container status \"827829ffd7869fc5a4ccb00e74fd10ae75c6238c000e6584a6dd895d58306475\": rpc error: code = NotFound desc = could not find container \"827829ffd7869fc5a4ccb00e74fd10ae75c6238c000e6584a6dd895d58306475\": container with ID starting with 827829ffd7869fc5a4ccb00e74fd10ae75c6238c000e6584a6dd895d58306475 not found: ID does not exist" Feb 17 21:31:29 crc kubenswrapper[4793]: I0217 21:31:29.956701 4793 scope.go:117] "RemoveContainer" containerID="733f286b1da862b55ee807775c5a586aa7c23c07ac7363d8f0fc14edc0c2be4a" Feb 17 21:31:29 crc kubenswrapper[4793]: E0217 21:31:29.956884 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"733f286b1da862b55ee807775c5a586aa7c23c07ac7363d8f0fc14edc0c2be4a\": container with ID starting with 733f286b1da862b55ee807775c5a586aa7c23c07ac7363d8f0fc14edc0c2be4a not found: ID does not exist" containerID="733f286b1da862b55ee807775c5a586aa7c23c07ac7363d8f0fc14edc0c2be4a" Feb 17 21:31:29 crc kubenswrapper[4793]: I0217 21:31:29.956903 4793 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"733f286b1da862b55ee807775c5a586aa7c23c07ac7363d8f0fc14edc0c2be4a"} err="failed to get container status \"733f286b1da862b55ee807775c5a586aa7c23c07ac7363d8f0fc14edc0c2be4a\": rpc error: code = NotFound desc = could not find container \"733f286b1da862b55ee807775c5a586aa7c23c07ac7363d8f0fc14edc0c2be4a\": container with ID starting with 733f286b1da862b55ee807775c5a586aa7c23c07ac7363d8f0fc14edc0c2be4a not found: ID does not exist" Feb 17 21:31:31 crc kubenswrapper[4793]: I0217 21:31:31.539547 4793 scope.go:117] "RemoveContainer" containerID="d4a52e1bde66a4cc3d896b629de1a100b583d1e1171b97b092613253d1f07611" Feb 17 21:31:31 crc kubenswrapper[4793]: E0217 21:31:31.540443 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:31:31 crc kubenswrapper[4793]: I0217 21:31:31.556917 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="532b2978-ad49-4ede-b633-309c5ddf6fc0" path="/var/lib/kubelet/pods/532b2978-ad49-4ede-b633-309c5ddf6fc0/volumes" Feb 17 21:31:32 crc kubenswrapper[4793]: I0217 21:31:32.539286 4793 scope.go:117] "RemoveContainer" containerID="f5e452fb56f22266ed0f1888f1a309f231131b9275d7b84f2168957c12cd7903" Feb 17 21:31:32 crc kubenswrapper[4793]: E0217 21:31:32.540126 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" 
podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:31:46 crc kubenswrapper[4793]: I0217 21:31:46.539784 4793 scope.go:117] "RemoveContainer" containerID="d4a52e1bde66a4cc3d896b629de1a100b583d1e1171b97b092613253d1f07611" Feb 17 21:31:46 crc kubenswrapper[4793]: I0217 21:31:46.540437 4793 scope.go:117] "RemoveContainer" containerID="f5e452fb56f22266ed0f1888f1a309f231131b9275d7b84f2168957c12cd7903" Feb 17 21:31:46 crc kubenswrapper[4793]: E0217 21:31:46.540852 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:31:46 crc kubenswrapper[4793]: E0217 21:31:46.540987 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:31:57 crc kubenswrapper[4793]: I0217 21:31:57.538526 4793 scope.go:117] "RemoveContainer" containerID="f5e452fb56f22266ed0f1888f1a309f231131b9275d7b84f2168957c12cd7903" Feb 17 21:31:57 crc kubenswrapper[4793]: E0217 21:31:57.539465 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:32:01 crc kubenswrapper[4793]: I0217 21:32:01.539767 4793 scope.go:117] 
"RemoveContainer" containerID="d4a52e1bde66a4cc3d896b629de1a100b583d1e1171b97b092613253d1f07611" Feb 17 21:32:01 crc kubenswrapper[4793]: E0217 21:32:01.541106 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:32:10 crc kubenswrapper[4793]: I0217 21:32:10.539989 4793 scope.go:117] "RemoveContainer" containerID="f5e452fb56f22266ed0f1888f1a309f231131b9275d7b84f2168957c12cd7903" Feb 17 21:32:10 crc kubenswrapper[4793]: E0217 21:32:10.540876 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:32:13 crc kubenswrapper[4793]: I0217 21:32:13.540209 4793 scope.go:117] "RemoveContainer" containerID="d4a52e1bde66a4cc3d896b629de1a100b583d1e1171b97b092613253d1f07611" Feb 17 21:32:13 crc kubenswrapper[4793]: E0217 21:32:13.541332 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:32:25 crc kubenswrapper[4793]: I0217 21:32:25.545137 4793 scope.go:117] "RemoveContainer" 
containerID="f5e452fb56f22266ed0f1888f1a309f231131b9275d7b84f2168957c12cd7903" Feb 17 21:32:25 crc kubenswrapper[4793]: E0217 21:32:25.546596 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:32:26 crc kubenswrapper[4793]: I0217 21:32:26.539278 4793 scope.go:117] "RemoveContainer" containerID="d4a52e1bde66a4cc3d896b629de1a100b583d1e1171b97b092613253d1f07611" Feb 17 21:32:26 crc kubenswrapper[4793]: E0217 21:32:26.539555 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:32:36 crc kubenswrapper[4793]: I0217 21:32:36.539046 4793 scope.go:117] "RemoveContainer" containerID="f5e452fb56f22266ed0f1888f1a309f231131b9275d7b84f2168957c12cd7903" Feb 17 21:32:36 crc kubenswrapper[4793]: E0217 21:32:36.539820 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:32:39 crc kubenswrapper[4793]: I0217 21:32:39.539998 4793 scope.go:117] "RemoveContainer" containerID="d4a52e1bde66a4cc3d896b629de1a100b583d1e1171b97b092613253d1f07611" Feb 17 21:32:39 crc kubenswrapper[4793]: E0217 21:32:39.541150 
4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:32:47 crc kubenswrapper[4793]: I0217 21:32:47.539970 4793 scope.go:117] "RemoveContainer" containerID="f5e452fb56f22266ed0f1888f1a309f231131b9275d7b84f2168957c12cd7903" Feb 17 21:32:47 crc kubenswrapper[4793]: E0217 21:32:47.541159 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:32:50 crc kubenswrapper[4793]: I0217 21:32:50.539306 4793 scope.go:117] "RemoveContainer" containerID="d4a52e1bde66a4cc3d896b629de1a100b583d1e1171b97b092613253d1f07611" Feb 17 21:32:50 crc kubenswrapper[4793]: E0217 21:32:50.539929 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:33:01 crc kubenswrapper[4793]: I0217 21:33:01.540201 4793 scope.go:117] "RemoveContainer" containerID="d4a52e1bde66a4cc3d896b629de1a100b583d1e1171b97b092613253d1f07611" Feb 17 21:33:01 crc kubenswrapper[4793]: E0217 21:33:01.541682 4793 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:33:02 crc kubenswrapper[4793]: I0217 21:33:02.539591 4793 scope.go:117] "RemoveContainer" containerID="f5e452fb56f22266ed0f1888f1a309f231131b9275d7b84f2168957c12cd7903" Feb 17 21:33:02 crc kubenswrapper[4793]: E0217 21:33:02.540112 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:33:12 crc kubenswrapper[4793]: I0217 21:33:12.539418 4793 scope.go:117] "RemoveContainer" containerID="d4a52e1bde66a4cc3d896b629de1a100b583d1e1171b97b092613253d1f07611" Feb 17 21:33:12 crc kubenswrapper[4793]: E0217 21:33:12.540597 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:33:13 crc kubenswrapper[4793]: I0217 21:33:13.539494 4793 scope.go:117] "RemoveContainer" containerID="f5e452fb56f22266ed0f1888f1a309f231131b9275d7b84f2168957c12cd7903" Feb 17 21:33:13 crc kubenswrapper[4793]: E0217 21:33:13.540147 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:33:23 crc kubenswrapper[4793]: I0217 21:33:23.539535 4793 scope.go:117] "RemoveContainer" containerID="d4a52e1bde66a4cc3d896b629de1a100b583d1e1171b97b092613253d1f07611" Feb 17 21:33:23 crc kubenswrapper[4793]: E0217 21:33:23.540362 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:33:28 crc kubenswrapper[4793]: I0217 21:33:28.539131 4793 scope.go:117] "RemoveContainer" containerID="f5e452fb56f22266ed0f1888f1a309f231131b9275d7b84f2168957c12cd7903" Feb 17 21:33:28 crc kubenswrapper[4793]: E0217 21:33:28.540206 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:33:37 crc kubenswrapper[4793]: I0217 21:33:37.539829 4793 scope.go:117] "RemoveContainer" containerID="d4a52e1bde66a4cc3d896b629de1a100b583d1e1171b97b092613253d1f07611" Feb 17 21:33:37 crc kubenswrapper[4793]: E0217 21:33:37.541117 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:33:43 crc kubenswrapper[4793]: I0217 21:33:43.538563 4793 scope.go:117] "RemoveContainer" containerID="f5e452fb56f22266ed0f1888f1a309f231131b9275d7b84f2168957c12cd7903" Feb 17 21:33:43 crc kubenswrapper[4793]: E0217 21:33:43.539366 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:33:52 crc kubenswrapper[4793]: I0217 21:33:52.749177 4793 scope.go:117] "RemoveContainer" containerID="d4a52e1bde66a4cc3d896b629de1a100b583d1e1171b97b092613253d1f07611" Feb 17 21:33:52 crc kubenswrapper[4793]: E0217 21:33:52.750089 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:33:55 crc kubenswrapper[4793]: I0217 21:33:55.544416 4793 scope.go:117] "RemoveContainer" containerID="f5e452fb56f22266ed0f1888f1a309f231131b9275d7b84f2168957c12cd7903" Feb 17 21:33:55 crc kubenswrapper[4793]: E0217 21:33:55.545294 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" 
pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:34:07 crc kubenswrapper[4793]: I0217 21:34:07.546473 4793 scope.go:117] "RemoveContainer" containerID="f5e452fb56f22266ed0f1888f1a309f231131b9275d7b84f2168957c12cd7903" Feb 17 21:34:07 crc kubenswrapper[4793]: I0217 21:34:07.547238 4793 scope.go:117] "RemoveContainer" containerID="d4a52e1bde66a4cc3d896b629de1a100b583d1e1171b97b092613253d1f07611" Feb 17 21:34:07 crc kubenswrapper[4793]: E0217 21:34:07.547479 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:34:07 crc kubenswrapper[4793]: E0217 21:34:07.547585 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:34:18 crc kubenswrapper[4793]: I0217 21:34:18.539644 4793 scope.go:117] "RemoveContainer" containerID="d4a52e1bde66a4cc3d896b629de1a100b583d1e1171b97b092613253d1f07611" Feb 17 21:34:18 crc kubenswrapper[4793]: E0217 21:34:18.540507 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" 
podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:34:19 crc kubenswrapper[4793]: I0217 21:34:19.539179 4793 scope.go:117] "RemoveContainer" containerID="f5e452fb56f22266ed0f1888f1a309f231131b9275d7b84f2168957c12cd7903" Feb 17 21:34:19 crc kubenswrapper[4793]: E0217 21:34:19.539736 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:34:30 crc kubenswrapper[4793]: I0217 21:34:30.539236 4793 scope.go:117] "RemoveContainer" containerID="d4a52e1bde66a4cc3d896b629de1a100b583d1e1171b97b092613253d1f07611" Feb 17 21:34:30 crc kubenswrapper[4793]: E0217 21:34:30.540107 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:34:33 crc kubenswrapper[4793]: I0217 21:34:33.540807 4793 scope.go:117] "RemoveContainer" containerID="f5e452fb56f22266ed0f1888f1a309f231131b9275d7b84f2168957c12cd7903" Feb 17 21:34:33 crc kubenswrapper[4793]: E0217 21:34:33.541394 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:34:41 crc kubenswrapper[4793]: I0217 21:34:41.539673 4793 scope.go:117] 
"RemoveContainer" containerID="d4a52e1bde66a4cc3d896b629de1a100b583d1e1171b97b092613253d1f07611" Feb 17 21:34:41 crc kubenswrapper[4793]: E0217 21:34:41.541206 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:34:47 crc kubenswrapper[4793]: I0217 21:34:47.539064 4793 scope.go:117] "RemoveContainer" containerID="f5e452fb56f22266ed0f1888f1a309f231131b9275d7b84f2168957c12cd7903" Feb 17 21:34:47 crc kubenswrapper[4793]: E0217 21:34:47.540193 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:34:54 crc kubenswrapper[4793]: I0217 21:34:54.539326 4793 scope.go:117] "RemoveContainer" containerID="d4a52e1bde66a4cc3d896b629de1a100b583d1e1171b97b092613253d1f07611" Feb 17 21:34:54 crc kubenswrapper[4793]: E0217 21:34:54.540305 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:35:01 crc kubenswrapper[4793]: I0217 21:35:01.540086 4793 scope.go:117] "RemoveContainer" 
containerID="f5e452fb56f22266ed0f1888f1a309f231131b9275d7b84f2168957c12cd7903" Feb 17 21:35:01 crc kubenswrapper[4793]: E0217 21:35:01.550629 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:35:07 crc kubenswrapper[4793]: I0217 21:35:07.538805 4793 scope.go:117] "RemoveContainer" containerID="d4a52e1bde66a4cc3d896b629de1a100b583d1e1171b97b092613253d1f07611" Feb 17 21:35:07 crc kubenswrapper[4793]: E0217 21:35:07.539867 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:35:13 crc kubenswrapper[4793]: I0217 21:35:13.539678 4793 scope.go:117] "RemoveContainer" containerID="f5e452fb56f22266ed0f1888f1a309f231131b9275d7b84f2168957c12cd7903" Feb 17 21:35:13 crc kubenswrapper[4793]: E0217 21:35:13.541086 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:35:21 crc kubenswrapper[4793]: I0217 21:35:21.539516 4793 scope.go:117] "RemoveContainer" containerID="d4a52e1bde66a4cc3d896b629de1a100b583d1e1171b97b092613253d1f07611" Feb 17 21:35:21 crc kubenswrapper[4793]: E0217 21:35:21.540180 
4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:35:28 crc kubenswrapper[4793]: I0217 21:35:28.538779 4793 scope.go:117] "RemoveContainer" containerID="f5e452fb56f22266ed0f1888f1a309f231131b9275d7b84f2168957c12cd7903" Feb 17 21:35:28 crc kubenswrapper[4793]: E0217 21:35:28.540039 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:35:34 crc kubenswrapper[4793]: I0217 21:35:34.539957 4793 scope.go:117] "RemoveContainer" containerID="d4a52e1bde66a4cc3d896b629de1a100b583d1e1171b97b092613253d1f07611" Feb 17 21:35:34 crc kubenswrapper[4793]: E0217 21:35:34.541096 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:35:41 crc kubenswrapper[4793]: I0217 21:35:41.539216 4793 scope.go:117] "RemoveContainer" containerID="f5e452fb56f22266ed0f1888f1a309f231131b9275d7b84f2168957c12cd7903" Feb 17 21:35:41 crc kubenswrapper[4793]: E0217 21:35:41.540482 4793 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:35:48 crc kubenswrapper[4793]: I0217 21:35:48.539063 4793 scope.go:117] "RemoveContainer" containerID="d4a52e1bde66a4cc3d896b629de1a100b583d1e1171b97b092613253d1f07611" Feb 17 21:35:48 crc kubenswrapper[4793]: E0217 21:35:48.540182 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:35:55 crc kubenswrapper[4793]: I0217 21:35:55.546608 4793 scope.go:117] "RemoveContainer" containerID="f5e452fb56f22266ed0f1888f1a309f231131b9275d7b84f2168957c12cd7903" Feb 17 21:35:56 crc kubenswrapper[4793]: I0217 21:35:56.721230 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerStarted","Data":"3ea9526bc6f1ef719b601c1ff376096b39da8de1ff53cbbd4b8ae580a56d054e"} Feb 17 21:35:58 crc kubenswrapper[4793]: I0217 21:35:58.743794 4793 generic.go:334] "Generic (PLEG): container finished" podID="02d26164-0fa4-4020-9224-b7760a490987" containerID="3ea9526bc6f1ef719b601c1ff376096b39da8de1ff53cbbd4b8ae580a56d054e" exitCode=1 Feb 17 21:35:58 crc kubenswrapper[4793]: I0217 21:35:58.743875 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" 
event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerDied","Data":"3ea9526bc6f1ef719b601c1ff376096b39da8de1ff53cbbd4b8ae580a56d054e"} Feb 17 21:35:58 crc kubenswrapper[4793]: I0217 21:35:58.744336 4793 scope.go:117] "RemoveContainer" containerID="f5e452fb56f22266ed0f1888f1a309f231131b9275d7b84f2168957c12cd7903" Feb 17 21:35:58 crc kubenswrapper[4793]: I0217 21:35:58.745875 4793 scope.go:117] "RemoveContainer" containerID="3ea9526bc6f1ef719b601c1ff376096b39da8de1ff53cbbd4b8ae580a56d054e" Feb 17 21:35:58 crc kubenswrapper[4793]: E0217 21:35:58.746757 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:36:00 crc kubenswrapper[4793]: I0217 21:36:00.596520 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 17 21:36:00 crc kubenswrapper[4793]: I0217 21:36:00.598313 4793 scope.go:117] "RemoveContainer" containerID="3ea9526bc6f1ef719b601c1ff376096b39da8de1ff53cbbd4b8ae580a56d054e" Feb 17 21:36:00 crc kubenswrapper[4793]: E0217 21:36:00.599008 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:36:00 crc kubenswrapper[4793]: I0217 21:36:00.855642 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-k25dp"] Feb 17 21:36:00 crc kubenswrapper[4793]: E0217 21:36:00.856370 4793 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="532b2978-ad49-4ede-b633-309c5ddf6fc0" containerName="extract-content" Feb 17 21:36:00 crc kubenswrapper[4793]: I0217 21:36:00.856399 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="532b2978-ad49-4ede-b633-309c5ddf6fc0" containerName="extract-content" Feb 17 21:36:00 crc kubenswrapper[4793]: E0217 21:36:00.856448 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="532b2978-ad49-4ede-b633-309c5ddf6fc0" containerName="extract-utilities" Feb 17 21:36:00 crc kubenswrapper[4793]: I0217 21:36:00.856463 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="532b2978-ad49-4ede-b633-309c5ddf6fc0" containerName="extract-utilities" Feb 17 21:36:00 crc kubenswrapper[4793]: E0217 21:36:00.856497 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="532b2978-ad49-4ede-b633-309c5ddf6fc0" containerName="registry-server" Feb 17 21:36:00 crc kubenswrapper[4793]: I0217 21:36:00.856509 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="532b2978-ad49-4ede-b633-309c5ddf6fc0" containerName="registry-server" Feb 17 21:36:00 crc kubenswrapper[4793]: I0217 21:36:00.856906 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="532b2978-ad49-4ede-b633-309c5ddf6fc0" containerName="registry-server" Feb 17 21:36:00 crc kubenswrapper[4793]: I0217 21:36:00.859546 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k25dp" Feb 17 21:36:00 crc kubenswrapper[4793]: I0217 21:36:00.871925 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k25dp"] Feb 17 21:36:00 crc kubenswrapper[4793]: I0217 21:36:00.941219 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsjrz\" (UniqueName: \"kubernetes.io/projected/f287bc2f-c4ae-4063-adc7-35eec120344d-kube-api-access-xsjrz\") pod \"redhat-marketplace-k25dp\" (UID: \"f287bc2f-c4ae-4063-adc7-35eec120344d\") " pod="openshift-marketplace/redhat-marketplace-k25dp" Feb 17 21:36:00 crc kubenswrapper[4793]: I0217 21:36:00.941538 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f287bc2f-c4ae-4063-adc7-35eec120344d-utilities\") pod \"redhat-marketplace-k25dp\" (UID: \"f287bc2f-c4ae-4063-adc7-35eec120344d\") " pod="openshift-marketplace/redhat-marketplace-k25dp" Feb 17 21:36:00 crc kubenswrapper[4793]: I0217 21:36:00.941703 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f287bc2f-c4ae-4063-adc7-35eec120344d-catalog-content\") pod \"redhat-marketplace-k25dp\" (UID: \"f287bc2f-c4ae-4063-adc7-35eec120344d\") " pod="openshift-marketplace/redhat-marketplace-k25dp" Feb 17 21:36:01 crc kubenswrapper[4793]: I0217 21:36:01.044361 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsjrz\" (UniqueName: \"kubernetes.io/projected/f287bc2f-c4ae-4063-adc7-35eec120344d-kube-api-access-xsjrz\") pod \"redhat-marketplace-k25dp\" (UID: \"f287bc2f-c4ae-4063-adc7-35eec120344d\") " pod="openshift-marketplace/redhat-marketplace-k25dp" Feb 17 21:36:01 crc kubenswrapper[4793]: I0217 21:36:01.044436 4793 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f287bc2f-c4ae-4063-adc7-35eec120344d-utilities\") pod \"redhat-marketplace-k25dp\" (UID: \"f287bc2f-c4ae-4063-adc7-35eec120344d\") " pod="openshift-marketplace/redhat-marketplace-k25dp" Feb 17 21:36:01 crc kubenswrapper[4793]: I0217 21:36:01.044525 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f287bc2f-c4ae-4063-adc7-35eec120344d-catalog-content\") pod \"redhat-marketplace-k25dp\" (UID: \"f287bc2f-c4ae-4063-adc7-35eec120344d\") " pod="openshift-marketplace/redhat-marketplace-k25dp" Feb 17 21:36:01 crc kubenswrapper[4793]: I0217 21:36:01.045131 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f287bc2f-c4ae-4063-adc7-35eec120344d-catalog-content\") pod \"redhat-marketplace-k25dp\" (UID: \"f287bc2f-c4ae-4063-adc7-35eec120344d\") " pod="openshift-marketplace/redhat-marketplace-k25dp" Feb 17 21:36:01 crc kubenswrapper[4793]: I0217 21:36:01.045465 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f287bc2f-c4ae-4063-adc7-35eec120344d-utilities\") pod \"redhat-marketplace-k25dp\" (UID: \"f287bc2f-c4ae-4063-adc7-35eec120344d\") " pod="openshift-marketplace/redhat-marketplace-k25dp" Feb 17 21:36:01 crc kubenswrapper[4793]: I0217 21:36:01.571201 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsjrz\" (UniqueName: \"kubernetes.io/projected/f287bc2f-c4ae-4063-adc7-35eec120344d-kube-api-access-xsjrz\") pod \"redhat-marketplace-k25dp\" (UID: \"f287bc2f-c4ae-4063-adc7-35eec120344d\") " pod="openshift-marketplace/redhat-marketplace-k25dp" Feb 17 21:36:01 crc kubenswrapper[4793]: I0217 21:36:01.786623 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k25dp" Feb 17 21:36:02 crc kubenswrapper[4793]: I0217 21:36:02.284782 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k25dp"] Feb 17 21:36:02 crc kubenswrapper[4793]: I0217 21:36:02.538884 4793 scope.go:117] "RemoveContainer" containerID="d4a52e1bde66a4cc3d896b629de1a100b583d1e1171b97b092613253d1f07611" Feb 17 21:36:02 crc kubenswrapper[4793]: I0217 21:36:02.798133 4793 generic.go:334] "Generic (PLEG): container finished" podID="f287bc2f-c4ae-4063-adc7-35eec120344d" containerID="48de68a57c496749af2e7fb37b70cecba3a1bc0b264c1b0c58f9bf417f8be4fc" exitCode=0 Feb 17 21:36:02 crc kubenswrapper[4793]: I0217 21:36:02.799797 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k25dp" event={"ID":"f287bc2f-c4ae-4063-adc7-35eec120344d","Type":"ContainerDied","Data":"48de68a57c496749af2e7fb37b70cecba3a1bc0b264c1b0c58f9bf417f8be4fc"} Feb 17 21:36:02 crc kubenswrapper[4793]: I0217 21:36:02.799834 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k25dp" event={"ID":"f287bc2f-c4ae-4063-adc7-35eec120344d","Type":"ContainerStarted","Data":"79741b46d02236f27e0fe4663e23adc2765d36ae9dd85631e9299c03d65a9e16"} Feb 17 21:36:03 crc kubenswrapper[4793]: I0217 21:36:03.814476 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerStarted","Data":"3e8a8dab73fd74db859b33d53c345bd888043d3a6b4c0f585517de04e6dde304"} Feb 17 21:36:04 crc kubenswrapper[4793]: I0217 21:36:04.832332 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k25dp" event={"ID":"f287bc2f-c4ae-4063-adc7-35eec120344d","Type":"ContainerStarted","Data":"9f8c23ce733304c4e2505a7ed2975dbc84aa7f540e962e5bb906ed6ca37a6630"} Feb 17 
21:36:05 crc kubenswrapper[4793]: I0217 21:36:05.595900 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 21:36:05 crc kubenswrapper[4793]: I0217 21:36:05.596270 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 21:36:05 crc kubenswrapper[4793]: I0217 21:36:05.596281 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 21:36:05 crc kubenswrapper[4793]: I0217 21:36:05.597059 4793 scope.go:117] "RemoveContainer" containerID="3ea9526bc6f1ef719b601c1ff376096b39da8de1ff53cbbd4b8ae580a56d054e" Feb 17 21:36:05 crc kubenswrapper[4793]: E0217 21:36:05.597341 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:36:05 crc kubenswrapper[4793]: I0217 21:36:05.843124 4793 generic.go:334] "Generic (PLEG): container finished" podID="f287bc2f-c4ae-4063-adc7-35eec120344d" containerID="9f8c23ce733304c4e2505a7ed2975dbc84aa7f540e962e5bb906ed6ca37a6630" exitCode=0 Feb 17 21:36:05 crc kubenswrapper[4793]: I0217 21:36:05.843253 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k25dp" event={"ID":"f287bc2f-c4ae-4063-adc7-35eec120344d","Type":"ContainerDied","Data":"9f8c23ce733304c4e2505a7ed2975dbc84aa7f540e962e5bb906ed6ca37a6630"} Feb 17 21:36:05 crc kubenswrapper[4793]: I0217 21:36:05.843533 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k25dp" 
event={"ID":"f287bc2f-c4ae-4063-adc7-35eec120344d","Type":"ContainerStarted","Data":"16036c8dba0e209d6d7b8581aa50de086431071d84cff73c9a4e6e3efa4e28fc"} Feb 17 21:36:05 crc kubenswrapper[4793]: I0217 21:36:05.867868 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-k25dp" podStartSLOduration=3.3916941769999998 podStartE2EDuration="5.867850046s" podCreationTimestamp="2026-02-17 21:36:00 +0000 UTC" firstStartedPulling="2026-02-17 21:36:02.800580009 +0000 UTC m=+5238.092278330" lastFinishedPulling="2026-02-17 21:36:05.276735888 +0000 UTC m=+5240.568434199" observedRunningTime="2026-02-17 21:36:05.857588433 +0000 UTC m=+5241.149286754" watchObservedRunningTime="2026-02-17 21:36:05.867850046 +0000 UTC m=+5241.159548347" Feb 17 21:36:11 crc kubenswrapper[4793]: I0217 21:36:11.787450 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-k25dp" Feb 17 21:36:11 crc kubenswrapper[4793]: I0217 21:36:11.788014 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-k25dp" Feb 17 21:36:11 crc kubenswrapper[4793]: I0217 21:36:11.861336 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-k25dp" Feb 17 21:36:11 crc kubenswrapper[4793]: I0217 21:36:11.965240 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-k25dp" Feb 17 21:36:12 crc kubenswrapper[4793]: I0217 21:36:12.115907 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k25dp"] Feb 17 21:36:13 crc kubenswrapper[4793]: I0217 21:36:13.918934 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-k25dp" podUID="f287bc2f-c4ae-4063-adc7-35eec120344d" containerName="registry-server" 
containerID="cri-o://16036c8dba0e209d6d7b8581aa50de086431071d84cff73c9a4e6e3efa4e28fc" gracePeriod=2 Feb 17 21:36:14 crc kubenswrapper[4793]: I0217 21:36:14.432882 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k25dp" Feb 17 21:36:14 crc kubenswrapper[4793]: I0217 21:36:14.555523 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f287bc2f-c4ae-4063-adc7-35eec120344d-utilities\") pod \"f287bc2f-c4ae-4063-adc7-35eec120344d\" (UID: \"f287bc2f-c4ae-4063-adc7-35eec120344d\") " Feb 17 21:36:14 crc kubenswrapper[4793]: I0217 21:36:14.555602 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f287bc2f-c4ae-4063-adc7-35eec120344d-catalog-content\") pod \"f287bc2f-c4ae-4063-adc7-35eec120344d\" (UID: \"f287bc2f-c4ae-4063-adc7-35eec120344d\") " Feb 17 21:36:14 crc kubenswrapper[4793]: I0217 21:36:14.555653 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsjrz\" (UniqueName: \"kubernetes.io/projected/f287bc2f-c4ae-4063-adc7-35eec120344d-kube-api-access-xsjrz\") pod \"f287bc2f-c4ae-4063-adc7-35eec120344d\" (UID: \"f287bc2f-c4ae-4063-adc7-35eec120344d\") " Feb 17 21:36:14 crc kubenswrapper[4793]: I0217 21:36:14.556847 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f287bc2f-c4ae-4063-adc7-35eec120344d-utilities" (OuterVolumeSpecName: "utilities") pod "f287bc2f-c4ae-4063-adc7-35eec120344d" (UID: "f287bc2f-c4ae-4063-adc7-35eec120344d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 21:36:14 crc kubenswrapper[4793]: I0217 21:36:14.561769 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f287bc2f-c4ae-4063-adc7-35eec120344d-kube-api-access-xsjrz" (OuterVolumeSpecName: "kube-api-access-xsjrz") pod "f287bc2f-c4ae-4063-adc7-35eec120344d" (UID: "f287bc2f-c4ae-4063-adc7-35eec120344d"). InnerVolumeSpecName "kube-api-access-xsjrz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 21:36:14 crc kubenswrapper[4793]: I0217 21:36:14.583267 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f287bc2f-c4ae-4063-adc7-35eec120344d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f287bc2f-c4ae-4063-adc7-35eec120344d" (UID: "f287bc2f-c4ae-4063-adc7-35eec120344d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 21:36:14 crc kubenswrapper[4793]: I0217 21:36:14.659535 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f287bc2f-c4ae-4063-adc7-35eec120344d-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 21:36:14 crc kubenswrapper[4793]: I0217 21:36:14.659572 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f287bc2f-c4ae-4063-adc7-35eec120344d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 21:36:14 crc kubenswrapper[4793]: I0217 21:36:14.659587 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsjrz\" (UniqueName: \"kubernetes.io/projected/f287bc2f-c4ae-4063-adc7-35eec120344d-kube-api-access-xsjrz\") on node \"crc\" DevicePath \"\"" Feb 17 21:36:14 crc kubenswrapper[4793]: I0217 21:36:14.937447 4793 generic.go:334] "Generic (PLEG): container finished" podID="f287bc2f-c4ae-4063-adc7-35eec120344d" 
containerID="16036c8dba0e209d6d7b8581aa50de086431071d84cff73c9a4e6e3efa4e28fc" exitCode=0 Feb 17 21:36:14 crc kubenswrapper[4793]: I0217 21:36:14.937497 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k25dp" event={"ID":"f287bc2f-c4ae-4063-adc7-35eec120344d","Type":"ContainerDied","Data":"16036c8dba0e209d6d7b8581aa50de086431071d84cff73c9a4e6e3efa4e28fc"} Feb 17 21:36:14 crc kubenswrapper[4793]: I0217 21:36:14.937527 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k25dp" event={"ID":"f287bc2f-c4ae-4063-adc7-35eec120344d","Type":"ContainerDied","Data":"79741b46d02236f27e0fe4663e23adc2765d36ae9dd85631e9299c03d65a9e16"} Feb 17 21:36:14 crc kubenswrapper[4793]: I0217 21:36:14.937548 4793 scope.go:117] "RemoveContainer" containerID="16036c8dba0e209d6d7b8581aa50de086431071d84cff73c9a4e6e3efa4e28fc" Feb 17 21:36:14 crc kubenswrapper[4793]: I0217 21:36:14.937634 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k25dp" Feb 17 21:36:14 crc kubenswrapper[4793]: I0217 21:36:14.981090 4793 scope.go:117] "RemoveContainer" containerID="9f8c23ce733304c4e2505a7ed2975dbc84aa7f540e962e5bb906ed6ca37a6630" Feb 17 21:36:15 crc kubenswrapper[4793]: I0217 21:36:15.020750 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k25dp"] Feb 17 21:36:15 crc kubenswrapper[4793]: I0217 21:36:15.037877 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-k25dp"] Feb 17 21:36:15 crc kubenswrapper[4793]: I0217 21:36:15.090761 4793 scope.go:117] "RemoveContainer" containerID="48de68a57c496749af2e7fb37b70cecba3a1bc0b264c1b0c58f9bf417f8be4fc" Feb 17 21:36:15 crc kubenswrapper[4793]: I0217 21:36:15.555517 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f287bc2f-c4ae-4063-adc7-35eec120344d" path="/var/lib/kubelet/pods/f287bc2f-c4ae-4063-adc7-35eec120344d/volumes" Feb 17 21:36:15 crc kubenswrapper[4793]: I0217 21:36:15.702595 4793 scope.go:117] "RemoveContainer" containerID="16036c8dba0e209d6d7b8581aa50de086431071d84cff73c9a4e6e3efa4e28fc" Feb 17 21:36:15 crc kubenswrapper[4793]: E0217 21:36:15.703263 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16036c8dba0e209d6d7b8581aa50de086431071d84cff73c9a4e6e3efa4e28fc\": container with ID starting with 16036c8dba0e209d6d7b8581aa50de086431071d84cff73c9a4e6e3efa4e28fc not found: ID does not exist" containerID="16036c8dba0e209d6d7b8581aa50de086431071d84cff73c9a4e6e3efa4e28fc" Feb 17 21:36:15 crc kubenswrapper[4793]: I0217 21:36:15.703333 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16036c8dba0e209d6d7b8581aa50de086431071d84cff73c9a4e6e3efa4e28fc"} err="failed to get container status 
\"16036c8dba0e209d6d7b8581aa50de086431071d84cff73c9a4e6e3efa4e28fc\": rpc error: code = NotFound desc = could not find container \"16036c8dba0e209d6d7b8581aa50de086431071d84cff73c9a4e6e3efa4e28fc\": container with ID starting with 16036c8dba0e209d6d7b8581aa50de086431071d84cff73c9a4e6e3efa4e28fc not found: ID does not exist" Feb 17 21:36:15 crc kubenswrapper[4793]: I0217 21:36:15.703378 4793 scope.go:117] "RemoveContainer" containerID="9f8c23ce733304c4e2505a7ed2975dbc84aa7f540e962e5bb906ed6ca37a6630" Feb 17 21:36:15 crc kubenswrapper[4793]: E0217 21:36:15.704265 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f8c23ce733304c4e2505a7ed2975dbc84aa7f540e962e5bb906ed6ca37a6630\": container with ID starting with 9f8c23ce733304c4e2505a7ed2975dbc84aa7f540e962e5bb906ed6ca37a6630 not found: ID does not exist" containerID="9f8c23ce733304c4e2505a7ed2975dbc84aa7f540e962e5bb906ed6ca37a6630" Feb 17 21:36:15 crc kubenswrapper[4793]: I0217 21:36:15.704319 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f8c23ce733304c4e2505a7ed2975dbc84aa7f540e962e5bb906ed6ca37a6630"} err="failed to get container status \"9f8c23ce733304c4e2505a7ed2975dbc84aa7f540e962e5bb906ed6ca37a6630\": rpc error: code = NotFound desc = could not find container \"9f8c23ce733304c4e2505a7ed2975dbc84aa7f540e962e5bb906ed6ca37a6630\": container with ID starting with 9f8c23ce733304c4e2505a7ed2975dbc84aa7f540e962e5bb906ed6ca37a6630 not found: ID does not exist" Feb 17 21:36:15 crc kubenswrapper[4793]: I0217 21:36:15.704355 4793 scope.go:117] "RemoveContainer" containerID="48de68a57c496749af2e7fb37b70cecba3a1bc0b264c1b0c58f9bf417f8be4fc" Feb 17 21:36:15 crc kubenswrapper[4793]: E0217 21:36:15.705560 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"48de68a57c496749af2e7fb37b70cecba3a1bc0b264c1b0c58f9bf417f8be4fc\": container with ID starting with 48de68a57c496749af2e7fb37b70cecba3a1bc0b264c1b0c58f9bf417f8be4fc not found: ID does not exist" containerID="48de68a57c496749af2e7fb37b70cecba3a1bc0b264c1b0c58f9bf417f8be4fc" Feb 17 21:36:15 crc kubenswrapper[4793]: I0217 21:36:15.705647 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48de68a57c496749af2e7fb37b70cecba3a1bc0b264c1b0c58f9bf417f8be4fc"} err="failed to get container status \"48de68a57c496749af2e7fb37b70cecba3a1bc0b264c1b0c58f9bf417f8be4fc\": rpc error: code = NotFound desc = could not find container \"48de68a57c496749af2e7fb37b70cecba3a1bc0b264c1b0c58f9bf417f8be4fc\": container with ID starting with 48de68a57c496749af2e7fb37b70cecba3a1bc0b264c1b0c58f9bf417f8be4fc not found: ID does not exist" Feb 17 21:36:19 crc kubenswrapper[4793]: I0217 21:36:19.538571 4793 scope.go:117] "RemoveContainer" containerID="3ea9526bc6f1ef719b601c1ff376096b39da8de1ff53cbbd4b8ae580a56d054e" Feb 17 21:36:19 crc kubenswrapper[4793]: E0217 21:36:19.539342 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:36:25 crc kubenswrapper[4793]: I0217 21:36:25.053951 4793 scope.go:117] "RemoveContainer" containerID="ee4edca59cc55f30bf9d406c5bd58c7d5a275bb6969fbcd98a9da640102ddd96" Feb 17 21:36:25 crc kubenswrapper[4793]: I0217 21:36:25.081955 4793 scope.go:117] "RemoveContainer" containerID="c89ce00dd3987fe7f019bd1a43e05d588293c143f3e7a18b8f81b032db259fe1" Feb 17 21:36:25 crc kubenswrapper[4793]: I0217 21:36:25.116064 4793 scope.go:117] "RemoveContainer" containerID="03d7182b3051340f52ed153f5e837dfb73bc57981b992f44ece303d0905a3a03" 
Feb 17 21:36:31 crc kubenswrapper[4793]: I0217 21:36:31.540356 4793 scope.go:117] "RemoveContainer" containerID="3ea9526bc6f1ef719b601c1ff376096b39da8de1ff53cbbd4b8ae580a56d054e" Feb 17 21:36:31 crc kubenswrapper[4793]: E0217 21:36:31.542569 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:36:42 crc kubenswrapper[4793]: I0217 21:36:42.539127 4793 scope.go:117] "RemoveContainer" containerID="3ea9526bc6f1ef719b601c1ff376096b39da8de1ff53cbbd4b8ae580a56d054e" Feb 17 21:36:42 crc kubenswrapper[4793]: E0217 21:36:42.540019 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:36:56 crc kubenswrapper[4793]: I0217 21:36:56.539135 4793 scope.go:117] "RemoveContainer" containerID="3ea9526bc6f1ef719b601c1ff376096b39da8de1ff53cbbd4b8ae580a56d054e" Feb 17 21:36:56 crc kubenswrapper[4793]: E0217 21:36:56.540357 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:37:11 crc kubenswrapper[4793]: I0217 21:37:11.539905 4793 scope.go:117] "RemoveContainer" containerID="3ea9526bc6f1ef719b601c1ff376096b39da8de1ff53cbbd4b8ae580a56d054e" Feb 17 21:37:11 crc kubenswrapper[4793]: 
E0217 21:37:11.541273 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:37:24 crc kubenswrapper[4793]: I0217 21:37:24.539539 4793 scope.go:117] "RemoveContainer" containerID="3ea9526bc6f1ef719b601c1ff376096b39da8de1ff53cbbd4b8ae580a56d054e" Feb 17 21:37:24 crc kubenswrapper[4793]: E0217 21:37:24.540361 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:37:35 crc kubenswrapper[4793]: I0217 21:37:35.551855 4793 scope.go:117] "RemoveContainer" containerID="3ea9526bc6f1ef719b601c1ff376096b39da8de1ff53cbbd4b8ae580a56d054e" Feb 17 21:37:35 crc kubenswrapper[4793]: E0217 21:37:35.553031 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:37:47 crc kubenswrapper[4793]: I0217 21:37:47.539571 4793 scope.go:117] "RemoveContainer" containerID="3ea9526bc6f1ef719b601c1ff376096b39da8de1ff53cbbd4b8ae580a56d054e" Feb 17 21:37:47 crc kubenswrapper[4793]: E0217 21:37:47.541014 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier 
pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:38:00 crc kubenswrapper[4793]: I0217 21:38:00.539778 4793 scope.go:117] "RemoveContainer" containerID="3ea9526bc6f1ef719b601c1ff376096b39da8de1ff53cbbd4b8ae580a56d054e" Feb 17 21:38:00 crc kubenswrapper[4793]: E0217 21:38:00.540666 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:38:14 crc kubenswrapper[4793]: I0217 21:38:14.539294 4793 scope.go:117] "RemoveContainer" containerID="3ea9526bc6f1ef719b601c1ff376096b39da8de1ff53cbbd4b8ae580a56d054e" Feb 17 21:38:14 crc kubenswrapper[4793]: E0217 21:38:14.540250 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:38:20 crc kubenswrapper[4793]: I0217 21:38:20.102338 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 21:38:20 crc kubenswrapper[4793]: I0217 21:38:20.103250 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 21:38:26 crc kubenswrapper[4793]: I0217 21:38:26.539340 4793 scope.go:117] "RemoveContainer" containerID="3ea9526bc6f1ef719b601c1ff376096b39da8de1ff53cbbd4b8ae580a56d054e" Feb 17 21:38:26 crc kubenswrapper[4793]: E0217 21:38:26.540672 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:38:37 crc kubenswrapper[4793]: I0217 21:38:37.538532 4793 scope.go:117] "RemoveContainer" containerID="3ea9526bc6f1ef719b601c1ff376096b39da8de1ff53cbbd4b8ae580a56d054e" Feb 17 21:38:37 crc kubenswrapper[4793]: E0217 21:38:37.539317 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:38:50 crc kubenswrapper[4793]: I0217 21:38:50.101625 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 21:38:50 crc kubenswrapper[4793]: I0217 21:38:50.102445 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Feb 17 21:38:52 crc kubenswrapper[4793]: I0217 21:38:52.539399 4793 scope.go:117] "RemoveContainer" containerID="3ea9526bc6f1ef719b601c1ff376096b39da8de1ff53cbbd4b8ae580a56d054e" Feb 17 21:38:52 crc kubenswrapper[4793]: E0217 21:38:52.540051 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:39:06 crc kubenswrapper[4793]: I0217 21:39:06.539299 4793 scope.go:117] "RemoveContainer" containerID="3ea9526bc6f1ef719b601c1ff376096b39da8de1ff53cbbd4b8ae580a56d054e" Feb 17 21:39:06 crc kubenswrapper[4793]: E0217 21:39:06.540465 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:39:20 crc kubenswrapper[4793]: I0217 21:39:20.102314 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 21:39:20 crc kubenswrapper[4793]: I0217 21:39:20.103020 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 21:39:20 crc kubenswrapper[4793]: I0217 
21:39:20.103083 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" Feb 17 21:39:20 crc kubenswrapper[4793]: I0217 21:39:20.104248 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3e8a8dab73fd74db859b33d53c345bd888043d3a6b4c0f585517de04e6dde304"} pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 21:39:20 crc kubenswrapper[4793]: I0217 21:39:20.104373 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" containerID="cri-o://3e8a8dab73fd74db859b33d53c345bd888043d3a6b4c0f585517de04e6dde304" gracePeriod=600 Feb 17 21:39:20 crc kubenswrapper[4793]: I0217 21:39:20.539113 4793 scope.go:117] "RemoveContainer" containerID="3ea9526bc6f1ef719b601c1ff376096b39da8de1ff53cbbd4b8ae580a56d054e" Feb 17 21:39:20 crc kubenswrapper[4793]: E0217 21:39:20.539537 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:39:21 crc kubenswrapper[4793]: I0217 21:39:21.165329 4793 generic.go:334] "Generic (PLEG): container finished" podID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerID="3e8a8dab73fd74db859b33d53c345bd888043d3a6b4c0f585517de04e6dde304" exitCode=0 Feb 17 21:39:21 crc kubenswrapper[4793]: I0217 21:39:21.165405 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerDied","Data":"3e8a8dab73fd74db859b33d53c345bd888043d3a6b4c0f585517de04e6dde304"} Feb 17 21:39:21 crc kubenswrapper[4793]: I0217 21:39:21.165848 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerStarted","Data":"4ddebe9fa50fa6f3cacc64e878425ab4a7565e37cff2b52fead4d0445814f582"} Feb 17 21:39:21 crc kubenswrapper[4793]: I0217 21:39:21.165884 4793 scope.go:117] "RemoveContainer" containerID="d4a52e1bde66a4cc3d896b629de1a100b583d1e1171b97b092613253d1f07611" Feb 17 21:39:33 crc kubenswrapper[4793]: I0217 21:39:33.538640 4793 scope.go:117] "RemoveContainer" containerID="3ea9526bc6f1ef719b601c1ff376096b39da8de1ff53cbbd4b8ae580a56d054e" Feb 17 21:39:33 crc kubenswrapper[4793]: E0217 21:39:33.539530 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:39:44 crc kubenswrapper[4793]: I0217 21:39:44.538730 4793 scope.go:117] "RemoveContainer" containerID="3ea9526bc6f1ef719b601c1ff376096b39da8de1ff53cbbd4b8ae580a56d054e" Feb 17 21:39:44 crc kubenswrapper[4793]: E0217 21:39:44.539440 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:39:55 crc kubenswrapper[4793]: I0217 21:39:55.547922 4793 scope.go:117] 
"RemoveContainer" containerID="3ea9526bc6f1ef719b601c1ff376096b39da8de1ff53cbbd4b8ae580a56d054e" Feb 17 21:39:55 crc kubenswrapper[4793]: E0217 21:39:55.548742 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:40:06 crc kubenswrapper[4793]: I0217 21:40:06.538663 4793 scope.go:117] "RemoveContainer" containerID="3ea9526bc6f1ef719b601c1ff376096b39da8de1ff53cbbd4b8ae580a56d054e" Feb 17 21:40:06 crc kubenswrapper[4793]: E0217 21:40:06.539503 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:40:19 crc kubenswrapper[4793]: I0217 21:40:19.560528 4793 scope.go:117] "RemoveContainer" containerID="3ea9526bc6f1ef719b601c1ff376096b39da8de1ff53cbbd4b8ae580a56d054e" Feb 17 21:40:19 crc kubenswrapper[4793]: E0217 21:40:19.561296 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:40:33 crc kubenswrapper[4793]: I0217 21:40:33.539908 4793 scope.go:117] "RemoveContainer" containerID="3ea9526bc6f1ef719b601c1ff376096b39da8de1ff53cbbd4b8ae580a56d054e" Feb 17 21:40:33 crc kubenswrapper[4793]: E0217 21:40:33.541030 4793 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:40:44 crc kubenswrapper[4793]: I0217 21:40:44.539103 4793 scope.go:117] "RemoveContainer" containerID="3ea9526bc6f1ef719b601c1ff376096b39da8de1ff53cbbd4b8ae580a56d054e" Feb 17 21:40:44 crc kubenswrapper[4793]: E0217 21:40:44.539923 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:40:56 crc kubenswrapper[4793]: I0217 21:40:56.538624 4793 scope.go:117] "RemoveContainer" containerID="3ea9526bc6f1ef719b601c1ff376096b39da8de1ff53cbbd4b8ae580a56d054e" Feb 17 21:40:56 crc kubenswrapper[4793]: E0217 21:40:56.539446 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:41:01 crc kubenswrapper[4793]: I0217 21:41:01.769178 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bsnxk"] Feb 17 21:41:01 crc kubenswrapper[4793]: E0217 21:41:01.770414 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f287bc2f-c4ae-4063-adc7-35eec120344d" containerName="extract-content" Feb 17 21:41:01 crc kubenswrapper[4793]: I0217 21:41:01.770435 4793 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f287bc2f-c4ae-4063-adc7-35eec120344d" containerName="extract-content" Feb 17 21:41:01 crc kubenswrapper[4793]: E0217 21:41:01.770450 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f287bc2f-c4ae-4063-adc7-35eec120344d" containerName="registry-server" Feb 17 21:41:01 crc kubenswrapper[4793]: I0217 21:41:01.770459 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f287bc2f-c4ae-4063-adc7-35eec120344d" containerName="registry-server" Feb 17 21:41:01 crc kubenswrapper[4793]: E0217 21:41:01.770508 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f287bc2f-c4ae-4063-adc7-35eec120344d" containerName="extract-utilities" Feb 17 21:41:01 crc kubenswrapper[4793]: I0217 21:41:01.770517 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f287bc2f-c4ae-4063-adc7-35eec120344d" containerName="extract-utilities" Feb 17 21:41:01 crc kubenswrapper[4793]: I0217 21:41:01.770773 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f287bc2f-c4ae-4063-adc7-35eec120344d" containerName="registry-server" Feb 17 21:41:01 crc kubenswrapper[4793]: I0217 21:41:01.772666 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bsnxk" Feb 17 21:41:01 crc kubenswrapper[4793]: I0217 21:41:01.786036 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bsnxk"] Feb 17 21:41:01 crc kubenswrapper[4793]: I0217 21:41:01.893462 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee4fe143-1d81-43bb-a6b6-dbddb6b117f6-utilities\") pod \"certified-operators-bsnxk\" (UID: \"ee4fe143-1d81-43bb-a6b6-dbddb6b117f6\") " pod="openshift-marketplace/certified-operators-bsnxk" Feb 17 21:41:01 crc kubenswrapper[4793]: I0217 21:41:01.893863 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee4fe143-1d81-43bb-a6b6-dbddb6b117f6-catalog-content\") pod \"certified-operators-bsnxk\" (UID: \"ee4fe143-1d81-43bb-a6b6-dbddb6b117f6\") " pod="openshift-marketplace/certified-operators-bsnxk" Feb 17 21:41:01 crc kubenswrapper[4793]: I0217 21:41:01.893995 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5mrp\" (UniqueName: \"kubernetes.io/projected/ee4fe143-1d81-43bb-a6b6-dbddb6b117f6-kube-api-access-v5mrp\") pod \"certified-operators-bsnxk\" (UID: \"ee4fe143-1d81-43bb-a6b6-dbddb6b117f6\") " pod="openshift-marketplace/certified-operators-bsnxk" Feb 17 21:41:01 crc kubenswrapper[4793]: I0217 21:41:01.996478 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee4fe143-1d81-43bb-a6b6-dbddb6b117f6-catalog-content\") pod \"certified-operators-bsnxk\" (UID: \"ee4fe143-1d81-43bb-a6b6-dbddb6b117f6\") " pod="openshift-marketplace/certified-operators-bsnxk" Feb 17 21:41:01 crc kubenswrapper[4793]: I0217 21:41:01.996953 4793 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-v5mrp\" (UniqueName: \"kubernetes.io/projected/ee4fe143-1d81-43bb-a6b6-dbddb6b117f6-kube-api-access-v5mrp\") pod \"certified-operators-bsnxk\" (UID: \"ee4fe143-1d81-43bb-a6b6-dbddb6b117f6\") " pod="openshift-marketplace/certified-operators-bsnxk" Feb 17 21:41:01 crc kubenswrapper[4793]: I0217 21:41:01.997089 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee4fe143-1d81-43bb-a6b6-dbddb6b117f6-utilities\") pod \"certified-operators-bsnxk\" (UID: \"ee4fe143-1d81-43bb-a6b6-dbddb6b117f6\") " pod="openshift-marketplace/certified-operators-bsnxk" Feb 17 21:41:01 crc kubenswrapper[4793]: I0217 21:41:01.997144 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee4fe143-1d81-43bb-a6b6-dbddb6b117f6-catalog-content\") pod \"certified-operators-bsnxk\" (UID: \"ee4fe143-1d81-43bb-a6b6-dbddb6b117f6\") " pod="openshift-marketplace/certified-operators-bsnxk" Feb 17 21:41:01 crc kubenswrapper[4793]: I0217 21:41:01.997658 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee4fe143-1d81-43bb-a6b6-dbddb6b117f6-utilities\") pod \"certified-operators-bsnxk\" (UID: \"ee4fe143-1d81-43bb-a6b6-dbddb6b117f6\") " pod="openshift-marketplace/certified-operators-bsnxk" Feb 17 21:41:02 crc kubenswrapper[4793]: I0217 21:41:02.026909 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5mrp\" (UniqueName: \"kubernetes.io/projected/ee4fe143-1d81-43bb-a6b6-dbddb6b117f6-kube-api-access-v5mrp\") pod \"certified-operators-bsnxk\" (UID: \"ee4fe143-1d81-43bb-a6b6-dbddb6b117f6\") " pod="openshift-marketplace/certified-operators-bsnxk" Feb 17 21:41:02 crc kubenswrapper[4793]: I0217 21:41:02.102232 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bsnxk" Feb 17 21:41:02 crc kubenswrapper[4793]: I0217 21:41:02.646274 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bsnxk"] Feb 17 21:41:03 crc kubenswrapper[4793]: I0217 21:41:03.357282 4793 generic.go:334] "Generic (PLEG): container finished" podID="ee4fe143-1d81-43bb-a6b6-dbddb6b117f6" containerID="93d99b456713038391d71ce8ecb26e38198cc100fcf050bed205a16c4096e756" exitCode=0 Feb 17 21:41:03 crc kubenswrapper[4793]: I0217 21:41:03.357385 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bsnxk" event={"ID":"ee4fe143-1d81-43bb-a6b6-dbddb6b117f6","Type":"ContainerDied","Data":"93d99b456713038391d71ce8ecb26e38198cc100fcf050bed205a16c4096e756"} Feb 17 21:41:03 crc kubenswrapper[4793]: I0217 21:41:03.357617 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bsnxk" event={"ID":"ee4fe143-1d81-43bb-a6b6-dbddb6b117f6","Type":"ContainerStarted","Data":"e1580759c2132bd087e58c9f93bd628432664ca7fec70efaa470291daea44fc8"} Feb 17 21:41:03 crc kubenswrapper[4793]: I0217 21:41:03.360969 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 21:41:05 crc kubenswrapper[4793]: I0217 21:41:05.381103 4793 generic.go:334] "Generic (PLEG): container finished" podID="ee4fe143-1d81-43bb-a6b6-dbddb6b117f6" containerID="2db830f191e2482670b5b6736bdd91149892ba468bc53bee89c7fc968103ceae" exitCode=0 Feb 17 21:41:05 crc kubenswrapper[4793]: I0217 21:41:05.381179 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bsnxk" event={"ID":"ee4fe143-1d81-43bb-a6b6-dbddb6b117f6","Type":"ContainerDied","Data":"2db830f191e2482670b5b6736bdd91149892ba468bc53bee89c7fc968103ceae"} Feb 17 21:41:06 crc kubenswrapper[4793]: I0217 21:41:06.414645 4793 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-bsnxk" event={"ID":"ee4fe143-1d81-43bb-a6b6-dbddb6b117f6","Type":"ContainerStarted","Data":"66caf20873f7790cfa48d17ccd1a5da59c098f352a6cee6d3c94b9837afa1337"} Feb 17 21:41:06 crc kubenswrapper[4793]: I0217 21:41:06.448685 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bsnxk" podStartSLOduration=2.951466557 podStartE2EDuration="5.448665551s" podCreationTimestamp="2026-02-17 21:41:01 +0000 UTC" firstStartedPulling="2026-02-17 21:41:03.360749882 +0000 UTC m=+5538.652448193" lastFinishedPulling="2026-02-17 21:41:05.857948866 +0000 UTC m=+5541.149647187" observedRunningTime="2026-02-17 21:41:06.440381257 +0000 UTC m=+5541.732079608" watchObservedRunningTime="2026-02-17 21:41:06.448665551 +0000 UTC m=+5541.740363872" Feb 17 21:41:09 crc kubenswrapper[4793]: I0217 21:41:09.545170 4793 scope.go:117] "RemoveContainer" containerID="3ea9526bc6f1ef719b601c1ff376096b39da8de1ff53cbbd4b8ae580a56d054e" Feb 17 21:41:10 crc kubenswrapper[4793]: I0217 21:41:10.465153 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerStarted","Data":"16ae0bc62079ea08d5d218f318b9535e2fbd0e475d0f8561a0f2f81e8694bb35"} Feb 17 21:41:10 crc kubenswrapper[4793]: I0217 21:41:10.596610 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 17 21:41:12 crc kubenswrapper[4793]: I0217 21:41:12.103104 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bsnxk" Feb 17 21:41:12 crc kubenswrapper[4793]: I0217 21:41:12.105396 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bsnxk" Feb 17 21:41:12 crc kubenswrapper[4793]: I0217 21:41:12.169389 4793 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/certified-operators-bsnxk" Feb 17 21:41:12 crc kubenswrapper[4793]: I0217 21:41:12.495732 4793 generic.go:334] "Generic (PLEG): container finished" podID="02d26164-0fa4-4020-9224-b7760a490987" containerID="16ae0bc62079ea08d5d218f318b9535e2fbd0e475d0f8561a0f2f81e8694bb35" exitCode=1 Feb 17 21:41:12 crc kubenswrapper[4793]: I0217 21:41:12.496842 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerDied","Data":"16ae0bc62079ea08d5d218f318b9535e2fbd0e475d0f8561a0f2f81e8694bb35"} Feb 17 21:41:12 crc kubenswrapper[4793]: I0217 21:41:12.496888 4793 scope.go:117] "RemoveContainer" containerID="3ea9526bc6f1ef719b601c1ff376096b39da8de1ff53cbbd4b8ae580a56d054e" Feb 17 21:41:12 crc kubenswrapper[4793]: I0217 21:41:12.498126 4793 scope.go:117] "RemoveContainer" containerID="16ae0bc62079ea08d5d218f318b9535e2fbd0e475d0f8561a0f2f81e8694bb35" Feb 17 21:41:12 crc kubenswrapper[4793]: E0217 21:41:12.498742 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:41:12 crc kubenswrapper[4793]: I0217 21:41:12.602397 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bsnxk" Feb 17 21:41:12 crc kubenswrapper[4793]: I0217 21:41:12.654196 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bsnxk"] Feb 17 21:41:14 crc kubenswrapper[4793]: I0217 21:41:14.516023 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bsnxk" 
podUID="ee4fe143-1d81-43bb-a6b6-dbddb6b117f6" containerName="registry-server" containerID="cri-o://66caf20873f7790cfa48d17ccd1a5da59c098f352a6cee6d3c94b9837afa1337" gracePeriod=2 Feb 17 21:41:15 crc kubenswrapper[4793]: I0217 21:41:15.050958 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bsnxk" Feb 17 21:41:15 crc kubenswrapper[4793]: I0217 21:41:15.114242 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5mrp\" (UniqueName: \"kubernetes.io/projected/ee4fe143-1d81-43bb-a6b6-dbddb6b117f6-kube-api-access-v5mrp\") pod \"ee4fe143-1d81-43bb-a6b6-dbddb6b117f6\" (UID: \"ee4fe143-1d81-43bb-a6b6-dbddb6b117f6\") " Feb 17 21:41:15 crc kubenswrapper[4793]: I0217 21:41:15.114389 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee4fe143-1d81-43bb-a6b6-dbddb6b117f6-utilities\") pod \"ee4fe143-1d81-43bb-a6b6-dbddb6b117f6\" (UID: \"ee4fe143-1d81-43bb-a6b6-dbddb6b117f6\") " Feb 17 21:41:15 crc kubenswrapper[4793]: I0217 21:41:15.114494 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee4fe143-1d81-43bb-a6b6-dbddb6b117f6-catalog-content\") pod \"ee4fe143-1d81-43bb-a6b6-dbddb6b117f6\" (UID: \"ee4fe143-1d81-43bb-a6b6-dbddb6b117f6\") " Feb 17 21:41:15 crc kubenswrapper[4793]: I0217 21:41:15.115098 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee4fe143-1d81-43bb-a6b6-dbddb6b117f6-utilities" (OuterVolumeSpecName: "utilities") pod "ee4fe143-1d81-43bb-a6b6-dbddb6b117f6" (UID: "ee4fe143-1d81-43bb-a6b6-dbddb6b117f6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 21:41:15 crc kubenswrapper[4793]: I0217 21:41:15.129915 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee4fe143-1d81-43bb-a6b6-dbddb6b117f6-kube-api-access-v5mrp" (OuterVolumeSpecName: "kube-api-access-v5mrp") pod "ee4fe143-1d81-43bb-a6b6-dbddb6b117f6" (UID: "ee4fe143-1d81-43bb-a6b6-dbddb6b117f6"). InnerVolumeSpecName "kube-api-access-v5mrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 21:41:15 crc kubenswrapper[4793]: I0217 21:41:15.217254 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5mrp\" (UniqueName: \"kubernetes.io/projected/ee4fe143-1d81-43bb-a6b6-dbddb6b117f6-kube-api-access-v5mrp\") on node \"crc\" DevicePath \"\"" Feb 17 21:41:15 crc kubenswrapper[4793]: I0217 21:41:15.217290 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee4fe143-1d81-43bb-a6b6-dbddb6b117f6-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 21:41:15 crc kubenswrapper[4793]: I0217 21:41:15.297719 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee4fe143-1d81-43bb-a6b6-dbddb6b117f6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ee4fe143-1d81-43bb-a6b6-dbddb6b117f6" (UID: "ee4fe143-1d81-43bb-a6b6-dbddb6b117f6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 21:41:15 crc kubenswrapper[4793]: I0217 21:41:15.318880 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee4fe143-1d81-43bb-a6b6-dbddb6b117f6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 21:41:15 crc kubenswrapper[4793]: I0217 21:41:15.531256 4793 generic.go:334] "Generic (PLEG): container finished" podID="ee4fe143-1d81-43bb-a6b6-dbddb6b117f6" containerID="66caf20873f7790cfa48d17ccd1a5da59c098f352a6cee6d3c94b9837afa1337" exitCode=0 Feb 17 21:41:15 crc kubenswrapper[4793]: I0217 21:41:15.531350 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bsnxk" event={"ID":"ee4fe143-1d81-43bb-a6b6-dbddb6b117f6","Type":"ContainerDied","Data":"66caf20873f7790cfa48d17ccd1a5da59c098f352a6cee6d3c94b9837afa1337"} Feb 17 21:41:15 crc kubenswrapper[4793]: I0217 21:41:15.531429 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bsnxk" event={"ID":"ee4fe143-1d81-43bb-a6b6-dbddb6b117f6","Type":"ContainerDied","Data":"e1580759c2132bd087e58c9f93bd628432664ca7fec70efaa470291daea44fc8"} Feb 17 21:41:15 crc kubenswrapper[4793]: I0217 21:41:15.531326 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bsnxk" Feb 17 21:41:15 crc kubenswrapper[4793]: I0217 21:41:15.531450 4793 scope.go:117] "RemoveContainer" containerID="66caf20873f7790cfa48d17ccd1a5da59c098f352a6cee6d3c94b9837afa1337" Feb 17 21:41:15 crc kubenswrapper[4793]: I0217 21:41:15.565860 4793 scope.go:117] "RemoveContainer" containerID="2db830f191e2482670b5b6736bdd91149892ba468bc53bee89c7fc968103ceae" Feb 17 21:41:15 crc kubenswrapper[4793]: I0217 21:41:15.578749 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bsnxk"] Feb 17 21:41:15 crc kubenswrapper[4793]: I0217 21:41:15.587458 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bsnxk"] Feb 17 21:41:15 crc kubenswrapper[4793]: I0217 21:41:15.595909 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 21:41:15 crc kubenswrapper[4793]: I0217 21:41:15.595968 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 21:41:15 crc kubenswrapper[4793]: I0217 21:41:15.595986 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 21:41:15 crc kubenswrapper[4793]: I0217 21:41:15.600980 4793 scope.go:117] "RemoveContainer" containerID="93d99b456713038391d71ce8ecb26e38198cc100fcf050bed205a16c4096e756" Feb 17 21:41:15 crc kubenswrapper[4793]: I0217 21:41:15.605373 4793 scope.go:117] "RemoveContainer" containerID="16ae0bc62079ea08d5d218f318b9535e2fbd0e475d0f8561a0f2f81e8694bb35" Feb 17 21:41:15 crc kubenswrapper[4793]: E0217 21:41:15.605951 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" 
pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:41:15 crc kubenswrapper[4793]: I0217 21:41:15.656456 4793 scope.go:117] "RemoveContainer" containerID="66caf20873f7790cfa48d17ccd1a5da59c098f352a6cee6d3c94b9837afa1337" Feb 17 21:41:15 crc kubenswrapper[4793]: E0217 21:41:15.656941 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66caf20873f7790cfa48d17ccd1a5da59c098f352a6cee6d3c94b9837afa1337\": container with ID starting with 66caf20873f7790cfa48d17ccd1a5da59c098f352a6cee6d3c94b9837afa1337 not found: ID does not exist" containerID="66caf20873f7790cfa48d17ccd1a5da59c098f352a6cee6d3c94b9837afa1337" Feb 17 21:41:15 crc kubenswrapper[4793]: I0217 21:41:15.656993 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66caf20873f7790cfa48d17ccd1a5da59c098f352a6cee6d3c94b9837afa1337"} err="failed to get container status \"66caf20873f7790cfa48d17ccd1a5da59c098f352a6cee6d3c94b9837afa1337\": rpc error: code = NotFound desc = could not find container \"66caf20873f7790cfa48d17ccd1a5da59c098f352a6cee6d3c94b9837afa1337\": container with ID starting with 66caf20873f7790cfa48d17ccd1a5da59c098f352a6cee6d3c94b9837afa1337 not found: ID does not exist" Feb 17 21:41:15 crc kubenswrapper[4793]: I0217 21:41:15.657020 4793 scope.go:117] "RemoveContainer" containerID="2db830f191e2482670b5b6736bdd91149892ba468bc53bee89c7fc968103ceae" Feb 17 21:41:15 crc kubenswrapper[4793]: E0217 21:41:15.657549 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2db830f191e2482670b5b6736bdd91149892ba468bc53bee89c7fc968103ceae\": container with ID starting with 2db830f191e2482670b5b6736bdd91149892ba468bc53bee89c7fc968103ceae not found: ID does not exist" containerID="2db830f191e2482670b5b6736bdd91149892ba468bc53bee89c7fc968103ceae" Feb 17 21:41:15 crc 
kubenswrapper[4793]: I0217 21:41:15.657569 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2db830f191e2482670b5b6736bdd91149892ba468bc53bee89c7fc968103ceae"} err="failed to get container status \"2db830f191e2482670b5b6736bdd91149892ba468bc53bee89c7fc968103ceae\": rpc error: code = NotFound desc = could not find container \"2db830f191e2482670b5b6736bdd91149892ba468bc53bee89c7fc968103ceae\": container with ID starting with 2db830f191e2482670b5b6736bdd91149892ba468bc53bee89c7fc968103ceae not found: ID does not exist" Feb 17 21:41:15 crc kubenswrapper[4793]: I0217 21:41:15.657582 4793 scope.go:117] "RemoveContainer" containerID="93d99b456713038391d71ce8ecb26e38198cc100fcf050bed205a16c4096e756" Feb 17 21:41:15 crc kubenswrapper[4793]: E0217 21:41:15.657866 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93d99b456713038391d71ce8ecb26e38198cc100fcf050bed205a16c4096e756\": container with ID starting with 93d99b456713038391d71ce8ecb26e38198cc100fcf050bed205a16c4096e756 not found: ID does not exist" containerID="93d99b456713038391d71ce8ecb26e38198cc100fcf050bed205a16c4096e756" Feb 17 21:41:15 crc kubenswrapper[4793]: I0217 21:41:15.657886 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93d99b456713038391d71ce8ecb26e38198cc100fcf050bed205a16c4096e756"} err="failed to get container status \"93d99b456713038391d71ce8ecb26e38198cc100fcf050bed205a16c4096e756\": rpc error: code = NotFound desc = could not find container \"93d99b456713038391d71ce8ecb26e38198cc100fcf050bed205a16c4096e756\": container with ID starting with 93d99b456713038391d71ce8ecb26e38198cc100fcf050bed205a16c4096e756 not found: ID does not exist" Feb 17 21:41:17 crc kubenswrapper[4793]: I0217 21:41:17.556529 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee4fe143-1d81-43bb-a6b6-dbddb6b117f6" 
path="/var/lib/kubelet/pods/ee4fe143-1d81-43bb-a6b6-dbddb6b117f6/volumes" Feb 17 21:41:19 crc kubenswrapper[4793]: I0217 21:41:19.065093 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tmrjr"] Feb 17 21:41:19 crc kubenswrapper[4793]: E0217 21:41:19.066121 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee4fe143-1d81-43bb-a6b6-dbddb6b117f6" containerName="registry-server" Feb 17 21:41:19 crc kubenswrapper[4793]: I0217 21:41:19.066135 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee4fe143-1d81-43bb-a6b6-dbddb6b117f6" containerName="registry-server" Feb 17 21:41:19 crc kubenswrapper[4793]: E0217 21:41:19.066166 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee4fe143-1d81-43bb-a6b6-dbddb6b117f6" containerName="extract-content" Feb 17 21:41:19 crc kubenswrapper[4793]: I0217 21:41:19.066173 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee4fe143-1d81-43bb-a6b6-dbddb6b117f6" containerName="extract-content" Feb 17 21:41:19 crc kubenswrapper[4793]: E0217 21:41:19.066214 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee4fe143-1d81-43bb-a6b6-dbddb6b117f6" containerName="extract-utilities" Feb 17 21:41:19 crc kubenswrapper[4793]: I0217 21:41:19.066222 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee4fe143-1d81-43bb-a6b6-dbddb6b117f6" containerName="extract-utilities" Feb 17 21:41:19 crc kubenswrapper[4793]: I0217 21:41:19.066440 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee4fe143-1d81-43bb-a6b6-dbddb6b117f6" containerName="registry-server" Feb 17 21:41:19 crc kubenswrapper[4793]: I0217 21:41:19.068191 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tmrjr"
Feb 17 21:41:19 crc kubenswrapper[4793]: I0217 21:41:19.100239 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tmrjr"]
Feb 17 21:41:19 crc kubenswrapper[4793]: I0217 21:41:19.204587 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5da72a26-35f9-46e9-a0b6-b3ade2cb07f5-catalog-content\") pod \"redhat-operators-tmrjr\" (UID: \"5da72a26-35f9-46e9-a0b6-b3ade2cb07f5\") " pod="openshift-marketplace/redhat-operators-tmrjr"
Feb 17 21:41:19 crc kubenswrapper[4793]: I0217 21:41:19.204635 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85fgv\" (UniqueName: \"kubernetes.io/projected/5da72a26-35f9-46e9-a0b6-b3ade2cb07f5-kube-api-access-85fgv\") pod \"redhat-operators-tmrjr\" (UID: \"5da72a26-35f9-46e9-a0b6-b3ade2cb07f5\") " pod="openshift-marketplace/redhat-operators-tmrjr"
Feb 17 21:41:19 crc kubenswrapper[4793]: I0217 21:41:19.204762 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5da72a26-35f9-46e9-a0b6-b3ade2cb07f5-utilities\") pod \"redhat-operators-tmrjr\" (UID: \"5da72a26-35f9-46e9-a0b6-b3ade2cb07f5\") " pod="openshift-marketplace/redhat-operators-tmrjr"
Feb 17 21:41:19 crc kubenswrapper[4793]: I0217 21:41:19.306149 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5da72a26-35f9-46e9-a0b6-b3ade2cb07f5-utilities\") pod \"redhat-operators-tmrjr\" (UID: \"5da72a26-35f9-46e9-a0b6-b3ade2cb07f5\") " pod="openshift-marketplace/redhat-operators-tmrjr"
Feb 17 21:41:19 crc kubenswrapper[4793]: I0217 21:41:19.306299 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5da72a26-35f9-46e9-a0b6-b3ade2cb07f5-catalog-content\") pod \"redhat-operators-tmrjr\" (UID: \"5da72a26-35f9-46e9-a0b6-b3ade2cb07f5\") " pod="openshift-marketplace/redhat-operators-tmrjr"
Feb 17 21:41:19 crc kubenswrapper[4793]: I0217 21:41:19.306332 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85fgv\" (UniqueName: \"kubernetes.io/projected/5da72a26-35f9-46e9-a0b6-b3ade2cb07f5-kube-api-access-85fgv\") pod \"redhat-operators-tmrjr\" (UID: \"5da72a26-35f9-46e9-a0b6-b3ade2cb07f5\") " pod="openshift-marketplace/redhat-operators-tmrjr"
Feb 17 21:41:19 crc kubenswrapper[4793]: I0217 21:41:19.306738 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5da72a26-35f9-46e9-a0b6-b3ade2cb07f5-utilities\") pod \"redhat-operators-tmrjr\" (UID: \"5da72a26-35f9-46e9-a0b6-b3ade2cb07f5\") " pod="openshift-marketplace/redhat-operators-tmrjr"
Feb 17 21:41:19 crc kubenswrapper[4793]: I0217 21:41:19.306936 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5da72a26-35f9-46e9-a0b6-b3ade2cb07f5-catalog-content\") pod \"redhat-operators-tmrjr\" (UID: \"5da72a26-35f9-46e9-a0b6-b3ade2cb07f5\") " pod="openshift-marketplace/redhat-operators-tmrjr"
Feb 17 21:41:19 crc kubenswrapper[4793]: I0217 21:41:19.325820 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85fgv\" (UniqueName: \"kubernetes.io/projected/5da72a26-35f9-46e9-a0b6-b3ade2cb07f5-kube-api-access-85fgv\") pod \"redhat-operators-tmrjr\" (UID: \"5da72a26-35f9-46e9-a0b6-b3ade2cb07f5\") " pod="openshift-marketplace/redhat-operators-tmrjr"
Feb 17 21:41:19 crc kubenswrapper[4793]: I0217 21:41:19.424413 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tmrjr"
Feb 17 21:41:19 crc kubenswrapper[4793]: I0217 21:41:19.773524 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tmrjr"]
Feb 17 21:41:20 crc kubenswrapper[4793]: I0217 21:41:20.102246 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 21:41:20 crc kubenswrapper[4793]: I0217 21:41:20.102641 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 21:41:20 crc kubenswrapper[4793]: I0217 21:41:20.580629 4793 generic.go:334] "Generic (PLEG): container finished" podID="5da72a26-35f9-46e9-a0b6-b3ade2cb07f5" containerID="2c928b4ac8a192f568ed362839428ef99e0c7f1804146999afaf5942e31858af" exitCode=0
Feb 17 21:41:20 crc kubenswrapper[4793]: I0217 21:41:20.580678 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tmrjr" event={"ID":"5da72a26-35f9-46e9-a0b6-b3ade2cb07f5","Type":"ContainerDied","Data":"2c928b4ac8a192f568ed362839428ef99e0c7f1804146999afaf5942e31858af"}
Feb 17 21:41:20 crc kubenswrapper[4793]: I0217 21:41:20.581351 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tmrjr" event={"ID":"5da72a26-35f9-46e9-a0b6-b3ade2cb07f5","Type":"ContainerStarted","Data":"700b61d9205342497cb24843cc2204f9a9e44e34fbe3eeac84e208c8e51dc0b9"}
Feb 17 21:41:21 crc kubenswrapper[4793]: I0217 21:41:21.594477 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tmrjr" event={"ID":"5da72a26-35f9-46e9-a0b6-b3ade2cb07f5","Type":"ContainerStarted","Data":"80032e54f10d9e8ba403fe0659c40ee5a3c85e3d19711f64e39dd03203a75035"}
Feb 17 21:41:24 crc kubenswrapper[4793]: I0217 21:41:24.639260 4793 generic.go:334] "Generic (PLEG): container finished" podID="5da72a26-35f9-46e9-a0b6-b3ade2cb07f5" containerID="80032e54f10d9e8ba403fe0659c40ee5a3c85e3d19711f64e39dd03203a75035" exitCode=0
Feb 17 21:41:24 crc kubenswrapper[4793]: I0217 21:41:24.639556 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tmrjr" event={"ID":"5da72a26-35f9-46e9-a0b6-b3ade2cb07f5","Type":"ContainerDied","Data":"80032e54f10d9e8ba403fe0659c40ee5a3c85e3d19711f64e39dd03203a75035"}
Feb 17 21:41:25 crc kubenswrapper[4793]: I0217 21:41:25.652445 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tmrjr" event={"ID":"5da72a26-35f9-46e9-a0b6-b3ade2cb07f5","Type":"ContainerStarted","Data":"d4cf9512c168a72d5d4953f849c1bf3230bc5604da9bab405588b9cc66951ae1"}
Feb 17 21:41:25 crc kubenswrapper[4793]: I0217 21:41:25.680982 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tmrjr" podStartSLOduration=2.231568166 podStartE2EDuration="6.680959466s" podCreationTimestamp="2026-02-17 21:41:19 +0000 UTC" firstStartedPulling="2026-02-17 21:41:20.582591615 +0000 UTC m=+5555.874289926" lastFinishedPulling="2026-02-17 21:41:25.031982865 +0000 UTC m=+5560.323681226" observedRunningTime="2026-02-17 21:41:25.676940067 +0000 UTC m=+5560.968638378" watchObservedRunningTime="2026-02-17 21:41:25.680959466 +0000 UTC m=+5560.972657777"
Feb 17 21:41:26 crc kubenswrapper[4793]: I0217 21:41:26.539199 4793 scope.go:117] "RemoveContainer" containerID="16ae0bc62079ea08d5d218f318b9535e2fbd0e475d0f8561a0f2f81e8694bb35"
Feb 17 21:41:26 crc kubenswrapper[4793]: E0217 21:41:26.540012 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:41:27 crc kubenswrapper[4793]: I0217 21:41:27.930001 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dj6tn"]
Feb 17 21:41:27 crc kubenswrapper[4793]: I0217 21:41:27.932972 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dj6tn"
Feb 17 21:41:27 crc kubenswrapper[4793]: I0217 21:41:27.940863 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dj6tn"]
Feb 17 21:41:28 crc kubenswrapper[4793]: I0217 21:41:28.011243 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb84ace2-b5b9-4f32-a0a5-2141c540231d-utilities\") pod \"community-operators-dj6tn\" (UID: \"fb84ace2-b5b9-4f32-a0a5-2141c540231d\") " pod="openshift-marketplace/community-operators-dj6tn"
Feb 17 21:41:28 crc kubenswrapper[4793]: I0217 21:41:28.011306 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb84ace2-b5b9-4f32-a0a5-2141c540231d-catalog-content\") pod \"community-operators-dj6tn\" (UID: \"fb84ace2-b5b9-4f32-a0a5-2141c540231d\") " pod="openshift-marketplace/community-operators-dj6tn"
Feb 17 21:41:28 crc kubenswrapper[4793]: I0217 21:41:28.011395 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp95k\" (UniqueName: \"kubernetes.io/projected/fb84ace2-b5b9-4f32-a0a5-2141c540231d-kube-api-access-rp95k\") pod \"community-operators-dj6tn\" (UID: \"fb84ace2-b5b9-4f32-a0a5-2141c540231d\") " pod="openshift-marketplace/community-operators-dj6tn"
Feb 17 21:41:28 crc kubenswrapper[4793]: I0217 21:41:28.113847 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb84ace2-b5b9-4f32-a0a5-2141c540231d-catalog-content\") pod \"community-operators-dj6tn\" (UID: \"fb84ace2-b5b9-4f32-a0a5-2141c540231d\") " pod="openshift-marketplace/community-operators-dj6tn"
Feb 17 21:41:28 crc kubenswrapper[4793]: I0217 21:41:28.114008 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rp95k\" (UniqueName: \"kubernetes.io/projected/fb84ace2-b5b9-4f32-a0a5-2141c540231d-kube-api-access-rp95k\") pod \"community-operators-dj6tn\" (UID: \"fb84ace2-b5b9-4f32-a0a5-2141c540231d\") " pod="openshift-marketplace/community-operators-dj6tn"
Feb 17 21:41:28 crc kubenswrapper[4793]: I0217 21:41:28.114263 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb84ace2-b5b9-4f32-a0a5-2141c540231d-utilities\") pod \"community-operators-dj6tn\" (UID: \"fb84ace2-b5b9-4f32-a0a5-2141c540231d\") " pod="openshift-marketplace/community-operators-dj6tn"
Feb 17 21:41:28 crc kubenswrapper[4793]: I0217 21:41:28.114458 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb84ace2-b5b9-4f32-a0a5-2141c540231d-catalog-content\") pod \"community-operators-dj6tn\" (UID: \"fb84ace2-b5b9-4f32-a0a5-2141c540231d\") " pod="openshift-marketplace/community-operators-dj6tn"
Feb 17 21:41:28 crc kubenswrapper[4793]: I0217 21:41:28.114835 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb84ace2-b5b9-4f32-a0a5-2141c540231d-utilities\") pod \"community-operators-dj6tn\" (UID: \"fb84ace2-b5b9-4f32-a0a5-2141c540231d\") " pod="openshift-marketplace/community-operators-dj6tn"
Feb 17 21:41:28 crc kubenswrapper[4793]: I0217 21:41:28.140606 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rp95k\" (UniqueName: \"kubernetes.io/projected/fb84ace2-b5b9-4f32-a0a5-2141c540231d-kube-api-access-rp95k\") pod \"community-operators-dj6tn\" (UID: \"fb84ace2-b5b9-4f32-a0a5-2141c540231d\") " pod="openshift-marketplace/community-operators-dj6tn"
Feb 17 21:41:28 crc kubenswrapper[4793]: I0217 21:41:28.264020 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dj6tn"
Feb 17 21:41:28 crc kubenswrapper[4793]: I0217 21:41:28.758591 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dj6tn"]
Feb 17 21:41:29 crc kubenswrapper[4793]: I0217 21:41:29.424787 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tmrjr"
Feb 17 21:41:29 crc kubenswrapper[4793]: I0217 21:41:29.424836 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tmrjr"
Feb 17 21:41:29 crc kubenswrapper[4793]: I0217 21:41:29.687665 4793 generic.go:334] "Generic (PLEG): container finished" podID="fb84ace2-b5b9-4f32-a0a5-2141c540231d" containerID="5ebe297bbfa7905dea56a7eb7c47e1712dcee57180c21bf6405bdefdf8aeaf80" exitCode=0
Feb 17 21:41:29 crc kubenswrapper[4793]: I0217 21:41:29.688022 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dj6tn" event={"ID":"fb84ace2-b5b9-4f32-a0a5-2141c540231d","Type":"ContainerDied","Data":"5ebe297bbfa7905dea56a7eb7c47e1712dcee57180c21bf6405bdefdf8aeaf80"}
Feb 17 21:41:29 crc kubenswrapper[4793]: I0217 21:41:29.688048 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dj6tn" event={"ID":"fb84ace2-b5b9-4f32-a0a5-2141c540231d","Type":"ContainerStarted","Data":"33d81f6debe2ab33992ebfe7759abad677fa693062de90217893861ee75a3c0f"}
Feb 17 21:41:30 crc kubenswrapper[4793]: I0217 21:41:30.503276 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tmrjr" podUID="5da72a26-35f9-46e9-a0b6-b3ade2cb07f5" containerName="registry-server" probeResult="failure" output=<
Feb 17 21:41:30 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s
Feb 17 21:41:30 crc kubenswrapper[4793]: >
Feb 17 21:41:30 crc kubenswrapper[4793]: I0217 21:41:30.700333 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dj6tn" event={"ID":"fb84ace2-b5b9-4f32-a0a5-2141c540231d","Type":"ContainerStarted","Data":"320bfd467c888fa0e9c3e76b5eaca8a26befa5f92661d62d5f725dd503f9a16a"}
Feb 17 21:41:32 crc kubenswrapper[4793]: I0217 21:41:32.723403 4793 generic.go:334] "Generic (PLEG): container finished" podID="fb84ace2-b5b9-4f32-a0a5-2141c540231d" containerID="320bfd467c888fa0e9c3e76b5eaca8a26befa5f92661d62d5f725dd503f9a16a" exitCode=0
Feb 17 21:41:32 crc kubenswrapper[4793]: I0217 21:41:32.723488 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dj6tn" event={"ID":"fb84ace2-b5b9-4f32-a0a5-2141c540231d","Type":"ContainerDied","Data":"320bfd467c888fa0e9c3e76b5eaca8a26befa5f92661d62d5f725dd503f9a16a"}
Feb 17 21:41:33 crc kubenswrapper[4793]: I0217 21:41:33.743474 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dj6tn" event={"ID":"fb84ace2-b5b9-4f32-a0a5-2141c540231d","Type":"ContainerStarted","Data":"e5b4052f1c4ae59531bc05dbdc4d8d108ec0d504477e96fb1ddc5e34c8df423a"}
Feb 17 21:41:33 crc kubenswrapper[4793]: I0217 21:41:33.779082 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dj6tn" podStartSLOduration=3.309173715 podStartE2EDuration="6.779041962s" podCreationTimestamp="2026-02-17 21:41:27 +0000 UTC" firstStartedPulling="2026-02-17 21:41:29.689640789 +0000 UTC m=+5564.981339100" lastFinishedPulling="2026-02-17 21:41:33.159509026 +0000 UTC m=+5568.451207347" observedRunningTime="2026-02-17 21:41:33.772233495 +0000 UTC m=+5569.063931826" watchObservedRunningTime="2026-02-17 21:41:33.779041962 +0000 UTC m=+5569.070740303"
Feb 17 21:41:37 crc kubenswrapper[4793]: I0217 21:41:37.539010 4793 scope.go:117] "RemoveContainer" containerID="16ae0bc62079ea08d5d218f318b9535e2fbd0e475d0f8561a0f2f81e8694bb35"
Feb 17 21:41:37 crc kubenswrapper[4793]: E0217 21:41:37.540622 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:41:38 crc kubenswrapper[4793]: I0217 21:41:38.264523 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dj6tn"
Feb 17 21:41:38 crc kubenswrapper[4793]: I0217 21:41:38.264580 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dj6tn"
Feb 17 21:41:39 crc kubenswrapper[4793]: I0217 21:41:39.340109 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-dj6tn" podUID="fb84ace2-b5b9-4f32-a0a5-2141c540231d" containerName="registry-server" probeResult="failure" output=<
Feb 17 21:41:39 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s
Feb 17 21:41:39 crc kubenswrapper[4793]: >
Feb 17 21:41:39 crc kubenswrapper[4793]: I0217 21:41:39.498899 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tmrjr"
Feb 17 21:41:39 crc kubenswrapper[4793]: I0217 21:41:39.563665 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tmrjr"
Feb 17 21:41:39 crc kubenswrapper[4793]: I0217 21:41:39.742636 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tmrjr"]
Feb 17 21:41:40 crc kubenswrapper[4793]: I0217 21:41:40.822425 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tmrjr" podUID="5da72a26-35f9-46e9-a0b6-b3ade2cb07f5" containerName="registry-server" containerID="cri-o://d4cf9512c168a72d5d4953f849c1bf3230bc5604da9bab405588b9cc66951ae1" gracePeriod=2
Feb 17 21:41:41 crc kubenswrapper[4793]: I0217 21:41:41.358226 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tmrjr"
Feb 17 21:41:41 crc kubenswrapper[4793]: I0217 21:41:41.409742 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5da72a26-35f9-46e9-a0b6-b3ade2cb07f5-utilities\") pod \"5da72a26-35f9-46e9-a0b6-b3ade2cb07f5\" (UID: \"5da72a26-35f9-46e9-a0b6-b3ade2cb07f5\") "
Feb 17 21:41:41 crc kubenswrapper[4793]: I0217 21:41:41.410007 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5da72a26-35f9-46e9-a0b6-b3ade2cb07f5-catalog-content\") pod \"5da72a26-35f9-46e9-a0b6-b3ade2cb07f5\" (UID: \"5da72a26-35f9-46e9-a0b6-b3ade2cb07f5\") "
Feb 17 21:41:41 crc kubenswrapper[4793]: I0217 21:41:41.410138 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85fgv\" (UniqueName: \"kubernetes.io/projected/5da72a26-35f9-46e9-a0b6-b3ade2cb07f5-kube-api-access-85fgv\") pod \"5da72a26-35f9-46e9-a0b6-b3ade2cb07f5\" (UID: \"5da72a26-35f9-46e9-a0b6-b3ade2cb07f5\") "
Feb 17 21:41:41 crc kubenswrapper[4793]: I0217 21:41:41.412775 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5da72a26-35f9-46e9-a0b6-b3ade2cb07f5-utilities" (OuterVolumeSpecName: "utilities") pod "5da72a26-35f9-46e9-a0b6-b3ade2cb07f5" (UID: "5da72a26-35f9-46e9-a0b6-b3ade2cb07f5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 21:41:41 crc kubenswrapper[4793]: I0217 21:41:41.419886 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5da72a26-35f9-46e9-a0b6-b3ade2cb07f5-kube-api-access-85fgv" (OuterVolumeSpecName: "kube-api-access-85fgv") pod "5da72a26-35f9-46e9-a0b6-b3ade2cb07f5" (UID: "5da72a26-35f9-46e9-a0b6-b3ade2cb07f5"). InnerVolumeSpecName "kube-api-access-85fgv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 21:41:41 crc kubenswrapper[4793]: I0217 21:41:41.512223 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85fgv\" (UniqueName: \"kubernetes.io/projected/5da72a26-35f9-46e9-a0b6-b3ade2cb07f5-kube-api-access-85fgv\") on node \"crc\" DevicePath \"\""
Feb 17 21:41:41 crc kubenswrapper[4793]: I0217 21:41:41.512482 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5da72a26-35f9-46e9-a0b6-b3ade2cb07f5-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 21:41:41 crc kubenswrapper[4793]: I0217 21:41:41.578980 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5da72a26-35f9-46e9-a0b6-b3ade2cb07f5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5da72a26-35f9-46e9-a0b6-b3ade2cb07f5" (UID: "5da72a26-35f9-46e9-a0b6-b3ade2cb07f5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 21:41:41 crc kubenswrapper[4793]: I0217 21:41:41.613891 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5da72a26-35f9-46e9-a0b6-b3ade2cb07f5-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 21:41:41 crc kubenswrapper[4793]: I0217 21:41:41.843043 4793 generic.go:334] "Generic (PLEG): container finished" podID="5da72a26-35f9-46e9-a0b6-b3ade2cb07f5" containerID="d4cf9512c168a72d5d4953f849c1bf3230bc5604da9bab405588b9cc66951ae1" exitCode=0
Feb 17 21:41:41 crc kubenswrapper[4793]: I0217 21:41:41.843105 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tmrjr" event={"ID":"5da72a26-35f9-46e9-a0b6-b3ade2cb07f5","Type":"ContainerDied","Data":"d4cf9512c168a72d5d4953f849c1bf3230bc5604da9bab405588b9cc66951ae1"}
Feb 17 21:41:41 crc kubenswrapper[4793]: I0217 21:41:41.843151 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tmrjr" event={"ID":"5da72a26-35f9-46e9-a0b6-b3ade2cb07f5","Type":"ContainerDied","Data":"700b61d9205342497cb24843cc2204f9a9e44e34fbe3eeac84e208c8e51dc0b9"}
Feb 17 21:41:41 crc kubenswrapper[4793]: I0217 21:41:41.843173 4793 scope.go:117] "RemoveContainer" containerID="d4cf9512c168a72d5d4953f849c1bf3230bc5604da9bab405588b9cc66951ae1"
Feb 17 21:41:41 crc kubenswrapper[4793]: I0217 21:41:41.843246 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tmrjr"
Feb 17 21:41:41 crc kubenswrapper[4793]: I0217 21:41:41.884417 4793 scope.go:117] "RemoveContainer" containerID="80032e54f10d9e8ba403fe0659c40ee5a3c85e3d19711f64e39dd03203a75035"
Feb 17 21:41:41 crc kubenswrapper[4793]: I0217 21:41:41.885715 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tmrjr"]
Feb 17 21:41:41 crc kubenswrapper[4793]: I0217 21:41:41.894404 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tmrjr"]
Feb 17 21:41:41 crc kubenswrapper[4793]: I0217 21:41:41.923895 4793 scope.go:117] "RemoveContainer" containerID="2c928b4ac8a192f568ed362839428ef99e0c7f1804146999afaf5942e31858af"
Feb 17 21:41:41 crc kubenswrapper[4793]: I0217 21:41:41.969128 4793 scope.go:117] "RemoveContainer" containerID="d4cf9512c168a72d5d4953f849c1bf3230bc5604da9bab405588b9cc66951ae1"
Feb 17 21:41:41 crc kubenswrapper[4793]: E0217 21:41:41.970167 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4cf9512c168a72d5d4953f849c1bf3230bc5604da9bab405588b9cc66951ae1\": container with ID starting with d4cf9512c168a72d5d4953f849c1bf3230bc5604da9bab405588b9cc66951ae1 not found: ID does not exist" containerID="d4cf9512c168a72d5d4953f849c1bf3230bc5604da9bab405588b9cc66951ae1"
Feb 17 21:41:41 crc kubenswrapper[4793]: I0217 21:41:41.970226 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4cf9512c168a72d5d4953f849c1bf3230bc5604da9bab405588b9cc66951ae1"} err="failed to get container status \"d4cf9512c168a72d5d4953f849c1bf3230bc5604da9bab405588b9cc66951ae1\": rpc error: code = NotFound desc = could not find container \"d4cf9512c168a72d5d4953f849c1bf3230bc5604da9bab405588b9cc66951ae1\": container with ID starting with d4cf9512c168a72d5d4953f849c1bf3230bc5604da9bab405588b9cc66951ae1 not found: ID does not exist"
Feb 17 21:41:41 crc kubenswrapper[4793]: I0217 21:41:41.970266 4793 scope.go:117] "RemoveContainer" containerID="80032e54f10d9e8ba403fe0659c40ee5a3c85e3d19711f64e39dd03203a75035"
Feb 17 21:41:41 crc kubenswrapper[4793]: E0217 21:41:41.970658 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80032e54f10d9e8ba403fe0659c40ee5a3c85e3d19711f64e39dd03203a75035\": container with ID starting with 80032e54f10d9e8ba403fe0659c40ee5a3c85e3d19711f64e39dd03203a75035 not found: ID does not exist" containerID="80032e54f10d9e8ba403fe0659c40ee5a3c85e3d19711f64e39dd03203a75035"
Feb 17 21:41:41 crc kubenswrapper[4793]: I0217 21:41:41.970762 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80032e54f10d9e8ba403fe0659c40ee5a3c85e3d19711f64e39dd03203a75035"} err="failed to get container status \"80032e54f10d9e8ba403fe0659c40ee5a3c85e3d19711f64e39dd03203a75035\": rpc error: code = NotFound desc = could not find container \"80032e54f10d9e8ba403fe0659c40ee5a3c85e3d19711f64e39dd03203a75035\": container with ID starting with 80032e54f10d9e8ba403fe0659c40ee5a3c85e3d19711f64e39dd03203a75035 not found: ID does not exist"
Feb 17 21:41:41 crc kubenswrapper[4793]: I0217 21:41:41.970831 4793 scope.go:117] "RemoveContainer" containerID="2c928b4ac8a192f568ed362839428ef99e0c7f1804146999afaf5942e31858af"
Feb 17 21:41:41 crc kubenswrapper[4793]: E0217 21:41:41.971199 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c928b4ac8a192f568ed362839428ef99e0c7f1804146999afaf5942e31858af\": container with ID starting with 2c928b4ac8a192f568ed362839428ef99e0c7f1804146999afaf5942e31858af not found: ID does not exist" containerID="2c928b4ac8a192f568ed362839428ef99e0c7f1804146999afaf5942e31858af"
Feb 17 21:41:41 crc kubenswrapper[4793]: I0217 21:41:41.971235 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c928b4ac8a192f568ed362839428ef99e0c7f1804146999afaf5942e31858af"} err="failed to get container status \"2c928b4ac8a192f568ed362839428ef99e0c7f1804146999afaf5942e31858af\": rpc error: code = NotFound desc = could not find container \"2c928b4ac8a192f568ed362839428ef99e0c7f1804146999afaf5942e31858af\": container with ID starting with 2c928b4ac8a192f568ed362839428ef99e0c7f1804146999afaf5942e31858af not found: ID does not exist"
Feb 17 21:41:43 crc kubenswrapper[4793]: I0217 21:41:43.559022 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5da72a26-35f9-46e9-a0b6-b3ade2cb07f5" path="/var/lib/kubelet/pods/5da72a26-35f9-46e9-a0b6-b3ade2cb07f5/volumes"
Feb 17 21:41:48 crc kubenswrapper[4793]: I0217 21:41:48.351558 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dj6tn"
Feb 17 21:41:48 crc kubenswrapper[4793]: I0217 21:41:48.452545 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dj6tn"
Feb 17 21:41:48 crc kubenswrapper[4793]: I0217 21:41:48.646539 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dj6tn"]
Feb 17 21:41:49 crc kubenswrapper[4793]: I0217 21:41:49.922167 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dj6tn" podUID="fb84ace2-b5b9-4f32-a0a5-2141c540231d" containerName="registry-server" containerID="cri-o://e5b4052f1c4ae59531bc05dbdc4d8d108ec0d504477e96fb1ddc5e34c8df423a" gracePeriod=2
Feb 17 21:41:50 crc kubenswrapper[4793]: I0217 21:41:50.101354 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 21:41:50 crc kubenswrapper[4793]: I0217 21:41:50.101413 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 21:41:50 crc kubenswrapper[4793]: I0217 21:41:50.416465 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dj6tn"
Feb 17 21:41:50 crc kubenswrapper[4793]: I0217 21:41:50.498378 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rp95k\" (UniqueName: \"kubernetes.io/projected/fb84ace2-b5b9-4f32-a0a5-2141c540231d-kube-api-access-rp95k\") pod \"fb84ace2-b5b9-4f32-a0a5-2141c540231d\" (UID: \"fb84ace2-b5b9-4f32-a0a5-2141c540231d\") "
Feb 17 21:41:50 crc kubenswrapper[4793]: I0217 21:41:50.498634 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb84ace2-b5b9-4f32-a0a5-2141c540231d-utilities\") pod \"fb84ace2-b5b9-4f32-a0a5-2141c540231d\" (UID: \"fb84ace2-b5b9-4f32-a0a5-2141c540231d\") "
Feb 17 21:41:50 crc kubenswrapper[4793]: I0217 21:41:50.498741 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb84ace2-b5b9-4f32-a0a5-2141c540231d-catalog-content\") pod \"fb84ace2-b5b9-4f32-a0a5-2141c540231d\" (UID: \"fb84ace2-b5b9-4f32-a0a5-2141c540231d\") "
Feb 17 21:41:50 crc kubenswrapper[4793]: I0217 21:41:50.499386 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb84ace2-b5b9-4f32-a0a5-2141c540231d-utilities" (OuterVolumeSpecName: "utilities") pod "fb84ace2-b5b9-4f32-a0a5-2141c540231d" (UID: "fb84ace2-b5b9-4f32-a0a5-2141c540231d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 21:41:50 crc kubenswrapper[4793]: I0217 21:41:50.508719 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb84ace2-b5b9-4f32-a0a5-2141c540231d-kube-api-access-rp95k" (OuterVolumeSpecName: "kube-api-access-rp95k") pod "fb84ace2-b5b9-4f32-a0a5-2141c540231d" (UID: "fb84ace2-b5b9-4f32-a0a5-2141c540231d"). InnerVolumeSpecName "kube-api-access-rp95k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 21:41:50 crc kubenswrapper[4793]: I0217 21:41:50.540113 4793 scope.go:117] "RemoveContainer" containerID="16ae0bc62079ea08d5d218f318b9535e2fbd0e475d0f8561a0f2f81e8694bb35"
Feb 17 21:41:50 crc kubenswrapper[4793]: E0217 21:41:50.540344 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:41:50 crc kubenswrapper[4793]: I0217 21:41:50.562919 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb84ace2-b5b9-4f32-a0a5-2141c540231d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fb84ace2-b5b9-4f32-a0a5-2141c540231d" (UID: "fb84ace2-b5b9-4f32-a0a5-2141c540231d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 21:41:50 crc kubenswrapper[4793]: I0217 21:41:50.601113 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rp95k\" (UniqueName: \"kubernetes.io/projected/fb84ace2-b5b9-4f32-a0a5-2141c540231d-kube-api-access-rp95k\") on node \"crc\" DevicePath \"\""
Feb 17 21:41:50 crc kubenswrapper[4793]: I0217 21:41:50.601137 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb84ace2-b5b9-4f32-a0a5-2141c540231d-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 21:41:50 crc kubenswrapper[4793]: I0217 21:41:50.601146 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb84ace2-b5b9-4f32-a0a5-2141c540231d-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 21:41:50 crc kubenswrapper[4793]: I0217 21:41:50.939953 4793 generic.go:334] "Generic (PLEG): container finished" podID="fb84ace2-b5b9-4f32-a0a5-2141c540231d" containerID="e5b4052f1c4ae59531bc05dbdc4d8d108ec0d504477e96fb1ddc5e34c8df423a" exitCode=0
Feb 17 21:41:50 crc kubenswrapper[4793]: I0217 21:41:50.940047 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dj6tn"
Feb 17 21:41:50 crc kubenswrapper[4793]: I0217 21:41:50.940068 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dj6tn" event={"ID":"fb84ace2-b5b9-4f32-a0a5-2141c540231d","Type":"ContainerDied","Data":"e5b4052f1c4ae59531bc05dbdc4d8d108ec0d504477e96fb1ddc5e34c8df423a"}
Feb 17 21:41:50 crc kubenswrapper[4793]: I0217 21:41:50.941088 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dj6tn" event={"ID":"fb84ace2-b5b9-4f32-a0a5-2141c540231d","Type":"ContainerDied","Data":"33d81f6debe2ab33992ebfe7759abad677fa693062de90217893861ee75a3c0f"}
Feb 17 21:41:50 crc kubenswrapper[4793]: I0217 21:41:50.941132 4793 scope.go:117] "RemoveContainer" containerID="e5b4052f1c4ae59531bc05dbdc4d8d108ec0d504477e96fb1ddc5e34c8df423a"
Feb 17 21:41:50 crc kubenswrapper[4793]: I0217 21:41:50.979001 4793 scope.go:117] "RemoveContainer" containerID="320bfd467c888fa0e9c3e76b5eaca8a26befa5f92661d62d5f725dd503f9a16a"
Feb 17 21:41:51 crc kubenswrapper[4793]: I0217 21:41:51.031924 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dj6tn"]
Feb 17 21:41:51 crc kubenswrapper[4793]: I0217 21:41:51.043552 4793 scope.go:117] "RemoveContainer" containerID="5ebe297bbfa7905dea56a7eb7c47e1712dcee57180c21bf6405bdefdf8aeaf80"
Feb 17 21:41:51 crc kubenswrapper[4793]: I0217 21:41:51.049294 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dj6tn"]
Feb 17 21:41:51 crc kubenswrapper[4793]: I0217 21:41:51.088762 4793 scope.go:117] "RemoveContainer" containerID="e5b4052f1c4ae59531bc05dbdc4d8d108ec0d504477e96fb1ddc5e34c8df423a"
Feb 17 21:41:51 crc kubenswrapper[4793]: E0217 21:41:51.089289 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5b4052f1c4ae59531bc05dbdc4d8d108ec0d504477e96fb1ddc5e34c8df423a\": container with ID starting with e5b4052f1c4ae59531bc05dbdc4d8d108ec0d504477e96fb1ddc5e34c8df423a not found: ID does not exist" containerID="e5b4052f1c4ae59531bc05dbdc4d8d108ec0d504477e96fb1ddc5e34c8df423a"
Feb 17 21:41:51 crc kubenswrapper[4793]: I0217 21:41:51.089340 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5b4052f1c4ae59531bc05dbdc4d8d108ec0d504477e96fb1ddc5e34c8df423a"} err="failed to get container status \"e5b4052f1c4ae59531bc05dbdc4d8d108ec0d504477e96fb1ddc5e34c8df423a\": rpc error: code = NotFound desc = could not find container \"e5b4052f1c4ae59531bc05dbdc4d8d108ec0d504477e96fb1ddc5e34c8df423a\": container with ID starting with e5b4052f1c4ae59531bc05dbdc4d8d108ec0d504477e96fb1ddc5e34c8df423a not found: ID does not exist"
Feb 17 21:41:51 crc kubenswrapper[4793]: I0217 21:41:51.089372 4793 scope.go:117] "RemoveContainer" containerID="320bfd467c888fa0e9c3e76b5eaca8a26befa5f92661d62d5f725dd503f9a16a"
Feb 17 21:41:51 crc kubenswrapper[4793]: E0217 21:41:51.089785 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"320bfd467c888fa0e9c3e76b5eaca8a26befa5f92661d62d5f725dd503f9a16a\": container with ID starting with 320bfd467c888fa0e9c3e76b5eaca8a26befa5f92661d62d5f725dd503f9a16a not found: ID does not exist" containerID="320bfd467c888fa0e9c3e76b5eaca8a26befa5f92661d62d5f725dd503f9a16a"
Feb 17 21:41:51 crc kubenswrapper[4793]: I0217 21:41:51.089815 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"320bfd467c888fa0e9c3e76b5eaca8a26befa5f92661d62d5f725dd503f9a16a"} err="failed to get container status \"320bfd467c888fa0e9c3e76b5eaca8a26befa5f92661d62d5f725dd503f9a16a\": rpc error: code = NotFound desc = could not find container \"320bfd467c888fa0e9c3e76b5eaca8a26befa5f92661d62d5f725dd503f9a16a\": container with ID
starting with 320bfd467c888fa0e9c3e76b5eaca8a26befa5f92661d62d5f725dd503f9a16a not found: ID does not exist" Feb 17 21:41:51 crc kubenswrapper[4793]: I0217 21:41:51.089836 4793 scope.go:117] "RemoveContainer" containerID="5ebe297bbfa7905dea56a7eb7c47e1712dcee57180c21bf6405bdefdf8aeaf80" Feb 17 21:41:51 crc kubenswrapper[4793]: E0217 21:41:51.090284 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ebe297bbfa7905dea56a7eb7c47e1712dcee57180c21bf6405bdefdf8aeaf80\": container with ID starting with 5ebe297bbfa7905dea56a7eb7c47e1712dcee57180c21bf6405bdefdf8aeaf80 not found: ID does not exist" containerID="5ebe297bbfa7905dea56a7eb7c47e1712dcee57180c21bf6405bdefdf8aeaf80" Feb 17 21:41:51 crc kubenswrapper[4793]: I0217 21:41:51.090319 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ebe297bbfa7905dea56a7eb7c47e1712dcee57180c21bf6405bdefdf8aeaf80"} err="failed to get container status \"5ebe297bbfa7905dea56a7eb7c47e1712dcee57180c21bf6405bdefdf8aeaf80\": rpc error: code = NotFound desc = could not find container \"5ebe297bbfa7905dea56a7eb7c47e1712dcee57180c21bf6405bdefdf8aeaf80\": container with ID starting with 5ebe297bbfa7905dea56a7eb7c47e1712dcee57180c21bf6405bdefdf8aeaf80 not found: ID does not exist" Feb 17 21:41:51 crc kubenswrapper[4793]: I0217 21:41:51.558387 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb84ace2-b5b9-4f32-a0a5-2141c540231d" path="/var/lib/kubelet/pods/fb84ace2-b5b9-4f32-a0a5-2141c540231d/volumes" Feb 17 21:42:01 crc kubenswrapper[4793]: I0217 21:42:01.540146 4793 scope.go:117] "RemoveContainer" containerID="16ae0bc62079ea08d5d218f318b9535e2fbd0e475d0f8561a0f2f81e8694bb35" Feb 17 21:42:01 crc kubenswrapper[4793]: E0217 21:42:01.541527 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:42:12 crc kubenswrapper[4793]: I0217 21:42:12.539546 4793 scope.go:117] "RemoveContainer" containerID="16ae0bc62079ea08d5d218f318b9535e2fbd0e475d0f8561a0f2f81e8694bb35" Feb 17 21:42:12 crc kubenswrapper[4793]: E0217 21:42:12.540462 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:42:20 crc kubenswrapper[4793]: I0217 21:42:20.102522 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 21:42:20 crc kubenswrapper[4793]: I0217 21:42:20.103253 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 21:42:20 crc kubenswrapper[4793]: I0217 21:42:20.103311 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" Feb 17 21:42:20 crc kubenswrapper[4793]: I0217 21:42:20.104388 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"4ddebe9fa50fa6f3cacc64e878425ab4a7565e37cff2b52fead4d0445814f582"} pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 21:42:20 crc kubenswrapper[4793]: I0217 21:42:20.104487 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" containerID="cri-o://4ddebe9fa50fa6f3cacc64e878425ab4a7565e37cff2b52fead4d0445814f582" gracePeriod=600 Feb 17 21:42:20 crc kubenswrapper[4793]: E0217 21:42:20.234787 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:42:20 crc kubenswrapper[4793]: I0217 21:42:20.287064 4793 generic.go:334] "Generic (PLEG): container finished" podID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerID="4ddebe9fa50fa6f3cacc64e878425ab4a7565e37cff2b52fead4d0445814f582" exitCode=0 Feb 17 21:42:20 crc kubenswrapper[4793]: I0217 21:42:20.287107 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerDied","Data":"4ddebe9fa50fa6f3cacc64e878425ab4a7565e37cff2b52fead4d0445814f582"} Feb 17 21:42:20 crc kubenswrapper[4793]: I0217 21:42:20.287140 4793 scope.go:117] "RemoveContainer" containerID="3e8a8dab73fd74db859b33d53c345bd888043d3a6b4c0f585517de04e6dde304" Feb 17 21:42:20 crc kubenswrapper[4793]: I0217 21:42:20.288078 4793 
scope.go:117] "RemoveContainer" containerID="4ddebe9fa50fa6f3cacc64e878425ab4a7565e37cff2b52fead4d0445814f582" Feb 17 21:42:20 crc kubenswrapper[4793]: E0217 21:42:20.288475 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:42:23 crc kubenswrapper[4793]: I0217 21:42:23.539713 4793 scope.go:117] "RemoveContainer" containerID="16ae0bc62079ea08d5d218f318b9535e2fbd0e475d0f8561a0f2f81e8694bb35" Feb 17 21:42:23 crc kubenswrapper[4793]: E0217 21:42:23.540509 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:42:33 crc kubenswrapper[4793]: I0217 21:42:33.538789 4793 scope.go:117] "RemoveContainer" containerID="4ddebe9fa50fa6f3cacc64e878425ab4a7565e37cff2b52fead4d0445814f582" Feb 17 21:42:33 crc kubenswrapper[4793]: E0217 21:42:33.539590 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:42:36 crc kubenswrapper[4793]: I0217 21:42:36.539304 4793 scope.go:117] "RemoveContainer" 
containerID="16ae0bc62079ea08d5d218f318b9535e2fbd0e475d0f8561a0f2f81e8694bb35" Feb 17 21:42:36 crc kubenswrapper[4793]: E0217 21:42:36.540056 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:42:48 crc kubenswrapper[4793]: I0217 21:42:48.539239 4793 scope.go:117] "RemoveContainer" containerID="4ddebe9fa50fa6f3cacc64e878425ab4a7565e37cff2b52fead4d0445814f582" Feb 17 21:42:48 crc kubenswrapper[4793]: E0217 21:42:48.540369 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:42:49 crc kubenswrapper[4793]: I0217 21:42:49.538859 4793 scope.go:117] "RemoveContainer" containerID="16ae0bc62079ea08d5d218f318b9535e2fbd0e475d0f8561a0f2f81e8694bb35" Feb 17 21:42:49 crc kubenswrapper[4793]: E0217 21:42:49.539526 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:43:01 crc kubenswrapper[4793]: I0217 21:43:01.539586 4793 scope.go:117] "RemoveContainer" containerID="16ae0bc62079ea08d5d218f318b9535e2fbd0e475d0f8561a0f2f81e8694bb35" Feb 17 21:43:01 crc kubenswrapper[4793]: E0217 21:43:01.541099 
4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:43:02 crc kubenswrapper[4793]: I0217 21:43:02.539076 4793 scope.go:117] "RemoveContainer" containerID="4ddebe9fa50fa6f3cacc64e878425ab4a7565e37cff2b52fead4d0445814f582" Feb 17 21:43:02 crc kubenswrapper[4793]: E0217 21:43:02.539947 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:43:14 crc kubenswrapper[4793]: I0217 21:43:14.539655 4793 scope.go:117] "RemoveContainer" containerID="4ddebe9fa50fa6f3cacc64e878425ab4a7565e37cff2b52fead4d0445814f582" Feb 17 21:43:14 crc kubenswrapper[4793]: E0217 21:43:14.540420 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:43:15 crc kubenswrapper[4793]: I0217 21:43:15.559934 4793 scope.go:117] "RemoveContainer" containerID="16ae0bc62079ea08d5d218f318b9535e2fbd0e475d0f8561a0f2f81e8694bb35" Feb 17 21:43:15 crc kubenswrapper[4793]: E0217 21:43:15.560440 4793 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:43:27 crc kubenswrapper[4793]: I0217 21:43:27.553782 4793 scope.go:117] "RemoveContainer" containerID="16ae0bc62079ea08d5d218f318b9535e2fbd0e475d0f8561a0f2f81e8694bb35" Feb 17 21:43:27 crc kubenswrapper[4793]: E0217 21:43:27.554947 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:43:29 crc kubenswrapper[4793]: I0217 21:43:29.538446 4793 scope.go:117] "RemoveContainer" containerID="4ddebe9fa50fa6f3cacc64e878425ab4a7565e37cff2b52fead4d0445814f582" Feb 17 21:43:29 crc kubenswrapper[4793]: E0217 21:43:29.538947 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:43:41 crc kubenswrapper[4793]: I0217 21:43:41.539768 4793 scope.go:117] "RemoveContainer" containerID="4ddebe9fa50fa6f3cacc64e878425ab4a7565e37cff2b52fead4d0445814f582" Feb 17 21:43:41 crc kubenswrapper[4793]: E0217 21:43:41.540665 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:43:42 crc kubenswrapper[4793]: I0217 21:43:42.538823 4793 scope.go:117] "RemoveContainer" containerID="16ae0bc62079ea08d5d218f318b9535e2fbd0e475d0f8561a0f2f81e8694bb35" Feb 17 21:43:42 crc kubenswrapper[4793]: E0217 21:43:42.539391 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:43:55 crc kubenswrapper[4793]: I0217 21:43:55.547067 4793 scope.go:117] "RemoveContainer" containerID="16ae0bc62079ea08d5d218f318b9535e2fbd0e475d0f8561a0f2f81e8694bb35" Feb 17 21:43:55 crc kubenswrapper[4793]: I0217 21:43:55.547734 4793 scope.go:117] "RemoveContainer" containerID="4ddebe9fa50fa6f3cacc64e878425ab4a7565e37cff2b52fead4d0445814f582" Feb 17 21:43:55 crc kubenswrapper[4793]: E0217 21:43:55.548091 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:43:55 crc kubenswrapper[4793]: E0217 21:43:55.548091 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:44:06 crc kubenswrapper[4793]: I0217 21:44:06.539206 4793 scope.go:117] "RemoveContainer" containerID="16ae0bc62079ea08d5d218f318b9535e2fbd0e475d0f8561a0f2f81e8694bb35" Feb 17 21:44:06 crc kubenswrapper[4793]: E0217 21:44:06.539980 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:44:08 crc kubenswrapper[4793]: I0217 21:44:08.539330 4793 scope.go:117] "RemoveContainer" containerID="4ddebe9fa50fa6f3cacc64e878425ab4a7565e37cff2b52fead4d0445814f582" Feb 17 21:44:08 crc kubenswrapper[4793]: E0217 21:44:08.540044 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:44:21 crc kubenswrapper[4793]: I0217 21:44:21.538779 4793 scope.go:117] "RemoveContainer" containerID="16ae0bc62079ea08d5d218f318b9535e2fbd0e475d0f8561a0f2f81e8694bb35" Feb 17 21:44:21 crc kubenswrapper[4793]: E0217 21:44:21.539703 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:44:23 crc 
kubenswrapper[4793]: I0217 21:44:23.539373 4793 scope.go:117] "RemoveContainer" containerID="4ddebe9fa50fa6f3cacc64e878425ab4a7565e37cff2b52fead4d0445814f582" Feb 17 21:44:23 crc kubenswrapper[4793]: E0217 21:44:23.540546 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:44:36 crc kubenswrapper[4793]: I0217 21:44:36.539096 4793 scope.go:117] "RemoveContainer" containerID="16ae0bc62079ea08d5d218f318b9535e2fbd0e475d0f8561a0f2f81e8694bb35" Feb 17 21:44:36 crc kubenswrapper[4793]: E0217 21:44:36.539959 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:44:36 crc kubenswrapper[4793]: I0217 21:44:36.540811 4793 scope.go:117] "RemoveContainer" containerID="4ddebe9fa50fa6f3cacc64e878425ab4a7565e37cff2b52fead4d0445814f582" Feb 17 21:44:36 crc kubenswrapper[4793]: E0217 21:44:36.541134 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:44:47 crc kubenswrapper[4793]: I0217 21:44:47.538863 4793 scope.go:117] 
"RemoveContainer" containerID="16ae0bc62079ea08d5d218f318b9535e2fbd0e475d0f8561a0f2f81e8694bb35" Feb 17 21:44:47 crc kubenswrapper[4793]: E0217 21:44:47.539670 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:44:50 crc kubenswrapper[4793]: I0217 21:44:50.538871 4793 scope.go:117] "RemoveContainer" containerID="4ddebe9fa50fa6f3cacc64e878425ab4a7565e37cff2b52fead4d0445814f582" Feb 17 21:44:50 crc kubenswrapper[4793]: E0217 21:44:50.539653 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:44:59 crc kubenswrapper[4793]: I0217 21:44:59.540355 4793 scope.go:117] "RemoveContainer" containerID="16ae0bc62079ea08d5d218f318b9535e2fbd0e475d0f8561a0f2f81e8694bb35" Feb 17 21:44:59 crc kubenswrapper[4793]: E0217 21:44:59.541223 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:45:00 crc kubenswrapper[4793]: I0217 21:45:00.170180 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522745-scvkn"] Feb 17 21:45:00 crc 
kubenswrapper[4793]: E0217 21:45:00.170849 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb84ace2-b5b9-4f32-a0a5-2141c540231d" containerName="extract-content" Feb 17 21:45:00 crc kubenswrapper[4793]: I0217 21:45:00.170876 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb84ace2-b5b9-4f32-a0a5-2141c540231d" containerName="extract-content" Feb 17 21:45:00 crc kubenswrapper[4793]: E0217 21:45:00.170915 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5da72a26-35f9-46e9-a0b6-b3ade2cb07f5" containerName="extract-utilities" Feb 17 21:45:00 crc kubenswrapper[4793]: I0217 21:45:00.170928 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5da72a26-35f9-46e9-a0b6-b3ade2cb07f5" containerName="extract-utilities" Feb 17 21:45:00 crc kubenswrapper[4793]: E0217 21:45:00.170982 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5da72a26-35f9-46e9-a0b6-b3ade2cb07f5" containerName="registry-server" Feb 17 21:45:00 crc kubenswrapper[4793]: I0217 21:45:00.170997 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5da72a26-35f9-46e9-a0b6-b3ade2cb07f5" containerName="registry-server" Feb 17 21:45:00 crc kubenswrapper[4793]: E0217 21:45:00.171028 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb84ace2-b5b9-4f32-a0a5-2141c540231d" containerName="extract-utilities" Feb 17 21:45:00 crc kubenswrapper[4793]: I0217 21:45:00.171040 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb84ace2-b5b9-4f32-a0a5-2141c540231d" containerName="extract-utilities" Feb 17 21:45:00 crc kubenswrapper[4793]: E0217 21:45:00.171055 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5da72a26-35f9-46e9-a0b6-b3ade2cb07f5" containerName="extract-content" Feb 17 21:45:00 crc kubenswrapper[4793]: I0217 21:45:00.171067 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5da72a26-35f9-46e9-a0b6-b3ade2cb07f5" containerName="extract-content" Feb 17 21:45:00 crc 
kubenswrapper[4793]: E0217 21:45:00.171098 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb84ace2-b5b9-4f32-a0a5-2141c540231d" containerName="registry-server" Feb 17 21:45:00 crc kubenswrapper[4793]: I0217 21:45:00.171109 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb84ace2-b5b9-4f32-a0a5-2141c540231d" containerName="registry-server" Feb 17 21:45:00 crc kubenswrapper[4793]: I0217 21:45:00.171449 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb84ace2-b5b9-4f32-a0a5-2141c540231d" containerName="registry-server" Feb 17 21:45:00 crc kubenswrapper[4793]: I0217 21:45:00.171500 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="5da72a26-35f9-46e9-a0b6-b3ade2cb07f5" containerName="registry-server" Feb 17 21:45:00 crc kubenswrapper[4793]: I0217 21:45:00.172579 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522745-scvkn" Feb 17 21:45:00 crc kubenswrapper[4793]: I0217 21:45:00.174750 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 21:45:00 crc kubenswrapper[4793]: I0217 21:45:00.175922 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 21:45:00 crc kubenswrapper[4793]: I0217 21:45:00.185295 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522745-scvkn"] Feb 17 21:45:00 crc kubenswrapper[4793]: I0217 21:45:00.266218 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/729cca98-7dbc-4621-a0e0-5ffbf3a59ffc-config-volume\") pod \"collect-profiles-29522745-scvkn\" (UID: \"729cca98-7dbc-4621-a0e0-5ffbf3a59ffc\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29522745-scvkn"
Feb 17 21:45:00 crc kubenswrapper[4793]: I0217 21:45:00.266351 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/729cca98-7dbc-4621-a0e0-5ffbf3a59ffc-secret-volume\") pod \"collect-profiles-29522745-scvkn\" (UID: \"729cca98-7dbc-4621-a0e0-5ffbf3a59ffc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522745-scvkn"
Feb 17 21:45:00 crc kubenswrapper[4793]: I0217 21:45:00.266377 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw6fd\" (UniqueName: \"kubernetes.io/projected/729cca98-7dbc-4621-a0e0-5ffbf3a59ffc-kube-api-access-hw6fd\") pod \"collect-profiles-29522745-scvkn\" (UID: \"729cca98-7dbc-4621-a0e0-5ffbf3a59ffc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522745-scvkn"
Feb 17 21:45:00 crc kubenswrapper[4793]: I0217 21:45:00.368161 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/729cca98-7dbc-4621-a0e0-5ffbf3a59ffc-config-volume\") pod \"collect-profiles-29522745-scvkn\" (UID: \"729cca98-7dbc-4621-a0e0-5ffbf3a59ffc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522745-scvkn"
Feb 17 21:45:00 crc kubenswrapper[4793]: I0217 21:45:00.368302 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/729cca98-7dbc-4621-a0e0-5ffbf3a59ffc-secret-volume\") pod \"collect-profiles-29522745-scvkn\" (UID: \"729cca98-7dbc-4621-a0e0-5ffbf3a59ffc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522745-scvkn"
Feb 17 21:45:00 crc kubenswrapper[4793]: I0217 21:45:00.368330 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw6fd\" (UniqueName: \"kubernetes.io/projected/729cca98-7dbc-4621-a0e0-5ffbf3a59ffc-kube-api-access-hw6fd\") pod \"collect-profiles-29522745-scvkn\" (UID: \"729cca98-7dbc-4621-a0e0-5ffbf3a59ffc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522745-scvkn"
Feb 17 21:45:00 crc kubenswrapper[4793]: I0217 21:45:00.369761 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/729cca98-7dbc-4621-a0e0-5ffbf3a59ffc-config-volume\") pod \"collect-profiles-29522745-scvkn\" (UID: \"729cca98-7dbc-4621-a0e0-5ffbf3a59ffc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522745-scvkn"
Feb 17 21:45:00 crc kubenswrapper[4793]: I0217 21:45:00.376866 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/729cca98-7dbc-4621-a0e0-5ffbf3a59ffc-secret-volume\") pod \"collect-profiles-29522745-scvkn\" (UID: \"729cca98-7dbc-4621-a0e0-5ffbf3a59ffc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522745-scvkn"
Feb 17 21:45:00 crc kubenswrapper[4793]: I0217 21:45:00.392917 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw6fd\" (UniqueName: \"kubernetes.io/projected/729cca98-7dbc-4621-a0e0-5ffbf3a59ffc-kube-api-access-hw6fd\") pod \"collect-profiles-29522745-scvkn\" (UID: \"729cca98-7dbc-4621-a0e0-5ffbf3a59ffc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522745-scvkn"
Feb 17 21:45:00 crc kubenswrapper[4793]: I0217 21:45:00.507958 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522745-scvkn"
Feb 17 21:45:01 crc kubenswrapper[4793]: I0217 21:45:01.036646 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522745-scvkn"]
Feb 17 21:45:01 crc kubenswrapper[4793]: I0217 21:45:01.290887 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522745-scvkn" event={"ID":"729cca98-7dbc-4621-a0e0-5ffbf3a59ffc","Type":"ContainerStarted","Data":"8820ff16144d26a1ac6405a5f678667ac577a3f1476d70952dadb3929f1b3a10"}
Feb 17 21:45:01 crc kubenswrapper[4793]: I0217 21:45:01.290965 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522745-scvkn" event={"ID":"729cca98-7dbc-4621-a0e0-5ffbf3a59ffc","Type":"ContainerStarted","Data":"5dc7473b7e2287e2625c21b3e2114c9b1b45d38aa38280b1d26d92e7a2bec085"}
Feb 17 21:45:01 crc kubenswrapper[4793]: I0217 21:45:01.304902 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29522745-scvkn" podStartSLOduration=1.304881683 podStartE2EDuration="1.304881683s" podCreationTimestamp="2026-02-17 21:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 21:45:01.30437085 +0000 UTC m=+5776.596069161" watchObservedRunningTime="2026-02-17 21:45:01.304881683 +0000 UTC m=+5776.596579994"
Feb 17 21:45:01 crc kubenswrapper[4793]: I0217 21:45:01.542364 4793 scope.go:117] "RemoveContainer" containerID="4ddebe9fa50fa6f3cacc64e878425ab4a7565e37cff2b52fead4d0445814f582"
Feb 17 21:45:01 crc kubenswrapper[4793]: E0217 21:45:01.542853 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 21:45:02 crc kubenswrapper[4793]: I0217 21:45:02.302404 4793 generic.go:334] "Generic (PLEG): container finished" podID="729cca98-7dbc-4621-a0e0-5ffbf3a59ffc" containerID="8820ff16144d26a1ac6405a5f678667ac577a3f1476d70952dadb3929f1b3a10" exitCode=0
Feb 17 21:45:02 crc kubenswrapper[4793]: I0217 21:45:02.302462 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522745-scvkn" event={"ID":"729cca98-7dbc-4621-a0e0-5ffbf3a59ffc","Type":"ContainerDied","Data":"8820ff16144d26a1ac6405a5f678667ac577a3f1476d70952dadb3929f1b3a10"}
Feb 17 21:45:03 crc kubenswrapper[4793]: I0217 21:45:03.753656 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522745-scvkn"
Feb 17 21:45:03 crc kubenswrapper[4793]: I0217 21:45:03.859760 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hw6fd\" (UniqueName: \"kubernetes.io/projected/729cca98-7dbc-4621-a0e0-5ffbf3a59ffc-kube-api-access-hw6fd\") pod \"729cca98-7dbc-4621-a0e0-5ffbf3a59ffc\" (UID: \"729cca98-7dbc-4621-a0e0-5ffbf3a59ffc\") "
Feb 17 21:45:03 crc kubenswrapper[4793]: I0217 21:45:03.860128 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/729cca98-7dbc-4621-a0e0-5ffbf3a59ffc-config-volume\") pod \"729cca98-7dbc-4621-a0e0-5ffbf3a59ffc\" (UID: \"729cca98-7dbc-4621-a0e0-5ffbf3a59ffc\") "
Feb 17 21:45:03 crc kubenswrapper[4793]: I0217 21:45:03.860159 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/729cca98-7dbc-4621-a0e0-5ffbf3a59ffc-secret-volume\") pod \"729cca98-7dbc-4621-a0e0-5ffbf3a59ffc\" (UID: \"729cca98-7dbc-4621-a0e0-5ffbf3a59ffc\") "
Feb 17 21:45:03 crc kubenswrapper[4793]: I0217 21:45:03.860772 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/729cca98-7dbc-4621-a0e0-5ffbf3a59ffc-config-volume" (OuterVolumeSpecName: "config-volume") pod "729cca98-7dbc-4621-a0e0-5ffbf3a59ffc" (UID: "729cca98-7dbc-4621-a0e0-5ffbf3a59ffc"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 21:45:03 crc kubenswrapper[4793]: I0217 21:45:03.860991 4793 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/729cca98-7dbc-4621-a0e0-5ffbf3a59ffc-config-volume\") on node \"crc\" DevicePath \"\""
Feb 17 21:45:03 crc kubenswrapper[4793]: I0217 21:45:03.867818 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/729cca98-7dbc-4621-a0e0-5ffbf3a59ffc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "729cca98-7dbc-4621-a0e0-5ffbf3a59ffc" (UID: "729cca98-7dbc-4621-a0e0-5ffbf3a59ffc"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 21:45:03 crc kubenswrapper[4793]: I0217 21:45:03.869918 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/729cca98-7dbc-4621-a0e0-5ffbf3a59ffc-kube-api-access-hw6fd" (OuterVolumeSpecName: "kube-api-access-hw6fd") pod "729cca98-7dbc-4621-a0e0-5ffbf3a59ffc" (UID: "729cca98-7dbc-4621-a0e0-5ffbf3a59ffc"). InnerVolumeSpecName "kube-api-access-hw6fd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 21:45:03 crc kubenswrapper[4793]: I0217 21:45:03.962365 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hw6fd\" (UniqueName: \"kubernetes.io/projected/729cca98-7dbc-4621-a0e0-5ffbf3a59ffc-kube-api-access-hw6fd\") on node \"crc\" DevicePath \"\""
Feb 17 21:45:03 crc kubenswrapper[4793]: I0217 21:45:03.962393 4793 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/729cca98-7dbc-4621-a0e0-5ffbf3a59ffc-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 17 21:45:04 crc kubenswrapper[4793]: I0217 21:45:04.326652 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522745-scvkn" event={"ID":"729cca98-7dbc-4621-a0e0-5ffbf3a59ffc","Type":"ContainerDied","Data":"5dc7473b7e2287e2625c21b3e2114c9b1b45d38aa38280b1d26d92e7a2bec085"}
Feb 17 21:45:04 crc kubenswrapper[4793]: I0217 21:45:04.326706 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522745-scvkn"
Feb 17 21:45:04 crc kubenswrapper[4793]: I0217 21:45:04.326713 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5dc7473b7e2287e2625c21b3e2114c9b1b45d38aa38280b1d26d92e7a2bec085"
Feb 17 21:45:04 crc kubenswrapper[4793]: I0217 21:45:04.390225 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522700-kvqnr"]
Feb 17 21:45:04 crc kubenswrapper[4793]: I0217 21:45:04.398561 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522700-kvqnr"]
Feb 17 21:45:05 crc kubenswrapper[4793]: I0217 21:45:05.556472 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd1ae9b3-e8ae-4b56-8d96-2a0e5ff51954" path="/var/lib/kubelet/pods/dd1ae9b3-e8ae-4b56-8d96-2a0e5ff51954/volumes"
Feb 17 21:45:12 crc kubenswrapper[4793]: I0217 21:45:12.539352 4793 scope.go:117] "RemoveContainer" containerID="4ddebe9fa50fa6f3cacc64e878425ab4a7565e37cff2b52fead4d0445814f582"
Feb 17 21:45:12 crc kubenswrapper[4793]: E0217 21:45:12.540212 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 21:45:12 crc kubenswrapper[4793]: I0217 21:45:12.540248 4793 scope.go:117] "RemoveContainer" containerID="16ae0bc62079ea08d5d218f318b9535e2fbd0e475d0f8561a0f2f81e8694bb35"
Feb 17 21:45:12 crc kubenswrapper[4793]: E0217 21:45:12.540465 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:45:23 crc kubenswrapper[4793]: I0217 21:45:23.541521 4793 scope.go:117] "RemoveContainer" containerID="16ae0bc62079ea08d5d218f318b9535e2fbd0e475d0f8561a0f2f81e8694bb35"
Feb 17 21:45:23 crc kubenswrapper[4793]: E0217 21:45:23.542661 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:45:25 crc kubenswrapper[4793]: I0217 21:45:25.461339 4793 scope.go:117] "RemoveContainer" containerID="d0c2be443e488121b83ed1192a0c135db74dd16c9577c6f7171518596105af75"
Feb 17 21:45:27 crc kubenswrapper[4793]: I0217 21:45:27.538604 4793 scope.go:117] "RemoveContainer" containerID="4ddebe9fa50fa6f3cacc64e878425ab4a7565e37cff2b52fead4d0445814f582"
Feb 17 21:45:27 crc kubenswrapper[4793]: E0217 21:45:27.539057 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 21:45:36 crc kubenswrapper[4793]: I0217 21:45:36.540071 4793 scope.go:117] "RemoveContainer" containerID="16ae0bc62079ea08d5d218f318b9535e2fbd0e475d0f8561a0f2f81e8694bb35"
Feb 17 21:45:36 crc kubenswrapper[4793]: E0217 21:45:36.540945 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:45:41 crc kubenswrapper[4793]: I0217 21:45:41.540101 4793 scope.go:117] "RemoveContainer" containerID="4ddebe9fa50fa6f3cacc64e878425ab4a7565e37cff2b52fead4d0445814f582"
Feb 17 21:45:41 crc kubenswrapper[4793]: E0217 21:45:41.541558 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 21:45:50 crc kubenswrapper[4793]: I0217 21:45:50.539572 4793 scope.go:117] "RemoveContainer" containerID="16ae0bc62079ea08d5d218f318b9535e2fbd0e475d0f8561a0f2f81e8694bb35"
Feb 17 21:45:50 crc kubenswrapper[4793]: E0217 21:45:50.540920 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:45:54 crc kubenswrapper[4793]: I0217 21:45:54.539589 4793 scope.go:117] "RemoveContainer" containerID="4ddebe9fa50fa6f3cacc64e878425ab4a7565e37cff2b52fead4d0445814f582"
Feb 17 21:45:54 crc kubenswrapper[4793]: E0217 21:45:54.541290 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 21:46:04 crc kubenswrapper[4793]: I0217 21:46:04.538965 4793 scope.go:117] "RemoveContainer" containerID="16ae0bc62079ea08d5d218f318b9535e2fbd0e475d0f8561a0f2f81e8694bb35"
Feb 17 21:46:04 crc kubenswrapper[4793]: E0217 21:46:04.539872 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:46:07 crc kubenswrapper[4793]: I0217 21:46:07.538905 4793 scope.go:117] "RemoveContainer" containerID="4ddebe9fa50fa6f3cacc64e878425ab4a7565e37cff2b52fead4d0445814f582"
Feb 17 21:46:07 crc kubenswrapper[4793]: E0217 21:46:07.539681 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 21:46:15 crc kubenswrapper[4793]: I0217 21:46:15.551150 4793 scope.go:117] "RemoveContainer" containerID="16ae0bc62079ea08d5d218f318b9535e2fbd0e475d0f8561a0f2f81e8694bb35"
Feb 17 21:46:16 crc kubenswrapper[4793]: I0217 21:46:16.219753 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerStarted","Data":"aa3a58e39c1c1a355c5bbe7309b14b49a9355e2c4c9531906002af4b43f2e2cf"}
Feb 17 21:46:18 crc kubenswrapper[4793]: I0217 21:46:18.245506 4793 generic.go:334] "Generic (PLEG): container finished" podID="02d26164-0fa4-4020-9224-b7760a490987" containerID="aa3a58e39c1c1a355c5bbe7309b14b49a9355e2c4c9531906002af4b43f2e2cf" exitCode=1
Feb 17 21:46:18 crc kubenswrapper[4793]: I0217 21:46:18.245592 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerDied","Data":"aa3a58e39c1c1a355c5bbe7309b14b49a9355e2c4c9531906002af4b43f2e2cf"}
Feb 17 21:46:18 crc kubenswrapper[4793]: I0217 21:46:18.245978 4793 scope.go:117] "RemoveContainer" containerID="16ae0bc62079ea08d5d218f318b9535e2fbd0e475d0f8561a0f2f81e8694bb35"
Feb 17 21:46:18 crc kubenswrapper[4793]: I0217 21:46:18.264083 4793 scope.go:117] "RemoveContainer" containerID="aa3a58e39c1c1a355c5bbe7309b14b49a9355e2c4c9531906002af4b43f2e2cf"
Feb 17 21:46:18 crc kubenswrapper[4793]: E0217 21:46:18.265206 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:46:20 crc kubenswrapper[4793]: I0217 21:46:20.596349 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0"
Feb 17 21:46:20 crc kubenswrapper[4793]: I0217 21:46:20.597584 4793 scope.go:117] "RemoveContainer" containerID="aa3a58e39c1c1a355c5bbe7309b14b49a9355e2c4c9531906002af4b43f2e2cf"
Feb 17 21:46:20 crc kubenswrapper[4793]: E0217 21:46:20.597878 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:46:21 crc kubenswrapper[4793]: I0217 21:46:21.539092 4793 scope.go:117] "RemoveContainer" containerID="4ddebe9fa50fa6f3cacc64e878425ab4a7565e37cff2b52fead4d0445814f582"
Feb 17 21:46:21 crc kubenswrapper[4793]: E0217 21:46:21.539491 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 21:46:25 crc kubenswrapper[4793]: I0217 21:46:25.596458 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-applier-0"
Feb 17 21:46:25 crc kubenswrapper[4793]: I0217 21:46:25.597104 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0"
Feb 17 21:46:25 crc kubenswrapper[4793]: I0217 21:46:25.597121 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0"
Feb 17 21:46:25 crc kubenswrapper[4793]: I0217 21:46:25.597922 4793 scope.go:117] "RemoveContainer" containerID="aa3a58e39c1c1a355c5bbe7309b14b49a9355e2c4c9531906002af4b43f2e2cf"
Feb 17 21:46:25 crc kubenswrapper[4793]: E0217 21:46:25.598191 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:46:35 crc kubenswrapper[4793]: I0217 21:46:35.552748 4793 scope.go:117] "RemoveContainer" containerID="4ddebe9fa50fa6f3cacc64e878425ab4a7565e37cff2b52fead4d0445814f582"
Feb 17 21:46:35 crc kubenswrapper[4793]: E0217 21:46:35.553885 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 21:46:40 crc kubenswrapper[4793]: I0217 21:46:40.539485 4793 scope.go:117] "RemoveContainer" containerID="aa3a58e39c1c1a355c5bbe7309b14b49a9355e2c4c9531906002af4b43f2e2cf"
Feb 17 21:46:40 crc kubenswrapper[4793]: E0217 21:46:40.540310 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:46:40 crc kubenswrapper[4793]: I0217 21:46:40.708418 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8sk62"]
Feb 17 21:46:40 crc kubenswrapper[4793]: E0217 21:46:40.708977 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="729cca98-7dbc-4621-a0e0-5ffbf3a59ffc" containerName="collect-profiles"
Feb 17 21:46:40 crc kubenswrapper[4793]: I0217 21:46:40.709000 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="729cca98-7dbc-4621-a0e0-5ffbf3a59ffc" containerName="collect-profiles"
Feb 17 21:46:40 crc kubenswrapper[4793]: I0217 21:46:40.709239 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="729cca98-7dbc-4621-a0e0-5ffbf3a59ffc" containerName="collect-profiles"
Feb 17 21:46:40 crc kubenswrapper[4793]: I0217 21:46:40.711248 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8sk62"
Feb 17 21:46:40 crc kubenswrapper[4793]: I0217 21:46:40.751480 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8sk62"]
Feb 17 21:46:40 crc kubenswrapper[4793]: I0217 21:46:40.831063 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zsqp\" (UniqueName: \"kubernetes.io/projected/08e6fb71-160e-49d3-af26-54f95410be06-kube-api-access-5zsqp\") pod \"redhat-marketplace-8sk62\" (UID: \"08e6fb71-160e-49d3-af26-54f95410be06\") " pod="openshift-marketplace/redhat-marketplace-8sk62"
Feb 17 21:46:40 crc kubenswrapper[4793]: I0217 21:46:40.831153 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08e6fb71-160e-49d3-af26-54f95410be06-utilities\") pod \"redhat-marketplace-8sk62\" (UID: \"08e6fb71-160e-49d3-af26-54f95410be06\") " pod="openshift-marketplace/redhat-marketplace-8sk62"
Feb 17 21:46:40 crc kubenswrapper[4793]: I0217 21:46:40.831214 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08e6fb71-160e-49d3-af26-54f95410be06-catalog-content\") pod \"redhat-marketplace-8sk62\" (UID: \"08e6fb71-160e-49d3-af26-54f95410be06\") " pod="openshift-marketplace/redhat-marketplace-8sk62"
Feb 17 21:46:40 crc kubenswrapper[4793]: I0217 21:46:40.933078 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08e6fb71-160e-49d3-af26-54f95410be06-catalog-content\") pod \"redhat-marketplace-8sk62\" (UID: \"08e6fb71-160e-49d3-af26-54f95410be06\") " pod="openshift-marketplace/redhat-marketplace-8sk62"
Feb 17 21:46:40 crc kubenswrapper[4793]: I0217 21:46:40.933300 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zsqp\" (UniqueName: \"kubernetes.io/projected/08e6fb71-160e-49d3-af26-54f95410be06-kube-api-access-5zsqp\") pod \"redhat-marketplace-8sk62\" (UID: \"08e6fb71-160e-49d3-af26-54f95410be06\") " pod="openshift-marketplace/redhat-marketplace-8sk62"
Feb 17 21:46:40 crc kubenswrapper[4793]: I0217 21:46:40.933356 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08e6fb71-160e-49d3-af26-54f95410be06-utilities\") pod \"redhat-marketplace-8sk62\" (UID: \"08e6fb71-160e-49d3-af26-54f95410be06\") " pod="openshift-marketplace/redhat-marketplace-8sk62"
Feb 17 21:46:40 crc kubenswrapper[4793]: I0217 21:46:40.933615 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08e6fb71-160e-49d3-af26-54f95410be06-catalog-content\") pod \"redhat-marketplace-8sk62\" (UID: \"08e6fb71-160e-49d3-af26-54f95410be06\") " pod="openshift-marketplace/redhat-marketplace-8sk62"
Feb 17 21:46:40 crc kubenswrapper[4793]: I0217 21:46:40.933861 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08e6fb71-160e-49d3-af26-54f95410be06-utilities\") pod \"redhat-marketplace-8sk62\" (UID: \"08e6fb71-160e-49d3-af26-54f95410be06\") " pod="openshift-marketplace/redhat-marketplace-8sk62"
Feb 17 21:46:40 crc kubenswrapper[4793]: I0217 21:46:40.961116 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zsqp\" (UniqueName: \"kubernetes.io/projected/08e6fb71-160e-49d3-af26-54f95410be06-kube-api-access-5zsqp\") pod \"redhat-marketplace-8sk62\" (UID: \"08e6fb71-160e-49d3-af26-54f95410be06\") " pod="openshift-marketplace/redhat-marketplace-8sk62"
Feb 17 21:46:41 crc kubenswrapper[4793]: I0217 21:46:41.050427 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8sk62"
Feb 17 21:46:41 crc kubenswrapper[4793]: W0217 21:46:41.552505 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08e6fb71_160e_49d3_af26_54f95410be06.slice/crio-11d857e01b09727db6449bb2060284c3ffa29dd4b391b171637e7e70b1f58137 WatchSource:0}: Error finding container 11d857e01b09727db6449bb2060284c3ffa29dd4b391b171637e7e70b1f58137: Status 404 returned error can't find the container with id 11d857e01b09727db6449bb2060284c3ffa29dd4b391b171637e7e70b1f58137
Feb 17 21:46:41 crc kubenswrapper[4793]: I0217 21:46:41.562018 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8sk62"]
Feb 17 21:46:42 crc kubenswrapper[4793]: I0217 21:46:42.523771 4793 generic.go:334] "Generic (PLEG): container finished" podID="08e6fb71-160e-49d3-af26-54f95410be06" containerID="7bdc61d26a1a48c6bf57ada9aa6de9f0ec9582ada4e9b20fd2a618fcdf17f107" exitCode=0
Feb 17 21:46:42 crc kubenswrapper[4793]: I0217 21:46:42.523940 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8sk62" event={"ID":"08e6fb71-160e-49d3-af26-54f95410be06","Type":"ContainerDied","Data":"7bdc61d26a1a48c6bf57ada9aa6de9f0ec9582ada4e9b20fd2a618fcdf17f107"}
Feb 17 21:46:42 crc kubenswrapper[4793]: I0217 21:46:42.524224 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8sk62" event={"ID":"08e6fb71-160e-49d3-af26-54f95410be06","Type":"ContainerStarted","Data":"11d857e01b09727db6449bb2060284c3ffa29dd4b391b171637e7e70b1f58137"}
Feb 17 21:46:42 crc kubenswrapper[4793]: I0217 21:46:42.528100 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 17 21:46:44 crc kubenswrapper[4793]: I0217 21:46:44.562039 4793 generic.go:334] "Generic (PLEG): container finished" podID="08e6fb71-160e-49d3-af26-54f95410be06" containerID="598bf465a4ed5b7102f2d28710483eb8c0516c3c65c1ffc422f471b6f11e2499" exitCode=0
Feb 17 21:46:44 crc kubenswrapper[4793]: I0217 21:46:44.562354 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8sk62" event={"ID":"08e6fb71-160e-49d3-af26-54f95410be06","Type":"ContainerDied","Data":"598bf465a4ed5b7102f2d28710483eb8c0516c3c65c1ffc422f471b6f11e2499"}
Feb 17 21:46:45 crc kubenswrapper[4793]: I0217 21:46:45.574912 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8sk62" event={"ID":"08e6fb71-160e-49d3-af26-54f95410be06","Type":"ContainerStarted","Data":"8feefb6ce068730f65a9434f76a25b8c038c1131d686ad066c3df6d0752acfa5"}
Feb 17 21:46:45 crc kubenswrapper[4793]: I0217 21:46:45.616896 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8sk62" podStartSLOduration=3.169254769 podStartE2EDuration="5.61686967s" podCreationTimestamp="2026-02-17 21:46:40 +0000 UTC" firstStartedPulling="2026-02-17 21:46:42.527442964 +0000 UTC m=+5877.819141325" lastFinishedPulling="2026-02-17 21:46:44.975057865 +0000 UTC m=+5880.266756226" observedRunningTime="2026-02-17 21:46:45.59496267 +0000 UTC m=+5880.886660991" watchObservedRunningTime="2026-02-17 21:46:45.61686967 +0000 UTC m=+5880.908568021"
Feb 17 21:46:50 crc kubenswrapper[4793]: I0217 21:46:50.538894 4793 scope.go:117] "RemoveContainer" containerID="4ddebe9fa50fa6f3cacc64e878425ab4a7565e37cff2b52fead4d0445814f582"
Feb 17 21:46:50 crc kubenswrapper[4793]: E0217 21:46:50.539499 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 21:46:51 crc kubenswrapper[4793]: I0217 21:46:51.051158 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8sk62"
Feb 17 21:46:51 crc kubenswrapper[4793]: I0217 21:46:51.051243 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8sk62"
Feb 17 21:46:51 crc kubenswrapper[4793]: I0217 21:46:51.129359 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8sk62"
Feb 17 21:46:51 crc kubenswrapper[4793]: I0217 21:46:51.693813 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8sk62"
Feb 17 21:46:51 crc kubenswrapper[4793]: I0217 21:46:51.771119 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8sk62"]
Feb 17 21:46:53 crc kubenswrapper[4793]: I0217 21:46:53.539826 4793 scope.go:117] "RemoveContainer" containerID="aa3a58e39c1c1a355c5bbe7309b14b49a9355e2c4c9531906002af4b43f2e2cf"
Feb 17 21:46:53 crc kubenswrapper[4793]: E0217 21:46:53.540239 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:46:53 crc kubenswrapper[4793]: I0217 21:46:53.676711 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8sk62" podUID="08e6fb71-160e-49d3-af26-54f95410be06" containerName="registry-server" containerID="cri-o://8feefb6ce068730f65a9434f76a25b8c038c1131d686ad066c3df6d0752acfa5" gracePeriod=2
Feb 17 21:46:54 crc kubenswrapper[4793]: I0217 21:46:54.232897 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8sk62"
Feb 17 21:46:54 crc kubenswrapper[4793]: I0217 21:46:54.342248 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08e6fb71-160e-49d3-af26-54f95410be06-utilities\") pod \"08e6fb71-160e-49d3-af26-54f95410be06\" (UID: \"08e6fb71-160e-49d3-af26-54f95410be06\") "
Feb 17 21:46:54 crc kubenswrapper[4793]: I0217 21:46:54.342350 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zsqp\" (UniqueName: \"kubernetes.io/projected/08e6fb71-160e-49d3-af26-54f95410be06-kube-api-access-5zsqp\") pod \"08e6fb71-160e-49d3-af26-54f95410be06\" (UID: \"08e6fb71-160e-49d3-af26-54f95410be06\") "
Feb 17 21:46:54 crc kubenswrapper[4793]: I0217 21:46:54.342437 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08e6fb71-160e-49d3-af26-54f95410be06-catalog-content\") pod \"08e6fb71-160e-49d3-af26-54f95410be06\" (UID: \"08e6fb71-160e-49d3-af26-54f95410be06\") "
Feb 17 21:46:54 crc kubenswrapper[4793]: I0217 21:46:54.343725 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08e6fb71-160e-49d3-af26-54f95410be06-utilities" (OuterVolumeSpecName: "utilities") pod "08e6fb71-160e-49d3-af26-54f95410be06" (UID: "08e6fb71-160e-49d3-af26-54f95410be06"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 21:46:54 crc kubenswrapper[4793]: I0217 21:46:54.355807 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08e6fb71-160e-49d3-af26-54f95410be06-kube-api-access-5zsqp" (OuterVolumeSpecName: "kube-api-access-5zsqp") pod "08e6fb71-160e-49d3-af26-54f95410be06" (UID: "08e6fb71-160e-49d3-af26-54f95410be06"). InnerVolumeSpecName "kube-api-access-5zsqp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 21:46:54 crc kubenswrapper[4793]: I0217 21:46:54.365820 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08e6fb71-160e-49d3-af26-54f95410be06-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "08e6fb71-160e-49d3-af26-54f95410be06" (UID: "08e6fb71-160e-49d3-af26-54f95410be06"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 21:46:54 crc kubenswrapper[4793]: I0217 21:46:54.445086 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08e6fb71-160e-49d3-af26-54f95410be06-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 21:46:54 crc kubenswrapper[4793]: I0217 21:46:54.445126 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zsqp\" (UniqueName: \"kubernetes.io/projected/08e6fb71-160e-49d3-af26-54f95410be06-kube-api-access-5zsqp\") on node \"crc\" DevicePath \"\""
Feb 17 21:46:54 crc kubenswrapper[4793]: I0217 21:46:54.445141 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08e6fb71-160e-49d3-af26-54f95410be06-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 21:46:54 crc kubenswrapper[4793]: I0217 21:46:54.692945 4793 generic.go:334] "Generic (PLEG): container finished" podID="08e6fb71-160e-49d3-af26-54f95410be06" containerID="8feefb6ce068730f65a9434f76a25b8c038c1131d686ad066c3df6d0752acfa5" exitCode=0
Feb 17 21:46:54 crc kubenswrapper[4793]: I0217 21:46:54.693007 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8sk62" event={"ID":"08e6fb71-160e-49d3-af26-54f95410be06","Type":"ContainerDied","Data":"8feefb6ce068730f65a9434f76a25b8c038c1131d686ad066c3df6d0752acfa5"}
Feb 17 21:46:54 crc kubenswrapper[4793]: I0217 21:46:54.693061 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8sk62" event={"ID":"08e6fb71-160e-49d3-af26-54f95410be06","Type":"ContainerDied","Data":"11d857e01b09727db6449bb2060284c3ffa29dd4b391b171637e7e70b1f58137"}
Feb 17 21:46:54 crc kubenswrapper[4793]: I0217 21:46:54.693071 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8sk62"
Feb 17 21:46:54 crc kubenswrapper[4793]: I0217 21:46:54.693103 4793 scope.go:117] "RemoveContainer" containerID="8feefb6ce068730f65a9434f76a25b8c038c1131d686ad066c3df6d0752acfa5"
Feb 17 21:46:54 crc kubenswrapper[4793]: I0217 21:46:54.755574 4793 scope.go:117] "RemoveContainer" containerID="598bf465a4ed5b7102f2d28710483eb8c0516c3c65c1ffc422f471b6f11e2499"
Feb 17 21:46:54 crc kubenswrapper[4793]: I0217 21:46:54.764982 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8sk62"]
Feb 17 21:46:54 crc kubenswrapper[4793]: I0217 21:46:54.792006 4793 scope.go:117] "RemoveContainer" containerID="7bdc61d26a1a48c6bf57ada9aa6de9f0ec9582ada4e9b20fd2a618fcdf17f107"
Feb 17 21:46:54 crc kubenswrapper[4793]: I0217 21:46:54.802228 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8sk62"]
Feb 17 21:46:54 crc kubenswrapper[4793]: I0217 21:46:54.848152 4793 scope.go:117] "RemoveContainer" containerID="8feefb6ce068730f65a9434f76a25b8c038c1131d686ad066c3df6d0752acfa5"
Feb 17 
21:46:54 crc kubenswrapper[4793]: E0217 21:46:54.849155 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8feefb6ce068730f65a9434f76a25b8c038c1131d686ad066c3df6d0752acfa5\": container with ID starting with 8feefb6ce068730f65a9434f76a25b8c038c1131d686ad066c3df6d0752acfa5 not found: ID does not exist" containerID="8feefb6ce068730f65a9434f76a25b8c038c1131d686ad066c3df6d0752acfa5" Feb 17 21:46:54 crc kubenswrapper[4793]: I0217 21:46:54.849284 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8feefb6ce068730f65a9434f76a25b8c038c1131d686ad066c3df6d0752acfa5"} err="failed to get container status \"8feefb6ce068730f65a9434f76a25b8c038c1131d686ad066c3df6d0752acfa5\": rpc error: code = NotFound desc = could not find container \"8feefb6ce068730f65a9434f76a25b8c038c1131d686ad066c3df6d0752acfa5\": container with ID starting with 8feefb6ce068730f65a9434f76a25b8c038c1131d686ad066c3df6d0752acfa5 not found: ID does not exist" Feb 17 21:46:54 crc kubenswrapper[4793]: I0217 21:46:54.849340 4793 scope.go:117] "RemoveContainer" containerID="598bf465a4ed5b7102f2d28710483eb8c0516c3c65c1ffc422f471b6f11e2499" Feb 17 21:46:54 crc kubenswrapper[4793]: E0217 21:46:54.850038 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"598bf465a4ed5b7102f2d28710483eb8c0516c3c65c1ffc422f471b6f11e2499\": container with ID starting with 598bf465a4ed5b7102f2d28710483eb8c0516c3c65c1ffc422f471b6f11e2499 not found: ID does not exist" containerID="598bf465a4ed5b7102f2d28710483eb8c0516c3c65c1ffc422f471b6f11e2499" Feb 17 21:46:54 crc kubenswrapper[4793]: I0217 21:46:54.850089 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"598bf465a4ed5b7102f2d28710483eb8c0516c3c65c1ffc422f471b6f11e2499"} err="failed to get container status 
\"598bf465a4ed5b7102f2d28710483eb8c0516c3c65c1ffc422f471b6f11e2499\": rpc error: code = NotFound desc = could not find container \"598bf465a4ed5b7102f2d28710483eb8c0516c3c65c1ffc422f471b6f11e2499\": container with ID starting with 598bf465a4ed5b7102f2d28710483eb8c0516c3c65c1ffc422f471b6f11e2499 not found: ID does not exist" Feb 17 21:46:54 crc kubenswrapper[4793]: I0217 21:46:54.850125 4793 scope.go:117] "RemoveContainer" containerID="7bdc61d26a1a48c6bf57ada9aa6de9f0ec9582ada4e9b20fd2a618fcdf17f107" Feb 17 21:46:54 crc kubenswrapper[4793]: E0217 21:46:54.851937 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bdc61d26a1a48c6bf57ada9aa6de9f0ec9582ada4e9b20fd2a618fcdf17f107\": container with ID starting with 7bdc61d26a1a48c6bf57ada9aa6de9f0ec9582ada4e9b20fd2a618fcdf17f107 not found: ID does not exist" containerID="7bdc61d26a1a48c6bf57ada9aa6de9f0ec9582ada4e9b20fd2a618fcdf17f107" Feb 17 21:46:54 crc kubenswrapper[4793]: I0217 21:46:54.852005 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bdc61d26a1a48c6bf57ada9aa6de9f0ec9582ada4e9b20fd2a618fcdf17f107"} err="failed to get container status \"7bdc61d26a1a48c6bf57ada9aa6de9f0ec9582ada4e9b20fd2a618fcdf17f107\": rpc error: code = NotFound desc = could not find container \"7bdc61d26a1a48c6bf57ada9aa6de9f0ec9582ada4e9b20fd2a618fcdf17f107\": container with ID starting with 7bdc61d26a1a48c6bf57ada9aa6de9f0ec9582ada4e9b20fd2a618fcdf17f107 not found: ID does not exist" Feb 17 21:46:55 crc kubenswrapper[4793]: I0217 21:46:55.558266 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08e6fb71-160e-49d3-af26-54f95410be06" path="/var/lib/kubelet/pods/08e6fb71-160e-49d3-af26-54f95410be06/volumes" Feb 17 21:47:03 crc kubenswrapper[4793]: I0217 21:47:03.540779 4793 scope.go:117] "RemoveContainer" containerID="4ddebe9fa50fa6f3cacc64e878425ab4a7565e37cff2b52fead4d0445814f582" Feb 17 
21:47:03 crc kubenswrapper[4793]: E0217 21:47:03.541403 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:47:08 crc kubenswrapper[4793]: I0217 21:47:08.539455 4793 scope.go:117] "RemoveContainer" containerID="aa3a58e39c1c1a355c5bbe7309b14b49a9355e2c4c9531906002af4b43f2e2cf" Feb 17 21:47:08 crc kubenswrapper[4793]: E0217 21:47:08.540492 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:47:16 crc kubenswrapper[4793]: I0217 21:47:16.539821 4793 scope.go:117] "RemoveContainer" containerID="4ddebe9fa50fa6f3cacc64e878425ab4a7565e37cff2b52fead4d0445814f582" Feb 17 21:47:16 crc kubenswrapper[4793]: E0217 21:47:16.541287 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:47:23 crc kubenswrapper[4793]: I0217 21:47:23.542299 4793 scope.go:117] "RemoveContainer" containerID="aa3a58e39c1c1a355c5bbe7309b14b49a9355e2c4c9531906002af4b43f2e2cf" Feb 17 21:47:23 crc kubenswrapper[4793]: E0217 21:47:23.543150 4793 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:47:27 crc kubenswrapper[4793]: I0217 21:47:27.539234 4793 scope.go:117] "RemoveContainer" containerID="4ddebe9fa50fa6f3cacc64e878425ab4a7565e37cff2b52fead4d0445814f582" Feb 17 21:47:28 crc kubenswrapper[4793]: I0217 21:47:28.112679 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerStarted","Data":"061214a44e3d256b369b9ba25b7f8b854564eeba298c45a9ca3074abd411552b"} Feb 17 21:47:37 crc kubenswrapper[4793]: I0217 21:47:37.539986 4793 scope.go:117] "RemoveContainer" containerID="aa3a58e39c1c1a355c5bbe7309b14b49a9355e2c4c9531906002af4b43f2e2cf" Feb 17 21:47:37 crc kubenswrapper[4793]: E0217 21:47:37.541994 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:47:51 crc kubenswrapper[4793]: I0217 21:47:51.539638 4793 scope.go:117] "RemoveContainer" containerID="aa3a58e39c1c1a355c5bbe7309b14b49a9355e2c4c9531906002af4b43f2e2cf" Feb 17 21:47:51 crc kubenswrapper[4793]: E0217 21:47:51.540843 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" 
podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:48:06 crc kubenswrapper[4793]: I0217 21:48:06.539365 4793 scope.go:117] "RemoveContainer" containerID="aa3a58e39c1c1a355c5bbe7309b14b49a9355e2c4c9531906002af4b43f2e2cf" Feb 17 21:48:06 crc kubenswrapper[4793]: E0217 21:48:06.540468 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:48:18 crc kubenswrapper[4793]: I0217 21:48:18.539109 4793 scope.go:117] "RemoveContainer" containerID="aa3a58e39c1c1a355c5bbe7309b14b49a9355e2c4c9531906002af4b43f2e2cf" Feb 17 21:48:18 crc kubenswrapper[4793]: E0217 21:48:18.540228 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:48:30 crc kubenswrapper[4793]: I0217 21:48:30.539542 4793 scope.go:117] "RemoveContainer" containerID="aa3a58e39c1c1a355c5bbe7309b14b49a9355e2c4c9531906002af4b43f2e2cf" Feb 17 21:48:30 crc kubenswrapper[4793]: E0217 21:48:30.540312 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:48:45 crc kubenswrapper[4793]: I0217 21:48:45.546441 4793 scope.go:117] "RemoveContainer" 
containerID="aa3a58e39c1c1a355c5bbe7309b14b49a9355e2c4c9531906002af4b43f2e2cf" Feb 17 21:48:45 crc kubenswrapper[4793]: E0217 21:48:45.547454 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:48:59 crc kubenswrapper[4793]: I0217 21:48:59.538632 4793 scope.go:117] "RemoveContainer" containerID="aa3a58e39c1c1a355c5bbe7309b14b49a9355e2c4c9531906002af4b43f2e2cf" Feb 17 21:48:59 crc kubenswrapper[4793]: E0217 21:48:59.539978 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:49:10 crc kubenswrapper[4793]: I0217 21:49:10.538738 4793 scope.go:117] "RemoveContainer" containerID="aa3a58e39c1c1a355c5bbe7309b14b49a9355e2c4c9531906002af4b43f2e2cf" Feb 17 21:49:10 crc kubenswrapper[4793]: E0217 21:49:10.539830 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:49:21 crc kubenswrapper[4793]: I0217 21:49:21.538823 4793 scope.go:117] "RemoveContainer" containerID="aa3a58e39c1c1a355c5bbe7309b14b49a9355e2c4c9531906002af4b43f2e2cf" Feb 17 21:49:21 crc kubenswrapper[4793]: E0217 21:49:21.539649 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:49:35 crc kubenswrapper[4793]: I0217 21:49:35.547773 4793 scope.go:117] "RemoveContainer" containerID="aa3a58e39c1c1a355c5bbe7309b14b49a9355e2c4c9531906002af4b43f2e2cf" Feb 17 21:49:35 crc kubenswrapper[4793]: E0217 21:49:35.550218 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:49:50 crc kubenswrapper[4793]: I0217 21:49:50.101829 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 21:49:50 crc kubenswrapper[4793]: I0217 21:49:50.102548 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 21:49:50 crc kubenswrapper[4793]: I0217 21:49:50.540094 4793 scope.go:117] "RemoveContainer" containerID="aa3a58e39c1c1a355c5bbe7309b14b49a9355e2c4c9531906002af4b43f2e2cf" Feb 17 21:49:50 crc kubenswrapper[4793]: E0217 21:49:50.540822 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:50:03 crc kubenswrapper[4793]: I0217 21:50:03.539965 4793 scope.go:117] "RemoveContainer" containerID="aa3a58e39c1c1a355c5bbe7309b14b49a9355e2c4c9531906002af4b43f2e2cf" Feb 17 21:50:03 crc kubenswrapper[4793]: E0217 21:50:03.541940 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:50:16 crc kubenswrapper[4793]: I0217 21:50:16.539002 4793 scope.go:117] "RemoveContainer" containerID="aa3a58e39c1c1a355c5bbe7309b14b49a9355e2c4c9531906002af4b43f2e2cf" Feb 17 21:50:16 crc kubenswrapper[4793]: E0217 21:50:16.539748 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:50:20 crc kubenswrapper[4793]: I0217 21:50:20.101944 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 21:50:20 crc kubenswrapper[4793]: I0217 21:50:20.102586 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 21:50:31 crc kubenswrapper[4793]: I0217 21:50:31.539619 4793 scope.go:117] "RemoveContainer" containerID="aa3a58e39c1c1a355c5bbe7309b14b49a9355e2c4c9531906002af4b43f2e2cf" Feb 17 21:50:31 crc kubenswrapper[4793]: E0217 21:50:31.541012 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:50:42 crc kubenswrapper[4793]: I0217 21:50:42.539991 4793 scope.go:117] "RemoveContainer" containerID="aa3a58e39c1c1a355c5bbe7309b14b49a9355e2c4c9531906002af4b43f2e2cf" Feb 17 21:50:42 crc kubenswrapper[4793]: E0217 21:50:42.541330 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:50:50 crc kubenswrapper[4793]: I0217 21:50:50.101986 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 21:50:50 crc kubenswrapper[4793]: I0217 21:50:50.103241 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 21:50:50 crc kubenswrapper[4793]: I0217 21:50:50.103312 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" Feb 17 21:50:50 crc kubenswrapper[4793]: I0217 21:50:50.104156 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"061214a44e3d256b369b9ba25b7f8b854564eeba298c45a9ca3074abd411552b"} pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 21:50:50 crc kubenswrapper[4793]: I0217 21:50:50.104229 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" containerID="cri-o://061214a44e3d256b369b9ba25b7f8b854564eeba298c45a9ca3074abd411552b" gracePeriod=600 Feb 17 21:50:50 crc kubenswrapper[4793]: I0217 21:50:50.473269 4793 generic.go:334] "Generic (PLEG): container finished" podID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerID="061214a44e3d256b369b9ba25b7f8b854564eeba298c45a9ca3074abd411552b" exitCode=0 Feb 17 21:50:50 crc kubenswrapper[4793]: I0217 21:50:50.473369 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerDied","Data":"061214a44e3d256b369b9ba25b7f8b854564eeba298c45a9ca3074abd411552b"} Feb 17 21:50:50 crc kubenswrapper[4793]: I0217 21:50:50.473595 4793 scope.go:117] "RemoveContainer" containerID="4ddebe9fa50fa6f3cacc64e878425ab4a7565e37cff2b52fead4d0445814f582" Feb 17 21:50:51 crc kubenswrapper[4793]: I0217 21:50:51.491135 4793 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerStarted","Data":"f49ab2db045f279d9f7309c213173f2962c246236ace994df6cafec80f8b10bd"} Feb 17 21:50:54 crc kubenswrapper[4793]: I0217 21:50:54.539210 4793 scope.go:117] "RemoveContainer" containerID="aa3a58e39c1c1a355c5bbe7309b14b49a9355e2c4c9531906002af4b43f2e2cf" Feb 17 21:50:54 crc kubenswrapper[4793]: E0217 21:50:54.540145 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:51:05 crc kubenswrapper[4793]: I0217 21:51:05.547721 4793 scope.go:117] "RemoveContainer" containerID="aa3a58e39c1c1a355c5bbe7309b14b49a9355e2c4c9531906002af4b43f2e2cf" Feb 17 21:51:05 crc kubenswrapper[4793]: E0217 21:51:05.550163 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:51:12 crc kubenswrapper[4793]: I0217 21:51:12.383789 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5lnhh"] Feb 17 21:51:12 crc kubenswrapper[4793]: E0217 21:51:12.384948 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08e6fb71-160e-49d3-af26-54f95410be06" containerName="extract-content" Feb 17 21:51:12 crc kubenswrapper[4793]: I0217 21:51:12.384964 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="08e6fb71-160e-49d3-af26-54f95410be06" containerName="extract-content" Feb 
17 21:51:12 crc kubenswrapper[4793]: E0217 21:51:12.384978 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08e6fb71-160e-49d3-af26-54f95410be06" containerName="extract-utilities" Feb 17 21:51:12 crc kubenswrapper[4793]: I0217 21:51:12.384989 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="08e6fb71-160e-49d3-af26-54f95410be06" containerName="extract-utilities" Feb 17 21:51:12 crc kubenswrapper[4793]: E0217 21:51:12.385001 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08e6fb71-160e-49d3-af26-54f95410be06" containerName="registry-server" Feb 17 21:51:12 crc kubenswrapper[4793]: I0217 21:51:12.385010 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="08e6fb71-160e-49d3-af26-54f95410be06" containerName="registry-server" Feb 17 21:51:12 crc kubenswrapper[4793]: I0217 21:51:12.385267 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="08e6fb71-160e-49d3-af26-54f95410be06" containerName="registry-server" Feb 17 21:51:12 crc kubenswrapper[4793]: I0217 21:51:12.387499 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5lnhh" Feb 17 21:51:12 crc kubenswrapper[4793]: I0217 21:51:12.397254 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5lnhh"] Feb 17 21:51:12 crc kubenswrapper[4793]: I0217 21:51:12.400451 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f884b72-224b-4a4f-b3d5-a6e47b56e90c-catalog-content\") pod \"certified-operators-5lnhh\" (UID: \"0f884b72-224b-4a4f-b3d5-a6e47b56e90c\") " pod="openshift-marketplace/certified-operators-5lnhh" Feb 17 21:51:12 crc kubenswrapper[4793]: I0217 21:51:12.400501 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f884b72-224b-4a4f-b3d5-a6e47b56e90c-utilities\") pod \"certified-operators-5lnhh\" (UID: \"0f884b72-224b-4a4f-b3d5-a6e47b56e90c\") " pod="openshift-marketplace/certified-operators-5lnhh" Feb 17 21:51:12 crc kubenswrapper[4793]: I0217 21:51:12.400722 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvmlg\" (UniqueName: \"kubernetes.io/projected/0f884b72-224b-4a4f-b3d5-a6e47b56e90c-kube-api-access-cvmlg\") pod \"certified-operators-5lnhh\" (UID: \"0f884b72-224b-4a4f-b3d5-a6e47b56e90c\") " pod="openshift-marketplace/certified-operators-5lnhh" Feb 17 21:51:12 crc kubenswrapper[4793]: I0217 21:51:12.501484 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvmlg\" (UniqueName: \"kubernetes.io/projected/0f884b72-224b-4a4f-b3d5-a6e47b56e90c-kube-api-access-cvmlg\") pod \"certified-operators-5lnhh\" (UID: \"0f884b72-224b-4a4f-b3d5-a6e47b56e90c\") " pod="openshift-marketplace/certified-operators-5lnhh" Feb 17 21:51:12 crc kubenswrapper[4793]: I0217 21:51:12.501875 4793 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f884b72-224b-4a4f-b3d5-a6e47b56e90c-catalog-content\") pod \"certified-operators-5lnhh\" (UID: \"0f884b72-224b-4a4f-b3d5-a6e47b56e90c\") " pod="openshift-marketplace/certified-operators-5lnhh" Feb 17 21:51:12 crc kubenswrapper[4793]: I0217 21:51:12.501914 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f884b72-224b-4a4f-b3d5-a6e47b56e90c-utilities\") pod \"certified-operators-5lnhh\" (UID: \"0f884b72-224b-4a4f-b3d5-a6e47b56e90c\") " pod="openshift-marketplace/certified-operators-5lnhh" Feb 17 21:51:12 crc kubenswrapper[4793]: I0217 21:51:12.502489 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f884b72-224b-4a4f-b3d5-a6e47b56e90c-utilities\") pod \"certified-operators-5lnhh\" (UID: \"0f884b72-224b-4a4f-b3d5-a6e47b56e90c\") " pod="openshift-marketplace/certified-operators-5lnhh" Feb 17 21:51:12 crc kubenswrapper[4793]: I0217 21:51:12.502492 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f884b72-224b-4a4f-b3d5-a6e47b56e90c-catalog-content\") pod \"certified-operators-5lnhh\" (UID: \"0f884b72-224b-4a4f-b3d5-a6e47b56e90c\") " pod="openshift-marketplace/certified-operators-5lnhh" Feb 17 21:51:12 crc kubenswrapper[4793]: I0217 21:51:12.537615 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvmlg\" (UniqueName: \"kubernetes.io/projected/0f884b72-224b-4a4f-b3d5-a6e47b56e90c-kube-api-access-cvmlg\") pod \"certified-operators-5lnhh\" (UID: \"0f884b72-224b-4a4f-b3d5-a6e47b56e90c\") " pod="openshift-marketplace/certified-operators-5lnhh" Feb 17 21:51:12 crc kubenswrapper[4793]: I0217 21:51:12.714235 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5lnhh" Feb 17 21:51:13 crc kubenswrapper[4793]: I0217 21:51:13.293703 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5lnhh"] Feb 17 21:51:13 crc kubenswrapper[4793]: I0217 21:51:13.780201 4793 generic.go:334] "Generic (PLEG): container finished" podID="0f884b72-224b-4a4f-b3d5-a6e47b56e90c" containerID="4ad4b1bad0828d75d4ee59830aa6543164c13977678ba26b8552991bcaaf9ef9" exitCode=0 Feb 17 21:51:13 crc kubenswrapper[4793]: I0217 21:51:13.780279 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5lnhh" event={"ID":"0f884b72-224b-4a4f-b3d5-a6e47b56e90c","Type":"ContainerDied","Data":"4ad4b1bad0828d75d4ee59830aa6543164c13977678ba26b8552991bcaaf9ef9"} Feb 17 21:51:13 crc kubenswrapper[4793]: I0217 21:51:13.780732 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5lnhh" event={"ID":"0f884b72-224b-4a4f-b3d5-a6e47b56e90c","Type":"ContainerStarted","Data":"b942449e2abce4deb24a763ae5d5ffe0d1cc744d773837a5f93ea077ca725017"} Feb 17 21:51:15 crc kubenswrapper[4793]: I0217 21:51:15.811225 4793 generic.go:334] "Generic (PLEG): container finished" podID="0f884b72-224b-4a4f-b3d5-a6e47b56e90c" containerID="e8d7f6118e87226444e86779943a5ee1035d41d1ae21a6845e604d9d3de6e8a7" exitCode=0 Feb 17 21:51:15 crc kubenswrapper[4793]: I0217 21:51:15.811324 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5lnhh" event={"ID":"0f884b72-224b-4a4f-b3d5-a6e47b56e90c","Type":"ContainerDied","Data":"e8d7f6118e87226444e86779943a5ee1035d41d1ae21a6845e604d9d3de6e8a7"} Feb 17 21:51:16 crc kubenswrapper[4793]: I0217 21:51:16.827497 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5lnhh" 
event={"ID":"0f884b72-224b-4a4f-b3d5-a6e47b56e90c","Type":"ContainerStarted","Data":"e63235e5b98c4a92afc234aac7f4a78dba5d7501695fabd6df36d82e545fad42"} Feb 17 21:51:16 crc kubenswrapper[4793]: I0217 21:51:16.869400 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5lnhh" podStartSLOduration=2.396752916 podStartE2EDuration="4.869371874s" podCreationTimestamp="2026-02-17 21:51:12 +0000 UTC" firstStartedPulling="2026-02-17 21:51:13.783029363 +0000 UTC m=+6149.074727704" lastFinishedPulling="2026-02-17 21:51:16.255648321 +0000 UTC m=+6151.547346662" observedRunningTime="2026-02-17 21:51:16.858353682 +0000 UTC m=+6152.150052043" watchObservedRunningTime="2026-02-17 21:51:16.869371874 +0000 UTC m=+6152.161070215" Feb 17 21:51:18 crc kubenswrapper[4793]: I0217 21:51:18.538899 4793 scope.go:117] "RemoveContainer" containerID="aa3a58e39c1c1a355c5bbe7309b14b49a9355e2c4c9531906002af4b43f2e2cf" Feb 17 21:51:18 crc kubenswrapper[4793]: I0217 21:51:18.853452 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerStarted","Data":"84191ee9c31f66fed4b27c7281450d457623d660534b22845553024c956eb6b5"} Feb 17 21:51:20 crc kubenswrapper[4793]: I0217 21:51:20.596062 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 17 21:51:21 crc kubenswrapper[4793]: I0217 21:51:21.892422 4793 generic.go:334] "Generic (PLEG): container finished" podID="02d26164-0fa4-4020-9224-b7760a490987" containerID="84191ee9c31f66fed4b27c7281450d457623d660534b22845553024c956eb6b5" exitCode=1 Feb 17 21:51:21 crc kubenswrapper[4793]: I0217 21:51:21.892543 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerDied","Data":"84191ee9c31f66fed4b27c7281450d457623d660534b22845553024c956eb6b5"} Feb 17 
21:51:21 crc kubenswrapper[4793]: I0217 21:51:21.892935 4793 scope.go:117] "RemoveContainer" containerID="aa3a58e39c1c1a355c5bbe7309b14b49a9355e2c4c9531906002af4b43f2e2cf" Feb 17 21:51:21 crc kubenswrapper[4793]: I0217 21:51:21.893898 4793 scope.go:117] "RemoveContainer" containerID="84191ee9c31f66fed4b27c7281450d457623d660534b22845553024c956eb6b5" Feb 17 21:51:21 crc kubenswrapper[4793]: E0217 21:51:21.894404 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:51:22 crc kubenswrapper[4793]: I0217 21:51:22.714873 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5lnhh" Feb 17 21:51:22 crc kubenswrapper[4793]: I0217 21:51:22.715270 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5lnhh" Feb 17 21:51:22 crc kubenswrapper[4793]: I0217 21:51:22.792182 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5lnhh" Feb 17 21:51:22 crc kubenswrapper[4793]: I0217 21:51:22.972455 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5lnhh" Feb 17 21:51:23 crc kubenswrapper[4793]: I0217 21:51:23.041398 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5lnhh"] Feb 17 21:51:24 crc kubenswrapper[4793]: I0217 21:51:24.929378 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5lnhh" podUID="0f884b72-224b-4a4f-b3d5-a6e47b56e90c" containerName="registry-server" 
containerID="cri-o://e63235e5b98c4a92afc234aac7f4a78dba5d7501695fabd6df36d82e545fad42" gracePeriod=2 Feb 17 21:51:25 crc kubenswrapper[4793]: I0217 21:51:25.555295 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5lnhh" Feb 17 21:51:25 crc kubenswrapper[4793]: I0217 21:51:25.596086 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 21:51:25 crc kubenswrapper[4793]: I0217 21:51:25.596135 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 21:51:25 crc kubenswrapper[4793]: I0217 21:51:25.596145 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 21:51:25 crc kubenswrapper[4793]: I0217 21:51:25.596889 4793 scope.go:117] "RemoveContainer" containerID="84191ee9c31f66fed4b27c7281450d457623d660534b22845553024c956eb6b5" Feb 17 21:51:25 crc kubenswrapper[4793]: E0217 21:51:25.597115 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:51:25 crc kubenswrapper[4793]: I0217 21:51:25.729869 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvmlg\" (UniqueName: \"kubernetes.io/projected/0f884b72-224b-4a4f-b3d5-a6e47b56e90c-kube-api-access-cvmlg\") pod \"0f884b72-224b-4a4f-b3d5-a6e47b56e90c\" (UID: \"0f884b72-224b-4a4f-b3d5-a6e47b56e90c\") " Feb 17 21:51:25 crc kubenswrapper[4793]: I0217 21:51:25.730029 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/0f884b72-224b-4a4f-b3d5-a6e47b56e90c-utilities\") pod \"0f884b72-224b-4a4f-b3d5-a6e47b56e90c\" (UID: \"0f884b72-224b-4a4f-b3d5-a6e47b56e90c\") " Feb 17 21:51:25 crc kubenswrapper[4793]: I0217 21:51:25.730071 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f884b72-224b-4a4f-b3d5-a6e47b56e90c-catalog-content\") pod \"0f884b72-224b-4a4f-b3d5-a6e47b56e90c\" (UID: \"0f884b72-224b-4a4f-b3d5-a6e47b56e90c\") " Feb 17 21:51:25 crc kubenswrapper[4793]: I0217 21:51:25.731177 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f884b72-224b-4a4f-b3d5-a6e47b56e90c-utilities" (OuterVolumeSpecName: "utilities") pod "0f884b72-224b-4a4f-b3d5-a6e47b56e90c" (UID: "0f884b72-224b-4a4f-b3d5-a6e47b56e90c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 21:51:25 crc kubenswrapper[4793]: I0217 21:51:25.744425 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f884b72-224b-4a4f-b3d5-a6e47b56e90c-kube-api-access-cvmlg" (OuterVolumeSpecName: "kube-api-access-cvmlg") pod "0f884b72-224b-4a4f-b3d5-a6e47b56e90c" (UID: "0f884b72-224b-4a4f-b3d5-a6e47b56e90c"). InnerVolumeSpecName "kube-api-access-cvmlg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 21:51:25 crc kubenswrapper[4793]: I0217 21:51:25.781812 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f884b72-224b-4a4f-b3d5-a6e47b56e90c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0f884b72-224b-4a4f-b3d5-a6e47b56e90c" (UID: "0f884b72-224b-4a4f-b3d5-a6e47b56e90c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 21:51:25 crc kubenswrapper[4793]: I0217 21:51:25.833316 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvmlg\" (UniqueName: \"kubernetes.io/projected/0f884b72-224b-4a4f-b3d5-a6e47b56e90c-kube-api-access-cvmlg\") on node \"crc\" DevicePath \"\"" Feb 17 21:51:25 crc kubenswrapper[4793]: I0217 21:51:25.833359 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f884b72-224b-4a4f-b3d5-a6e47b56e90c-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 21:51:25 crc kubenswrapper[4793]: I0217 21:51:25.833372 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f884b72-224b-4a4f-b3d5-a6e47b56e90c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 21:51:25 crc kubenswrapper[4793]: I0217 21:51:25.948986 4793 generic.go:334] "Generic (PLEG): container finished" podID="0f884b72-224b-4a4f-b3d5-a6e47b56e90c" containerID="e63235e5b98c4a92afc234aac7f4a78dba5d7501695fabd6df36d82e545fad42" exitCode=0 Feb 17 21:51:25 crc kubenswrapper[4793]: I0217 21:51:25.949049 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5lnhh" event={"ID":"0f884b72-224b-4a4f-b3d5-a6e47b56e90c","Type":"ContainerDied","Data":"e63235e5b98c4a92afc234aac7f4a78dba5d7501695fabd6df36d82e545fad42"} Feb 17 21:51:25 crc kubenswrapper[4793]: I0217 21:51:25.949090 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5lnhh" event={"ID":"0f884b72-224b-4a4f-b3d5-a6e47b56e90c","Type":"ContainerDied","Data":"b942449e2abce4deb24a763ae5d5ffe0d1cc744d773837a5f93ea077ca725017"} Feb 17 21:51:25 crc kubenswrapper[4793]: I0217 21:51:25.949109 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5lnhh" Feb 17 21:51:25 crc kubenswrapper[4793]: I0217 21:51:25.949118 4793 scope.go:117] "RemoveContainer" containerID="e63235e5b98c4a92afc234aac7f4a78dba5d7501695fabd6df36d82e545fad42" Feb 17 21:51:25 crc kubenswrapper[4793]: I0217 21:51:25.981184 4793 scope.go:117] "RemoveContainer" containerID="e8d7f6118e87226444e86779943a5ee1035d41d1ae21a6845e604d9d3de6e8a7" Feb 17 21:51:26 crc kubenswrapper[4793]: I0217 21:51:26.007967 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5lnhh"] Feb 17 21:51:26 crc kubenswrapper[4793]: I0217 21:51:26.021727 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5lnhh"] Feb 17 21:51:26 crc kubenswrapper[4793]: I0217 21:51:26.024247 4793 scope.go:117] "RemoveContainer" containerID="4ad4b1bad0828d75d4ee59830aa6543164c13977678ba26b8552991bcaaf9ef9" Feb 17 21:51:26 crc kubenswrapper[4793]: I0217 21:51:26.102895 4793 scope.go:117] "RemoveContainer" containerID="e63235e5b98c4a92afc234aac7f4a78dba5d7501695fabd6df36d82e545fad42" Feb 17 21:51:26 crc kubenswrapper[4793]: E0217 21:51:26.103563 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e63235e5b98c4a92afc234aac7f4a78dba5d7501695fabd6df36d82e545fad42\": container with ID starting with e63235e5b98c4a92afc234aac7f4a78dba5d7501695fabd6df36d82e545fad42 not found: ID does not exist" containerID="e63235e5b98c4a92afc234aac7f4a78dba5d7501695fabd6df36d82e545fad42" Feb 17 21:51:26 crc kubenswrapper[4793]: I0217 21:51:26.103639 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e63235e5b98c4a92afc234aac7f4a78dba5d7501695fabd6df36d82e545fad42"} err="failed to get container status \"e63235e5b98c4a92afc234aac7f4a78dba5d7501695fabd6df36d82e545fad42\": rpc error: code = NotFound desc = could not find 
container \"e63235e5b98c4a92afc234aac7f4a78dba5d7501695fabd6df36d82e545fad42\": container with ID starting with e63235e5b98c4a92afc234aac7f4a78dba5d7501695fabd6df36d82e545fad42 not found: ID does not exist" Feb 17 21:51:26 crc kubenswrapper[4793]: I0217 21:51:26.103805 4793 scope.go:117] "RemoveContainer" containerID="e8d7f6118e87226444e86779943a5ee1035d41d1ae21a6845e604d9d3de6e8a7" Feb 17 21:51:26 crc kubenswrapper[4793]: E0217 21:51:26.104416 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8d7f6118e87226444e86779943a5ee1035d41d1ae21a6845e604d9d3de6e8a7\": container with ID starting with e8d7f6118e87226444e86779943a5ee1035d41d1ae21a6845e604d9d3de6e8a7 not found: ID does not exist" containerID="e8d7f6118e87226444e86779943a5ee1035d41d1ae21a6845e604d9d3de6e8a7" Feb 17 21:51:26 crc kubenswrapper[4793]: I0217 21:51:26.104455 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8d7f6118e87226444e86779943a5ee1035d41d1ae21a6845e604d9d3de6e8a7"} err="failed to get container status \"e8d7f6118e87226444e86779943a5ee1035d41d1ae21a6845e604d9d3de6e8a7\": rpc error: code = NotFound desc = could not find container \"e8d7f6118e87226444e86779943a5ee1035d41d1ae21a6845e604d9d3de6e8a7\": container with ID starting with e8d7f6118e87226444e86779943a5ee1035d41d1ae21a6845e604d9d3de6e8a7 not found: ID does not exist" Feb 17 21:51:26 crc kubenswrapper[4793]: I0217 21:51:26.104481 4793 scope.go:117] "RemoveContainer" containerID="4ad4b1bad0828d75d4ee59830aa6543164c13977678ba26b8552991bcaaf9ef9" Feb 17 21:51:26 crc kubenswrapper[4793]: E0217 21:51:26.105813 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ad4b1bad0828d75d4ee59830aa6543164c13977678ba26b8552991bcaaf9ef9\": container with ID starting with 4ad4b1bad0828d75d4ee59830aa6543164c13977678ba26b8552991bcaaf9ef9 not found: ID does 
not exist" containerID="4ad4b1bad0828d75d4ee59830aa6543164c13977678ba26b8552991bcaaf9ef9" Feb 17 21:51:26 crc kubenswrapper[4793]: I0217 21:51:26.105867 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ad4b1bad0828d75d4ee59830aa6543164c13977678ba26b8552991bcaaf9ef9"} err="failed to get container status \"4ad4b1bad0828d75d4ee59830aa6543164c13977678ba26b8552991bcaaf9ef9\": rpc error: code = NotFound desc = could not find container \"4ad4b1bad0828d75d4ee59830aa6543164c13977678ba26b8552991bcaaf9ef9\": container with ID starting with 4ad4b1bad0828d75d4ee59830aa6543164c13977678ba26b8552991bcaaf9ef9 not found: ID does not exist" Feb 17 21:51:26 crc kubenswrapper[4793]: I0217 21:51:26.857600 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-24q8t"] Feb 17 21:51:26 crc kubenswrapper[4793]: E0217 21:51:26.858156 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f884b72-224b-4a4f-b3d5-a6e47b56e90c" containerName="extract-content" Feb 17 21:51:26 crc kubenswrapper[4793]: I0217 21:51:26.858178 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f884b72-224b-4a4f-b3d5-a6e47b56e90c" containerName="extract-content" Feb 17 21:51:26 crc kubenswrapper[4793]: E0217 21:51:26.858207 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f884b72-224b-4a4f-b3d5-a6e47b56e90c" containerName="registry-server" Feb 17 21:51:26 crc kubenswrapper[4793]: I0217 21:51:26.858217 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f884b72-224b-4a4f-b3d5-a6e47b56e90c" containerName="registry-server" Feb 17 21:51:26 crc kubenswrapper[4793]: E0217 21:51:26.858258 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f884b72-224b-4a4f-b3d5-a6e47b56e90c" containerName="extract-utilities" Feb 17 21:51:26 crc kubenswrapper[4793]: I0217 21:51:26.858267 4793 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0f884b72-224b-4a4f-b3d5-a6e47b56e90c" containerName="extract-utilities" Feb 17 21:51:26 crc kubenswrapper[4793]: I0217 21:51:26.858574 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f884b72-224b-4a4f-b3d5-a6e47b56e90c" containerName="registry-server" Feb 17 21:51:26 crc kubenswrapper[4793]: I0217 21:51:26.860434 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-24q8t" Feb 17 21:51:26 crc kubenswrapper[4793]: I0217 21:51:26.880304 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-24q8t"] Feb 17 21:51:26 crc kubenswrapper[4793]: I0217 21:51:26.961367 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htqxs\" (UniqueName: \"kubernetes.io/projected/97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00-kube-api-access-htqxs\") pod \"redhat-operators-24q8t\" (UID: \"97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00\") " pod="openshift-marketplace/redhat-operators-24q8t" Feb 17 21:51:26 crc kubenswrapper[4793]: I0217 21:51:26.961495 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00-utilities\") pod \"redhat-operators-24q8t\" (UID: \"97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00\") " pod="openshift-marketplace/redhat-operators-24q8t" Feb 17 21:51:26 crc kubenswrapper[4793]: I0217 21:51:26.961599 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00-catalog-content\") pod \"redhat-operators-24q8t\" (UID: \"97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00\") " pod="openshift-marketplace/redhat-operators-24q8t" Feb 17 21:51:27 crc kubenswrapper[4793]: I0217 21:51:27.062878 4793 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-htqxs\" (UniqueName: \"kubernetes.io/projected/97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00-kube-api-access-htqxs\") pod \"redhat-operators-24q8t\" (UID: \"97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00\") " pod="openshift-marketplace/redhat-operators-24q8t" Feb 17 21:51:27 crc kubenswrapper[4793]: I0217 21:51:27.063305 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00-utilities\") pod \"redhat-operators-24q8t\" (UID: \"97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00\") " pod="openshift-marketplace/redhat-operators-24q8t" Feb 17 21:51:27 crc kubenswrapper[4793]: I0217 21:51:27.063631 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00-catalog-content\") pod \"redhat-operators-24q8t\" (UID: \"97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00\") " pod="openshift-marketplace/redhat-operators-24q8t" Feb 17 21:51:27 crc kubenswrapper[4793]: I0217 21:51:27.063987 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00-utilities\") pod \"redhat-operators-24q8t\" (UID: \"97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00\") " pod="openshift-marketplace/redhat-operators-24q8t" Feb 17 21:51:27 crc kubenswrapper[4793]: I0217 21:51:27.064034 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00-catalog-content\") pod \"redhat-operators-24q8t\" (UID: \"97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00\") " pod="openshift-marketplace/redhat-operators-24q8t" Feb 17 21:51:27 crc kubenswrapper[4793]: I0217 21:51:27.081539 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htqxs\" (UniqueName: 
\"kubernetes.io/projected/97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00-kube-api-access-htqxs\") pod \"redhat-operators-24q8t\" (UID: \"97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00\") " pod="openshift-marketplace/redhat-operators-24q8t" Feb 17 21:51:27 crc kubenswrapper[4793]: I0217 21:51:27.184429 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-24q8t" Feb 17 21:51:27 crc kubenswrapper[4793]: I0217 21:51:27.549171 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f884b72-224b-4a4f-b3d5-a6e47b56e90c" path="/var/lib/kubelet/pods/0f884b72-224b-4a4f-b3d5-a6e47b56e90c/volumes" Feb 17 21:51:27 crc kubenswrapper[4793]: I0217 21:51:27.683112 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-24q8t"] Feb 17 21:51:27 crc kubenswrapper[4793]: I0217 21:51:27.973343 4793 generic.go:334] "Generic (PLEG): container finished" podID="97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00" containerID="a187c0995bff0a358c34b2d4bc8159f05f9860896d09af258c4f1b187b6bced4" exitCode=0 Feb 17 21:51:27 crc kubenswrapper[4793]: I0217 21:51:27.973473 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-24q8t" event={"ID":"97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00","Type":"ContainerDied","Data":"a187c0995bff0a358c34b2d4bc8159f05f9860896d09af258c4f1b187b6bced4"} Feb 17 21:51:27 crc kubenswrapper[4793]: I0217 21:51:27.973715 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-24q8t" event={"ID":"97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00","Type":"ContainerStarted","Data":"4b80ad71fbc3a30d90657f5aa61b3c45bef521d7bcfc865c8f484b6442004999"} Feb 17 21:51:28 crc kubenswrapper[4793]: I0217 21:51:28.985847 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-24q8t" 
event={"ID":"97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00","Type":"ContainerStarted","Data":"52d871476fbef0a429d554ab03522ae3b949e5ba8500dc06513fb6b9225bc376"} Feb 17 21:51:31 crc kubenswrapper[4793]: I0217 21:51:31.015887 4793 generic.go:334] "Generic (PLEG): container finished" podID="97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00" containerID="52d871476fbef0a429d554ab03522ae3b949e5ba8500dc06513fb6b9225bc376" exitCode=0 Feb 17 21:51:31 crc kubenswrapper[4793]: I0217 21:51:31.016005 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-24q8t" event={"ID":"97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00","Type":"ContainerDied","Data":"52d871476fbef0a429d554ab03522ae3b949e5ba8500dc06513fb6b9225bc376"} Feb 17 21:51:32 crc kubenswrapper[4793]: I0217 21:51:32.031018 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-24q8t" event={"ID":"97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00","Type":"ContainerStarted","Data":"55458febedc98d114e6e1bc113b5c8b015f498bc89651c78b2636d895817258b"} Feb 17 21:51:32 crc kubenswrapper[4793]: I0217 21:51:32.053975 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-24q8t" podStartSLOduration=2.624997685 podStartE2EDuration="6.053949903s" podCreationTimestamp="2026-02-17 21:51:26 +0000 UTC" firstStartedPulling="2026-02-17 21:51:27.976962786 +0000 UTC m=+6163.268661097" lastFinishedPulling="2026-02-17 21:51:31.405914984 +0000 UTC m=+6166.697613315" observedRunningTime="2026-02-17 21:51:32.048101849 +0000 UTC m=+6167.339800210" watchObservedRunningTime="2026-02-17 21:51:32.053949903 +0000 UTC m=+6167.345648254" Feb 17 21:51:37 crc kubenswrapper[4793]: I0217 21:51:37.185932 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-24q8t" Feb 17 21:51:37 crc kubenswrapper[4793]: I0217 21:51:37.186533 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-operators-24q8t" Feb 17 21:51:37 crc kubenswrapper[4793]: I0217 21:51:37.539074 4793 scope.go:117] "RemoveContainer" containerID="84191ee9c31f66fed4b27c7281450d457623d660534b22845553024c956eb6b5" Feb 17 21:51:37 crc kubenswrapper[4793]: E0217 21:51:37.540077 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:51:38 crc kubenswrapper[4793]: I0217 21:51:38.249883 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-24q8t" podUID="97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00" containerName="registry-server" probeResult="failure" output=< Feb 17 21:51:38 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Feb 17 21:51:38 crc kubenswrapper[4793]: > Feb 17 21:51:45 crc kubenswrapper[4793]: I0217 21:51:45.510097 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4z78k"] Feb 17 21:51:45 crc kubenswrapper[4793]: I0217 21:51:45.516319 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4z78k"] Feb 17 21:51:45 crc kubenswrapper[4793]: I0217 21:51:45.516411 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4z78k" Feb 17 21:51:45 crc kubenswrapper[4793]: I0217 21:51:45.623838 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a04d13ca-9f54-4f0b-9d23-d16984e7d7e2-catalog-content\") pod \"community-operators-4z78k\" (UID: \"a04d13ca-9f54-4f0b-9d23-d16984e7d7e2\") " pod="openshift-marketplace/community-operators-4z78k" Feb 17 21:51:45 crc kubenswrapper[4793]: I0217 21:51:45.623932 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a04d13ca-9f54-4f0b-9d23-d16984e7d7e2-utilities\") pod \"community-operators-4z78k\" (UID: \"a04d13ca-9f54-4f0b-9d23-d16984e7d7e2\") " pod="openshift-marketplace/community-operators-4z78k" Feb 17 21:51:45 crc kubenswrapper[4793]: I0217 21:51:45.623985 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b24ps\" (UniqueName: \"kubernetes.io/projected/a04d13ca-9f54-4f0b-9d23-d16984e7d7e2-kube-api-access-b24ps\") pod \"community-operators-4z78k\" (UID: \"a04d13ca-9f54-4f0b-9d23-d16984e7d7e2\") " pod="openshift-marketplace/community-operators-4z78k" Feb 17 21:51:45 crc kubenswrapper[4793]: I0217 21:51:45.726539 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a04d13ca-9f54-4f0b-9d23-d16984e7d7e2-utilities\") pod \"community-operators-4z78k\" (UID: \"a04d13ca-9f54-4f0b-9d23-d16984e7d7e2\") " pod="openshift-marketplace/community-operators-4z78k" Feb 17 21:51:45 crc kubenswrapper[4793]: I0217 21:51:45.726648 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b24ps\" (UniqueName: \"kubernetes.io/projected/a04d13ca-9f54-4f0b-9d23-d16984e7d7e2-kube-api-access-b24ps\") pod 
\"community-operators-4z78k\" (UID: \"a04d13ca-9f54-4f0b-9d23-d16984e7d7e2\") " pod="openshift-marketplace/community-operators-4z78k" Feb 17 21:51:45 crc kubenswrapper[4793]: I0217 21:51:45.726922 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a04d13ca-9f54-4f0b-9d23-d16984e7d7e2-catalog-content\") pod \"community-operators-4z78k\" (UID: \"a04d13ca-9f54-4f0b-9d23-d16984e7d7e2\") " pod="openshift-marketplace/community-operators-4z78k" Feb 17 21:51:45 crc kubenswrapper[4793]: I0217 21:51:45.727207 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a04d13ca-9f54-4f0b-9d23-d16984e7d7e2-utilities\") pod \"community-operators-4z78k\" (UID: \"a04d13ca-9f54-4f0b-9d23-d16984e7d7e2\") " pod="openshift-marketplace/community-operators-4z78k" Feb 17 21:51:45 crc kubenswrapper[4793]: I0217 21:51:45.727655 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a04d13ca-9f54-4f0b-9d23-d16984e7d7e2-catalog-content\") pod \"community-operators-4z78k\" (UID: \"a04d13ca-9f54-4f0b-9d23-d16984e7d7e2\") " pod="openshift-marketplace/community-operators-4z78k" Feb 17 21:51:45 crc kubenswrapper[4793]: I0217 21:51:45.762618 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b24ps\" (UniqueName: \"kubernetes.io/projected/a04d13ca-9f54-4f0b-9d23-d16984e7d7e2-kube-api-access-b24ps\") pod \"community-operators-4z78k\" (UID: \"a04d13ca-9f54-4f0b-9d23-d16984e7d7e2\") " pod="openshift-marketplace/community-operators-4z78k" Feb 17 21:51:45 crc kubenswrapper[4793]: I0217 21:51:45.844907 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4z78k" Feb 17 21:51:46 crc kubenswrapper[4793]: I0217 21:51:46.372836 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4z78k"] Feb 17 21:51:46 crc kubenswrapper[4793]: W0217 21:51:46.376476 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda04d13ca_9f54_4f0b_9d23_d16984e7d7e2.slice/crio-43b2017f1c083f7b03e6a2c2684db8c0f58ff2474a040e584bcf9e935efb7748 WatchSource:0}: Error finding container 43b2017f1c083f7b03e6a2c2684db8c0f58ff2474a040e584bcf9e935efb7748: Status 404 returned error can't find the container with id 43b2017f1c083f7b03e6a2c2684db8c0f58ff2474a040e584bcf9e935efb7748 Feb 17 21:51:47 crc kubenswrapper[4793]: I0217 21:51:47.213610 4793 generic.go:334] "Generic (PLEG): container finished" podID="a04d13ca-9f54-4f0b-9d23-d16984e7d7e2" containerID="6f497f01856297995d01d1b9a8d2e1f09435d86521145b31f4965d06e6fbfe6d" exitCode=0 Feb 17 21:51:47 crc kubenswrapper[4793]: I0217 21:51:47.213811 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4z78k" event={"ID":"a04d13ca-9f54-4f0b-9d23-d16984e7d7e2","Type":"ContainerDied","Data":"6f497f01856297995d01d1b9a8d2e1f09435d86521145b31f4965d06e6fbfe6d"} Feb 17 21:51:47 crc kubenswrapper[4793]: I0217 21:51:47.213948 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4z78k" event={"ID":"a04d13ca-9f54-4f0b-9d23-d16984e7d7e2","Type":"ContainerStarted","Data":"43b2017f1c083f7b03e6a2c2684db8c0f58ff2474a040e584bcf9e935efb7748"} Feb 17 21:51:47 crc kubenswrapper[4793]: I0217 21:51:47.216392 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 21:51:47 crc kubenswrapper[4793]: I0217 21:51:47.273439 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-24q8t" Feb 17 21:51:47 crc kubenswrapper[4793]: I0217 21:51:47.351579 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-24q8t" Feb 17 21:51:48 crc kubenswrapper[4793]: I0217 21:51:48.539540 4793 scope.go:117] "RemoveContainer" containerID="84191ee9c31f66fed4b27c7281450d457623d660534b22845553024c956eb6b5" Feb 17 21:51:48 crc kubenswrapper[4793]: E0217 21:51:48.541715 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:51:49 crc kubenswrapper[4793]: I0217 21:51:49.240530 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4z78k" event={"ID":"a04d13ca-9f54-4f0b-9d23-d16984e7d7e2","Type":"ContainerStarted","Data":"6c08cdf54ee290ce60c46d317330f6b8692399f526014c31ed4dccba67026e70"} Feb 17 21:51:49 crc kubenswrapper[4793]: I0217 21:51:49.644133 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-24q8t"] Feb 17 21:51:49 crc kubenswrapper[4793]: I0217 21:51:49.644765 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-24q8t" podUID="97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00" containerName="registry-server" containerID="cri-o://55458febedc98d114e6e1bc113b5c8b015f498bc89651c78b2636d895817258b" gracePeriod=2 Feb 17 21:51:50 crc kubenswrapper[4793]: I0217 21:51:50.256428 4793 generic.go:334] "Generic (PLEG): container finished" podID="a04d13ca-9f54-4f0b-9d23-d16984e7d7e2" containerID="6c08cdf54ee290ce60c46d317330f6b8692399f526014c31ed4dccba67026e70" exitCode=0 Feb 17 21:51:50 crc kubenswrapper[4793]: I0217 
21:51:50.256467 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4z78k" event={"ID":"a04d13ca-9f54-4f0b-9d23-d16984e7d7e2","Type":"ContainerDied","Data":"6c08cdf54ee290ce60c46d317330f6b8692399f526014c31ed4dccba67026e70"} Feb 17 21:51:50 crc kubenswrapper[4793]: I0217 21:51:50.256832 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4z78k" event={"ID":"a04d13ca-9f54-4f0b-9d23-d16984e7d7e2","Type":"ContainerStarted","Data":"f10a39bbf0503579d38be1b07961cecc1db992098a921161ca65bfc0e1a35074"} Feb 17 21:51:50 crc kubenswrapper[4793]: I0217 21:51:50.260738 4793 generic.go:334] "Generic (PLEG): container finished" podID="97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00" containerID="55458febedc98d114e6e1bc113b5c8b015f498bc89651c78b2636d895817258b" exitCode=0 Feb 17 21:51:50 crc kubenswrapper[4793]: I0217 21:51:50.260771 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-24q8t" event={"ID":"97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00","Type":"ContainerDied","Data":"55458febedc98d114e6e1bc113b5c8b015f498bc89651c78b2636d895817258b"} Feb 17 21:51:50 crc kubenswrapper[4793]: I0217 21:51:50.301368 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4z78k" podStartSLOduration=2.537532107 podStartE2EDuration="5.301339004s" podCreationTimestamp="2026-02-17 21:51:45 +0000 UTC" firstStartedPulling="2026-02-17 21:51:47.216177903 +0000 UTC m=+6182.507876214" lastFinishedPulling="2026-02-17 21:51:49.97998478 +0000 UTC m=+6185.271683111" observedRunningTime="2026-02-17 21:51:50.282760126 +0000 UTC m=+6185.574458477" watchObservedRunningTime="2026-02-17 21:51:50.301339004 +0000 UTC m=+6185.593037355" Feb 17 21:51:50 crc kubenswrapper[4793]: I0217 21:51:50.720911 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-24q8t" Feb 17 21:51:50 crc kubenswrapper[4793]: I0217 21:51:50.765555 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00-utilities\") pod \"97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00\" (UID: \"97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00\") " Feb 17 21:51:50 crc kubenswrapper[4793]: I0217 21:51:50.765844 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00-catalog-content\") pod \"97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00\" (UID: \"97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00\") " Feb 17 21:51:50 crc kubenswrapper[4793]: I0217 21:51:50.765882 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htqxs\" (UniqueName: \"kubernetes.io/projected/97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00-kube-api-access-htqxs\") pod \"97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00\" (UID: \"97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00\") " Feb 17 21:51:50 crc kubenswrapper[4793]: I0217 21:51:50.767602 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00-utilities" (OuterVolumeSpecName: "utilities") pod "97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00" (UID: "97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 21:51:50 crc kubenswrapper[4793]: I0217 21:51:50.780113 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00-kube-api-access-htqxs" (OuterVolumeSpecName: "kube-api-access-htqxs") pod "97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00" (UID: "97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00"). InnerVolumeSpecName "kube-api-access-htqxs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 21:51:50 crc kubenswrapper[4793]: I0217 21:51:50.870134 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htqxs\" (UniqueName: \"kubernetes.io/projected/97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00-kube-api-access-htqxs\") on node \"crc\" DevicePath \"\"" Feb 17 21:51:50 crc kubenswrapper[4793]: I0217 21:51:50.870178 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 21:51:50 crc kubenswrapper[4793]: I0217 21:51:50.884383 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00" (UID: "97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 21:51:50 crc kubenswrapper[4793]: I0217 21:51:50.975119 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 21:51:51 crc kubenswrapper[4793]: I0217 21:51:51.272024 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-24q8t" event={"ID":"97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00","Type":"ContainerDied","Data":"4b80ad71fbc3a30d90657f5aa61b3c45bef521d7bcfc865c8f484b6442004999"} Feb 17 21:51:51 crc kubenswrapper[4793]: I0217 21:51:51.272075 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-24q8t" Feb 17 21:51:51 crc kubenswrapper[4793]: I0217 21:51:51.272110 4793 scope.go:117] "RemoveContainer" containerID="55458febedc98d114e6e1bc113b5c8b015f498bc89651c78b2636d895817258b" Feb 17 21:51:51 crc kubenswrapper[4793]: I0217 21:51:51.299396 4793 scope.go:117] "RemoveContainer" containerID="52d871476fbef0a429d554ab03522ae3b949e5ba8500dc06513fb6b9225bc376" Feb 17 21:51:51 crc kubenswrapper[4793]: I0217 21:51:51.309649 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-24q8t"] Feb 17 21:51:51 crc kubenswrapper[4793]: I0217 21:51:51.321220 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-24q8t"] Feb 17 21:51:51 crc kubenswrapper[4793]: I0217 21:51:51.332892 4793 scope.go:117] "RemoveContainer" containerID="a187c0995bff0a358c34b2d4bc8159f05f9860896d09af258c4f1b187b6bced4" Feb 17 21:51:51 crc kubenswrapper[4793]: I0217 21:51:51.550270 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00" path="/var/lib/kubelet/pods/97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00/volumes" Feb 17 21:51:55 crc kubenswrapper[4793]: I0217 21:51:55.845818 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4z78k" Feb 17 21:51:55 crc kubenswrapper[4793]: I0217 21:51:55.846795 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4z78k" Feb 17 21:51:55 crc kubenswrapper[4793]: I0217 21:51:55.897309 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4z78k" Feb 17 21:51:56 crc kubenswrapper[4793]: I0217 21:51:56.398580 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4z78k" Feb 17 21:51:56 crc 
kubenswrapper[4793]: I0217 21:51:56.478810 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4z78k"] Feb 17 21:51:58 crc kubenswrapper[4793]: I0217 21:51:58.355060 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4z78k" podUID="a04d13ca-9f54-4f0b-9d23-d16984e7d7e2" containerName="registry-server" containerID="cri-o://f10a39bbf0503579d38be1b07961cecc1db992098a921161ca65bfc0e1a35074" gracePeriod=2 Feb 17 21:51:58 crc kubenswrapper[4793]: I0217 21:51:58.889735 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4z78k" Feb 17 21:51:59 crc kubenswrapper[4793]: I0217 21:51:59.072119 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a04d13ca-9f54-4f0b-9d23-d16984e7d7e2-utilities\") pod \"a04d13ca-9f54-4f0b-9d23-d16984e7d7e2\" (UID: \"a04d13ca-9f54-4f0b-9d23-d16984e7d7e2\") " Feb 17 21:51:59 crc kubenswrapper[4793]: I0217 21:51:59.072398 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b24ps\" (UniqueName: \"kubernetes.io/projected/a04d13ca-9f54-4f0b-9d23-d16984e7d7e2-kube-api-access-b24ps\") pod \"a04d13ca-9f54-4f0b-9d23-d16984e7d7e2\" (UID: \"a04d13ca-9f54-4f0b-9d23-d16984e7d7e2\") " Feb 17 21:51:59 crc kubenswrapper[4793]: I0217 21:51:59.072560 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a04d13ca-9f54-4f0b-9d23-d16984e7d7e2-catalog-content\") pod \"a04d13ca-9f54-4f0b-9d23-d16984e7d7e2\" (UID: \"a04d13ca-9f54-4f0b-9d23-d16984e7d7e2\") " Feb 17 21:51:59 crc kubenswrapper[4793]: I0217 21:51:59.073561 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/a04d13ca-9f54-4f0b-9d23-d16984e7d7e2-utilities" (OuterVolumeSpecName: "utilities") pod "a04d13ca-9f54-4f0b-9d23-d16984e7d7e2" (UID: "a04d13ca-9f54-4f0b-9d23-d16984e7d7e2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 21:51:59 crc kubenswrapper[4793]: I0217 21:51:59.082894 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a04d13ca-9f54-4f0b-9d23-d16984e7d7e2-kube-api-access-b24ps" (OuterVolumeSpecName: "kube-api-access-b24ps") pod "a04d13ca-9f54-4f0b-9d23-d16984e7d7e2" (UID: "a04d13ca-9f54-4f0b-9d23-d16984e7d7e2"). InnerVolumeSpecName "kube-api-access-b24ps". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 21:51:59 crc kubenswrapper[4793]: I0217 21:51:59.175569 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b24ps\" (UniqueName: \"kubernetes.io/projected/a04d13ca-9f54-4f0b-9d23-d16984e7d7e2-kube-api-access-b24ps\") on node \"crc\" DevicePath \"\"" Feb 17 21:51:59 crc kubenswrapper[4793]: I0217 21:51:59.175623 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a04d13ca-9f54-4f0b-9d23-d16984e7d7e2-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 21:51:59 crc kubenswrapper[4793]: I0217 21:51:59.180330 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a04d13ca-9f54-4f0b-9d23-d16984e7d7e2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a04d13ca-9f54-4f0b-9d23-d16984e7d7e2" (UID: "a04d13ca-9f54-4f0b-9d23-d16984e7d7e2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 21:51:59 crc kubenswrapper[4793]: I0217 21:51:59.278658 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a04d13ca-9f54-4f0b-9d23-d16984e7d7e2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 21:51:59 crc kubenswrapper[4793]: I0217 21:51:59.371194 4793 generic.go:334] "Generic (PLEG): container finished" podID="a04d13ca-9f54-4f0b-9d23-d16984e7d7e2" containerID="f10a39bbf0503579d38be1b07961cecc1db992098a921161ca65bfc0e1a35074" exitCode=0 Feb 17 21:51:59 crc kubenswrapper[4793]: I0217 21:51:59.371243 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4z78k" event={"ID":"a04d13ca-9f54-4f0b-9d23-d16984e7d7e2","Type":"ContainerDied","Data":"f10a39bbf0503579d38be1b07961cecc1db992098a921161ca65bfc0e1a35074"} Feb 17 21:51:59 crc kubenswrapper[4793]: I0217 21:51:59.371287 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4z78k" event={"ID":"a04d13ca-9f54-4f0b-9d23-d16984e7d7e2","Type":"ContainerDied","Data":"43b2017f1c083f7b03e6a2c2684db8c0f58ff2474a040e584bcf9e935efb7748"} Feb 17 21:51:59 crc kubenswrapper[4793]: I0217 21:51:59.371308 4793 scope.go:117] "RemoveContainer" containerID="f10a39bbf0503579d38be1b07961cecc1db992098a921161ca65bfc0e1a35074" Feb 17 21:51:59 crc kubenswrapper[4793]: I0217 21:51:59.371955 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4z78k" Feb 17 21:51:59 crc kubenswrapper[4793]: I0217 21:51:59.418179 4793 scope.go:117] "RemoveContainer" containerID="6c08cdf54ee290ce60c46d317330f6b8692399f526014c31ed4dccba67026e70" Feb 17 21:51:59 crc kubenswrapper[4793]: I0217 21:51:59.433323 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4z78k"] Feb 17 21:51:59 crc kubenswrapper[4793]: I0217 21:51:59.444445 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4z78k"] Feb 17 21:51:59 crc kubenswrapper[4793]: I0217 21:51:59.480358 4793 scope.go:117] "RemoveContainer" containerID="6f497f01856297995d01d1b9a8d2e1f09435d86521145b31f4965d06e6fbfe6d" Feb 17 21:51:59 crc kubenswrapper[4793]: I0217 21:51:59.516686 4793 scope.go:117] "RemoveContainer" containerID="f10a39bbf0503579d38be1b07961cecc1db992098a921161ca65bfc0e1a35074" Feb 17 21:51:59 crc kubenswrapper[4793]: E0217 21:51:59.517220 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f10a39bbf0503579d38be1b07961cecc1db992098a921161ca65bfc0e1a35074\": container with ID starting with f10a39bbf0503579d38be1b07961cecc1db992098a921161ca65bfc0e1a35074 not found: ID does not exist" containerID="f10a39bbf0503579d38be1b07961cecc1db992098a921161ca65bfc0e1a35074" Feb 17 21:51:59 crc kubenswrapper[4793]: I0217 21:51:59.517251 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f10a39bbf0503579d38be1b07961cecc1db992098a921161ca65bfc0e1a35074"} err="failed to get container status \"f10a39bbf0503579d38be1b07961cecc1db992098a921161ca65bfc0e1a35074\": rpc error: code = NotFound desc = could not find container \"f10a39bbf0503579d38be1b07961cecc1db992098a921161ca65bfc0e1a35074\": container with ID starting with f10a39bbf0503579d38be1b07961cecc1db992098a921161ca65bfc0e1a35074 not 
found: ID does not exist" Feb 17 21:51:59 crc kubenswrapper[4793]: I0217 21:51:59.517271 4793 scope.go:117] "RemoveContainer" containerID="6c08cdf54ee290ce60c46d317330f6b8692399f526014c31ed4dccba67026e70" Feb 17 21:51:59 crc kubenswrapper[4793]: E0217 21:51:59.517640 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c08cdf54ee290ce60c46d317330f6b8692399f526014c31ed4dccba67026e70\": container with ID starting with 6c08cdf54ee290ce60c46d317330f6b8692399f526014c31ed4dccba67026e70 not found: ID does not exist" containerID="6c08cdf54ee290ce60c46d317330f6b8692399f526014c31ed4dccba67026e70" Feb 17 21:51:59 crc kubenswrapper[4793]: I0217 21:51:59.517661 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c08cdf54ee290ce60c46d317330f6b8692399f526014c31ed4dccba67026e70"} err="failed to get container status \"6c08cdf54ee290ce60c46d317330f6b8692399f526014c31ed4dccba67026e70\": rpc error: code = NotFound desc = could not find container \"6c08cdf54ee290ce60c46d317330f6b8692399f526014c31ed4dccba67026e70\": container with ID starting with 6c08cdf54ee290ce60c46d317330f6b8692399f526014c31ed4dccba67026e70 not found: ID does not exist" Feb 17 21:51:59 crc kubenswrapper[4793]: I0217 21:51:59.517672 4793 scope.go:117] "RemoveContainer" containerID="6f497f01856297995d01d1b9a8d2e1f09435d86521145b31f4965d06e6fbfe6d" Feb 17 21:51:59 crc kubenswrapper[4793]: E0217 21:51:59.518104 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f497f01856297995d01d1b9a8d2e1f09435d86521145b31f4965d06e6fbfe6d\": container with ID starting with 6f497f01856297995d01d1b9a8d2e1f09435d86521145b31f4965d06e6fbfe6d not found: ID does not exist" containerID="6f497f01856297995d01d1b9a8d2e1f09435d86521145b31f4965d06e6fbfe6d" Feb 17 21:51:59 crc kubenswrapper[4793]: I0217 21:51:59.518124 4793 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f497f01856297995d01d1b9a8d2e1f09435d86521145b31f4965d06e6fbfe6d"} err="failed to get container status \"6f497f01856297995d01d1b9a8d2e1f09435d86521145b31f4965d06e6fbfe6d\": rpc error: code = NotFound desc = could not find container \"6f497f01856297995d01d1b9a8d2e1f09435d86521145b31f4965d06e6fbfe6d\": container with ID starting with 6f497f01856297995d01d1b9a8d2e1f09435d86521145b31f4965d06e6fbfe6d not found: ID does not exist" Feb 17 21:51:59 crc kubenswrapper[4793]: I0217 21:51:59.538748 4793 scope.go:117] "RemoveContainer" containerID="84191ee9c31f66fed4b27c7281450d457623d660534b22845553024c956eb6b5" Feb 17 21:51:59 crc kubenswrapper[4793]: E0217 21:51:59.539248 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:51:59 crc kubenswrapper[4793]: I0217 21:51:59.549056 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a04d13ca-9f54-4f0b-9d23-d16984e7d7e2" path="/var/lib/kubelet/pods/a04d13ca-9f54-4f0b-9d23-d16984e7d7e2/volumes" Feb 17 21:52:12 crc kubenswrapper[4793]: I0217 21:52:12.538315 4793 scope.go:117] "RemoveContainer" containerID="84191ee9c31f66fed4b27c7281450d457623d660534b22845553024c956eb6b5" Feb 17 21:52:12 crc kubenswrapper[4793]: E0217 21:52:12.539079 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:52:27 crc kubenswrapper[4793]: 
I0217 21:52:27.539199 4793 scope.go:117] "RemoveContainer" containerID="84191ee9c31f66fed4b27c7281450d457623d660534b22845553024c956eb6b5" Feb 17 21:52:27 crc kubenswrapper[4793]: E0217 21:52:27.540003 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:52:39 crc kubenswrapper[4793]: I0217 21:52:39.539471 4793 scope.go:117] "RemoveContainer" containerID="84191ee9c31f66fed4b27c7281450d457623d660534b22845553024c956eb6b5" Feb 17 21:52:39 crc kubenswrapper[4793]: E0217 21:52:39.540193 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:52:50 crc kubenswrapper[4793]: I0217 21:52:50.102319 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 21:52:50 crc kubenswrapper[4793]: I0217 21:52:50.103059 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 21:52:53 crc kubenswrapper[4793]: I0217 21:52:53.153464 4793 scope.go:117] "RemoveContainer" 
containerID="84191ee9c31f66fed4b27c7281450d457623d660534b22845553024c956eb6b5" Feb 17 21:52:53 crc kubenswrapper[4793]: E0217 21:52:53.159330 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:53:06 crc kubenswrapper[4793]: I0217 21:53:06.539138 4793 scope.go:117] "RemoveContainer" containerID="84191ee9c31f66fed4b27c7281450d457623d660534b22845553024c956eb6b5" Feb 17 21:53:06 crc kubenswrapper[4793]: E0217 21:53:06.540287 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:53:20 crc kubenswrapper[4793]: I0217 21:53:20.111041 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 21:53:20 crc kubenswrapper[4793]: I0217 21:53:20.111938 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 21:53:21 crc kubenswrapper[4793]: I0217 21:53:21.540090 4793 scope.go:117] "RemoveContainer" 
containerID="84191ee9c31f66fed4b27c7281450d457623d660534b22845553024c956eb6b5" Feb 17 21:53:21 crc kubenswrapper[4793]: E0217 21:53:21.541062 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:53:34 crc kubenswrapper[4793]: I0217 21:53:34.539820 4793 scope.go:117] "RemoveContainer" containerID="84191ee9c31f66fed4b27c7281450d457623d660534b22845553024c956eb6b5" Feb 17 21:53:34 crc kubenswrapper[4793]: E0217 21:53:34.540551 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:53:49 crc kubenswrapper[4793]: I0217 21:53:49.548519 4793 scope.go:117] "RemoveContainer" containerID="84191ee9c31f66fed4b27c7281450d457623d660534b22845553024c956eb6b5" Feb 17 21:53:49 crc kubenswrapper[4793]: E0217 21:53:49.549448 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:53:50 crc kubenswrapper[4793]: I0217 21:53:50.102133 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Feb 17 21:53:50 crc kubenswrapper[4793]: I0217 21:53:50.102261 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 21:53:50 crc kubenswrapper[4793]: I0217 21:53:50.102374 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" Feb 17 21:53:50 crc kubenswrapper[4793]: I0217 21:53:50.103387 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f49ab2db045f279d9f7309c213173f2962c246236ace994df6cafec80f8b10bd"} pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 21:53:50 crc kubenswrapper[4793]: I0217 21:53:50.103522 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" containerID="cri-o://f49ab2db045f279d9f7309c213173f2962c246236ace994df6cafec80f8b10bd" gracePeriod=600 Feb 17 21:53:50 crc kubenswrapper[4793]: E0217 21:53:50.258219 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:53:50 crc kubenswrapper[4793]: 
I0217 21:53:50.640677 4793 generic.go:334] "Generic (PLEG): container finished" podID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerID="f49ab2db045f279d9f7309c213173f2962c246236ace994df6cafec80f8b10bd" exitCode=0 Feb 17 21:53:50 crc kubenswrapper[4793]: I0217 21:53:50.640721 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerDied","Data":"f49ab2db045f279d9f7309c213173f2962c246236ace994df6cafec80f8b10bd"} Feb 17 21:53:50 crc kubenswrapper[4793]: I0217 21:53:50.640823 4793 scope.go:117] "RemoveContainer" containerID="061214a44e3d256b369b9ba25b7f8b854564eeba298c45a9ca3074abd411552b" Feb 17 21:53:50 crc kubenswrapper[4793]: I0217 21:53:50.641479 4793 scope.go:117] "RemoveContainer" containerID="f49ab2db045f279d9f7309c213173f2962c246236ace994df6cafec80f8b10bd" Feb 17 21:53:50 crc kubenswrapper[4793]: E0217 21:53:50.641782 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:54:02 crc kubenswrapper[4793]: I0217 21:54:02.547592 4793 scope.go:117] "RemoveContainer" containerID="f49ab2db045f279d9f7309c213173f2962c246236ace994df6cafec80f8b10bd" Feb 17 21:54:02 crc kubenswrapper[4793]: E0217 21:54:02.548474 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 21:54:04 crc kubenswrapper[4793]: I0217 21:54:04.539125 4793 scope.go:117] "RemoveContainer" containerID="84191ee9c31f66fed4b27c7281450d457623d660534b22845553024c956eb6b5"
Feb 17 21:54:04 crc kubenswrapper[4793]: E0217 21:54:04.539846 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:54:16 crc kubenswrapper[4793]: I0217 21:54:16.539622 4793 scope.go:117] "RemoveContainer" containerID="84191ee9c31f66fed4b27c7281450d457623d660534b22845553024c956eb6b5"
Feb 17 21:54:16 crc kubenswrapper[4793]: E0217 21:54:16.540913 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:54:17 crc kubenswrapper[4793]: I0217 21:54:17.539021 4793 scope.go:117] "RemoveContainer" containerID="f49ab2db045f279d9f7309c213173f2962c246236ace994df6cafec80f8b10bd"
Feb 17 21:54:17 crc kubenswrapper[4793]: E0217 21:54:17.539516 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 21:54:30 crc kubenswrapper[4793]: I0217 21:54:30.538765 4793 scope.go:117] "RemoveContainer" containerID="f49ab2db045f279d9f7309c213173f2962c246236ace994df6cafec80f8b10bd"
Feb 17 21:54:30 crc kubenswrapper[4793]: E0217 21:54:30.540069 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 21:54:31 crc kubenswrapper[4793]: I0217 21:54:31.539508 4793 scope.go:117] "RemoveContainer" containerID="84191ee9c31f66fed4b27c7281450d457623d660534b22845553024c956eb6b5"
Feb 17 21:54:31 crc kubenswrapper[4793]: E0217 21:54:31.540085 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:54:44 crc kubenswrapper[4793]: I0217 21:54:44.538791 4793 scope.go:117] "RemoveContainer" containerID="84191ee9c31f66fed4b27c7281450d457623d660534b22845553024c956eb6b5"
Feb 17 21:54:44 crc kubenswrapper[4793]: E0217 21:54:44.539580 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:54:45 crc kubenswrapper[4793]: I0217 21:54:45.574883 4793 scope.go:117] "RemoveContainer" containerID="f49ab2db045f279d9f7309c213173f2962c246236ace994df6cafec80f8b10bd"
Feb 17 21:54:45 crc kubenswrapper[4793]: E0217 21:54:45.575775 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 21:54:56 crc kubenswrapper[4793]: I0217 21:54:56.540806 4793 scope.go:117] "RemoveContainer" containerID="f49ab2db045f279d9f7309c213173f2962c246236ace994df6cafec80f8b10bd"
Feb 17 21:54:56 crc kubenswrapper[4793]: E0217 21:54:56.541957 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 21:54:59 crc kubenswrapper[4793]: I0217 21:54:59.540657 4793 scope.go:117] "RemoveContainer" containerID="84191ee9c31f66fed4b27c7281450d457623d660534b22845553024c956eb6b5"
Feb 17 21:54:59 crc kubenswrapper[4793]: E0217 21:54:59.544759 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:55:07 crc kubenswrapper[4793]: I0217 21:55:07.539274 4793 scope.go:117] "RemoveContainer" containerID="f49ab2db045f279d9f7309c213173f2962c246236ace994df6cafec80f8b10bd"
Feb 17 21:55:07 crc kubenswrapper[4793]: E0217 21:55:07.540599 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 21:55:10 crc kubenswrapper[4793]: I0217 21:55:10.539611 4793 scope.go:117] "RemoveContainer" containerID="84191ee9c31f66fed4b27c7281450d457623d660534b22845553024c956eb6b5"
Feb 17 21:55:10 crc kubenswrapper[4793]: E0217 21:55:10.540235 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:55:22 crc kubenswrapper[4793]: I0217 21:55:22.538961 4793 scope.go:117] "RemoveContainer" containerID="84191ee9c31f66fed4b27c7281450d457623d660534b22845553024c956eb6b5"
Feb 17 21:55:22 crc kubenswrapper[4793]: I0217 21:55:22.539753 4793 scope.go:117] "RemoveContainer" containerID="f49ab2db045f279d9f7309c213173f2962c246236ace994df6cafec80f8b10bd"
Feb 17 21:55:22 crc kubenswrapper[4793]: E0217 21:55:22.539780 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:55:22 crc kubenswrapper[4793]: E0217 21:55:22.540076 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 21:55:36 crc kubenswrapper[4793]: I0217 21:55:36.543207 4793 scope.go:117] "RemoveContainer" containerID="84191ee9c31f66fed4b27c7281450d457623d660534b22845553024c956eb6b5"
Feb 17 21:55:36 crc kubenswrapper[4793]: E0217 21:55:36.544349 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:55:37 crc kubenswrapper[4793]: I0217 21:55:37.538301 4793 scope.go:117] "RemoveContainer" containerID="f49ab2db045f279d9f7309c213173f2962c246236ace994df6cafec80f8b10bd"
Feb 17 21:55:37 crc kubenswrapper[4793]: E0217 21:55:37.538825 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 21:55:48 crc kubenswrapper[4793]: I0217 21:55:48.539247 4793 scope.go:117] "RemoveContainer" containerID="84191ee9c31f66fed4b27c7281450d457623d660534b22845553024c956eb6b5"
Feb 17 21:55:48 crc kubenswrapper[4793]: E0217 21:55:48.540170 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:55:51 crc kubenswrapper[4793]: I0217 21:55:51.539995 4793 scope.go:117] "RemoveContainer" containerID="f49ab2db045f279d9f7309c213173f2962c246236ace994df6cafec80f8b10bd"
Feb 17 21:55:51 crc kubenswrapper[4793]: E0217 21:55:51.541202 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 21:55:59 crc kubenswrapper[4793]: I0217 21:55:59.539676 4793 scope.go:117] "RemoveContainer" containerID="84191ee9c31f66fed4b27c7281450d457623d660534b22845553024c956eb6b5"
Feb 17 21:55:59 crc kubenswrapper[4793]: E0217 21:55:59.540824 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:56:06 crc kubenswrapper[4793]: I0217 21:56:06.539351 4793 scope.go:117] "RemoveContainer" containerID="f49ab2db045f279d9f7309c213173f2962c246236ace994df6cafec80f8b10bd"
Feb 17 21:56:06 crc kubenswrapper[4793]: E0217 21:56:06.540606 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 21:56:10 crc kubenswrapper[4793]: I0217 21:56:10.539565 4793 scope.go:117] "RemoveContainer" containerID="84191ee9c31f66fed4b27c7281450d457623d660534b22845553024c956eb6b5"
Feb 17 21:56:10 crc kubenswrapper[4793]: E0217 21:56:10.541123 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:56:21 crc kubenswrapper[4793]: I0217 21:56:21.538317 4793 scope.go:117] "RemoveContainer" containerID="f49ab2db045f279d9f7309c213173f2962c246236ace994df6cafec80f8b10bd"
Feb 17 21:56:21 crc kubenswrapper[4793]: E0217 21:56:21.540260 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 21:56:22 crc kubenswrapper[4793]: I0217 21:56:22.539194 4793 scope.go:117] "RemoveContainer" containerID="84191ee9c31f66fed4b27c7281450d457623d660534b22845553024c956eb6b5"
Feb 17 21:56:23 crc kubenswrapper[4793]: I0217 21:56:23.399209 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerStarted","Data":"919a932fb17bc79b9327fd155734cadab75426a82e9e60728efeb959e2b364bc"}
Feb 17 21:56:25 crc kubenswrapper[4793]: I0217 21:56:25.420126 4793 generic.go:334] "Generic (PLEG): container finished" podID="02d26164-0fa4-4020-9224-b7760a490987" containerID="919a932fb17bc79b9327fd155734cadab75426a82e9e60728efeb959e2b364bc" exitCode=1
Feb 17 21:56:25 crc kubenswrapper[4793]: I0217 21:56:25.420156 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerDied","Data":"919a932fb17bc79b9327fd155734cadab75426a82e9e60728efeb959e2b364bc"}
Feb 17 21:56:25 crc kubenswrapper[4793]: I0217 21:56:25.420492 4793 scope.go:117] "RemoveContainer" containerID="84191ee9c31f66fed4b27c7281450d457623d660534b22845553024c956eb6b5"
Feb 17 21:56:25 crc kubenswrapper[4793]: I0217 21:56:25.421316 4793 scope.go:117] "RemoveContainer" containerID="919a932fb17bc79b9327fd155734cadab75426a82e9e60728efeb959e2b364bc"
Feb 17 21:56:25 crc kubenswrapper[4793]: E0217 21:56:25.421614 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:56:25 crc kubenswrapper[4793]: I0217 21:56:25.596199 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-applier-0"
Feb 17 21:56:25 crc kubenswrapper[4793]: I0217 21:56:25.596586 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0"
Feb 17 21:56:25 crc kubenswrapper[4793]: I0217 21:56:25.596615 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0"
Feb 17 21:56:25 crc kubenswrapper[4793]: I0217 21:56:25.596635 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0"
Feb 17 21:56:26 crc kubenswrapper[4793]: I0217 21:56:26.435176 4793 scope.go:117] "RemoveContainer" containerID="919a932fb17bc79b9327fd155734cadab75426a82e9e60728efeb959e2b364bc"
Feb 17 21:56:26 crc kubenswrapper[4793]: E0217 21:56:26.435601 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:56:35 crc kubenswrapper[4793]: I0217 21:56:35.551672 4793 scope.go:117] "RemoveContainer" containerID="f49ab2db045f279d9f7309c213173f2962c246236ace994df6cafec80f8b10bd"
Feb 17 21:56:35 crc kubenswrapper[4793]: E0217 21:56:35.552433 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 21:56:39 crc kubenswrapper[4793]: I0217 21:56:39.539395 4793 scope.go:117] "RemoveContainer" containerID="919a932fb17bc79b9327fd155734cadab75426a82e9e60728efeb959e2b364bc"
Feb 17 21:56:39 crc kubenswrapper[4793]: E0217 21:56:39.540280 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:56:46 crc kubenswrapper[4793]: I0217 21:56:46.889312 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-skl7k"]
Feb 17 21:56:46 crc kubenswrapper[4793]: E0217 21:56:46.890236 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a04d13ca-9f54-4f0b-9d23-d16984e7d7e2" containerName="extract-content"
Feb 17 21:56:46 crc kubenswrapper[4793]: I0217 21:56:46.890250 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="a04d13ca-9f54-4f0b-9d23-d16984e7d7e2" containerName="extract-content"
Feb 17 21:56:46 crc kubenswrapper[4793]: E0217 21:56:46.890265 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00" containerName="registry-server"
Feb 17 21:56:46 crc kubenswrapper[4793]: I0217 21:56:46.890271 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00" containerName="registry-server"
Feb 17 21:56:46 crc kubenswrapper[4793]: E0217 21:56:46.890286 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00" containerName="extract-content"
Feb 17 21:56:46 crc kubenswrapper[4793]: I0217 21:56:46.890292 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00" containerName="extract-content"
Feb 17 21:56:46 crc kubenswrapper[4793]: E0217 21:56:46.890302 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00" containerName="extract-utilities"
Feb 17 21:56:46 crc kubenswrapper[4793]: I0217 21:56:46.890307 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00" containerName="extract-utilities"
Feb 17 21:56:46 crc kubenswrapper[4793]: E0217 21:56:46.890322 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a04d13ca-9f54-4f0b-9d23-d16984e7d7e2" containerName="registry-server"
Feb 17 21:56:46 crc kubenswrapper[4793]: I0217 21:56:46.890329 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="a04d13ca-9f54-4f0b-9d23-d16984e7d7e2" containerName="registry-server"
Feb 17 21:56:46 crc kubenswrapper[4793]: E0217 21:56:46.890344 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a04d13ca-9f54-4f0b-9d23-d16984e7d7e2" containerName="extract-utilities"
Feb 17 21:56:46 crc kubenswrapper[4793]: I0217 21:56:46.890350 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="a04d13ca-9f54-4f0b-9d23-d16984e7d7e2" containerName="extract-utilities"
Feb 17 21:56:46 crc kubenswrapper[4793]: I0217 21:56:46.890539 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="97a0ae80-bcc7-4057-9f8e-f7f3f1ab6c00" containerName="registry-server"
Feb 17 21:56:46 crc kubenswrapper[4793]: I0217 21:56:46.890558 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="a04d13ca-9f54-4f0b-9d23-d16984e7d7e2" containerName="registry-server"
Feb 17 21:56:46 crc kubenswrapper[4793]: I0217 21:56:46.892001 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-skl7k"
Feb 17 21:56:46 crc kubenswrapper[4793]: I0217 21:56:46.902543 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-skl7k"]
Feb 17 21:56:47 crc kubenswrapper[4793]: I0217 21:56:47.034401 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9320fc5d-ca24-4949-ba94-e942f697b634-catalog-content\") pod \"redhat-marketplace-skl7k\" (UID: \"9320fc5d-ca24-4949-ba94-e942f697b634\") " pod="openshift-marketplace/redhat-marketplace-skl7k"
Feb 17 21:56:47 crc kubenswrapper[4793]: I0217 21:56:47.034705 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4hqh\" (UniqueName: \"kubernetes.io/projected/9320fc5d-ca24-4949-ba94-e942f697b634-kube-api-access-v4hqh\") pod \"redhat-marketplace-skl7k\" (UID: \"9320fc5d-ca24-4949-ba94-e942f697b634\") " pod="openshift-marketplace/redhat-marketplace-skl7k"
Feb 17 21:56:47 crc kubenswrapper[4793]: I0217 21:56:47.034941 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9320fc5d-ca24-4949-ba94-e942f697b634-utilities\") pod \"redhat-marketplace-skl7k\" (UID: \"9320fc5d-ca24-4949-ba94-e942f697b634\") " pod="openshift-marketplace/redhat-marketplace-skl7k"
Feb 17 21:56:47 crc kubenswrapper[4793]: I0217 21:56:47.137263 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9320fc5d-ca24-4949-ba94-e942f697b634-utilities\") pod \"redhat-marketplace-skl7k\" (UID: \"9320fc5d-ca24-4949-ba94-e942f697b634\") " pod="openshift-marketplace/redhat-marketplace-skl7k"
Feb 17 21:56:47 crc kubenswrapper[4793]: I0217 21:56:47.137660 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9320fc5d-ca24-4949-ba94-e942f697b634-catalog-content\") pod \"redhat-marketplace-skl7k\" (UID: \"9320fc5d-ca24-4949-ba94-e942f697b634\") " pod="openshift-marketplace/redhat-marketplace-skl7k"
Feb 17 21:56:47 crc kubenswrapper[4793]: I0217 21:56:47.137824 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9320fc5d-ca24-4949-ba94-e942f697b634-utilities\") pod \"redhat-marketplace-skl7k\" (UID: \"9320fc5d-ca24-4949-ba94-e942f697b634\") " pod="openshift-marketplace/redhat-marketplace-skl7k"
Feb 17 21:56:47 crc kubenswrapper[4793]: I0217 21:56:47.137834 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4hqh\" (UniqueName: \"kubernetes.io/projected/9320fc5d-ca24-4949-ba94-e942f697b634-kube-api-access-v4hqh\") pod \"redhat-marketplace-skl7k\" (UID: \"9320fc5d-ca24-4949-ba94-e942f697b634\") " pod="openshift-marketplace/redhat-marketplace-skl7k"
Feb 17 21:56:47 crc kubenswrapper[4793]: I0217 21:56:47.138079 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9320fc5d-ca24-4949-ba94-e942f697b634-catalog-content\") pod \"redhat-marketplace-skl7k\" (UID: \"9320fc5d-ca24-4949-ba94-e942f697b634\") " pod="openshift-marketplace/redhat-marketplace-skl7k"
Feb 17 21:56:47 crc kubenswrapper[4793]: I0217 21:56:47.160015 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4hqh\" (UniqueName: \"kubernetes.io/projected/9320fc5d-ca24-4949-ba94-e942f697b634-kube-api-access-v4hqh\") pod \"redhat-marketplace-skl7k\" (UID: \"9320fc5d-ca24-4949-ba94-e942f697b634\") " pod="openshift-marketplace/redhat-marketplace-skl7k"
Feb 17 21:56:47 crc kubenswrapper[4793]: I0217 21:56:47.208467 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-skl7k"
Feb 17 21:56:47 crc kubenswrapper[4793]: I0217 21:56:47.539369 4793 scope.go:117] "RemoveContainer" containerID="f49ab2db045f279d9f7309c213173f2962c246236ace994df6cafec80f8b10bd"
Feb 17 21:56:47 crc kubenswrapper[4793]: E0217 21:56:47.540262 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 21:56:47 crc kubenswrapper[4793]: I0217 21:56:47.652202 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-skl7k"]
Feb 17 21:56:48 crc kubenswrapper[4793]: I0217 21:56:48.669036 4793 generic.go:334] "Generic (PLEG): container finished" podID="9320fc5d-ca24-4949-ba94-e942f697b634" containerID="e30bb126218972e4bdaf1af4b7de08a1a6ece155d55baa09377cb5b01f435d0a" exitCode=0
Feb 17 21:56:48 crc kubenswrapper[4793]: I0217 21:56:48.669351 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-skl7k" event={"ID":"9320fc5d-ca24-4949-ba94-e942f697b634","Type":"ContainerDied","Data":"e30bb126218972e4bdaf1af4b7de08a1a6ece155d55baa09377cb5b01f435d0a"}
Feb 17 21:56:48 crc kubenswrapper[4793]: I0217 21:56:48.669527 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-skl7k" event={"ID":"9320fc5d-ca24-4949-ba94-e942f697b634","Type":"ContainerStarted","Data":"402e0acd66e59866fe70b58143120d296784ad48e6813637a1b80aeeaa2a0a49"}
Feb 17 21:56:48 crc kubenswrapper[4793]: I0217 21:56:48.673110 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 17 21:56:49 crc kubenswrapper[4793]: I0217 21:56:49.687834 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-skl7k" event={"ID":"9320fc5d-ca24-4949-ba94-e942f697b634","Type":"ContainerStarted","Data":"408f16dec6413cad36a18ecd6fdba91e4907a38bdbe804435a8b2e38307206de"}
Feb 17 21:56:50 crc kubenswrapper[4793]: I0217 21:56:50.717076 4793 generic.go:334] "Generic (PLEG): container finished" podID="9320fc5d-ca24-4949-ba94-e942f697b634" containerID="408f16dec6413cad36a18ecd6fdba91e4907a38bdbe804435a8b2e38307206de" exitCode=0
Feb 17 21:56:50 crc kubenswrapper[4793]: I0217 21:56:50.717877 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-skl7k" event={"ID":"9320fc5d-ca24-4949-ba94-e942f697b634","Type":"ContainerDied","Data":"408f16dec6413cad36a18ecd6fdba91e4907a38bdbe804435a8b2e38307206de"}
Feb 17 21:56:51 crc kubenswrapper[4793]: I0217 21:56:51.538995 4793 scope.go:117] "RemoveContainer" containerID="919a932fb17bc79b9327fd155734cadab75426a82e9e60728efeb959e2b364bc"
Feb 17 21:56:51 crc kubenswrapper[4793]: E0217 21:56:51.539539 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:56:51 crc kubenswrapper[4793]: I0217 21:56:51.730075 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-skl7k" event={"ID":"9320fc5d-ca24-4949-ba94-e942f697b634","Type":"ContainerStarted","Data":"7c4d335bab52cd16981c0883c7e84d20409fe9b97e0d4d7e52f1ac131d86bb27"}
Feb 17 21:56:51 crc kubenswrapper[4793]: I0217 21:56:51.752186 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-skl7k" podStartSLOduration=3.280017486 podStartE2EDuration="5.752162242s" podCreationTimestamp="2026-02-17 21:56:46 +0000 UTC" firstStartedPulling="2026-02-17 21:56:48.672576498 +0000 UTC m=+6483.964274849" lastFinishedPulling="2026-02-17 21:56:51.144721264 +0000 UTC m=+6486.436419605" observedRunningTime="2026-02-17 21:56:51.746938973 +0000 UTC m=+6487.038637304" watchObservedRunningTime="2026-02-17 21:56:51.752162242 +0000 UTC m=+6487.043860583"
Feb 17 21:56:57 crc kubenswrapper[4793]: I0217 21:56:57.209591 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-skl7k"
Feb 17 21:56:57 crc kubenswrapper[4793]: I0217 21:56:57.210422 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-skl7k"
Feb 17 21:56:57 crc kubenswrapper[4793]: I0217 21:56:57.275791 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-skl7k"
Feb 17 21:56:57 crc kubenswrapper[4793]: I0217 21:56:57.879333 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-skl7k"
Feb 17 21:57:00 crc kubenswrapper[4793]: I0217 21:57:00.539324 4793 scope.go:117] "RemoveContainer" containerID="f49ab2db045f279d9f7309c213173f2962c246236ace994df6cafec80f8b10bd"
Feb 17 21:57:00 crc kubenswrapper[4793]: E0217 21:57:00.539909 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 21:57:00 crc kubenswrapper[4793]: I0217 21:57:00.682504 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-skl7k"]
Feb 17 21:57:00 crc kubenswrapper[4793]: I0217 21:57:00.683465 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-skl7k" podUID="9320fc5d-ca24-4949-ba94-e942f697b634" containerName="registry-server" containerID="cri-o://7c4d335bab52cd16981c0883c7e84d20409fe9b97e0d4d7e52f1ac131d86bb27" gracePeriod=2
Feb 17 21:57:00 crc kubenswrapper[4793]: I0217 21:57:00.868169 4793 generic.go:334] "Generic (PLEG): container finished" podID="9320fc5d-ca24-4949-ba94-e942f697b634" containerID="7c4d335bab52cd16981c0883c7e84d20409fe9b97e0d4d7e52f1ac131d86bb27" exitCode=0
Feb 17 21:57:00 crc kubenswrapper[4793]: I0217 21:57:00.868421 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-skl7k" event={"ID":"9320fc5d-ca24-4949-ba94-e942f697b634","Type":"ContainerDied","Data":"7c4d335bab52cd16981c0883c7e84d20409fe9b97e0d4d7e52f1ac131d86bb27"}
Feb 17 21:57:01 crc kubenswrapper[4793]: I0217 21:57:01.324724 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-skl7k"
Feb 17 21:57:01 crc kubenswrapper[4793]: I0217 21:57:01.485368 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9320fc5d-ca24-4949-ba94-e942f697b634-utilities\") pod \"9320fc5d-ca24-4949-ba94-e942f697b634\" (UID: \"9320fc5d-ca24-4949-ba94-e942f697b634\") "
Feb 17 21:57:01 crc kubenswrapper[4793]: I0217 21:57:01.485650 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4hqh\" (UniqueName: \"kubernetes.io/projected/9320fc5d-ca24-4949-ba94-e942f697b634-kube-api-access-v4hqh\") pod \"9320fc5d-ca24-4949-ba94-e942f697b634\" (UID: \"9320fc5d-ca24-4949-ba94-e942f697b634\") "
Feb 17 21:57:01 crc kubenswrapper[4793]: I0217 21:57:01.486002 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9320fc5d-ca24-4949-ba94-e942f697b634-catalog-content\") pod \"9320fc5d-ca24-4949-ba94-e942f697b634\" (UID: \"9320fc5d-ca24-4949-ba94-e942f697b634\") "
Feb 17 21:57:01 crc kubenswrapper[4793]: I0217 21:57:01.486308 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9320fc5d-ca24-4949-ba94-e942f697b634-utilities" (OuterVolumeSpecName: "utilities") pod "9320fc5d-ca24-4949-ba94-e942f697b634" (UID: "9320fc5d-ca24-4949-ba94-e942f697b634"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 21:57:01 crc kubenswrapper[4793]: I0217 21:57:01.486965 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9320fc5d-ca24-4949-ba94-e942f697b634-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 21:57:01 crc kubenswrapper[4793]: I0217 21:57:01.495880 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9320fc5d-ca24-4949-ba94-e942f697b634-kube-api-access-v4hqh" (OuterVolumeSpecName: "kube-api-access-v4hqh") pod "9320fc5d-ca24-4949-ba94-e942f697b634" (UID: "9320fc5d-ca24-4949-ba94-e942f697b634"). InnerVolumeSpecName "kube-api-access-v4hqh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 21:57:01 crc kubenswrapper[4793]: I0217 21:57:01.517950 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9320fc5d-ca24-4949-ba94-e942f697b634-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9320fc5d-ca24-4949-ba94-e942f697b634" (UID: "9320fc5d-ca24-4949-ba94-e942f697b634"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 21:57:01 crc kubenswrapper[4793]: I0217 21:57:01.589387 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9320fc5d-ca24-4949-ba94-e942f697b634-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 21:57:01 crc kubenswrapper[4793]: I0217 21:57:01.589666 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4hqh\" (UniqueName: \"kubernetes.io/projected/9320fc5d-ca24-4949-ba94-e942f697b634-kube-api-access-v4hqh\") on node \"crc\" DevicePath \"\""
Feb 17 21:57:01 crc kubenswrapper[4793]: I0217 21:57:01.878544 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-skl7k" event={"ID":"9320fc5d-ca24-4949-ba94-e942f697b634","Type":"ContainerDied","Data":"402e0acd66e59866fe70b58143120d296784ad48e6813637a1b80aeeaa2a0a49"}
Feb 17 21:57:01 crc kubenswrapper[4793]: I0217 21:57:01.878601 4793 scope.go:117] "RemoveContainer" containerID="7c4d335bab52cd16981c0883c7e84d20409fe9b97e0d4d7e52f1ac131d86bb27"
Feb 17 21:57:01 crc kubenswrapper[4793]: I0217 21:57:01.878634 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-skl7k"
Feb 17 21:57:01 crc kubenswrapper[4793]: I0217 21:57:01.907024 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-skl7k"]
Feb 17 21:57:01 crc kubenswrapper[4793]: I0217 21:57:01.912799 4793 scope.go:117] "RemoveContainer" containerID="408f16dec6413cad36a18ecd6fdba91e4907a38bdbe804435a8b2e38307206de"
Feb 17 21:57:01 crc kubenswrapper[4793]: I0217 21:57:01.920601 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-skl7k"]
Feb 17 21:57:01 crc kubenswrapper[4793]: I0217 21:57:01.937726 4793 scope.go:117] "RemoveContainer" containerID="e30bb126218972e4bdaf1af4b7de08a1a6ece155d55baa09377cb5b01f435d0a"
Feb 17 21:57:03 crc kubenswrapper[4793]: I0217 21:57:03.584500 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9320fc5d-ca24-4949-ba94-e942f697b634" path="/var/lib/kubelet/pods/9320fc5d-ca24-4949-ba94-e942f697b634/volumes"
Feb 17 21:57:05 crc kubenswrapper[4793]: I0217 21:57:05.546248 4793 scope.go:117] "RemoveContainer" containerID="919a932fb17bc79b9327fd155734cadab75426a82e9e60728efeb959e2b364bc"
Feb 17 21:57:05 crc kubenswrapper[4793]: E0217 21:57:05.546856 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:57:13 crc kubenswrapper[4793]: I0217 21:57:13.538943 4793 scope.go:117] "RemoveContainer" containerID="f49ab2db045f279d9f7309c213173f2962c246236ace994df6cafec80f8b10bd"
Feb 17 21:57:13 crc kubenswrapper[4793]: E0217 21:57:13.539970 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 21:57:20 crc kubenswrapper[4793]: I0217 21:57:20.538589 4793 scope.go:117] "RemoveContainer" containerID="919a932fb17bc79b9327fd155734cadab75426a82e9e60728efeb959e2b364bc"
Feb 17 21:57:20 crc kubenswrapper[4793]: E0217 21:57:20.539768 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:57:26 crc kubenswrapper[4793]: I0217 21:57:26.540007 4793 scope.go:117] "RemoveContainer" containerID="f49ab2db045f279d9f7309c213173f2962c246236ace994df6cafec80f8b10bd"
Feb 17 21:57:26 crc kubenswrapper[4793]: E0217 21:57:26.540917 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 21:57:35 crc kubenswrapper[4793]: I0217 21:57:35.546091 4793 scope.go:117] "RemoveContainer" containerID="919a932fb17bc79b9327fd155734cadab75426a82e9e60728efeb959e2b364bc"
Feb 17 21:57:35 crc kubenswrapper[4793]: E0217 21:57:35.547099 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:57:38 crc kubenswrapper[4793]: I0217 21:57:38.539653 4793 scope.go:117] "RemoveContainer" containerID="f49ab2db045f279d9f7309c213173f2962c246236ace994df6cafec80f8b10bd"
Feb 17 21:57:38 crc kubenswrapper[4793]: E0217 21:57:38.541961 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 21:57:48 crc kubenswrapper[4793]: I0217 21:57:48.538851 4793 scope.go:117] "RemoveContainer" containerID="919a932fb17bc79b9327fd155734cadab75426a82e9e60728efeb959e2b364bc"
Feb 17 21:57:48 crc kubenswrapper[4793]: E0217 21:57:48.541471 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 21:57:49 crc kubenswrapper[4793]: I0217 21:57:49.539449 4793 scope.go:117] "RemoveContainer" containerID="f49ab2db045f279d9f7309c213173f2962c246236ace994df6cafec80f8b10bd"
Feb 17 21:57:49 crc kubenswrapper[4793]: E0217 21:57:49.539849 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon
pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:58:03 crc kubenswrapper[4793]: I0217 21:58:03.539423 4793 scope.go:117] "RemoveContainer" containerID="f49ab2db045f279d9f7309c213173f2962c246236ace994df6cafec80f8b10bd" Feb 17 21:58:03 crc kubenswrapper[4793]: I0217 21:58:03.540229 4793 scope.go:117] "RemoveContainer" containerID="919a932fb17bc79b9327fd155734cadab75426a82e9e60728efeb959e2b364bc" Feb 17 21:58:03 crc kubenswrapper[4793]: E0217 21:58:03.540509 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:58:03 crc kubenswrapper[4793]: E0217 21:58:03.540727 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:58:18 crc kubenswrapper[4793]: I0217 21:58:18.539502 4793 scope.go:117] "RemoveContainer" containerID="919a932fb17bc79b9327fd155734cadab75426a82e9e60728efeb959e2b364bc" Feb 17 21:58:18 crc kubenswrapper[4793]: I0217 21:58:18.540326 4793 scope.go:117] "RemoveContainer" containerID="f49ab2db045f279d9f7309c213173f2962c246236ace994df6cafec80f8b10bd" Feb 17 21:58:18 crc kubenswrapper[4793]: E0217 21:58:18.540847 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:58:18 crc kubenswrapper[4793]: E0217 21:58:18.541010 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:58:30 crc kubenswrapper[4793]: I0217 21:58:30.539645 4793 scope.go:117] "RemoveContainer" containerID="f49ab2db045f279d9f7309c213173f2962c246236ace994df6cafec80f8b10bd" Feb 17 21:58:30 crc kubenswrapper[4793]: E0217 21:58:30.541335 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:58:33 crc kubenswrapper[4793]: I0217 21:58:33.538659 4793 scope.go:117] "RemoveContainer" containerID="919a932fb17bc79b9327fd155734cadab75426a82e9e60728efeb959e2b364bc" Feb 17 21:58:33 crc kubenswrapper[4793]: E0217 21:58:33.539430 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" 
podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:58:43 crc kubenswrapper[4793]: I0217 21:58:43.540349 4793 scope.go:117] "RemoveContainer" containerID="f49ab2db045f279d9f7309c213173f2962c246236ace994df6cafec80f8b10bd" Feb 17 21:58:43 crc kubenswrapper[4793]: E0217 21:58:43.541584 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 21:58:44 crc kubenswrapper[4793]: I0217 21:58:44.538566 4793 scope.go:117] "RemoveContainer" containerID="919a932fb17bc79b9327fd155734cadab75426a82e9e60728efeb959e2b364bc" Feb 17 21:58:44 crc kubenswrapper[4793]: E0217 21:58:44.539121 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:58:55 crc kubenswrapper[4793]: I0217 21:58:55.550462 4793 scope.go:117] "RemoveContainer" containerID="919a932fb17bc79b9327fd155734cadab75426a82e9e60728efeb959e2b364bc" Feb 17 21:58:55 crc kubenswrapper[4793]: E0217 21:58:55.551361 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:58:58 crc kubenswrapper[4793]: I0217 21:58:58.539780 4793 scope.go:117] 
"RemoveContainer" containerID="f49ab2db045f279d9f7309c213173f2962c246236ace994df6cafec80f8b10bd" Feb 17 21:58:59 crc kubenswrapper[4793]: I0217 21:58:59.405980 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerStarted","Data":"7855af0e308564a81edffdb4a3f30cd83142c30e1fb209a1cc316f45f96def57"} Feb 17 21:59:08 crc kubenswrapper[4793]: I0217 21:59:08.539476 4793 scope.go:117] "RemoveContainer" containerID="919a932fb17bc79b9327fd155734cadab75426a82e9e60728efeb959e2b364bc" Feb 17 21:59:08 crc kubenswrapper[4793]: E0217 21:59:08.540384 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:59:19 crc kubenswrapper[4793]: I0217 21:59:19.539772 4793 scope.go:117] "RemoveContainer" containerID="919a932fb17bc79b9327fd155734cadab75426a82e9e60728efeb959e2b364bc" Feb 17 21:59:19 crc kubenswrapper[4793]: E0217 21:59:19.540865 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:59:31 crc kubenswrapper[4793]: I0217 21:59:31.539782 4793 scope.go:117] "RemoveContainer" containerID="919a932fb17bc79b9327fd155734cadab75426a82e9e60728efeb959e2b364bc" Feb 17 21:59:31 crc kubenswrapper[4793]: E0217 21:59:31.540836 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:59:43 crc kubenswrapper[4793]: I0217 21:59:43.540617 4793 scope.go:117] "RemoveContainer" containerID="919a932fb17bc79b9327fd155734cadab75426a82e9e60728efeb959e2b364bc" Feb 17 21:59:43 crc kubenswrapper[4793]: E0217 21:59:43.541751 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 21:59:57 crc kubenswrapper[4793]: I0217 21:59:57.539302 4793 scope.go:117] "RemoveContainer" containerID="919a932fb17bc79b9327fd155734cadab75426a82e9e60728efeb959e2b364bc" Feb 17 21:59:57 crc kubenswrapper[4793]: E0217 21:59:57.540454 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:00:00 crc kubenswrapper[4793]: I0217 22:00:00.194294 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522760-p9f5s"] Feb 17 22:00:00 crc kubenswrapper[4793]: E0217 22:00:00.195516 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9320fc5d-ca24-4949-ba94-e942f697b634" containerName="extract-utilities" Feb 17 22:00:00 crc kubenswrapper[4793]: I0217 22:00:00.195540 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="9320fc5d-ca24-4949-ba94-e942f697b634" containerName="extract-utilities" Feb 17 
22:00:00 crc kubenswrapper[4793]: E0217 22:00:00.195560 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9320fc5d-ca24-4949-ba94-e942f697b634" containerName="extract-content" Feb 17 22:00:00 crc kubenswrapper[4793]: I0217 22:00:00.195572 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="9320fc5d-ca24-4949-ba94-e942f697b634" containerName="extract-content" Feb 17 22:00:00 crc kubenswrapper[4793]: E0217 22:00:00.195591 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9320fc5d-ca24-4949-ba94-e942f697b634" containerName="registry-server" Feb 17 22:00:00 crc kubenswrapper[4793]: I0217 22:00:00.195602 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="9320fc5d-ca24-4949-ba94-e942f697b634" containerName="registry-server" Feb 17 22:00:00 crc kubenswrapper[4793]: I0217 22:00:00.195966 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="9320fc5d-ca24-4949-ba94-e942f697b634" containerName="registry-server" Feb 17 22:00:00 crc kubenswrapper[4793]: I0217 22:00:00.197137 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522760-p9f5s" Feb 17 22:00:00 crc kubenswrapper[4793]: I0217 22:00:00.199475 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 22:00:00 crc kubenswrapper[4793]: I0217 22:00:00.200457 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 22:00:00 crc kubenswrapper[4793]: I0217 22:00:00.204097 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522760-p9f5s"] Feb 17 22:00:00 crc kubenswrapper[4793]: I0217 22:00:00.284879 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm7mg\" (UniqueName: \"kubernetes.io/projected/2b9ee751-ab7e-454d-9b2c-5d87483286cb-kube-api-access-cm7mg\") pod \"collect-profiles-29522760-p9f5s\" (UID: \"2b9ee751-ab7e-454d-9b2c-5d87483286cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522760-p9f5s" Feb 17 22:00:00 crc kubenswrapper[4793]: I0217 22:00:00.284933 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b9ee751-ab7e-454d-9b2c-5d87483286cb-config-volume\") pod \"collect-profiles-29522760-p9f5s\" (UID: \"2b9ee751-ab7e-454d-9b2c-5d87483286cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522760-p9f5s" Feb 17 22:00:00 crc kubenswrapper[4793]: I0217 22:00:00.284963 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2b9ee751-ab7e-454d-9b2c-5d87483286cb-secret-volume\") pod \"collect-profiles-29522760-p9f5s\" (UID: \"2b9ee751-ab7e-454d-9b2c-5d87483286cb\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29522760-p9f5s" Feb 17 22:00:00 crc kubenswrapper[4793]: I0217 22:00:00.386164 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm7mg\" (UniqueName: \"kubernetes.io/projected/2b9ee751-ab7e-454d-9b2c-5d87483286cb-kube-api-access-cm7mg\") pod \"collect-profiles-29522760-p9f5s\" (UID: \"2b9ee751-ab7e-454d-9b2c-5d87483286cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522760-p9f5s" Feb 17 22:00:00 crc kubenswrapper[4793]: I0217 22:00:00.386228 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b9ee751-ab7e-454d-9b2c-5d87483286cb-config-volume\") pod \"collect-profiles-29522760-p9f5s\" (UID: \"2b9ee751-ab7e-454d-9b2c-5d87483286cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522760-p9f5s" Feb 17 22:00:00 crc kubenswrapper[4793]: I0217 22:00:00.386261 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2b9ee751-ab7e-454d-9b2c-5d87483286cb-secret-volume\") pod \"collect-profiles-29522760-p9f5s\" (UID: \"2b9ee751-ab7e-454d-9b2c-5d87483286cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522760-p9f5s" Feb 17 22:00:00 crc kubenswrapper[4793]: I0217 22:00:00.387141 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b9ee751-ab7e-454d-9b2c-5d87483286cb-config-volume\") pod \"collect-profiles-29522760-p9f5s\" (UID: \"2b9ee751-ab7e-454d-9b2c-5d87483286cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522760-p9f5s" Feb 17 22:00:00 crc kubenswrapper[4793]: I0217 22:00:00.393998 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/2b9ee751-ab7e-454d-9b2c-5d87483286cb-secret-volume\") pod \"collect-profiles-29522760-p9f5s\" (UID: \"2b9ee751-ab7e-454d-9b2c-5d87483286cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522760-p9f5s" Feb 17 22:00:00 crc kubenswrapper[4793]: I0217 22:00:00.405570 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm7mg\" (UniqueName: \"kubernetes.io/projected/2b9ee751-ab7e-454d-9b2c-5d87483286cb-kube-api-access-cm7mg\") pod \"collect-profiles-29522760-p9f5s\" (UID: \"2b9ee751-ab7e-454d-9b2c-5d87483286cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522760-p9f5s" Feb 17 22:00:00 crc kubenswrapper[4793]: I0217 22:00:00.536242 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522760-p9f5s" Feb 17 22:00:01 crc kubenswrapper[4793]: I0217 22:00:01.029845 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522760-p9f5s"] Feb 17 22:00:01 crc kubenswrapper[4793]: I0217 22:00:01.134369 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522760-p9f5s" event={"ID":"2b9ee751-ab7e-454d-9b2c-5d87483286cb","Type":"ContainerStarted","Data":"32c51c31388f12dd5145d7b607bd0296d1f59aa14a50d5b84b86dcf502cecde5"} Feb 17 22:00:02 crc kubenswrapper[4793]: I0217 22:00:02.148606 4793 generic.go:334] "Generic (PLEG): container finished" podID="2b9ee751-ab7e-454d-9b2c-5d87483286cb" containerID="4227f383a2b7df4a92c54e6b8a121533513b8f35ec7fbb83d73a6b12777f11ee" exitCode=0 Feb 17 22:00:02 crc kubenswrapper[4793]: I0217 22:00:02.148670 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522760-p9f5s" 
event={"ID":"2b9ee751-ab7e-454d-9b2c-5d87483286cb","Type":"ContainerDied","Data":"4227f383a2b7df4a92c54e6b8a121533513b8f35ec7fbb83d73a6b12777f11ee"} Feb 17 22:00:03 crc kubenswrapper[4793]: I0217 22:00:03.615534 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522760-p9f5s" Feb 17 22:00:03 crc kubenswrapper[4793]: I0217 22:00:03.708422 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cm7mg\" (UniqueName: \"kubernetes.io/projected/2b9ee751-ab7e-454d-9b2c-5d87483286cb-kube-api-access-cm7mg\") pod \"2b9ee751-ab7e-454d-9b2c-5d87483286cb\" (UID: \"2b9ee751-ab7e-454d-9b2c-5d87483286cb\") " Feb 17 22:00:03 crc kubenswrapper[4793]: I0217 22:00:03.708569 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b9ee751-ab7e-454d-9b2c-5d87483286cb-config-volume\") pod \"2b9ee751-ab7e-454d-9b2c-5d87483286cb\" (UID: \"2b9ee751-ab7e-454d-9b2c-5d87483286cb\") " Feb 17 22:00:03 crc kubenswrapper[4793]: I0217 22:00:03.708668 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2b9ee751-ab7e-454d-9b2c-5d87483286cb-secret-volume\") pod \"2b9ee751-ab7e-454d-9b2c-5d87483286cb\" (UID: \"2b9ee751-ab7e-454d-9b2c-5d87483286cb\") " Feb 17 22:00:03 crc kubenswrapper[4793]: I0217 22:00:03.709510 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b9ee751-ab7e-454d-9b2c-5d87483286cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "2b9ee751-ab7e-454d-9b2c-5d87483286cb" (UID: "2b9ee751-ab7e-454d-9b2c-5d87483286cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 22:00:03 crc kubenswrapper[4793]: I0217 22:00:03.716627 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b9ee751-ab7e-454d-9b2c-5d87483286cb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2b9ee751-ab7e-454d-9b2c-5d87483286cb" (UID: "2b9ee751-ab7e-454d-9b2c-5d87483286cb"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 22:00:03 crc kubenswrapper[4793]: I0217 22:00:03.732490 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b9ee751-ab7e-454d-9b2c-5d87483286cb-kube-api-access-cm7mg" (OuterVolumeSpecName: "kube-api-access-cm7mg") pod "2b9ee751-ab7e-454d-9b2c-5d87483286cb" (UID: "2b9ee751-ab7e-454d-9b2c-5d87483286cb"). InnerVolumeSpecName "kube-api-access-cm7mg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 22:00:03 crc kubenswrapper[4793]: I0217 22:00:03.810510 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cm7mg\" (UniqueName: \"kubernetes.io/projected/2b9ee751-ab7e-454d-9b2c-5d87483286cb-kube-api-access-cm7mg\") on node \"crc\" DevicePath \"\"" Feb 17 22:00:03 crc kubenswrapper[4793]: I0217 22:00:03.810541 4793 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b9ee751-ab7e-454d-9b2c-5d87483286cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 22:00:03 crc kubenswrapper[4793]: I0217 22:00:03.810550 4793 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2b9ee751-ab7e-454d-9b2c-5d87483286cb-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 22:00:04 crc kubenswrapper[4793]: I0217 22:00:04.179351 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522760-p9f5s" 
event={"ID":"2b9ee751-ab7e-454d-9b2c-5d87483286cb","Type":"ContainerDied","Data":"32c51c31388f12dd5145d7b607bd0296d1f59aa14a50d5b84b86dcf502cecde5"} Feb 17 22:00:04 crc kubenswrapper[4793]: I0217 22:00:04.179401 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32c51c31388f12dd5145d7b607bd0296d1f59aa14a50d5b84b86dcf502cecde5" Feb 17 22:00:04 crc kubenswrapper[4793]: I0217 22:00:04.179411 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522760-p9f5s" Feb 17 22:00:04 crc kubenswrapper[4793]: I0217 22:00:04.719836 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522715-cr7c5"] Feb 17 22:00:04 crc kubenswrapper[4793]: I0217 22:00:04.737221 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522715-cr7c5"] Feb 17 22:00:05 crc kubenswrapper[4793]: I0217 22:00:05.549893 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e79511b9-ebbe-407a-8b6e-8d41a4930281" path="/var/lib/kubelet/pods/e79511b9-ebbe-407a-8b6e-8d41a4930281/volumes" Feb 17 22:00:11 crc kubenswrapper[4793]: I0217 22:00:11.539963 4793 scope.go:117] "RemoveContainer" containerID="919a932fb17bc79b9327fd155734cadab75426a82e9e60728efeb959e2b364bc" Feb 17 22:00:11 crc kubenswrapper[4793]: E0217 22:00:11.540987 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:00:26 crc kubenswrapper[4793]: I0217 22:00:26.009493 4793 scope.go:117] "RemoveContainer" containerID="5676a22a4bc57fe10ad4902cae7411b8376d6c364757e7c6550f907e7a6d0a63" Feb 
17 22:00:26 crc kubenswrapper[4793]: I0217 22:00:26.541007 4793 scope.go:117] "RemoveContainer" containerID="919a932fb17bc79b9327fd155734cadab75426a82e9e60728efeb959e2b364bc" Feb 17 22:00:26 crc kubenswrapper[4793]: E0217 22:00:26.544537 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:00:37 crc kubenswrapper[4793]: I0217 22:00:37.538881 4793 scope.go:117] "RemoveContainer" containerID="919a932fb17bc79b9327fd155734cadab75426a82e9e60728efeb959e2b364bc" Feb 17 22:00:37 crc kubenswrapper[4793]: E0217 22:00:37.539922 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:00:48 crc kubenswrapper[4793]: I0217 22:00:48.539010 4793 scope.go:117] "RemoveContainer" containerID="919a932fb17bc79b9327fd155734cadab75426a82e9e60728efeb959e2b364bc" Feb 17 22:00:48 crc kubenswrapper[4793]: E0217 22:00:48.539945 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:01:00 crc kubenswrapper[4793]: I0217 22:01:00.146538 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29522761-9s9zm"] Feb 17 22:01:00 crc kubenswrapper[4793]: E0217 22:01:00.147504 
4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b9ee751-ab7e-454d-9b2c-5d87483286cb" containerName="collect-profiles"
Feb 17 22:01:00 crc kubenswrapper[4793]: I0217 22:01:00.147522 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b9ee751-ab7e-454d-9b2c-5d87483286cb" containerName="collect-profiles"
Feb 17 22:01:00 crc kubenswrapper[4793]: I0217 22:01:00.147803 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b9ee751-ab7e-454d-9b2c-5d87483286cb" containerName="collect-profiles"
Feb 17 22:01:00 crc kubenswrapper[4793]: I0217 22:01:00.148664 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29522761-9s9zm"
Feb 17 22:01:00 crc kubenswrapper[4793]: I0217 22:01:00.159393 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29522761-9s9zm"]
Feb 17 22:01:00 crc kubenswrapper[4793]: I0217 22:01:00.236868 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2d6ea04-be94-4620-b1a2-d24eaf65d7a1-config-data\") pod \"keystone-cron-29522761-9s9zm\" (UID: \"d2d6ea04-be94-4620-b1a2-d24eaf65d7a1\") " pod="openstack/keystone-cron-29522761-9s9zm"
Feb 17 22:01:00 crc kubenswrapper[4793]: I0217 22:01:00.236914 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d2d6ea04-be94-4620-b1a2-d24eaf65d7a1-fernet-keys\") pod \"keystone-cron-29522761-9s9zm\" (UID: \"d2d6ea04-be94-4620-b1a2-d24eaf65d7a1\") " pod="openstack/keystone-cron-29522761-9s9zm"
Feb 17 22:01:00 crc kubenswrapper[4793]: I0217 22:01:00.236969 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7z8s\" (UniqueName: \"kubernetes.io/projected/d2d6ea04-be94-4620-b1a2-d24eaf65d7a1-kube-api-access-x7z8s\") pod \"keystone-cron-29522761-9s9zm\" (UID: \"d2d6ea04-be94-4620-b1a2-d24eaf65d7a1\") " pod="openstack/keystone-cron-29522761-9s9zm"
Feb 17 22:01:00 crc kubenswrapper[4793]: I0217 22:01:00.237055 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2d6ea04-be94-4620-b1a2-d24eaf65d7a1-combined-ca-bundle\") pod \"keystone-cron-29522761-9s9zm\" (UID: \"d2d6ea04-be94-4620-b1a2-d24eaf65d7a1\") " pod="openstack/keystone-cron-29522761-9s9zm"
Feb 17 22:01:00 crc kubenswrapper[4793]: I0217 22:01:00.338622 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2d6ea04-be94-4620-b1a2-d24eaf65d7a1-combined-ca-bundle\") pod \"keystone-cron-29522761-9s9zm\" (UID: \"d2d6ea04-be94-4620-b1a2-d24eaf65d7a1\") " pod="openstack/keystone-cron-29522761-9s9zm"
Feb 17 22:01:00 crc kubenswrapper[4793]: I0217 22:01:00.338784 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2d6ea04-be94-4620-b1a2-d24eaf65d7a1-config-data\") pod \"keystone-cron-29522761-9s9zm\" (UID: \"d2d6ea04-be94-4620-b1a2-d24eaf65d7a1\") " pod="openstack/keystone-cron-29522761-9s9zm"
Feb 17 22:01:00 crc kubenswrapper[4793]: I0217 22:01:00.338830 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d2d6ea04-be94-4620-b1a2-d24eaf65d7a1-fernet-keys\") pod \"keystone-cron-29522761-9s9zm\" (UID: \"d2d6ea04-be94-4620-b1a2-d24eaf65d7a1\") " pod="openstack/keystone-cron-29522761-9s9zm"
Feb 17 22:01:00 crc kubenswrapper[4793]: I0217 22:01:00.338905 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7z8s\" (UniqueName: \"kubernetes.io/projected/d2d6ea04-be94-4620-b1a2-d24eaf65d7a1-kube-api-access-x7z8s\") pod \"keystone-cron-29522761-9s9zm\" (UID: \"d2d6ea04-be94-4620-b1a2-d24eaf65d7a1\") " pod="openstack/keystone-cron-29522761-9s9zm"
Feb 17 22:01:00 crc kubenswrapper[4793]: I0217 22:01:00.344847 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2d6ea04-be94-4620-b1a2-d24eaf65d7a1-combined-ca-bundle\") pod \"keystone-cron-29522761-9s9zm\" (UID: \"d2d6ea04-be94-4620-b1a2-d24eaf65d7a1\") " pod="openstack/keystone-cron-29522761-9s9zm"
Feb 17 22:01:00 crc kubenswrapper[4793]: I0217 22:01:00.345101 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d2d6ea04-be94-4620-b1a2-d24eaf65d7a1-fernet-keys\") pod \"keystone-cron-29522761-9s9zm\" (UID: \"d2d6ea04-be94-4620-b1a2-d24eaf65d7a1\") " pod="openstack/keystone-cron-29522761-9s9zm"
Feb 17 22:01:00 crc kubenswrapper[4793]: I0217 22:01:00.350727 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2d6ea04-be94-4620-b1a2-d24eaf65d7a1-config-data\") pod \"keystone-cron-29522761-9s9zm\" (UID: \"d2d6ea04-be94-4620-b1a2-d24eaf65d7a1\") " pod="openstack/keystone-cron-29522761-9s9zm"
Feb 17 22:01:00 crc kubenswrapper[4793]: I0217 22:01:00.367029 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7z8s\" (UniqueName: \"kubernetes.io/projected/d2d6ea04-be94-4620-b1a2-d24eaf65d7a1-kube-api-access-x7z8s\") pod \"keystone-cron-29522761-9s9zm\" (UID: \"d2d6ea04-be94-4620-b1a2-d24eaf65d7a1\") " pod="openstack/keystone-cron-29522761-9s9zm"
Feb 17 22:01:00 crc kubenswrapper[4793]: I0217 22:01:00.493219 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29522761-9s9zm"
Feb 17 22:01:01 crc kubenswrapper[4793]: I0217 22:01:01.007931 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29522761-9s9zm"]
Feb 17 22:01:01 crc kubenswrapper[4793]: I0217 22:01:01.539955 4793 scope.go:117] "RemoveContainer" containerID="919a932fb17bc79b9327fd155734cadab75426a82e9e60728efeb959e2b364bc"
Feb 17 22:01:01 crc kubenswrapper[4793]: E0217 22:01:01.540431 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:01:01 crc kubenswrapper[4793]: I0217 22:01:01.864979 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29522761-9s9zm" event={"ID":"d2d6ea04-be94-4620-b1a2-d24eaf65d7a1","Type":"ContainerStarted","Data":"e1324a5e30744dc0f1ce8fe4d54e6fdc40ee544012c24d5ea77ff3dc8370f790"}
Feb 17 22:01:01 crc kubenswrapper[4793]: I0217 22:01:01.865038 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29522761-9s9zm" event={"ID":"d2d6ea04-be94-4620-b1a2-d24eaf65d7a1","Type":"ContainerStarted","Data":"1d07c4321673ceb862341b38c6d41bf505dc89d18f40ef6c885ac9cd3046b50e"}
Feb 17 22:01:01 crc kubenswrapper[4793]: I0217 22:01:01.888354 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29522761-9s9zm" podStartSLOduration=1.888333668 podStartE2EDuration="1.888333668s" podCreationTimestamp="2026-02-17 22:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 22:01:01.883471709 +0000 UTC m=+6737.175170040" watchObservedRunningTime="2026-02-17 22:01:01.888333668 +0000 UTC m=+6737.180031979"
Feb 17 22:01:04 crc kubenswrapper[4793]: I0217 22:01:04.906749 4793 generic.go:334] "Generic (PLEG): container finished" podID="d2d6ea04-be94-4620-b1a2-d24eaf65d7a1" containerID="e1324a5e30744dc0f1ce8fe4d54e6fdc40ee544012c24d5ea77ff3dc8370f790" exitCode=0
Feb 17 22:01:04 crc kubenswrapper[4793]: I0217 22:01:04.906897 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29522761-9s9zm" event={"ID":"d2d6ea04-be94-4620-b1a2-d24eaf65d7a1","Type":"ContainerDied","Data":"e1324a5e30744dc0f1ce8fe4d54e6fdc40ee544012c24d5ea77ff3dc8370f790"}
Feb 17 22:01:06 crc kubenswrapper[4793]: I0217 22:01:06.262336 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29522761-9s9zm"
Feb 17 22:01:06 crc kubenswrapper[4793]: I0217 22:01:06.385784 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d2d6ea04-be94-4620-b1a2-d24eaf65d7a1-fernet-keys\") pod \"d2d6ea04-be94-4620-b1a2-d24eaf65d7a1\" (UID: \"d2d6ea04-be94-4620-b1a2-d24eaf65d7a1\") "
Feb 17 22:01:06 crc kubenswrapper[4793]: I0217 22:01:06.385868 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2d6ea04-be94-4620-b1a2-d24eaf65d7a1-config-data\") pod \"d2d6ea04-be94-4620-b1a2-d24eaf65d7a1\" (UID: \"d2d6ea04-be94-4620-b1a2-d24eaf65d7a1\") "
Feb 17 22:01:06 crc kubenswrapper[4793]: I0217 22:01:06.385910 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7z8s\" (UniqueName: \"kubernetes.io/projected/d2d6ea04-be94-4620-b1a2-d24eaf65d7a1-kube-api-access-x7z8s\") pod \"d2d6ea04-be94-4620-b1a2-d24eaf65d7a1\" (UID: \"d2d6ea04-be94-4620-b1a2-d24eaf65d7a1\") "
Feb 17 22:01:06 crc kubenswrapper[4793]: I0217 22:01:06.385989 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2d6ea04-be94-4620-b1a2-d24eaf65d7a1-combined-ca-bundle\") pod \"d2d6ea04-be94-4620-b1a2-d24eaf65d7a1\" (UID: \"d2d6ea04-be94-4620-b1a2-d24eaf65d7a1\") "
Feb 17 22:01:06 crc kubenswrapper[4793]: I0217 22:01:06.396843 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2d6ea04-be94-4620-b1a2-d24eaf65d7a1-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d2d6ea04-be94-4620-b1a2-d24eaf65d7a1" (UID: "d2d6ea04-be94-4620-b1a2-d24eaf65d7a1"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 22:01:06 crc kubenswrapper[4793]: I0217 22:01:06.397001 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2d6ea04-be94-4620-b1a2-d24eaf65d7a1-kube-api-access-x7z8s" (OuterVolumeSpecName: "kube-api-access-x7z8s") pod "d2d6ea04-be94-4620-b1a2-d24eaf65d7a1" (UID: "d2d6ea04-be94-4620-b1a2-d24eaf65d7a1"). InnerVolumeSpecName "kube-api-access-x7z8s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 22:01:06 crc kubenswrapper[4793]: I0217 22:01:06.431344 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2d6ea04-be94-4620-b1a2-d24eaf65d7a1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d2d6ea04-be94-4620-b1a2-d24eaf65d7a1" (UID: "d2d6ea04-be94-4620-b1a2-d24eaf65d7a1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 22:01:06 crc kubenswrapper[4793]: I0217 22:01:06.470675 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2d6ea04-be94-4620-b1a2-d24eaf65d7a1-config-data" (OuterVolumeSpecName: "config-data") pod "d2d6ea04-be94-4620-b1a2-d24eaf65d7a1" (UID: "d2d6ea04-be94-4620-b1a2-d24eaf65d7a1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 22:01:06 crc kubenswrapper[4793]: I0217 22:01:06.488400 4793 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d2d6ea04-be94-4620-b1a2-d24eaf65d7a1-fernet-keys\") on node \"crc\" DevicePath \"\""
Feb 17 22:01:06 crc kubenswrapper[4793]: I0217 22:01:06.488430 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2d6ea04-be94-4620-b1a2-d24eaf65d7a1-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 22:01:06 crc kubenswrapper[4793]: I0217 22:01:06.488444 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7z8s\" (UniqueName: \"kubernetes.io/projected/d2d6ea04-be94-4620-b1a2-d24eaf65d7a1-kube-api-access-x7z8s\") on node \"crc\" DevicePath \"\""
Feb 17 22:01:06 crc kubenswrapper[4793]: I0217 22:01:06.488454 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2d6ea04-be94-4620-b1a2-d24eaf65d7a1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 22:01:06 crc kubenswrapper[4793]: I0217 22:01:06.932514 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29522761-9s9zm" event={"ID":"d2d6ea04-be94-4620-b1a2-d24eaf65d7a1","Type":"ContainerDied","Data":"1d07c4321673ceb862341b38c6d41bf505dc89d18f40ef6c885ac9cd3046b50e"}
Feb 17 22:01:06 crc kubenswrapper[4793]: I0217 22:01:06.932822 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d07c4321673ceb862341b38c6d41bf505dc89d18f40ef6c885ac9cd3046b50e"
Feb 17 22:01:06 crc kubenswrapper[4793]: I0217 22:01:06.932611 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29522761-9s9zm"
Feb 17 22:01:16 crc kubenswrapper[4793]: I0217 22:01:16.539459 4793 scope.go:117] "RemoveContainer" containerID="919a932fb17bc79b9327fd155734cadab75426a82e9e60728efeb959e2b364bc"
Feb 17 22:01:16 crc kubenswrapper[4793]: E0217 22:01:16.540674 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:01:20 crc kubenswrapper[4793]: I0217 22:01:20.102141 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 22:01:20 crc kubenswrapper[4793]: I0217 22:01:20.103063 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 22:01:30 crc kubenswrapper[4793]: I0217 22:01:30.538916 4793 scope.go:117] "RemoveContainer" containerID="919a932fb17bc79b9327fd155734cadab75426a82e9e60728efeb959e2b364bc"
Feb 17 22:01:31 crc kubenswrapper[4793]: I0217 22:01:31.201890 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerStarted","Data":"7eda7aed50fd7e424e7dfd3f434a21e0e2c32c953e7ea26c6ed10a024db8f88b"}
Feb 17 22:01:33 crc kubenswrapper[4793]: I0217 22:01:33.225069 4793 generic.go:334] "Generic (PLEG): container finished" podID="02d26164-0fa4-4020-9224-b7760a490987" containerID="7eda7aed50fd7e424e7dfd3f434a21e0e2c32c953e7ea26c6ed10a024db8f88b" exitCode=1
Feb 17 22:01:33 crc kubenswrapper[4793]: I0217 22:01:33.225156 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerDied","Data":"7eda7aed50fd7e424e7dfd3f434a21e0e2c32c953e7ea26c6ed10a024db8f88b"}
Feb 17 22:01:33 crc kubenswrapper[4793]: I0217 22:01:33.225467 4793 scope.go:117] "RemoveContainer" containerID="919a932fb17bc79b9327fd155734cadab75426a82e9e60728efeb959e2b364bc"
Feb 17 22:01:33 crc kubenswrapper[4793]: I0217 22:01:33.226338 4793 scope.go:117] "RemoveContainer" containerID="7eda7aed50fd7e424e7dfd3f434a21e0e2c32c953e7ea26c6ed10a024db8f88b"
Feb 17 22:01:33 crc kubenswrapper[4793]: E0217 22:01:33.226927 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:01:35 crc kubenswrapper[4793]: I0217 22:01:35.596335 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-applier-0"
Feb 17 22:01:35 crc kubenswrapper[4793]: I0217 22:01:35.597322 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0"
Feb 17 22:01:35 crc kubenswrapper[4793]: I0217 22:01:35.597349 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0"
Feb 17 22:01:35 crc kubenswrapper[4793]: I0217 22:01:35.597369 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0"
Feb 17 22:01:35 crc kubenswrapper[4793]: I0217 22:01:35.598481 4793 scope.go:117] "RemoveContainer" containerID="7eda7aed50fd7e424e7dfd3f434a21e0e2c32c953e7ea26c6ed10a024db8f88b"
Feb 17 22:01:35 crc kubenswrapper[4793]: E0217 22:01:35.599102 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:01:46 crc kubenswrapper[4793]: I0217 22:01:46.724006 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r7snp"]
Feb 17 22:01:46 crc kubenswrapper[4793]: E0217 22:01:46.725355 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2d6ea04-be94-4620-b1a2-d24eaf65d7a1" containerName="keystone-cron"
Feb 17 22:01:46 crc kubenswrapper[4793]: I0217 22:01:46.725377 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2d6ea04-be94-4620-b1a2-d24eaf65d7a1" containerName="keystone-cron"
Feb 17 22:01:46 crc kubenswrapper[4793]: I0217 22:01:46.725755 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2d6ea04-be94-4620-b1a2-d24eaf65d7a1" containerName="keystone-cron"
Feb 17 22:01:46 crc kubenswrapper[4793]: I0217 22:01:46.728150 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r7snp"
Feb 17 22:01:46 crc kubenswrapper[4793]: I0217 22:01:46.738746 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r7snp"]
Feb 17 22:01:46 crc kubenswrapper[4793]: I0217 22:01:46.817826 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnkjs\" (UniqueName: \"kubernetes.io/projected/47d1a053-fa70-4bb0-9c92-69c37b61726b-kube-api-access-vnkjs\") pod \"certified-operators-r7snp\" (UID: \"47d1a053-fa70-4bb0-9c92-69c37b61726b\") " pod="openshift-marketplace/certified-operators-r7snp"
Feb 17 22:01:46 crc kubenswrapper[4793]: I0217 22:01:46.817910 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47d1a053-fa70-4bb0-9c92-69c37b61726b-utilities\") pod \"certified-operators-r7snp\" (UID: \"47d1a053-fa70-4bb0-9c92-69c37b61726b\") " pod="openshift-marketplace/certified-operators-r7snp"
Feb 17 22:01:46 crc kubenswrapper[4793]: I0217 22:01:46.817945 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47d1a053-fa70-4bb0-9c92-69c37b61726b-catalog-content\") pod \"certified-operators-r7snp\" (UID: \"47d1a053-fa70-4bb0-9c92-69c37b61726b\") " pod="openshift-marketplace/certified-operators-r7snp"
Feb 17 22:01:46 crc kubenswrapper[4793]: I0217 22:01:46.920151 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnkjs\" (UniqueName: \"kubernetes.io/projected/47d1a053-fa70-4bb0-9c92-69c37b61726b-kube-api-access-vnkjs\") pod \"certified-operators-r7snp\" (UID: \"47d1a053-fa70-4bb0-9c92-69c37b61726b\") " pod="openshift-marketplace/certified-operators-r7snp"
Feb 17 22:01:46 crc kubenswrapper[4793]: I0217 22:01:46.920281 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47d1a053-fa70-4bb0-9c92-69c37b61726b-utilities\") pod \"certified-operators-r7snp\" (UID: \"47d1a053-fa70-4bb0-9c92-69c37b61726b\") " pod="openshift-marketplace/certified-operators-r7snp"
Feb 17 22:01:46 crc kubenswrapper[4793]: I0217 22:01:46.920337 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47d1a053-fa70-4bb0-9c92-69c37b61726b-catalog-content\") pod \"certified-operators-r7snp\" (UID: \"47d1a053-fa70-4bb0-9c92-69c37b61726b\") " pod="openshift-marketplace/certified-operators-r7snp"
Feb 17 22:01:46 crc kubenswrapper[4793]: I0217 22:01:46.920819 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47d1a053-fa70-4bb0-9c92-69c37b61726b-utilities\") pod \"certified-operators-r7snp\" (UID: \"47d1a053-fa70-4bb0-9c92-69c37b61726b\") " pod="openshift-marketplace/certified-operators-r7snp"
Feb 17 22:01:46 crc kubenswrapper[4793]: I0217 22:01:46.920871 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47d1a053-fa70-4bb0-9c92-69c37b61726b-catalog-content\") pod \"certified-operators-r7snp\" (UID: \"47d1a053-fa70-4bb0-9c92-69c37b61726b\") " pod="openshift-marketplace/certified-operators-r7snp"
Feb 17 22:01:46 crc kubenswrapper[4793]: I0217 22:01:46.938298 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnkjs\" (UniqueName: \"kubernetes.io/projected/47d1a053-fa70-4bb0-9c92-69c37b61726b-kube-api-access-vnkjs\") pod \"certified-operators-r7snp\" (UID: \"47d1a053-fa70-4bb0-9c92-69c37b61726b\") " pod="openshift-marketplace/certified-operators-r7snp"
Feb 17 22:01:47 crc kubenswrapper[4793]: I0217 22:01:47.066194 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r7snp"
Feb 17 22:01:47 crc kubenswrapper[4793]: I0217 22:01:47.682320 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r7snp"]
Feb 17 22:01:48 crc kubenswrapper[4793]: I0217 22:01:48.392909 4793 generic.go:334] "Generic (PLEG): container finished" podID="47d1a053-fa70-4bb0-9c92-69c37b61726b" containerID="d93c7853fe6b08029b8a61cd0b1ad029124c3637a6d936fb5d4352f964a50b24" exitCode=0
Feb 17 22:01:48 crc kubenswrapper[4793]: I0217 22:01:48.393266 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7snp" event={"ID":"47d1a053-fa70-4bb0-9c92-69c37b61726b","Type":"ContainerDied","Data":"d93c7853fe6b08029b8a61cd0b1ad029124c3637a6d936fb5d4352f964a50b24"}
Feb 17 22:01:48 crc kubenswrapper[4793]: I0217 22:01:48.393299 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7snp" event={"ID":"47d1a053-fa70-4bb0-9c92-69c37b61726b","Type":"ContainerStarted","Data":"35be35b41b748a1d5307d85e8dc25ea905db010f78e9b147f9f36bc4cf333f07"}
Feb 17 22:01:48 crc kubenswrapper[4793]: I0217 22:01:48.539156 4793 scope.go:117] "RemoveContainer" containerID="7eda7aed50fd7e424e7dfd3f434a21e0e2c32c953e7ea26c6ed10a024db8f88b"
Feb 17 22:01:48 crc kubenswrapper[4793]: E0217 22:01:48.539474 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:01:50 crc kubenswrapper[4793]: I0217 22:01:50.102139 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 22:01:50 crc kubenswrapper[4793]: I0217 22:01:50.102582 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 22:01:50 crc kubenswrapper[4793]: I0217 22:01:50.423021 4793 generic.go:334] "Generic (PLEG): container finished" podID="47d1a053-fa70-4bb0-9c92-69c37b61726b" containerID="e80d1441142a2cb8f6595ef9800fd7af9fb204480eeb511438b7cb80b43386fb" exitCode=0
Feb 17 22:01:50 crc kubenswrapper[4793]: I0217 22:01:50.423068 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7snp" event={"ID":"47d1a053-fa70-4bb0-9c92-69c37b61726b","Type":"ContainerDied","Data":"e80d1441142a2cb8f6595ef9800fd7af9fb204480eeb511438b7cb80b43386fb"}
Feb 17 22:01:50 crc kubenswrapper[4793]: I0217 22:01:50.427021 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 17 22:01:51 crc kubenswrapper[4793]: I0217 22:01:51.436616 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7snp" event={"ID":"47d1a053-fa70-4bb0-9c92-69c37b61726b","Type":"ContainerStarted","Data":"0fd436e51e4e5b2ae41431b9b7f03c74dc2d6afa6bd5192d4843586ee25dda66"}
Feb 17 22:01:51 crc kubenswrapper[4793]: I0217 22:01:51.453192 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r7snp" podStartSLOduration=3.028110348 podStartE2EDuration="5.45317364s" podCreationTimestamp="2026-02-17 22:01:46 +0000 UTC" firstStartedPulling="2026-02-17 22:01:48.395590737 +0000 UTC m=+6783.687289048" lastFinishedPulling="2026-02-17 22:01:50.820654029 +0000 UTC m=+6786.112352340" observedRunningTime="2026-02-17 22:01:51.452216536 +0000 UTC m=+6786.743914877" watchObservedRunningTime="2026-02-17 22:01:51.45317364 +0000 UTC m=+6786.744871951"
Feb 17 22:01:57 crc kubenswrapper[4793]: I0217 22:01:57.067234 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-r7snp"
Feb 17 22:01:57 crc kubenswrapper[4793]: I0217 22:01:57.067988 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r7snp"
Feb 17 22:01:57 crc kubenswrapper[4793]: I0217 22:01:57.165861 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r7snp"
Feb 17 22:01:57 crc kubenswrapper[4793]: I0217 22:01:57.771497 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r7snp"
Feb 17 22:01:57 crc kubenswrapper[4793]: I0217 22:01:57.825674 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r7snp"]
Feb 17 22:01:59 crc kubenswrapper[4793]: I0217 22:01:59.724183 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-r7snp" podUID="47d1a053-fa70-4bb0-9c92-69c37b61726b" containerName="registry-server" containerID="cri-o://0fd436e51e4e5b2ae41431b9b7f03c74dc2d6afa6bd5192d4843586ee25dda66" gracePeriod=2
Feb 17 22:02:00 crc kubenswrapper[4793]: I0217 22:02:00.198190 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r7snp"
Feb 17 22:02:00 crc kubenswrapper[4793]: I0217 22:02:00.302780 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47d1a053-fa70-4bb0-9c92-69c37b61726b-utilities\") pod \"47d1a053-fa70-4bb0-9c92-69c37b61726b\" (UID: \"47d1a053-fa70-4bb0-9c92-69c37b61726b\") "
Feb 17 22:02:00 crc kubenswrapper[4793]: I0217 22:02:00.302952 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47d1a053-fa70-4bb0-9c92-69c37b61726b-catalog-content\") pod \"47d1a053-fa70-4bb0-9c92-69c37b61726b\" (UID: \"47d1a053-fa70-4bb0-9c92-69c37b61726b\") "
Feb 17 22:02:00 crc kubenswrapper[4793]: I0217 22:02:00.303103 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnkjs\" (UniqueName: \"kubernetes.io/projected/47d1a053-fa70-4bb0-9c92-69c37b61726b-kube-api-access-vnkjs\") pod \"47d1a053-fa70-4bb0-9c92-69c37b61726b\" (UID: \"47d1a053-fa70-4bb0-9c92-69c37b61726b\") "
Feb 17 22:02:00 crc kubenswrapper[4793]: I0217 22:02:00.303672 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47d1a053-fa70-4bb0-9c92-69c37b61726b-utilities" (OuterVolumeSpecName: "utilities") pod "47d1a053-fa70-4bb0-9c92-69c37b61726b" (UID: "47d1a053-fa70-4bb0-9c92-69c37b61726b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 22:02:00 crc kubenswrapper[4793]: I0217 22:02:00.314503 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47d1a053-fa70-4bb0-9c92-69c37b61726b-kube-api-access-vnkjs" (OuterVolumeSpecName: "kube-api-access-vnkjs") pod "47d1a053-fa70-4bb0-9c92-69c37b61726b" (UID: "47d1a053-fa70-4bb0-9c92-69c37b61726b"). InnerVolumeSpecName "kube-api-access-vnkjs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 22:02:00 crc kubenswrapper[4793]: I0217 22:02:00.366419 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47d1a053-fa70-4bb0-9c92-69c37b61726b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "47d1a053-fa70-4bb0-9c92-69c37b61726b" (UID: "47d1a053-fa70-4bb0-9c92-69c37b61726b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 22:02:00 crc kubenswrapper[4793]: I0217 22:02:00.405424 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnkjs\" (UniqueName: \"kubernetes.io/projected/47d1a053-fa70-4bb0-9c92-69c37b61726b-kube-api-access-vnkjs\") on node \"crc\" DevicePath \"\""
Feb 17 22:02:00 crc kubenswrapper[4793]: I0217 22:02:00.405470 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47d1a053-fa70-4bb0-9c92-69c37b61726b-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 22:02:00 crc kubenswrapper[4793]: I0217 22:02:00.405490 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47d1a053-fa70-4bb0-9c92-69c37b61726b-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 22:02:00 crc kubenswrapper[4793]: I0217 22:02:00.539089 4793 scope.go:117] "RemoveContainer" containerID="7eda7aed50fd7e424e7dfd3f434a21e0e2c32c953e7ea26c6ed10a024db8f88b"
Feb 17 22:02:00 crc kubenswrapper[4793]: E0217 22:02:00.539326 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:02:00 crc kubenswrapper[4793]: I0217 22:02:00.747771 4793 generic.go:334] "Generic (PLEG): container finished" podID="47d1a053-fa70-4bb0-9c92-69c37b61726b" containerID="0fd436e51e4e5b2ae41431b9b7f03c74dc2d6afa6bd5192d4843586ee25dda66" exitCode=0
Feb 17 22:02:00 crc kubenswrapper[4793]: I0217 22:02:00.747856 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r7snp"
Feb 17 22:02:00 crc kubenswrapper[4793]: I0217 22:02:00.747909 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7snp" event={"ID":"47d1a053-fa70-4bb0-9c92-69c37b61726b","Type":"ContainerDied","Data":"0fd436e51e4e5b2ae41431b9b7f03c74dc2d6afa6bd5192d4843586ee25dda66"}
Feb 17 22:02:00 crc kubenswrapper[4793]: I0217 22:02:00.748238 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7snp" event={"ID":"47d1a053-fa70-4bb0-9c92-69c37b61726b","Type":"ContainerDied","Data":"35be35b41b748a1d5307d85e8dc25ea905db010f78e9b147f9f36bc4cf333f07"}
Feb 17 22:02:00 crc kubenswrapper[4793]: I0217 22:02:00.748286 4793 scope.go:117] "RemoveContainer" containerID="0fd436e51e4e5b2ae41431b9b7f03c74dc2d6afa6bd5192d4843586ee25dda66"
Feb 17 22:02:00 crc kubenswrapper[4793]: I0217 22:02:00.802739 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r7snp"]
Feb 17 22:02:00 crc kubenswrapper[4793]: I0217 22:02:00.803186 4793 scope.go:117] "RemoveContainer" containerID="e80d1441142a2cb8f6595ef9800fd7af9fb204480eeb511438b7cb80b43386fb"
Feb 17 22:02:00 crc kubenswrapper[4793]: I0217 22:02:00.818159 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-r7snp"]
Feb 17 22:02:00 crc kubenswrapper[4793]: I0217 22:02:00.840101 4793 scope.go:117] "RemoveContainer" containerID="d93c7853fe6b08029b8a61cd0b1ad029124c3637a6d936fb5d4352f964a50b24"
Feb 17 22:02:00 crc kubenswrapper[4793]: I0217 22:02:00.896435 4793 scope.go:117] "RemoveContainer" containerID="0fd436e51e4e5b2ae41431b9b7f03c74dc2d6afa6bd5192d4843586ee25dda66"
Feb 17 22:02:00 crc kubenswrapper[4793]: E0217 22:02:00.896895 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fd436e51e4e5b2ae41431b9b7f03c74dc2d6afa6bd5192d4843586ee25dda66\": container with ID starting with 0fd436e51e4e5b2ae41431b9b7f03c74dc2d6afa6bd5192d4843586ee25dda66 not found: ID does not exist" containerID="0fd436e51e4e5b2ae41431b9b7f03c74dc2d6afa6bd5192d4843586ee25dda66"
Feb 17 22:02:00 crc kubenswrapper[4793]: I0217 22:02:00.897004 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fd436e51e4e5b2ae41431b9b7f03c74dc2d6afa6bd5192d4843586ee25dda66"} err="failed to get container status \"0fd436e51e4e5b2ae41431b9b7f03c74dc2d6afa6bd5192d4843586ee25dda66\": rpc error: code = NotFound desc = could not find container \"0fd436e51e4e5b2ae41431b9b7f03c74dc2d6afa6bd5192d4843586ee25dda66\": container with ID starting with 0fd436e51e4e5b2ae41431b9b7f03c74dc2d6afa6bd5192d4843586ee25dda66 not found: ID does not exist"
Feb 17 22:02:00 crc kubenswrapper[4793]: I0217 22:02:00.897085 4793 scope.go:117] "RemoveContainer" containerID="e80d1441142a2cb8f6595ef9800fd7af9fb204480eeb511438b7cb80b43386fb"
Feb 17 22:02:00 crc kubenswrapper[4793]: E0217 22:02:00.897533 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e80d1441142a2cb8f6595ef9800fd7af9fb204480eeb511438b7cb80b43386fb\": container with ID starting with e80d1441142a2cb8f6595ef9800fd7af9fb204480eeb511438b7cb80b43386fb not found: ID does not exist" containerID="e80d1441142a2cb8f6595ef9800fd7af9fb204480eeb511438b7cb80b43386fb"
Feb 17 22:02:00 crc kubenswrapper[4793]: I0217 22:02:00.897637 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e80d1441142a2cb8f6595ef9800fd7af9fb204480eeb511438b7cb80b43386fb"} err="failed to get container status \"e80d1441142a2cb8f6595ef9800fd7af9fb204480eeb511438b7cb80b43386fb\": rpc error: code = NotFound desc = could not find container \"e80d1441142a2cb8f6595ef9800fd7af9fb204480eeb511438b7cb80b43386fb\": container with ID starting with e80d1441142a2cb8f6595ef9800fd7af9fb204480eeb511438b7cb80b43386fb not found: ID does not exist"
Feb 17 22:02:00 crc kubenswrapper[4793]: I0217 22:02:00.897730 4793 scope.go:117] "RemoveContainer" containerID="d93c7853fe6b08029b8a61cd0b1ad029124c3637a6d936fb5d4352f964a50b24"
Feb 17 22:02:00 crc kubenswrapper[4793]: E0217 22:02:00.898010 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d93c7853fe6b08029b8a61cd0b1ad029124c3637a6d936fb5d4352f964a50b24\": container with ID starting with d93c7853fe6b08029b8a61cd0b1ad029124c3637a6d936fb5d4352f964a50b24 not found: ID does not exist" containerID="d93c7853fe6b08029b8a61cd0b1ad029124c3637a6d936fb5d4352f964a50b24"
Feb 17 22:02:00 crc kubenswrapper[4793]: I0217 22:02:00.898043 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d93c7853fe6b08029b8a61cd0b1ad029124c3637a6d936fb5d4352f964a50b24"} err="failed to get container status \"d93c7853fe6b08029b8a61cd0b1ad029124c3637a6d936fb5d4352f964a50b24\": rpc error: code = NotFound desc = could not find container \"d93c7853fe6b08029b8a61cd0b1ad029124c3637a6d936fb5d4352f964a50b24\": container with ID starting with d93c7853fe6b08029b8a61cd0b1ad029124c3637a6d936fb5d4352f964a50b24 not found: ID does not exist"
Feb 17 22:02:01 crc kubenswrapper[4793]: I0217 22:02:01.563625 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47d1a053-fa70-4bb0-9c92-69c37b61726b" path="/var/lib/kubelet/pods/47d1a053-fa70-4bb0-9c92-69c37b61726b/volumes"
Feb 17 22:02:12 crc kubenswrapper[4793]: I0217 22:02:12.539052 4793 scope.go:117] "RemoveContainer" containerID="7eda7aed50fd7e424e7dfd3f434a21e0e2c32c953e7ea26c6ed10a024db8f88b"
Feb 17 22:02:12 crc kubenswrapper[4793]: E0217 22:02:12.539873 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:02:20 crc kubenswrapper[4793]: I0217 22:02:20.101953 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 22:02:20 crc kubenswrapper[4793]: I0217 22:02:20.102673 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 22:02:20 crc kubenswrapper[4793]: I0217 22:02:20.102771 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf"
Feb 17 22:02:20 crc kubenswrapper[4793]: I0217 22:02:20.104025 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7855af0e308564a81edffdb4a3f30cd83142c30e1fb209a1cc316f45f96def57"} pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 17 22:02:20 crc kubenswrapper[4793]: I0217 22:02:20.104144
4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" containerID="cri-o://7855af0e308564a81edffdb4a3f30cd83142c30e1fb209a1cc316f45f96def57" gracePeriod=600 Feb 17 22:02:20 crc kubenswrapper[4793]: I0217 22:02:20.990327 4793 generic.go:334] "Generic (PLEG): container finished" podID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerID="7855af0e308564a81edffdb4a3f30cd83142c30e1fb209a1cc316f45f96def57" exitCode=0 Feb 17 22:02:20 crc kubenswrapper[4793]: I0217 22:02:20.990472 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerDied","Data":"7855af0e308564a81edffdb4a3f30cd83142c30e1fb209a1cc316f45f96def57"} Feb 17 22:02:20 crc kubenswrapper[4793]: I0217 22:02:20.990961 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerStarted","Data":"5ed6acf5ccd1c96205f3ec1b1da4fb06c9c4a63f00f0514d2eccd40f24e4caea"} Feb 17 22:02:20 crc kubenswrapper[4793]: I0217 22:02:20.990988 4793 scope.go:117] "RemoveContainer" containerID="f49ab2db045f279d9f7309c213173f2962c246236ace994df6cafec80f8b10bd" Feb 17 22:02:27 crc kubenswrapper[4793]: I0217 22:02:27.538992 4793 scope.go:117] "RemoveContainer" containerID="7eda7aed50fd7e424e7dfd3f434a21e0e2c32c953e7ea26c6ed10a024db8f88b" Feb 17 22:02:27 crc kubenswrapper[4793]: E0217 22:02:27.539893 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" 
podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:02:39 crc kubenswrapper[4793]: I0217 22:02:39.541300 4793 scope.go:117] "RemoveContainer" containerID="7eda7aed50fd7e424e7dfd3f434a21e0e2c32c953e7ea26c6ed10a024db8f88b" Feb 17 22:02:39 crc kubenswrapper[4793]: E0217 22:02:39.542650 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:02:53 crc kubenswrapper[4793]: I0217 22:02:53.539313 4793 scope.go:117] "RemoveContainer" containerID="7eda7aed50fd7e424e7dfd3f434a21e0e2c32c953e7ea26c6ed10a024db8f88b" Feb 17 22:02:53 crc kubenswrapper[4793]: E0217 22:02:53.540477 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:03:05 crc kubenswrapper[4793]: I0217 22:03:05.547052 4793 scope.go:117] "RemoveContainer" containerID="7eda7aed50fd7e424e7dfd3f434a21e0e2c32c953e7ea26c6ed10a024db8f88b" Feb 17 22:03:05 crc kubenswrapper[4793]: E0217 22:03:05.548154 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:03:16 crc kubenswrapper[4793]: I0217 22:03:16.538861 4793 scope.go:117] "RemoveContainer" 
containerID="7eda7aed50fd7e424e7dfd3f434a21e0e2c32c953e7ea26c6ed10a024db8f88b" Feb 17 22:03:16 crc kubenswrapper[4793]: E0217 22:03:16.540094 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:03:30 crc kubenswrapper[4793]: I0217 22:03:30.538858 4793 scope.go:117] "RemoveContainer" containerID="7eda7aed50fd7e424e7dfd3f434a21e0e2c32c953e7ea26c6ed10a024db8f88b" Feb 17 22:03:30 crc kubenswrapper[4793]: E0217 22:03:30.539706 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:03:43 crc kubenswrapper[4793]: I0217 22:03:43.539113 4793 scope.go:117] "RemoveContainer" containerID="7eda7aed50fd7e424e7dfd3f434a21e0e2c32c953e7ea26c6ed10a024db8f88b" Feb 17 22:03:43 crc kubenswrapper[4793]: E0217 22:03:43.540321 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:03:56 crc kubenswrapper[4793]: I0217 22:03:56.539038 4793 scope.go:117] "RemoveContainer" containerID="7eda7aed50fd7e424e7dfd3f434a21e0e2c32c953e7ea26c6ed10a024db8f88b" Feb 17 22:03:56 crc kubenswrapper[4793]: E0217 22:03:56.539818 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:04:09 crc kubenswrapper[4793]: I0217 22:04:09.542206 4793 scope.go:117] "RemoveContainer" containerID="7eda7aed50fd7e424e7dfd3f434a21e0e2c32c953e7ea26c6ed10a024db8f88b" Feb 17 22:04:09 crc kubenswrapper[4793]: E0217 22:04:09.543195 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:04:20 crc kubenswrapper[4793]: I0217 22:04:20.101544 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 22:04:20 crc kubenswrapper[4793]: I0217 22:04:20.102091 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 22:04:22 crc kubenswrapper[4793]: I0217 22:04:22.539117 4793 scope.go:117] "RemoveContainer" containerID="7eda7aed50fd7e424e7dfd3f434a21e0e2c32c953e7ea26c6ed10a024db8f88b" Feb 17 22:04:22 crc kubenswrapper[4793]: E0217 22:04:22.539894 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:04:35 crc kubenswrapper[4793]: I0217 22:04:35.550818 4793 scope.go:117] "RemoveContainer" containerID="7eda7aed50fd7e424e7dfd3f434a21e0e2c32c953e7ea26c6ed10a024db8f88b" Feb 17 22:04:35 crc kubenswrapper[4793]: E0217 22:04:35.553371 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:04:47 crc kubenswrapper[4793]: I0217 22:04:47.539535 4793 scope.go:117] "RemoveContainer" containerID="7eda7aed50fd7e424e7dfd3f434a21e0e2c32c953e7ea26c6ed10a024db8f88b" Feb 17 22:04:47 crc kubenswrapper[4793]: E0217 22:04:47.540967 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:04:50 crc kubenswrapper[4793]: I0217 22:04:50.101907 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 22:04:50 crc kubenswrapper[4793]: I0217 22:04:50.102371 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 22:05:00 crc kubenswrapper[4793]: I0217 22:05:00.539484 4793 scope.go:117] "RemoveContainer" containerID="7eda7aed50fd7e424e7dfd3f434a21e0e2c32c953e7ea26c6ed10a024db8f88b" Feb 17 22:05:00 crc kubenswrapper[4793]: E0217 22:05:00.540367 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:05:11 crc kubenswrapper[4793]: I0217 22:05:11.539763 4793 scope.go:117] "RemoveContainer" containerID="7eda7aed50fd7e424e7dfd3f434a21e0e2c32c953e7ea26c6ed10a024db8f88b" Feb 17 22:05:11 crc kubenswrapper[4793]: E0217 22:05:11.541252 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:05:20 crc kubenswrapper[4793]: I0217 22:05:20.101506 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 22:05:20 crc kubenswrapper[4793]: I0217 22:05:20.102250 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 22:05:20 crc kubenswrapper[4793]: I0217 22:05:20.102294 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" Feb 17 22:05:20 crc kubenswrapper[4793]: I0217 22:05:20.103075 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5ed6acf5ccd1c96205f3ec1b1da4fb06c9c4a63f00f0514d2eccd40f24e4caea"} pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 22:05:20 crc kubenswrapper[4793]: I0217 22:05:20.103140 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" containerID="cri-o://5ed6acf5ccd1c96205f3ec1b1da4fb06c9c4a63f00f0514d2eccd40f24e4caea" gracePeriod=600 Feb 17 22:05:20 crc kubenswrapper[4793]: E0217 22:05:20.260819 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:05:21 crc kubenswrapper[4793]: I0217 22:05:21.153200 4793 generic.go:334] "Generic (PLEG): container finished" podID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerID="5ed6acf5ccd1c96205f3ec1b1da4fb06c9c4a63f00f0514d2eccd40f24e4caea" exitCode=0 Feb 17 22:05:21 crc kubenswrapper[4793]: I0217 22:05:21.158960 4793 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerDied","Data":"5ed6acf5ccd1c96205f3ec1b1da4fb06c9c4a63f00f0514d2eccd40f24e4caea"} Feb 17 22:05:21 crc kubenswrapper[4793]: I0217 22:05:21.159049 4793 scope.go:117] "RemoveContainer" containerID="7855af0e308564a81edffdb4a3f30cd83142c30e1fb209a1cc316f45f96def57" Feb 17 22:05:21 crc kubenswrapper[4793]: I0217 22:05:21.160404 4793 scope.go:117] "RemoveContainer" containerID="5ed6acf5ccd1c96205f3ec1b1da4fb06c9c4a63f00f0514d2eccd40f24e4caea" Feb 17 22:05:21 crc kubenswrapper[4793]: E0217 22:05:21.161193 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:05:24 crc kubenswrapper[4793]: I0217 22:05:24.539524 4793 scope.go:117] "RemoveContainer" containerID="7eda7aed50fd7e424e7dfd3f434a21e0e2c32c953e7ea26c6ed10a024db8f88b" Feb 17 22:05:24 crc kubenswrapper[4793]: E0217 22:05:24.540368 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:05:31 crc kubenswrapper[4793]: I0217 22:05:31.539639 4793 scope.go:117] "RemoveContainer" containerID="5ed6acf5ccd1c96205f3ec1b1da4fb06c9c4a63f00f0514d2eccd40f24e4caea" Feb 17 22:05:31 crc kubenswrapper[4793]: E0217 22:05:31.540432 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:05:37 crc kubenswrapper[4793]: I0217 22:05:37.539542 4793 scope.go:117] "RemoveContainer" containerID="7eda7aed50fd7e424e7dfd3f434a21e0e2c32c953e7ea26c6ed10a024db8f88b" Feb 17 22:05:37 crc kubenswrapper[4793]: E0217 22:05:37.540584 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:05:46 crc kubenswrapper[4793]: I0217 22:05:46.540050 4793 scope.go:117] "RemoveContainer" containerID="5ed6acf5ccd1c96205f3ec1b1da4fb06c9c4a63f00f0514d2eccd40f24e4caea" Feb 17 22:05:46 crc kubenswrapper[4793]: E0217 22:05:46.541302 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:05:49 crc kubenswrapper[4793]: I0217 22:05:49.538577 4793 scope.go:117] "RemoveContainer" containerID="7eda7aed50fd7e424e7dfd3f434a21e0e2c32c953e7ea26c6ed10a024db8f88b" Feb 17 22:05:49 crc kubenswrapper[4793]: E0217 22:05:49.539469 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:05:59 crc kubenswrapper[4793]: I0217 22:05:59.539638 4793 scope.go:117] "RemoveContainer" containerID="5ed6acf5ccd1c96205f3ec1b1da4fb06c9c4a63f00f0514d2eccd40f24e4caea" Feb 17 22:05:59 crc kubenswrapper[4793]: E0217 22:05:59.540585 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:06:00 crc kubenswrapper[4793]: I0217 22:06:00.540072 4793 scope.go:117] "RemoveContainer" containerID="7eda7aed50fd7e424e7dfd3f434a21e0e2c32c953e7ea26c6ed10a024db8f88b" Feb 17 22:06:00 crc kubenswrapper[4793]: E0217 22:06:00.541001 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:06:12 crc kubenswrapper[4793]: I0217 22:06:12.539008 4793 scope.go:117] "RemoveContainer" containerID="5ed6acf5ccd1c96205f3ec1b1da4fb06c9c4a63f00f0514d2eccd40f24e4caea" Feb 17 22:06:12 crc kubenswrapper[4793]: E0217 22:06:12.540050 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:06:15 crc kubenswrapper[4793]: I0217 22:06:15.551993 4793 scope.go:117] "RemoveContainer" containerID="7eda7aed50fd7e424e7dfd3f434a21e0e2c32c953e7ea26c6ed10a024db8f88b" Feb 17 22:06:15 crc kubenswrapper[4793]: E0217 22:06:15.552856 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:06:24 crc kubenswrapper[4793]: I0217 22:06:24.538785 4793 scope.go:117] "RemoveContainer" containerID="5ed6acf5ccd1c96205f3ec1b1da4fb06c9c4a63f00f0514d2eccd40f24e4caea" Feb 17 22:06:24 crc kubenswrapper[4793]: E0217 22:06:24.539305 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:06:26 crc kubenswrapper[4793]: I0217 22:06:26.539471 4793 scope.go:117] "RemoveContainer" containerID="7eda7aed50fd7e424e7dfd3f434a21e0e2c32c953e7ea26c6ed10a024db8f88b" Feb 17 22:06:26 crc kubenswrapper[4793]: E0217 22:06:26.539941 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" 
pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:06:36 crc kubenswrapper[4793]: I0217 22:06:36.538268 4793 scope.go:117] "RemoveContainer" containerID="5ed6acf5ccd1c96205f3ec1b1da4fb06c9c4a63f00f0514d2eccd40f24e4caea" Feb 17 22:06:36 crc kubenswrapper[4793]: E0217 22:06:36.541901 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:06:37 crc kubenswrapper[4793]: I0217 22:06:37.538863 4793 scope.go:117] "RemoveContainer" containerID="7eda7aed50fd7e424e7dfd3f434a21e0e2c32c953e7ea26c6ed10a024db8f88b" Feb 17 22:06:38 crc kubenswrapper[4793]: I0217 22:06:38.386023 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerStarted","Data":"9e949f743af1e47fd05a726b259da77ca2a07f9f991e13bf29d20179308f4e2f"} Feb 17 22:06:40 crc kubenswrapper[4793]: I0217 22:06:40.432050 4793 generic.go:334] "Generic (PLEG): container finished" podID="02d26164-0fa4-4020-9224-b7760a490987" containerID="9e949f743af1e47fd05a726b259da77ca2a07f9f991e13bf29d20179308f4e2f" exitCode=1 Feb 17 22:06:40 crc kubenswrapper[4793]: I0217 22:06:40.433191 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerDied","Data":"9e949f743af1e47fd05a726b259da77ca2a07f9f991e13bf29d20179308f4e2f"} Feb 17 22:06:40 crc kubenswrapper[4793]: I0217 22:06:40.433233 4793 scope.go:117] "RemoveContainer" containerID="7eda7aed50fd7e424e7dfd3f434a21e0e2c32c953e7ea26c6ed10a024db8f88b" Feb 17 22:06:40 crc 
kubenswrapper[4793]: I0217 22:06:40.434361 4793 scope.go:117] "RemoveContainer" containerID="9e949f743af1e47fd05a726b259da77ca2a07f9f991e13bf29d20179308f4e2f" Feb 17 22:06:40 crc kubenswrapper[4793]: E0217 22:06:40.435119 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:06:40 crc kubenswrapper[4793]: I0217 22:06:40.596167 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 17 22:06:41 crc kubenswrapper[4793]: I0217 22:06:41.448468 4793 scope.go:117] "RemoveContainer" containerID="9e949f743af1e47fd05a726b259da77ca2a07f9f991e13bf29d20179308f4e2f" Feb 17 22:06:41 crc kubenswrapper[4793]: E0217 22:06:41.449240 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:06:45 crc kubenswrapper[4793]: I0217 22:06:45.596543 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 22:06:45 crc kubenswrapper[4793]: I0217 22:06:45.596838 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 22:06:45 crc kubenswrapper[4793]: I0217 22:06:45.596850 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 22:06:45 crc kubenswrapper[4793]: I0217 22:06:45.597431 4793 scope.go:117] "RemoveContainer" 
containerID="9e949f743af1e47fd05a726b259da77ca2a07f9f991e13bf29d20179308f4e2f"
Feb 17 22:06:45 crc kubenswrapper[4793]: E0217 22:06:45.597656 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:06:50 crc kubenswrapper[4793]: I0217 22:06:50.539156 4793 scope.go:117] "RemoveContainer" containerID="5ed6acf5ccd1c96205f3ec1b1da4fb06c9c4a63f00f0514d2eccd40f24e4caea"
Feb 17 22:06:50 crc kubenswrapper[4793]: E0217 22:06:50.540043 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 22:06:59 crc kubenswrapper[4793]: I0217 22:06:59.542882 4793 scope.go:117] "RemoveContainer" containerID="9e949f743af1e47fd05a726b259da77ca2a07f9f991e13bf29d20179308f4e2f"
Feb 17 22:06:59 crc kubenswrapper[4793]: E0217 22:06:59.543706 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:07:03 crc kubenswrapper[4793]: I0217 22:07:03.539884 4793 scope.go:117] "RemoveContainer" containerID="5ed6acf5ccd1c96205f3ec1b1da4fb06c9c4a63f00f0514d2eccd40f24e4caea"
Feb 17 22:07:03 crc kubenswrapper[4793]: E0217 22:07:03.540956 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 22:07:14 crc kubenswrapper[4793]: I0217 22:07:14.539161 4793 scope.go:117] "RemoveContainer" containerID="5ed6acf5ccd1c96205f3ec1b1da4fb06c9c4a63f00f0514d2eccd40f24e4caea"
Feb 17 22:07:14 crc kubenswrapper[4793]: I0217 22:07:14.539959 4793 scope.go:117] "RemoveContainer" containerID="9e949f743af1e47fd05a726b259da77ca2a07f9f991e13bf29d20179308f4e2f"
Feb 17 22:07:14 crc kubenswrapper[4793]: E0217 22:07:14.540167 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 22:07:14 crc kubenswrapper[4793]: E0217 22:07:14.540182 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:07:18 crc kubenswrapper[4793]: I0217 22:07:18.511228 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-t57gg"]
Feb 17 22:07:18 crc kubenswrapper[4793]: E0217 22:07:18.512309 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47d1a053-fa70-4bb0-9c92-69c37b61726b" containerName="registry-server"
Feb 17 22:07:18 crc kubenswrapper[4793]: I0217 22:07:18.512333 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="47d1a053-fa70-4bb0-9c92-69c37b61726b" containerName="registry-server"
Feb 17 22:07:18 crc kubenswrapper[4793]: E0217 22:07:18.512360 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47d1a053-fa70-4bb0-9c92-69c37b61726b" containerName="extract-utilities"
Feb 17 22:07:18 crc kubenswrapper[4793]: I0217 22:07:18.512373 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="47d1a053-fa70-4bb0-9c92-69c37b61726b" containerName="extract-utilities"
Feb 17 22:07:18 crc kubenswrapper[4793]: E0217 22:07:18.512398 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47d1a053-fa70-4bb0-9c92-69c37b61726b" containerName="extract-content"
Feb 17 22:07:18 crc kubenswrapper[4793]: I0217 22:07:18.512412 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="47d1a053-fa70-4bb0-9c92-69c37b61726b" containerName="extract-content"
Feb 17 22:07:18 crc kubenswrapper[4793]: I0217 22:07:18.512775 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="47d1a053-fa70-4bb0-9c92-69c37b61726b" containerName="registry-server"
Feb 17 22:07:18 crc kubenswrapper[4793]: I0217 22:07:18.515290 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t57gg"
Feb 17 22:07:18 crc kubenswrapper[4793]: I0217 22:07:18.545090 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t57gg"]
Feb 17 22:07:18 crc kubenswrapper[4793]: I0217 22:07:18.658102 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmwlj\" (UniqueName: \"kubernetes.io/projected/064a2484-c421-44d0-a607-7b02fbda3265-kube-api-access-qmwlj\") pod \"redhat-marketplace-t57gg\" (UID: \"064a2484-c421-44d0-a607-7b02fbda3265\") " pod="openshift-marketplace/redhat-marketplace-t57gg"
Feb 17 22:07:18 crc kubenswrapper[4793]: I0217 22:07:18.658235 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/064a2484-c421-44d0-a607-7b02fbda3265-utilities\") pod \"redhat-marketplace-t57gg\" (UID: \"064a2484-c421-44d0-a607-7b02fbda3265\") " pod="openshift-marketplace/redhat-marketplace-t57gg"
Feb 17 22:07:18 crc kubenswrapper[4793]: I0217 22:07:18.658879 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/064a2484-c421-44d0-a607-7b02fbda3265-catalog-content\") pod \"redhat-marketplace-t57gg\" (UID: \"064a2484-c421-44d0-a607-7b02fbda3265\") " pod="openshift-marketplace/redhat-marketplace-t57gg"
Feb 17 22:07:18 crc kubenswrapper[4793]: I0217 22:07:18.761737 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/064a2484-c421-44d0-a607-7b02fbda3265-catalog-content\") pod \"redhat-marketplace-t57gg\" (UID: \"064a2484-c421-44d0-a607-7b02fbda3265\") " pod="openshift-marketplace/redhat-marketplace-t57gg"
Feb 17 22:07:18 crc kubenswrapper[4793]: I0217 22:07:18.762133 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmwlj\" (UniqueName: \"kubernetes.io/projected/064a2484-c421-44d0-a607-7b02fbda3265-kube-api-access-qmwlj\") pod \"redhat-marketplace-t57gg\" (UID: \"064a2484-c421-44d0-a607-7b02fbda3265\") " pod="openshift-marketplace/redhat-marketplace-t57gg"
Feb 17 22:07:18 crc kubenswrapper[4793]: I0217 22:07:18.762176 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/064a2484-c421-44d0-a607-7b02fbda3265-utilities\") pod \"redhat-marketplace-t57gg\" (UID: \"064a2484-c421-44d0-a607-7b02fbda3265\") " pod="openshift-marketplace/redhat-marketplace-t57gg"
Feb 17 22:07:18 crc kubenswrapper[4793]: I0217 22:07:18.762675 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/064a2484-c421-44d0-a607-7b02fbda3265-catalog-content\") pod \"redhat-marketplace-t57gg\" (UID: \"064a2484-c421-44d0-a607-7b02fbda3265\") " pod="openshift-marketplace/redhat-marketplace-t57gg"
Feb 17 22:07:18 crc kubenswrapper[4793]: I0217 22:07:18.763898 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/064a2484-c421-44d0-a607-7b02fbda3265-utilities\") pod \"redhat-marketplace-t57gg\" (UID: \"064a2484-c421-44d0-a607-7b02fbda3265\") " pod="openshift-marketplace/redhat-marketplace-t57gg"
Feb 17 22:07:18 crc kubenswrapper[4793]: I0217 22:07:18.781207 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmwlj\" (UniqueName: \"kubernetes.io/projected/064a2484-c421-44d0-a607-7b02fbda3265-kube-api-access-qmwlj\") pod \"redhat-marketplace-t57gg\" (UID: \"064a2484-c421-44d0-a607-7b02fbda3265\") " pod="openshift-marketplace/redhat-marketplace-t57gg"
Feb 17 22:07:18 crc kubenswrapper[4793]: I0217 22:07:18.852937 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t57gg"
Feb 17 22:07:19 crc kubenswrapper[4793]: I0217 22:07:19.399712 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t57gg"]
Feb 17 22:07:19 crc kubenswrapper[4793]: I0217 22:07:19.931300 4793 generic.go:334] "Generic (PLEG): container finished" podID="064a2484-c421-44d0-a607-7b02fbda3265" containerID="9d171d75e7db65ac69206a8895ecb3304e537df06b4032c6ccfe34e9b3b8b3b9" exitCode=0
Feb 17 22:07:19 crc kubenswrapper[4793]: I0217 22:07:19.931376 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t57gg" event={"ID":"064a2484-c421-44d0-a607-7b02fbda3265","Type":"ContainerDied","Data":"9d171d75e7db65ac69206a8895ecb3304e537df06b4032c6ccfe34e9b3b8b3b9"}
Feb 17 22:07:19 crc kubenswrapper[4793]: I0217 22:07:19.931946 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t57gg" event={"ID":"064a2484-c421-44d0-a607-7b02fbda3265","Type":"ContainerStarted","Data":"3f6d28194b2f2dddd6dacf1e29b6693445d690b3dcf89f75add9a6f7be6cb0ae"}
Feb 17 22:07:19 crc kubenswrapper[4793]: I0217 22:07:19.934641 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 17 22:07:20 crc kubenswrapper[4793]: I0217 22:07:20.946633 4793 generic.go:334] "Generic (PLEG): container finished" podID="064a2484-c421-44d0-a607-7b02fbda3265" containerID="72807527da43c9046ea7ddfaa12e1f796d0d5154722137249916319479ae2c1f" exitCode=0
Feb 17 22:07:20 crc kubenswrapper[4793]: I0217 22:07:20.947165 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t57gg" event={"ID":"064a2484-c421-44d0-a607-7b02fbda3265","Type":"ContainerDied","Data":"72807527da43c9046ea7ddfaa12e1f796d0d5154722137249916319479ae2c1f"}
Feb 17 22:07:22 crc kubenswrapper[4793]: I0217 22:07:22.974710 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t57gg" event={"ID":"064a2484-c421-44d0-a607-7b02fbda3265","Type":"ContainerStarted","Data":"d4d03f30f07f3dde63b6fa197d1deed43ab27c6e9ee6f400010674225cfa7813"}
Feb 17 22:07:23 crc kubenswrapper[4793]: I0217 22:07:23.009453 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-t57gg" podStartSLOduration=2.5618483210000003 podStartE2EDuration="5.009428559s" podCreationTimestamp="2026-02-17 22:07:18 +0000 UTC" firstStartedPulling="2026-02-17 22:07:19.934223382 +0000 UTC m=+7115.225921733" lastFinishedPulling="2026-02-17 22:07:22.38180363 +0000 UTC m=+7117.673501971" observedRunningTime="2026-02-17 22:07:22.995187838 +0000 UTC m=+7118.286886139" watchObservedRunningTime="2026-02-17 22:07:23.009428559 +0000 UTC m=+7118.301126880"
Feb 17 22:07:26 crc kubenswrapper[4793]: I0217 22:07:26.539550 4793 scope.go:117] "RemoveContainer" containerID="9e949f743af1e47fd05a726b259da77ca2a07f9f991e13bf29d20179308f4e2f"
Feb 17 22:07:26 crc kubenswrapper[4793]: E0217 22:07:26.540656 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:07:28 crc kubenswrapper[4793]: I0217 22:07:28.853380 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-t57gg"
Feb 17 22:07:28 crc kubenswrapper[4793]: I0217 22:07:28.853479 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-t57gg"
Feb 17 22:07:28 crc kubenswrapper[4793]: I0217 22:07:28.928545 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-t57gg"
Feb 17 22:07:29 crc kubenswrapper[4793]: I0217 22:07:29.111084 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-t57gg"
Feb 17 22:07:29 crc kubenswrapper[4793]: I0217 22:07:29.189603 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t57gg"]
Feb 17 22:07:29 crc kubenswrapper[4793]: I0217 22:07:29.561741 4793 scope.go:117] "RemoveContainer" containerID="5ed6acf5ccd1c96205f3ec1b1da4fb06c9c4a63f00f0514d2eccd40f24e4caea"
Feb 17 22:07:29 crc kubenswrapper[4793]: E0217 22:07:29.562215 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 22:07:31 crc kubenswrapper[4793]: I0217 22:07:31.081100 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-t57gg" podUID="064a2484-c421-44d0-a607-7b02fbda3265" containerName="registry-server" containerID="cri-o://d4d03f30f07f3dde63b6fa197d1deed43ab27c6e9ee6f400010674225cfa7813" gracePeriod=2
Feb 17 22:07:31 crc kubenswrapper[4793]: I0217 22:07:31.589447 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t57gg"
Feb 17 22:07:31 crc kubenswrapper[4793]: I0217 22:07:31.688985 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmwlj\" (UniqueName: \"kubernetes.io/projected/064a2484-c421-44d0-a607-7b02fbda3265-kube-api-access-qmwlj\") pod \"064a2484-c421-44d0-a607-7b02fbda3265\" (UID: \"064a2484-c421-44d0-a607-7b02fbda3265\") "
Feb 17 22:07:31 crc kubenswrapper[4793]: I0217 22:07:31.689415 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/064a2484-c421-44d0-a607-7b02fbda3265-utilities\") pod \"064a2484-c421-44d0-a607-7b02fbda3265\" (UID: \"064a2484-c421-44d0-a607-7b02fbda3265\") "
Feb 17 22:07:31 crc kubenswrapper[4793]: I0217 22:07:31.689546 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/064a2484-c421-44d0-a607-7b02fbda3265-catalog-content\") pod \"064a2484-c421-44d0-a607-7b02fbda3265\" (UID: \"064a2484-c421-44d0-a607-7b02fbda3265\") "
Feb 17 22:07:31 crc kubenswrapper[4793]: I0217 22:07:31.691146 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/064a2484-c421-44d0-a607-7b02fbda3265-utilities" (OuterVolumeSpecName: "utilities") pod "064a2484-c421-44d0-a607-7b02fbda3265" (UID: "064a2484-c421-44d0-a607-7b02fbda3265"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 22:07:31 crc kubenswrapper[4793]: I0217 22:07:31.695055 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/064a2484-c421-44d0-a607-7b02fbda3265-kube-api-access-qmwlj" (OuterVolumeSpecName: "kube-api-access-qmwlj") pod "064a2484-c421-44d0-a607-7b02fbda3265" (UID: "064a2484-c421-44d0-a607-7b02fbda3265"). InnerVolumeSpecName "kube-api-access-qmwlj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 22:07:31 crc kubenswrapper[4793]: I0217 22:07:31.714599 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/064a2484-c421-44d0-a607-7b02fbda3265-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "064a2484-c421-44d0-a607-7b02fbda3265" (UID: "064a2484-c421-44d0-a607-7b02fbda3265"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 22:07:31 crc kubenswrapper[4793]: I0217 22:07:31.791991 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmwlj\" (UniqueName: \"kubernetes.io/projected/064a2484-c421-44d0-a607-7b02fbda3265-kube-api-access-qmwlj\") on node \"crc\" DevicePath \"\""
Feb 17 22:07:31 crc kubenswrapper[4793]: I0217 22:07:31.792028 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/064a2484-c421-44d0-a607-7b02fbda3265-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 22:07:31 crc kubenswrapper[4793]: I0217 22:07:31.792040 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/064a2484-c421-44d0-a607-7b02fbda3265-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 22:07:32 crc kubenswrapper[4793]: I0217 22:07:32.098257 4793 generic.go:334] "Generic (PLEG): container finished" podID="064a2484-c421-44d0-a607-7b02fbda3265" containerID="d4d03f30f07f3dde63b6fa197d1deed43ab27c6e9ee6f400010674225cfa7813" exitCode=0
Feb 17 22:07:32 crc kubenswrapper[4793]: I0217 22:07:32.098317 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t57gg" event={"ID":"064a2484-c421-44d0-a607-7b02fbda3265","Type":"ContainerDied","Data":"d4d03f30f07f3dde63b6fa197d1deed43ab27c6e9ee6f400010674225cfa7813"}
Feb 17 22:07:32 crc kubenswrapper[4793]: I0217 22:07:32.098375 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t57gg" event={"ID":"064a2484-c421-44d0-a607-7b02fbda3265","Type":"ContainerDied","Data":"3f6d28194b2f2dddd6dacf1e29b6693445d690b3dcf89f75add9a6f7be6cb0ae"}
Feb 17 22:07:32 crc kubenswrapper[4793]: I0217 22:07:32.098384 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t57gg"
Feb 17 22:07:32 crc kubenswrapper[4793]: I0217 22:07:32.098402 4793 scope.go:117] "RemoveContainer" containerID="d4d03f30f07f3dde63b6fa197d1deed43ab27c6e9ee6f400010674225cfa7813"
Feb 17 22:07:32 crc kubenswrapper[4793]: I0217 22:07:32.134334 4793 scope.go:117] "RemoveContainer" containerID="72807527da43c9046ea7ddfaa12e1f796d0d5154722137249916319479ae2c1f"
Feb 17 22:07:32 crc kubenswrapper[4793]: I0217 22:07:32.170137 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t57gg"]
Feb 17 22:07:32 crc kubenswrapper[4793]: I0217 22:07:32.180710 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-t57gg"]
Feb 17 22:07:32 crc kubenswrapper[4793]: I0217 22:07:32.185513 4793 scope.go:117] "RemoveContainer" containerID="9d171d75e7db65ac69206a8895ecb3304e537df06b4032c6ccfe34e9b3b8b3b9"
Feb 17 22:07:32 crc kubenswrapper[4793]: I0217 22:07:32.251900 4793 scope.go:117] "RemoveContainer" containerID="d4d03f30f07f3dde63b6fa197d1deed43ab27c6e9ee6f400010674225cfa7813"
Feb 17 22:07:32 crc kubenswrapper[4793]: E0217 22:07:32.252479 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4d03f30f07f3dde63b6fa197d1deed43ab27c6e9ee6f400010674225cfa7813\": container with ID starting with d4d03f30f07f3dde63b6fa197d1deed43ab27c6e9ee6f400010674225cfa7813 not found: ID does not exist" containerID="d4d03f30f07f3dde63b6fa197d1deed43ab27c6e9ee6f400010674225cfa7813"
Feb 17 22:07:32 crc kubenswrapper[4793]: I0217 22:07:32.252552 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4d03f30f07f3dde63b6fa197d1deed43ab27c6e9ee6f400010674225cfa7813"} err="failed to get container status \"d4d03f30f07f3dde63b6fa197d1deed43ab27c6e9ee6f400010674225cfa7813\": rpc error: code = NotFound desc = could not find container \"d4d03f30f07f3dde63b6fa197d1deed43ab27c6e9ee6f400010674225cfa7813\": container with ID starting with d4d03f30f07f3dde63b6fa197d1deed43ab27c6e9ee6f400010674225cfa7813 not found: ID does not exist"
Feb 17 22:07:32 crc kubenswrapper[4793]: I0217 22:07:32.252603 4793 scope.go:117] "RemoveContainer" containerID="72807527da43c9046ea7ddfaa12e1f796d0d5154722137249916319479ae2c1f"
Feb 17 22:07:32 crc kubenswrapper[4793]: E0217 22:07:32.253086 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72807527da43c9046ea7ddfaa12e1f796d0d5154722137249916319479ae2c1f\": container with ID starting with 72807527da43c9046ea7ddfaa12e1f796d0d5154722137249916319479ae2c1f not found: ID does not exist" containerID="72807527da43c9046ea7ddfaa12e1f796d0d5154722137249916319479ae2c1f"
Feb 17 22:07:32 crc kubenswrapper[4793]: I0217 22:07:32.253235 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72807527da43c9046ea7ddfaa12e1f796d0d5154722137249916319479ae2c1f"} err="failed to get container status \"72807527da43c9046ea7ddfaa12e1f796d0d5154722137249916319479ae2c1f\": rpc error: code = NotFound desc = could not find container \"72807527da43c9046ea7ddfaa12e1f796d0d5154722137249916319479ae2c1f\": container with ID starting with 72807527da43c9046ea7ddfaa12e1f796d0d5154722137249916319479ae2c1f not found: ID does not exist"
Feb 17 22:07:32 crc kubenswrapper[4793]: I0217 22:07:32.253275 4793 scope.go:117] "RemoveContainer" containerID="9d171d75e7db65ac69206a8895ecb3304e537df06b4032c6ccfe34e9b3b8b3b9"
Feb 17 22:07:32 crc kubenswrapper[4793]: E0217 22:07:32.253536 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d171d75e7db65ac69206a8895ecb3304e537df06b4032c6ccfe34e9b3b8b3b9\": container with ID starting with 9d171d75e7db65ac69206a8895ecb3304e537df06b4032c6ccfe34e9b3b8b3b9 not found: ID does not exist" containerID="9d171d75e7db65ac69206a8895ecb3304e537df06b4032c6ccfe34e9b3b8b3b9"
Feb 17 22:07:32 crc kubenswrapper[4793]: I0217 22:07:32.253585 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d171d75e7db65ac69206a8895ecb3304e537df06b4032c6ccfe34e9b3b8b3b9"} err="failed to get container status \"9d171d75e7db65ac69206a8895ecb3304e537df06b4032c6ccfe34e9b3b8b3b9\": rpc error: code = NotFound desc = could not find container \"9d171d75e7db65ac69206a8895ecb3304e537df06b4032c6ccfe34e9b3b8b3b9\": container with ID starting with 9d171d75e7db65ac69206a8895ecb3304e537df06b4032c6ccfe34e9b3b8b3b9 not found: ID does not exist"
Feb 17 22:07:33 crc kubenswrapper[4793]: I0217 22:07:33.553985 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="064a2484-c421-44d0-a607-7b02fbda3265" path="/var/lib/kubelet/pods/064a2484-c421-44d0-a607-7b02fbda3265/volumes"
Feb 17 22:07:38 crc kubenswrapper[4793]: I0217 22:07:38.540372 4793 scope.go:117] "RemoveContainer" containerID="9e949f743af1e47fd05a726b259da77ca2a07f9f991e13bf29d20179308f4e2f"
Feb 17 22:07:38 crc kubenswrapper[4793]: E0217 22:07:38.541390 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:07:40 crc kubenswrapper[4793]: I0217 22:07:40.538283 4793 scope.go:117] "RemoveContainer" containerID="5ed6acf5ccd1c96205f3ec1b1da4fb06c9c4a63f00f0514d2eccd40f24e4caea"
Feb 17 22:07:40 crc kubenswrapper[4793]: E0217 22:07:40.538798 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 22:07:52 crc kubenswrapper[4793]: I0217 22:07:52.538931 4793 scope.go:117] "RemoveContainer" containerID="9e949f743af1e47fd05a726b259da77ca2a07f9f991e13bf29d20179308f4e2f"
Feb 17 22:07:52 crc kubenswrapper[4793]: E0217 22:07:52.541252 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:07:53 crc kubenswrapper[4793]: I0217 22:07:53.539074 4793 scope.go:117] "RemoveContainer" containerID="5ed6acf5ccd1c96205f3ec1b1da4fb06c9c4a63f00f0514d2eccd40f24e4caea"
Feb 17 22:07:53 crc kubenswrapper[4793]: E0217 22:07:53.539579 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 22:08:04 crc kubenswrapper[4793]: I0217 22:08:04.539375 4793 scope.go:117] "RemoveContainer" containerID="5ed6acf5ccd1c96205f3ec1b1da4fb06c9c4a63f00f0514d2eccd40f24e4caea"
Feb 17 22:08:04 crc kubenswrapper[4793]: E0217 22:08:04.540402 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 22:08:07 crc kubenswrapper[4793]: I0217 22:08:07.539611 4793 scope.go:117] "RemoveContainer" containerID="9e949f743af1e47fd05a726b259da77ca2a07f9f991e13bf29d20179308f4e2f"
Feb 17 22:08:07 crc kubenswrapper[4793]: E0217 22:08:07.540722 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:08:16 crc kubenswrapper[4793]: I0217 22:08:16.539752 4793 scope.go:117] "RemoveContainer" containerID="5ed6acf5ccd1c96205f3ec1b1da4fb06c9c4a63f00f0514d2eccd40f24e4caea"
Feb 17 22:08:16 crc kubenswrapper[4793]: E0217 22:08:16.541096 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 22:08:18 crc kubenswrapper[4793]: I0217 22:08:18.539643 4793 scope.go:117] "RemoveContainer" containerID="9e949f743af1e47fd05a726b259da77ca2a07f9f991e13bf29d20179308f4e2f"
Feb 17 22:08:18 crc kubenswrapper[4793]: E0217 22:08:18.540449 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:08:30 crc kubenswrapper[4793]: I0217 22:08:30.539368 4793 scope.go:117] "RemoveContainer" containerID="9e949f743af1e47fd05a726b259da77ca2a07f9f991e13bf29d20179308f4e2f"
Feb 17 22:08:30 crc kubenswrapper[4793]: I0217 22:08:30.540001 4793 scope.go:117] "RemoveContainer" containerID="5ed6acf5ccd1c96205f3ec1b1da4fb06c9c4a63f00f0514d2eccd40f24e4caea"
Feb 17 22:08:30 crc kubenswrapper[4793]: E0217 22:08:30.540194 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:08:30 crc kubenswrapper[4793]: E0217 22:08:30.540498 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 22:08:42 crc kubenswrapper[4793]: I0217 22:08:42.539066 4793 scope.go:117] "RemoveContainer" containerID="9e949f743af1e47fd05a726b259da77ca2a07f9f991e13bf29d20179308f4e2f"
Feb 17 22:08:42 crc kubenswrapper[4793]: E0217 22:08:42.540201 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:08:44 crc kubenswrapper[4793]: I0217 22:08:44.539619 4793 scope.go:117] "RemoveContainer" containerID="5ed6acf5ccd1c96205f3ec1b1da4fb06c9c4a63f00f0514d2eccd40f24e4caea"
Feb 17 22:08:44 crc kubenswrapper[4793]: E0217 22:08:44.540510 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 22:08:50 crc kubenswrapper[4793]: I0217 22:08:50.663298 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pbfz5"]
Feb 17 22:08:50 crc kubenswrapper[4793]: E0217 22:08:50.664291 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="064a2484-c421-44d0-a607-7b02fbda3265" containerName="registry-server"
Feb 17 22:08:50 crc kubenswrapper[4793]: I0217 22:08:50.664306 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="064a2484-c421-44d0-a607-7b02fbda3265" containerName="registry-server"
Feb 17 22:08:50 crc kubenswrapper[4793]: E0217 22:08:50.664328 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="064a2484-c421-44d0-a607-7b02fbda3265" containerName="extract-utilities"
Feb 17 22:08:50 crc kubenswrapper[4793]: I0217 22:08:50.664336 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="064a2484-c421-44d0-a607-7b02fbda3265" containerName="extract-utilities"
Feb 17 22:08:50 crc kubenswrapper[4793]: E0217 22:08:50.664380 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="064a2484-c421-44d0-a607-7b02fbda3265" containerName="extract-content"
Feb 17 22:08:50 crc kubenswrapper[4793]: I0217 22:08:50.664389 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="064a2484-c421-44d0-a607-7b02fbda3265" containerName="extract-content"
Feb 17 22:08:50 crc kubenswrapper[4793]: I0217 22:08:50.664615 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="064a2484-c421-44d0-a607-7b02fbda3265" containerName="registry-server"
Feb 17 22:08:50 crc kubenswrapper[4793]: I0217 22:08:50.666471 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pbfz5"
Feb 17 22:08:50 crc kubenswrapper[4793]: I0217 22:08:50.700883 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pbfz5"]
Feb 17 22:08:50 crc kubenswrapper[4793]: I0217 22:08:50.776948 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5d61c74-d88e-41ec-9563-0130f0772231-catalog-content\") pod \"community-operators-pbfz5\" (UID: \"e5d61c74-d88e-41ec-9563-0130f0772231\") " pod="openshift-marketplace/community-operators-pbfz5"
Feb 17 22:08:50 crc kubenswrapper[4793]: I0217 22:08:50.777103 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzkwq\" (UniqueName: \"kubernetes.io/projected/e5d61c74-d88e-41ec-9563-0130f0772231-kube-api-access-wzkwq\") pod \"community-operators-pbfz5\" (UID: \"e5d61c74-d88e-41ec-9563-0130f0772231\") " pod="openshift-marketplace/community-operators-pbfz5"
Feb 17 22:08:50 crc kubenswrapper[4793]: I0217 22:08:50.777238 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5d61c74-d88e-41ec-9563-0130f0772231-utilities\") pod \"community-operators-pbfz5\" (UID: \"e5d61c74-d88e-41ec-9563-0130f0772231\") " pod="openshift-marketplace/community-operators-pbfz5"
Feb 17 22:08:50 crc kubenswrapper[4793]: I0217 22:08:50.879389 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5d61c74-d88e-41ec-9563-0130f0772231-utilities\") pod \"community-operators-pbfz5\" (UID: \"e5d61c74-d88e-41ec-9563-0130f0772231\") " pod="openshift-marketplace/community-operators-pbfz5"
Feb 17 22:08:50 crc kubenswrapper[4793]: I0217 22:08:50.879479 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5d61c74-d88e-41ec-9563-0130f0772231-catalog-content\") pod \"community-operators-pbfz5\" (UID: \"e5d61c74-d88e-41ec-9563-0130f0772231\") " pod="openshift-marketplace/community-operators-pbfz5"
Feb 17 22:08:50 crc kubenswrapper[4793]: I0217 22:08:50.879568 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzkwq\" (UniqueName: \"kubernetes.io/projected/e5d61c74-d88e-41ec-9563-0130f0772231-kube-api-access-wzkwq\") pod \"community-operators-pbfz5\" (UID: \"e5d61c74-d88e-41ec-9563-0130f0772231\") " pod="openshift-marketplace/community-operators-pbfz5"
Feb 17 22:08:50 crc kubenswrapper[4793]: I0217 22:08:50.880567 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5d61c74-d88e-41ec-9563-0130f0772231-catalog-content\") pod \"community-operators-pbfz5\" (UID: \"e5d61c74-d88e-41ec-9563-0130f0772231\") " pod="openshift-marketplace/community-operators-pbfz5"
Feb 17 22:08:50 crc kubenswrapper[4793]: I0217 22:08:50.880637 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5d61c74-d88e-41ec-9563-0130f0772231-utilities\") pod \"community-operators-pbfz5\" (UID: \"e5d61c74-d88e-41ec-9563-0130f0772231\") " pod="openshift-marketplace/community-operators-pbfz5"
Feb 17 22:08:50 crc kubenswrapper[4793]: I0217 22:08:50.899561 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzkwq\" (UniqueName: \"kubernetes.io/projected/e5d61c74-d88e-41ec-9563-0130f0772231-kube-api-access-wzkwq\") pod \"community-operators-pbfz5\" (UID: \"e5d61c74-d88e-41ec-9563-0130f0772231\") " pod="openshift-marketplace/community-operators-pbfz5"
Feb 17 22:08:50 crc kubenswrapper[4793]: I0217 22:08:50.993240 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pbfz5"
Feb 17 22:08:51 crc kubenswrapper[4793]: I0217 22:08:51.525614 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pbfz5"]
Feb 17 22:08:52 crc kubenswrapper[4793]: I0217 22:08:52.074529 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jzmj2"]
Feb 17 22:08:52 crc kubenswrapper[4793]: I0217 22:08:52.077182 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jzmj2"
Feb 17 22:08:52 crc kubenswrapper[4793]: I0217 22:08:52.098433 4793 generic.go:334] "Generic (PLEG): container finished" podID="e5d61c74-d88e-41ec-9563-0130f0772231" containerID="edebbd103bbcfe8d2ae47fd8c62c7e765ae697e752b2afe23469c506cb47e5e8" exitCode=0
Feb 17 22:08:52 crc kubenswrapper[4793]: I0217 22:08:52.098522 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pbfz5" event={"ID":"e5d61c74-d88e-41ec-9563-0130f0772231","Type":"ContainerDied","Data":"edebbd103bbcfe8d2ae47fd8c62c7e765ae697e752b2afe23469c506cb47e5e8"}
Feb 17 22:08:52 crc kubenswrapper[4793]: I0217 22:08:52.098578 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pbfz5" event={"ID":"e5d61c74-d88e-41ec-9563-0130f0772231","Type":"ContainerStarted","Data":"e876f9e8e8781ca34ec02138e82d7a114b399b0276ca32e3a59492a0bc83c8a8"}
Feb 17 22:08:52 crc kubenswrapper[4793]: I0217 22:08:52.111743 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jzmj2"]
Feb 17 22:08:52 crc kubenswrapper[4793]: I0217 22:08:52.214601 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/104f786e-e88e-4e02-916d-7ff0c6b67cb6-catalog-content\") pod \"redhat-operators-jzmj2\" (UID: \"104f786e-e88e-4e02-916d-7ff0c6b67cb6\") " pod="openshift-marketplace/redhat-operators-jzmj2"
Feb 17 22:08:52 crc kubenswrapper[4793]: I0217 22:08:52.214653 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8vfx\" (UniqueName: \"kubernetes.io/projected/104f786e-e88e-4e02-916d-7ff0c6b67cb6-kube-api-access-l8vfx\") pod \"redhat-operators-jzmj2\" (UID: \"104f786e-e88e-4e02-916d-7ff0c6b67cb6\") " pod="openshift-marketplace/redhat-operators-jzmj2"
Feb 17 22:08:52 crc
kubenswrapper[4793]: I0217 22:08:52.214725 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/104f786e-e88e-4e02-916d-7ff0c6b67cb6-utilities\") pod \"redhat-operators-jzmj2\" (UID: \"104f786e-e88e-4e02-916d-7ff0c6b67cb6\") " pod="openshift-marketplace/redhat-operators-jzmj2" Feb 17 22:08:52 crc kubenswrapper[4793]: I0217 22:08:52.317833 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/104f786e-e88e-4e02-916d-7ff0c6b67cb6-catalog-content\") pod \"redhat-operators-jzmj2\" (UID: \"104f786e-e88e-4e02-916d-7ff0c6b67cb6\") " pod="openshift-marketplace/redhat-operators-jzmj2" Feb 17 22:08:52 crc kubenswrapper[4793]: I0217 22:08:52.317895 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8vfx\" (UniqueName: \"kubernetes.io/projected/104f786e-e88e-4e02-916d-7ff0c6b67cb6-kube-api-access-l8vfx\") pod \"redhat-operators-jzmj2\" (UID: \"104f786e-e88e-4e02-916d-7ff0c6b67cb6\") " pod="openshift-marketplace/redhat-operators-jzmj2" Feb 17 22:08:52 crc kubenswrapper[4793]: I0217 22:08:52.317941 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/104f786e-e88e-4e02-916d-7ff0c6b67cb6-utilities\") pod \"redhat-operators-jzmj2\" (UID: \"104f786e-e88e-4e02-916d-7ff0c6b67cb6\") " pod="openshift-marketplace/redhat-operators-jzmj2" Feb 17 22:08:52 crc kubenswrapper[4793]: I0217 22:08:52.318323 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/104f786e-e88e-4e02-916d-7ff0c6b67cb6-catalog-content\") pod \"redhat-operators-jzmj2\" (UID: \"104f786e-e88e-4e02-916d-7ff0c6b67cb6\") " pod="openshift-marketplace/redhat-operators-jzmj2" Feb 17 22:08:52 crc kubenswrapper[4793]: I0217 22:08:52.318430 
4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/104f786e-e88e-4e02-916d-7ff0c6b67cb6-utilities\") pod \"redhat-operators-jzmj2\" (UID: \"104f786e-e88e-4e02-916d-7ff0c6b67cb6\") " pod="openshift-marketplace/redhat-operators-jzmj2" Feb 17 22:08:52 crc kubenswrapper[4793]: I0217 22:08:52.346117 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8vfx\" (UniqueName: \"kubernetes.io/projected/104f786e-e88e-4e02-916d-7ff0c6b67cb6-kube-api-access-l8vfx\") pod \"redhat-operators-jzmj2\" (UID: \"104f786e-e88e-4e02-916d-7ff0c6b67cb6\") " pod="openshift-marketplace/redhat-operators-jzmj2" Feb 17 22:08:52 crc kubenswrapper[4793]: I0217 22:08:52.430640 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jzmj2" Feb 17 22:08:52 crc kubenswrapper[4793]: I0217 22:08:52.915360 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jzmj2"] Feb 17 22:08:53 crc kubenswrapper[4793]: I0217 22:08:53.108170 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jzmj2" event={"ID":"104f786e-e88e-4e02-916d-7ff0c6b67cb6","Type":"ContainerStarted","Data":"4c8a8c2f4d810c8d1e01ed89e964d827a554c0b7e622606fabdb09d59b0237c9"} Feb 17 22:08:53 crc kubenswrapper[4793]: I0217 22:08:53.108212 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jzmj2" event={"ID":"104f786e-e88e-4e02-916d-7ff0c6b67cb6","Type":"ContainerStarted","Data":"c5b5f6fb3133e6c96af63b0eec3efc0a30702f4508e13b3e8ee090184708e162"} Feb 17 22:08:53 crc kubenswrapper[4793]: I0217 22:08:53.111343 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pbfz5" 
event={"ID":"e5d61c74-d88e-41ec-9563-0130f0772231","Type":"ContainerStarted","Data":"082e72944e24f74f1a5ad21d9ed4fb3df0d06e39e5073863501576d3473d7f80"} Feb 17 22:08:53 crc kubenswrapper[4793]: I0217 22:08:53.538540 4793 scope.go:117] "RemoveContainer" containerID="9e949f743af1e47fd05a726b259da77ca2a07f9f991e13bf29d20179308f4e2f" Feb 17 22:08:53 crc kubenswrapper[4793]: E0217 22:08:53.539152 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:08:54 crc kubenswrapper[4793]: I0217 22:08:54.128406 4793 generic.go:334] "Generic (PLEG): container finished" podID="104f786e-e88e-4e02-916d-7ff0c6b67cb6" containerID="4c8a8c2f4d810c8d1e01ed89e964d827a554c0b7e622606fabdb09d59b0237c9" exitCode=0 Feb 17 22:08:54 crc kubenswrapper[4793]: I0217 22:08:54.128474 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jzmj2" event={"ID":"104f786e-e88e-4e02-916d-7ff0c6b67cb6","Type":"ContainerDied","Data":"4c8a8c2f4d810c8d1e01ed89e964d827a554c0b7e622606fabdb09d59b0237c9"} Feb 17 22:08:54 crc kubenswrapper[4793]: I0217 22:08:54.132534 4793 generic.go:334] "Generic (PLEG): container finished" podID="e5d61c74-d88e-41ec-9563-0130f0772231" containerID="082e72944e24f74f1a5ad21d9ed4fb3df0d06e39e5073863501576d3473d7f80" exitCode=0 Feb 17 22:08:54 crc kubenswrapper[4793]: I0217 22:08:54.132629 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pbfz5" event={"ID":"e5d61c74-d88e-41ec-9563-0130f0772231","Type":"ContainerDied","Data":"082e72944e24f74f1a5ad21d9ed4fb3df0d06e39e5073863501576d3473d7f80"} Feb 17 22:08:55 crc kubenswrapper[4793]: I0217 22:08:55.142653 4793 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-pbfz5" event={"ID":"e5d61c74-d88e-41ec-9563-0130f0772231","Type":"ContainerStarted","Data":"1db23be6768dcd7c4990c62f40226274d577ec6f7e27cc78189641b026e56892"} Feb 17 22:08:55 crc kubenswrapper[4793]: I0217 22:08:55.198529 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pbfz5" podStartSLOduration=2.736307479 podStartE2EDuration="5.198505838s" podCreationTimestamp="2026-02-17 22:08:50 +0000 UTC" firstStartedPulling="2026-02-17 22:08:52.101945763 +0000 UTC m=+7207.393644084" lastFinishedPulling="2026-02-17 22:08:54.564144132 +0000 UTC m=+7209.855842443" observedRunningTime="2026-02-17 22:08:55.186707047 +0000 UTC m=+7210.478405358" watchObservedRunningTime="2026-02-17 22:08:55.198505838 +0000 UTC m=+7210.490204169" Feb 17 22:08:56 crc kubenswrapper[4793]: I0217 22:08:56.155368 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jzmj2" event={"ID":"104f786e-e88e-4e02-916d-7ff0c6b67cb6","Type":"ContainerStarted","Data":"c1a06d142579e1442ba39b459aea190e543fd02e9374e7878608e5af8d77bbed"} Feb 17 22:08:58 crc kubenswrapper[4793]: I0217 22:08:58.180127 4793 generic.go:334] "Generic (PLEG): container finished" podID="104f786e-e88e-4e02-916d-7ff0c6b67cb6" containerID="c1a06d142579e1442ba39b459aea190e543fd02e9374e7878608e5af8d77bbed" exitCode=0 Feb 17 22:08:58 crc kubenswrapper[4793]: I0217 22:08:58.180229 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jzmj2" event={"ID":"104f786e-e88e-4e02-916d-7ff0c6b67cb6","Type":"ContainerDied","Data":"c1a06d142579e1442ba39b459aea190e543fd02e9374e7878608e5af8d77bbed"} Feb 17 22:08:58 crc kubenswrapper[4793]: I0217 22:08:58.539997 4793 scope.go:117] "RemoveContainer" containerID="5ed6acf5ccd1c96205f3ec1b1da4fb06c9c4a63f00f0514d2eccd40f24e4caea" Feb 17 22:08:58 crc kubenswrapper[4793]: E0217 22:08:58.540674 4793 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:09:00 crc kubenswrapper[4793]: I0217 22:09:00.211958 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jzmj2" event={"ID":"104f786e-e88e-4e02-916d-7ff0c6b67cb6","Type":"ContainerStarted","Data":"daba8cf5d72fbf45e6c59d97681139b2fb1b8b041e1079726cfe7d022c35c152"} Feb 17 22:09:00 crc kubenswrapper[4793]: I0217 22:09:00.244821 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jzmj2" podStartSLOduration=3.056924865 podStartE2EDuration="8.2447808s" podCreationTimestamp="2026-02-17 22:08:52 +0000 UTC" firstStartedPulling="2026-02-17 22:08:54.131357607 +0000 UTC m=+7209.423055918" lastFinishedPulling="2026-02-17 22:08:59.319213502 +0000 UTC m=+7214.610911853" observedRunningTime="2026-02-17 22:09:00.232684202 +0000 UTC m=+7215.524382553" watchObservedRunningTime="2026-02-17 22:09:00.2447808 +0000 UTC m=+7215.536479121" Feb 17 22:09:00 crc kubenswrapper[4793]: I0217 22:09:00.994421 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pbfz5" Feb 17 22:09:00 crc kubenswrapper[4793]: I0217 22:09:00.994767 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pbfz5" Feb 17 22:09:01 crc kubenswrapper[4793]: I0217 22:09:01.078300 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pbfz5" Feb 17 22:09:01 crc kubenswrapper[4793]: I0217 
22:09:01.278916 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pbfz5" Feb 17 22:09:02 crc kubenswrapper[4793]: I0217 22:09:02.046736 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pbfz5"] Feb 17 22:09:02 crc kubenswrapper[4793]: I0217 22:09:02.431188 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jzmj2" Feb 17 22:09:02 crc kubenswrapper[4793]: I0217 22:09:02.432102 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jzmj2" Feb 17 22:09:03 crc kubenswrapper[4793]: I0217 22:09:03.242829 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pbfz5" podUID="e5d61c74-d88e-41ec-9563-0130f0772231" containerName="registry-server" containerID="cri-o://1db23be6768dcd7c4990c62f40226274d577ec6f7e27cc78189641b026e56892" gracePeriod=2 Feb 17 22:09:03 crc kubenswrapper[4793]: I0217 22:09:03.496448 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jzmj2" podUID="104f786e-e88e-4e02-916d-7ff0c6b67cb6" containerName="registry-server" probeResult="failure" output=< Feb 17 22:09:03 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Feb 17 22:09:03 crc kubenswrapper[4793]: > Feb 17 22:09:03 crc kubenswrapper[4793]: I0217 22:09:03.781365 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pbfz5" Feb 17 22:09:03 crc kubenswrapper[4793]: I0217 22:09:03.891727 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzkwq\" (UniqueName: \"kubernetes.io/projected/e5d61c74-d88e-41ec-9563-0130f0772231-kube-api-access-wzkwq\") pod \"e5d61c74-d88e-41ec-9563-0130f0772231\" (UID: \"e5d61c74-d88e-41ec-9563-0130f0772231\") " Feb 17 22:09:03 crc kubenswrapper[4793]: I0217 22:09:03.891955 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5d61c74-d88e-41ec-9563-0130f0772231-utilities\") pod \"e5d61c74-d88e-41ec-9563-0130f0772231\" (UID: \"e5d61c74-d88e-41ec-9563-0130f0772231\") " Feb 17 22:09:03 crc kubenswrapper[4793]: I0217 22:09:03.891985 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5d61c74-d88e-41ec-9563-0130f0772231-catalog-content\") pod \"e5d61c74-d88e-41ec-9563-0130f0772231\" (UID: \"e5d61c74-d88e-41ec-9563-0130f0772231\") " Feb 17 22:09:03 crc kubenswrapper[4793]: I0217 22:09:03.892933 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5d61c74-d88e-41ec-9563-0130f0772231-utilities" (OuterVolumeSpecName: "utilities") pod "e5d61c74-d88e-41ec-9563-0130f0772231" (UID: "e5d61c74-d88e-41ec-9563-0130f0772231"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 22:09:03 crc kubenswrapper[4793]: I0217 22:09:03.903931 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5d61c74-d88e-41ec-9563-0130f0772231-kube-api-access-wzkwq" (OuterVolumeSpecName: "kube-api-access-wzkwq") pod "e5d61c74-d88e-41ec-9563-0130f0772231" (UID: "e5d61c74-d88e-41ec-9563-0130f0772231"). InnerVolumeSpecName "kube-api-access-wzkwq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 22:09:03 crc kubenswrapper[4793]: I0217 22:09:03.978166 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5d61c74-d88e-41ec-9563-0130f0772231-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e5d61c74-d88e-41ec-9563-0130f0772231" (UID: "e5d61c74-d88e-41ec-9563-0130f0772231"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 22:09:03 crc kubenswrapper[4793]: I0217 22:09:03.994348 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzkwq\" (UniqueName: \"kubernetes.io/projected/e5d61c74-d88e-41ec-9563-0130f0772231-kube-api-access-wzkwq\") on node \"crc\" DevicePath \"\"" Feb 17 22:09:03 crc kubenswrapper[4793]: I0217 22:09:03.994398 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5d61c74-d88e-41ec-9563-0130f0772231-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 22:09:03 crc kubenswrapper[4793]: I0217 22:09:03.994416 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5d61c74-d88e-41ec-9563-0130f0772231-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 22:09:04 crc kubenswrapper[4793]: I0217 22:09:04.254931 4793 generic.go:334] "Generic (PLEG): container finished" podID="e5d61c74-d88e-41ec-9563-0130f0772231" containerID="1db23be6768dcd7c4990c62f40226274d577ec6f7e27cc78189641b026e56892" exitCode=0 Feb 17 22:09:04 crc kubenswrapper[4793]: I0217 22:09:04.254988 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pbfz5" Feb 17 22:09:04 crc kubenswrapper[4793]: I0217 22:09:04.254995 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pbfz5" event={"ID":"e5d61c74-d88e-41ec-9563-0130f0772231","Type":"ContainerDied","Data":"1db23be6768dcd7c4990c62f40226274d577ec6f7e27cc78189641b026e56892"} Feb 17 22:09:04 crc kubenswrapper[4793]: I0217 22:09:04.255761 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pbfz5" event={"ID":"e5d61c74-d88e-41ec-9563-0130f0772231","Type":"ContainerDied","Data":"e876f9e8e8781ca34ec02138e82d7a114b399b0276ca32e3a59492a0bc83c8a8"} Feb 17 22:09:04 crc kubenswrapper[4793]: I0217 22:09:04.255819 4793 scope.go:117] "RemoveContainer" containerID="1db23be6768dcd7c4990c62f40226274d577ec6f7e27cc78189641b026e56892" Feb 17 22:09:04 crc kubenswrapper[4793]: I0217 22:09:04.292086 4793 scope.go:117] "RemoveContainer" containerID="082e72944e24f74f1a5ad21d9ed4fb3df0d06e39e5073863501576d3473d7f80" Feb 17 22:09:04 crc kubenswrapper[4793]: I0217 22:09:04.309606 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pbfz5"] Feb 17 22:09:04 crc kubenswrapper[4793]: I0217 22:09:04.318286 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pbfz5"] Feb 17 22:09:04 crc kubenswrapper[4793]: I0217 22:09:04.326505 4793 scope.go:117] "RemoveContainer" containerID="edebbd103bbcfe8d2ae47fd8c62c7e765ae697e752b2afe23469c506cb47e5e8" Feb 17 22:09:04 crc kubenswrapper[4793]: I0217 22:09:04.394446 4793 scope.go:117] "RemoveContainer" containerID="1db23be6768dcd7c4990c62f40226274d577ec6f7e27cc78189641b026e56892" Feb 17 22:09:04 crc kubenswrapper[4793]: E0217 22:09:04.394916 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"1db23be6768dcd7c4990c62f40226274d577ec6f7e27cc78189641b026e56892\": container with ID starting with 1db23be6768dcd7c4990c62f40226274d577ec6f7e27cc78189641b026e56892 not found: ID does not exist" containerID="1db23be6768dcd7c4990c62f40226274d577ec6f7e27cc78189641b026e56892" Feb 17 22:09:04 crc kubenswrapper[4793]: I0217 22:09:04.394955 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1db23be6768dcd7c4990c62f40226274d577ec6f7e27cc78189641b026e56892"} err="failed to get container status \"1db23be6768dcd7c4990c62f40226274d577ec6f7e27cc78189641b026e56892\": rpc error: code = NotFound desc = could not find container \"1db23be6768dcd7c4990c62f40226274d577ec6f7e27cc78189641b026e56892\": container with ID starting with 1db23be6768dcd7c4990c62f40226274d577ec6f7e27cc78189641b026e56892 not found: ID does not exist" Feb 17 22:09:04 crc kubenswrapper[4793]: I0217 22:09:04.394981 4793 scope.go:117] "RemoveContainer" containerID="082e72944e24f74f1a5ad21d9ed4fb3df0d06e39e5073863501576d3473d7f80" Feb 17 22:09:04 crc kubenswrapper[4793]: E0217 22:09:04.395357 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"082e72944e24f74f1a5ad21d9ed4fb3df0d06e39e5073863501576d3473d7f80\": container with ID starting with 082e72944e24f74f1a5ad21d9ed4fb3df0d06e39e5073863501576d3473d7f80 not found: ID does not exist" containerID="082e72944e24f74f1a5ad21d9ed4fb3df0d06e39e5073863501576d3473d7f80" Feb 17 22:09:04 crc kubenswrapper[4793]: I0217 22:09:04.395401 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"082e72944e24f74f1a5ad21d9ed4fb3df0d06e39e5073863501576d3473d7f80"} err="failed to get container status \"082e72944e24f74f1a5ad21d9ed4fb3df0d06e39e5073863501576d3473d7f80\": rpc error: code = NotFound desc = could not find container \"082e72944e24f74f1a5ad21d9ed4fb3df0d06e39e5073863501576d3473d7f80\": container with ID 
starting with 082e72944e24f74f1a5ad21d9ed4fb3df0d06e39e5073863501576d3473d7f80 not found: ID does not exist" Feb 17 22:09:04 crc kubenswrapper[4793]: I0217 22:09:04.395426 4793 scope.go:117] "RemoveContainer" containerID="edebbd103bbcfe8d2ae47fd8c62c7e765ae697e752b2afe23469c506cb47e5e8" Feb 17 22:09:04 crc kubenswrapper[4793]: E0217 22:09:04.395728 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edebbd103bbcfe8d2ae47fd8c62c7e765ae697e752b2afe23469c506cb47e5e8\": container with ID starting with edebbd103bbcfe8d2ae47fd8c62c7e765ae697e752b2afe23469c506cb47e5e8 not found: ID does not exist" containerID="edebbd103bbcfe8d2ae47fd8c62c7e765ae697e752b2afe23469c506cb47e5e8" Feb 17 22:09:04 crc kubenswrapper[4793]: I0217 22:09:04.395750 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edebbd103bbcfe8d2ae47fd8c62c7e765ae697e752b2afe23469c506cb47e5e8"} err="failed to get container status \"edebbd103bbcfe8d2ae47fd8c62c7e765ae697e752b2afe23469c506cb47e5e8\": rpc error: code = NotFound desc = could not find container \"edebbd103bbcfe8d2ae47fd8c62c7e765ae697e752b2afe23469c506cb47e5e8\": container with ID starting with edebbd103bbcfe8d2ae47fd8c62c7e765ae697e752b2afe23469c506cb47e5e8 not found: ID does not exist" Feb 17 22:09:05 crc kubenswrapper[4793]: I0217 22:09:05.562926 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5d61c74-d88e-41ec-9563-0130f0772231" path="/var/lib/kubelet/pods/e5d61c74-d88e-41ec-9563-0130f0772231/volumes" Feb 17 22:09:08 crc kubenswrapper[4793]: I0217 22:09:08.540041 4793 scope.go:117] "RemoveContainer" containerID="9e949f743af1e47fd05a726b259da77ca2a07f9f991e13bf29d20179308f4e2f" Feb 17 22:09:08 crc kubenswrapper[4793]: E0217 22:09:08.541051 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:09:12 crc kubenswrapper[4793]: I0217 22:09:12.534042 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jzmj2" Feb 17 22:09:12 crc kubenswrapper[4793]: I0217 22:09:12.538904 4793 scope.go:117] "RemoveContainer" containerID="5ed6acf5ccd1c96205f3ec1b1da4fb06c9c4a63f00f0514d2eccd40f24e4caea" Feb 17 22:09:12 crc kubenswrapper[4793]: E0217 22:09:12.539277 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:09:12 crc kubenswrapper[4793]: I0217 22:09:12.605186 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jzmj2" Feb 17 22:09:12 crc kubenswrapper[4793]: I0217 22:09:12.787627 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jzmj2"] Feb 17 22:09:14 crc kubenswrapper[4793]: I0217 22:09:14.364484 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jzmj2" podUID="104f786e-e88e-4e02-916d-7ff0c6b67cb6" containerName="registry-server" containerID="cri-o://daba8cf5d72fbf45e6c59d97681139b2fb1b8b041e1079726cfe7d022c35c152" gracePeriod=2 Feb 17 22:09:14 crc kubenswrapper[4793]: I0217 22:09:14.836271 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jzmj2" Feb 17 22:09:14 crc kubenswrapper[4793]: I0217 22:09:14.984191 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/104f786e-e88e-4e02-916d-7ff0c6b67cb6-utilities\") pod \"104f786e-e88e-4e02-916d-7ff0c6b67cb6\" (UID: \"104f786e-e88e-4e02-916d-7ff0c6b67cb6\") " Feb 17 22:09:14 crc kubenswrapper[4793]: I0217 22:09:14.984457 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/104f786e-e88e-4e02-916d-7ff0c6b67cb6-catalog-content\") pod \"104f786e-e88e-4e02-916d-7ff0c6b67cb6\" (UID: \"104f786e-e88e-4e02-916d-7ff0c6b67cb6\") " Feb 17 22:09:14 crc kubenswrapper[4793]: I0217 22:09:14.984678 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8vfx\" (UniqueName: \"kubernetes.io/projected/104f786e-e88e-4e02-916d-7ff0c6b67cb6-kube-api-access-l8vfx\") pod \"104f786e-e88e-4e02-916d-7ff0c6b67cb6\" (UID: \"104f786e-e88e-4e02-916d-7ff0c6b67cb6\") " Feb 17 22:09:14 crc kubenswrapper[4793]: I0217 22:09:14.985622 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/104f786e-e88e-4e02-916d-7ff0c6b67cb6-utilities" (OuterVolumeSpecName: "utilities") pod "104f786e-e88e-4e02-916d-7ff0c6b67cb6" (UID: "104f786e-e88e-4e02-916d-7ff0c6b67cb6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 22:09:14 crc kubenswrapper[4793]: I0217 22:09:14.987214 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/104f786e-e88e-4e02-916d-7ff0c6b67cb6-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 22:09:14 crc kubenswrapper[4793]: I0217 22:09:14.997309 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/104f786e-e88e-4e02-916d-7ff0c6b67cb6-kube-api-access-l8vfx" (OuterVolumeSpecName: "kube-api-access-l8vfx") pod "104f786e-e88e-4e02-916d-7ff0c6b67cb6" (UID: "104f786e-e88e-4e02-916d-7ff0c6b67cb6"). InnerVolumeSpecName "kube-api-access-l8vfx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 22:09:15 crc kubenswrapper[4793]: I0217 22:09:15.090680 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8vfx\" (UniqueName: \"kubernetes.io/projected/104f786e-e88e-4e02-916d-7ff0c6b67cb6-kube-api-access-l8vfx\") on node \"crc\" DevicePath \"\"" Feb 17 22:09:15 crc kubenswrapper[4793]: I0217 22:09:15.138118 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/104f786e-e88e-4e02-916d-7ff0c6b67cb6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "104f786e-e88e-4e02-916d-7ff0c6b67cb6" (UID: "104f786e-e88e-4e02-916d-7ff0c6b67cb6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 22:09:15 crc kubenswrapper[4793]: I0217 22:09:15.193154 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/104f786e-e88e-4e02-916d-7ff0c6b67cb6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 22:09:15 crc kubenswrapper[4793]: I0217 22:09:15.379265 4793 generic.go:334] "Generic (PLEG): container finished" podID="104f786e-e88e-4e02-916d-7ff0c6b67cb6" containerID="daba8cf5d72fbf45e6c59d97681139b2fb1b8b041e1079726cfe7d022c35c152" exitCode=0 Feb 17 22:09:15 crc kubenswrapper[4793]: I0217 22:09:15.379314 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jzmj2" event={"ID":"104f786e-e88e-4e02-916d-7ff0c6b67cb6","Type":"ContainerDied","Data":"daba8cf5d72fbf45e6c59d97681139b2fb1b8b041e1079726cfe7d022c35c152"} Feb 17 22:09:15 crc kubenswrapper[4793]: I0217 22:09:15.379352 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jzmj2" event={"ID":"104f786e-e88e-4e02-916d-7ff0c6b67cb6","Type":"ContainerDied","Data":"c5b5f6fb3133e6c96af63b0eec3efc0a30702f4508e13b3e8ee090184708e162"} Feb 17 22:09:15 crc kubenswrapper[4793]: I0217 22:09:15.379372 4793 scope.go:117] "RemoveContainer" containerID="daba8cf5d72fbf45e6c59d97681139b2fb1b8b041e1079726cfe7d022c35c152" Feb 17 22:09:15 crc kubenswrapper[4793]: I0217 22:09:15.379616 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jzmj2" Feb 17 22:09:15 crc kubenswrapper[4793]: I0217 22:09:15.415072 4793 scope.go:117] "RemoveContainer" containerID="c1a06d142579e1442ba39b459aea190e543fd02e9374e7878608e5af8d77bbed" Feb 17 22:09:15 crc kubenswrapper[4793]: I0217 22:09:15.440846 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jzmj2"] Feb 17 22:09:15 crc kubenswrapper[4793]: I0217 22:09:15.452836 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jzmj2"] Feb 17 22:09:15 crc kubenswrapper[4793]: I0217 22:09:15.456045 4793 scope.go:117] "RemoveContainer" containerID="4c8a8c2f4d810c8d1e01ed89e964d827a554c0b7e622606fabdb09d59b0237c9" Feb 17 22:09:15 crc kubenswrapper[4793]: I0217 22:09:15.506252 4793 scope.go:117] "RemoveContainer" containerID="daba8cf5d72fbf45e6c59d97681139b2fb1b8b041e1079726cfe7d022c35c152" Feb 17 22:09:15 crc kubenswrapper[4793]: E0217 22:09:15.507932 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"daba8cf5d72fbf45e6c59d97681139b2fb1b8b041e1079726cfe7d022c35c152\": container with ID starting with daba8cf5d72fbf45e6c59d97681139b2fb1b8b041e1079726cfe7d022c35c152 not found: ID does not exist" containerID="daba8cf5d72fbf45e6c59d97681139b2fb1b8b041e1079726cfe7d022c35c152" Feb 17 22:09:15 crc kubenswrapper[4793]: I0217 22:09:15.507983 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"daba8cf5d72fbf45e6c59d97681139b2fb1b8b041e1079726cfe7d022c35c152"} err="failed to get container status \"daba8cf5d72fbf45e6c59d97681139b2fb1b8b041e1079726cfe7d022c35c152\": rpc error: code = NotFound desc = could not find container \"daba8cf5d72fbf45e6c59d97681139b2fb1b8b041e1079726cfe7d022c35c152\": container with ID starting with daba8cf5d72fbf45e6c59d97681139b2fb1b8b041e1079726cfe7d022c35c152 not found: ID does 
not exist" Feb 17 22:09:15 crc kubenswrapper[4793]: I0217 22:09:15.508017 4793 scope.go:117] "RemoveContainer" containerID="c1a06d142579e1442ba39b459aea190e543fd02e9374e7878608e5af8d77bbed" Feb 17 22:09:15 crc kubenswrapper[4793]: E0217 22:09:15.508445 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1a06d142579e1442ba39b459aea190e543fd02e9374e7878608e5af8d77bbed\": container with ID starting with c1a06d142579e1442ba39b459aea190e543fd02e9374e7878608e5af8d77bbed not found: ID does not exist" containerID="c1a06d142579e1442ba39b459aea190e543fd02e9374e7878608e5af8d77bbed" Feb 17 22:09:15 crc kubenswrapper[4793]: I0217 22:09:15.508482 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1a06d142579e1442ba39b459aea190e543fd02e9374e7878608e5af8d77bbed"} err="failed to get container status \"c1a06d142579e1442ba39b459aea190e543fd02e9374e7878608e5af8d77bbed\": rpc error: code = NotFound desc = could not find container \"c1a06d142579e1442ba39b459aea190e543fd02e9374e7878608e5af8d77bbed\": container with ID starting with c1a06d142579e1442ba39b459aea190e543fd02e9374e7878608e5af8d77bbed not found: ID does not exist" Feb 17 22:09:15 crc kubenswrapper[4793]: I0217 22:09:15.508510 4793 scope.go:117] "RemoveContainer" containerID="4c8a8c2f4d810c8d1e01ed89e964d827a554c0b7e622606fabdb09d59b0237c9" Feb 17 22:09:15 crc kubenswrapper[4793]: E0217 22:09:15.508952 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c8a8c2f4d810c8d1e01ed89e964d827a554c0b7e622606fabdb09d59b0237c9\": container with ID starting with 4c8a8c2f4d810c8d1e01ed89e964d827a554c0b7e622606fabdb09d59b0237c9 not found: ID does not exist" containerID="4c8a8c2f4d810c8d1e01ed89e964d827a554c0b7e622606fabdb09d59b0237c9" Feb 17 22:09:15 crc kubenswrapper[4793]: I0217 22:09:15.508984 4793 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c8a8c2f4d810c8d1e01ed89e964d827a554c0b7e622606fabdb09d59b0237c9"} err="failed to get container status \"4c8a8c2f4d810c8d1e01ed89e964d827a554c0b7e622606fabdb09d59b0237c9\": rpc error: code = NotFound desc = could not find container \"4c8a8c2f4d810c8d1e01ed89e964d827a554c0b7e622606fabdb09d59b0237c9\": container with ID starting with 4c8a8c2f4d810c8d1e01ed89e964d827a554c0b7e622606fabdb09d59b0237c9 not found: ID does not exist" Feb 17 22:09:15 crc kubenswrapper[4793]: I0217 22:09:15.550774 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="104f786e-e88e-4e02-916d-7ff0c6b67cb6" path="/var/lib/kubelet/pods/104f786e-e88e-4e02-916d-7ff0c6b67cb6/volumes" Feb 17 22:09:20 crc kubenswrapper[4793]: I0217 22:09:20.539454 4793 scope.go:117] "RemoveContainer" containerID="9e949f743af1e47fd05a726b259da77ca2a07f9f991e13bf29d20179308f4e2f" Feb 17 22:09:20 crc kubenswrapper[4793]: E0217 22:09:20.540492 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:09:26 crc kubenswrapper[4793]: I0217 22:09:26.540503 4793 scope.go:117] "RemoveContainer" containerID="5ed6acf5ccd1c96205f3ec1b1da4fb06c9c4a63f00f0514d2eccd40f24e4caea" Feb 17 22:09:26 crc kubenswrapper[4793]: E0217 22:09:26.541475 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" 
podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:09:32 crc kubenswrapper[4793]: I0217 22:09:32.539626 4793 scope.go:117] "RemoveContainer" containerID="9e949f743af1e47fd05a726b259da77ca2a07f9f991e13bf29d20179308f4e2f" Feb 17 22:09:32 crc kubenswrapper[4793]: E0217 22:09:32.540675 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:09:40 crc kubenswrapper[4793]: I0217 22:09:40.538945 4793 scope.go:117] "RemoveContainer" containerID="5ed6acf5ccd1c96205f3ec1b1da4fb06c9c4a63f00f0514d2eccd40f24e4caea" Feb 17 22:09:40 crc kubenswrapper[4793]: E0217 22:09:40.540728 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:09:44 crc kubenswrapper[4793]: I0217 22:09:44.538946 4793 scope.go:117] "RemoveContainer" containerID="9e949f743af1e47fd05a726b259da77ca2a07f9f991e13bf29d20179308f4e2f" Feb 17 22:09:44 crc kubenswrapper[4793]: E0217 22:09:44.539510 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:09:53 crc kubenswrapper[4793]: I0217 22:09:53.539347 4793 scope.go:117] 
"RemoveContainer" containerID="5ed6acf5ccd1c96205f3ec1b1da4fb06c9c4a63f00f0514d2eccd40f24e4caea" Feb 17 22:09:53 crc kubenswrapper[4793]: E0217 22:09:53.551863 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:09:56 crc kubenswrapper[4793]: I0217 22:09:56.539269 4793 scope.go:117] "RemoveContainer" containerID="9e949f743af1e47fd05a726b259da77ca2a07f9f991e13bf29d20179308f4e2f" Feb 17 22:09:56 crc kubenswrapper[4793]: E0217 22:09:56.540225 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:10:05 crc kubenswrapper[4793]: I0217 22:10:05.552972 4793 scope.go:117] "RemoveContainer" containerID="5ed6acf5ccd1c96205f3ec1b1da4fb06c9c4a63f00f0514d2eccd40f24e4caea" Feb 17 22:10:05 crc kubenswrapper[4793]: E0217 22:10:05.554226 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:10:11 crc kubenswrapper[4793]: I0217 22:10:11.539577 4793 scope.go:117] "RemoveContainer" 
containerID="9e949f743af1e47fd05a726b259da77ca2a07f9f991e13bf29d20179308f4e2f" Feb 17 22:10:11 crc kubenswrapper[4793]: E0217 22:10:11.540487 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:10:17 crc kubenswrapper[4793]: I0217 22:10:17.539727 4793 scope.go:117] "RemoveContainer" containerID="5ed6acf5ccd1c96205f3ec1b1da4fb06c9c4a63f00f0514d2eccd40f24e4caea" Feb 17 22:10:17 crc kubenswrapper[4793]: E0217 22:10:17.541211 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:10:22 crc kubenswrapper[4793]: I0217 22:10:22.539482 4793 scope.go:117] "RemoveContainer" containerID="9e949f743af1e47fd05a726b259da77ca2a07f9f991e13bf29d20179308f4e2f" Feb 17 22:10:22 crc kubenswrapper[4793]: E0217 22:10:22.540720 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:10:31 crc kubenswrapper[4793]: I0217 22:10:31.539337 4793 scope.go:117] "RemoveContainer" containerID="5ed6acf5ccd1c96205f3ec1b1da4fb06c9c4a63f00f0514d2eccd40f24e4caea" Feb 17 22:10:32 crc kubenswrapper[4793]: I0217 22:10:32.312979 
4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerStarted","Data":"63596aeee9e05e35658423cd2de38fad29750852a1574505bc810c7ee2a53c12"} Feb 17 22:10:35 crc kubenswrapper[4793]: I0217 22:10:35.560566 4793 scope.go:117] "RemoveContainer" containerID="9e949f743af1e47fd05a726b259da77ca2a07f9f991e13bf29d20179308f4e2f" Feb 17 22:10:35 crc kubenswrapper[4793]: E0217 22:10:35.561825 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:10:51 crc kubenswrapper[4793]: I0217 22:10:51.539269 4793 scope.go:117] "RemoveContainer" containerID="9e949f743af1e47fd05a726b259da77ca2a07f9f991e13bf29d20179308f4e2f" Feb 17 22:10:51 crc kubenswrapper[4793]: E0217 22:10:51.540414 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:11:06 crc kubenswrapper[4793]: I0217 22:11:06.539143 4793 scope.go:117] "RemoveContainer" containerID="9e949f743af1e47fd05a726b259da77ca2a07f9f991e13bf29d20179308f4e2f" Feb 17 22:11:06 crc kubenswrapper[4793]: E0217 22:11:06.540162 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" 
pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:11:17 crc kubenswrapper[4793]: I0217 22:11:17.157027 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-6d8998fd7c-xvl9z" podUID="c09fdef5-1d53-4792-84d2-3bb953383525" containerName="neutron-api" probeResult="failure" output="HTTP probe failed with statuscode: 502" Feb 17 22:11:17 crc kubenswrapper[4793]: I0217 22:11:17.539045 4793 scope.go:117] "RemoveContainer" containerID="9e949f743af1e47fd05a726b259da77ca2a07f9f991e13bf29d20179308f4e2f" Feb 17 22:11:17 crc kubenswrapper[4793]: E0217 22:11:17.539592 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:11:31 crc kubenswrapper[4793]: I0217 22:11:31.549502 4793 scope.go:117] "RemoveContainer" containerID="9e949f743af1e47fd05a726b259da77ca2a07f9f991e13bf29d20179308f4e2f" Feb 17 22:11:31 crc kubenswrapper[4793]: E0217 22:11:31.550256 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:11:42 crc kubenswrapper[4793]: I0217 22:11:42.538494 4793 scope.go:117] "RemoveContainer" containerID="9e949f743af1e47fd05a726b259da77ca2a07f9f991e13bf29d20179308f4e2f" Feb 17 22:11:43 crc kubenswrapper[4793]: I0217 22:11:43.119513 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" 
event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerStarted","Data":"d70f46adaeffa51a091d0106020fcc7b6682f081aafec3338c7586688366d839"} Feb 17 22:11:45 crc kubenswrapper[4793]: I0217 22:11:45.153367 4793 generic.go:334] "Generic (PLEG): container finished" podID="02d26164-0fa4-4020-9224-b7760a490987" containerID="d70f46adaeffa51a091d0106020fcc7b6682f081aafec3338c7586688366d839" exitCode=1 Feb 17 22:11:45 crc kubenswrapper[4793]: I0217 22:11:45.153838 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerDied","Data":"d70f46adaeffa51a091d0106020fcc7b6682f081aafec3338c7586688366d839"} Feb 17 22:11:45 crc kubenswrapper[4793]: I0217 22:11:45.153890 4793 scope.go:117] "RemoveContainer" containerID="9e949f743af1e47fd05a726b259da77ca2a07f9f991e13bf29d20179308f4e2f" Feb 17 22:11:45 crc kubenswrapper[4793]: I0217 22:11:45.154870 4793 scope.go:117] "RemoveContainer" containerID="d70f46adaeffa51a091d0106020fcc7b6682f081aafec3338c7586688366d839" Feb 17 22:11:45 crc kubenswrapper[4793]: E0217 22:11:45.155408 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:11:45 crc kubenswrapper[4793]: I0217 22:11:45.595801 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 17 22:11:45 crc kubenswrapper[4793]: I0217 22:11:45.597030 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 22:11:45 crc kubenswrapper[4793]: I0217 22:11:45.597095 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 
22:11:45 crc kubenswrapper[4793]: I0217 22:11:45.597118 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 22:11:46 crc kubenswrapper[4793]: I0217 22:11:46.171525 4793 scope.go:117] "RemoveContainer" containerID="d70f46adaeffa51a091d0106020fcc7b6682f081aafec3338c7586688366d839" Feb 17 22:11:46 crc kubenswrapper[4793]: E0217 22:11:46.172136 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:11:47 crc kubenswrapper[4793]: I0217 22:11:47.184811 4793 scope.go:117] "RemoveContainer" containerID="d70f46adaeffa51a091d0106020fcc7b6682f081aafec3338c7586688366d839" Feb 17 22:11:47 crc kubenswrapper[4793]: E0217 22:11:47.185723 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:12:00 crc kubenswrapper[4793]: I0217 22:12:00.539266 4793 scope.go:117] "RemoveContainer" containerID="d70f46adaeffa51a091d0106020fcc7b6682f081aafec3338c7586688366d839" Feb 17 22:12:00 crc kubenswrapper[4793]: E0217 22:12:00.539908 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:12:14 crc kubenswrapper[4793]: I0217 
22:12:14.541026 4793 scope.go:117] "RemoveContainer" containerID="d70f46adaeffa51a091d0106020fcc7b6682f081aafec3338c7586688366d839" Feb 17 22:12:14 crc kubenswrapper[4793]: E0217 22:12:14.542176 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:12:26 crc kubenswrapper[4793]: I0217 22:12:26.539370 4793 scope.go:117] "RemoveContainer" containerID="d70f46adaeffa51a091d0106020fcc7b6682f081aafec3338c7586688366d839" Feb 17 22:12:26 crc kubenswrapper[4793]: E0217 22:12:26.540861 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:12:38 crc kubenswrapper[4793]: I0217 22:12:38.539446 4793 scope.go:117] "RemoveContainer" containerID="d70f46adaeffa51a091d0106020fcc7b6682f081aafec3338c7586688366d839" Feb 17 22:12:38 crc kubenswrapper[4793]: E0217 22:12:38.540438 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:12:44 crc kubenswrapper[4793]: I0217 22:12:44.539582 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fq9bp"] Feb 17 22:12:44 crc kubenswrapper[4793]: E0217 22:12:44.540656 4793 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="e5d61c74-d88e-41ec-9563-0130f0772231" containerName="extract-utilities" Feb 17 22:12:44 crc kubenswrapper[4793]: I0217 22:12:44.540677 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5d61c74-d88e-41ec-9563-0130f0772231" containerName="extract-utilities" Feb 17 22:12:44 crc kubenswrapper[4793]: E0217 22:12:44.540803 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="104f786e-e88e-4e02-916d-7ff0c6b67cb6" containerName="extract-utilities" Feb 17 22:12:44 crc kubenswrapper[4793]: I0217 22:12:44.540820 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="104f786e-e88e-4e02-916d-7ff0c6b67cb6" containerName="extract-utilities" Feb 17 22:12:44 crc kubenswrapper[4793]: E0217 22:12:44.540853 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5d61c74-d88e-41ec-9563-0130f0772231" containerName="registry-server" Feb 17 22:12:44 crc kubenswrapper[4793]: I0217 22:12:44.540865 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5d61c74-d88e-41ec-9563-0130f0772231" containerName="registry-server" Feb 17 22:12:44 crc kubenswrapper[4793]: E0217 22:12:44.540896 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5d61c74-d88e-41ec-9563-0130f0772231" containerName="extract-content" Feb 17 22:12:44 crc kubenswrapper[4793]: I0217 22:12:44.540907 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5d61c74-d88e-41ec-9563-0130f0772231" containerName="extract-content" Feb 17 22:12:44 crc kubenswrapper[4793]: E0217 22:12:44.540922 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="104f786e-e88e-4e02-916d-7ff0c6b67cb6" containerName="registry-server" Feb 17 22:12:44 crc kubenswrapper[4793]: I0217 22:12:44.540932 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="104f786e-e88e-4e02-916d-7ff0c6b67cb6" containerName="registry-server" Feb 17 22:12:44 crc kubenswrapper[4793]: E0217 22:12:44.540950 4793 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="104f786e-e88e-4e02-916d-7ff0c6b67cb6" containerName="extract-content" Feb 17 22:12:44 crc kubenswrapper[4793]: I0217 22:12:44.540961 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="104f786e-e88e-4e02-916d-7ff0c6b67cb6" containerName="extract-content" Feb 17 22:12:44 crc kubenswrapper[4793]: I0217 22:12:44.541260 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5d61c74-d88e-41ec-9563-0130f0772231" containerName="registry-server" Feb 17 22:12:44 crc kubenswrapper[4793]: I0217 22:12:44.541305 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="104f786e-e88e-4e02-916d-7ff0c6b67cb6" containerName="registry-server" Feb 17 22:12:44 crc kubenswrapper[4793]: I0217 22:12:44.543843 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fq9bp" Feb 17 22:12:44 crc kubenswrapper[4793]: I0217 22:12:44.548045 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fq9bp"] Feb 17 22:12:44 crc kubenswrapper[4793]: I0217 22:12:44.710537 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x46d8\" (UniqueName: \"kubernetes.io/projected/5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c-kube-api-access-x46d8\") pod \"certified-operators-fq9bp\" (UID: \"5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c\") " pod="openshift-marketplace/certified-operators-fq9bp" Feb 17 22:12:44 crc kubenswrapper[4793]: I0217 22:12:44.711145 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c-catalog-content\") pod \"certified-operators-fq9bp\" (UID: \"5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c\") " pod="openshift-marketplace/certified-operators-fq9bp" Feb 17 22:12:44 crc kubenswrapper[4793]: I0217 22:12:44.711229 4793 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c-utilities\") pod \"certified-operators-fq9bp\" (UID: \"5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c\") " pod="openshift-marketplace/certified-operators-fq9bp" Feb 17 22:12:44 crc kubenswrapper[4793]: I0217 22:12:44.813535 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x46d8\" (UniqueName: \"kubernetes.io/projected/5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c-kube-api-access-x46d8\") pod \"certified-operators-fq9bp\" (UID: \"5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c\") " pod="openshift-marketplace/certified-operators-fq9bp" Feb 17 22:12:44 crc kubenswrapper[4793]: I0217 22:12:44.813793 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c-catalog-content\") pod \"certified-operators-fq9bp\" (UID: \"5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c\") " pod="openshift-marketplace/certified-operators-fq9bp" Feb 17 22:12:44 crc kubenswrapper[4793]: I0217 22:12:44.814329 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c-catalog-content\") pod \"certified-operators-fq9bp\" (UID: \"5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c\") " pod="openshift-marketplace/certified-operators-fq9bp" Feb 17 22:12:44 crc kubenswrapper[4793]: I0217 22:12:44.814455 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c-utilities\") pod \"certified-operators-fq9bp\" (UID: \"5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c\") " pod="openshift-marketplace/certified-operators-fq9bp" Feb 17 22:12:44 crc kubenswrapper[4793]: I0217 22:12:44.814763 4793 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c-utilities\") pod \"certified-operators-fq9bp\" (UID: \"5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c\") " pod="openshift-marketplace/certified-operators-fq9bp" Feb 17 22:12:44 crc kubenswrapper[4793]: I0217 22:12:44.835008 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x46d8\" (UniqueName: \"kubernetes.io/projected/5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c-kube-api-access-x46d8\") pod \"certified-operators-fq9bp\" (UID: \"5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c\") " pod="openshift-marketplace/certified-operators-fq9bp" Feb 17 22:12:44 crc kubenswrapper[4793]: I0217 22:12:44.872110 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fq9bp" Feb 17 22:12:45 crc kubenswrapper[4793]: I0217 22:12:45.470168 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fq9bp"] Feb 17 22:12:45 crc kubenswrapper[4793]: I0217 22:12:45.856479 4793 generic.go:334] "Generic (PLEG): container finished" podID="5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c" containerID="8a9152e525e21d98dd01c1c74b98535bad746f32f98db08a0fe375fe9114f536" exitCode=0 Feb 17 22:12:45 crc kubenswrapper[4793]: I0217 22:12:45.856610 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fq9bp" event={"ID":"5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c","Type":"ContainerDied","Data":"8a9152e525e21d98dd01c1c74b98535bad746f32f98db08a0fe375fe9114f536"} Feb 17 22:12:45 crc kubenswrapper[4793]: I0217 22:12:45.856829 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fq9bp" event={"ID":"5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c","Type":"ContainerStarted","Data":"a11a896e2043385aed527bd13aae241f439134242e313082052febad1f614c10"} Feb 17 22:12:45 crc 
kubenswrapper[4793]: I0217 22:12:45.859465 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 22:12:47 crc kubenswrapper[4793]: I0217 22:12:47.879844 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fq9bp" event={"ID":"5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c","Type":"ContainerStarted","Data":"e0f7756ba0ccbc7ec7d9da7a83522e666f3097d3848e064f484216df45ba4307"} Feb 17 22:12:48 crc kubenswrapper[4793]: I0217 22:12:48.894223 4793 generic.go:334] "Generic (PLEG): container finished" podID="5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c" containerID="e0f7756ba0ccbc7ec7d9da7a83522e666f3097d3848e064f484216df45ba4307" exitCode=0 Feb 17 22:12:48 crc kubenswrapper[4793]: I0217 22:12:48.894281 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fq9bp" event={"ID":"5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c","Type":"ContainerDied","Data":"e0f7756ba0ccbc7ec7d9da7a83522e666f3097d3848e064f484216df45ba4307"} Feb 17 22:12:49 crc kubenswrapper[4793]: I0217 22:12:49.911209 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fq9bp" event={"ID":"5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c","Type":"ContainerStarted","Data":"3d6b77c6371d60c9d00d81f8714e15b49c958090dea8fc654938baf942b02e1a"} Feb 17 22:12:49 crc kubenswrapper[4793]: I0217 22:12:49.943811 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fq9bp" podStartSLOduration=2.5304935950000003 podStartE2EDuration="5.943788572s" podCreationTimestamp="2026-02-17 22:12:44 +0000 UTC" firstStartedPulling="2026-02-17 22:12:45.859233718 +0000 UTC m=+7441.150932029" lastFinishedPulling="2026-02-17 22:12:49.272528695 +0000 UTC m=+7444.564227006" observedRunningTime="2026-02-17 22:12:49.941886245 +0000 UTC m=+7445.233584576" watchObservedRunningTime="2026-02-17 22:12:49.943788572 +0000 UTC 
m=+7445.235486903"
Feb 17 22:12:50 crc kubenswrapper[4793]: I0217 22:12:50.101852 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 22:12:50 crc kubenswrapper[4793]: I0217 22:12:50.101942 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 22:12:50 crc kubenswrapper[4793]: I0217 22:12:50.539085 4793 scope.go:117] "RemoveContainer" containerID="d70f46adaeffa51a091d0106020fcc7b6682f081aafec3338c7586688366d839"
Feb 17 22:12:50 crc kubenswrapper[4793]: E0217 22:12:50.539605 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:12:54 crc kubenswrapper[4793]: I0217 22:12:54.873517 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fq9bp"
Feb 17 22:12:54 crc kubenswrapper[4793]: I0217 22:12:54.875559 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fq9bp"
Feb 17 22:12:54 crc kubenswrapper[4793]: I0217 22:12:54.952492 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fq9bp"
Feb 17 22:12:55 crc kubenswrapper[4793]: I0217 22:12:55.057313 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fq9bp"
Feb 17 22:12:55 crc kubenswrapper[4793]: I0217 22:12:55.211833 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fq9bp"]
Feb 17 22:12:57 crc kubenswrapper[4793]: I0217 22:12:57.022514 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fq9bp" podUID="5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c" containerName="registry-server" containerID="cri-o://3d6b77c6371d60c9d00d81f8714e15b49c958090dea8fc654938baf942b02e1a" gracePeriod=2
Feb 17 22:12:57 crc kubenswrapper[4793]: I0217 22:12:57.653630 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fq9bp"
Feb 17 22:12:57 crc kubenswrapper[4793]: I0217 22:12:57.724442 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c-utilities\") pod \"5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c\" (UID: \"5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c\") "
Feb 17 22:12:57 crc kubenswrapper[4793]: I0217 22:12:57.724568 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x46d8\" (UniqueName: \"kubernetes.io/projected/5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c-kube-api-access-x46d8\") pod \"5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c\" (UID: \"5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c\") "
Feb 17 22:12:57 crc kubenswrapper[4793]: I0217 22:12:57.724672 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c-catalog-content\") pod \"5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c\" (UID: \"5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c\") "
Feb 17 22:12:57 crc kubenswrapper[4793]: I0217 22:12:57.726408 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c-utilities" (OuterVolumeSpecName: "utilities") pod "5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c" (UID: "5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 22:12:57 crc kubenswrapper[4793]: I0217 22:12:57.733897 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c-kube-api-access-x46d8" (OuterVolumeSpecName: "kube-api-access-x46d8") pod "5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c" (UID: "5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c"). InnerVolumeSpecName "kube-api-access-x46d8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 22:12:57 crc kubenswrapper[4793]: I0217 22:12:57.806932 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c" (UID: "5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 22:12:57 crc kubenswrapper[4793]: I0217 22:12:57.826998 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 22:12:57 crc kubenswrapper[4793]: I0217 22:12:57.827245 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x46d8\" (UniqueName: \"kubernetes.io/projected/5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c-kube-api-access-x46d8\") on node \"crc\" DevicePath \"\""
Feb 17 22:12:57 crc kubenswrapper[4793]: I0217 22:12:57.827326 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 22:12:58 crc kubenswrapper[4793]: I0217 22:12:58.038596 4793 generic.go:334] "Generic (PLEG): container finished" podID="5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c" containerID="3d6b77c6371d60c9d00d81f8714e15b49c958090dea8fc654938baf942b02e1a" exitCode=0
Feb 17 22:12:58 crc kubenswrapper[4793]: I0217 22:12:58.038647 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fq9bp" event={"ID":"5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c","Type":"ContainerDied","Data":"3d6b77c6371d60c9d00d81f8714e15b49c958090dea8fc654938baf942b02e1a"}
Feb 17 22:12:58 crc kubenswrapper[4793]: I0217 22:12:58.038680 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fq9bp" event={"ID":"5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c","Type":"ContainerDied","Data":"a11a896e2043385aed527bd13aae241f439134242e313082052febad1f614c10"}
Feb 17 22:12:58 crc kubenswrapper[4793]: I0217 22:12:58.038736 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fq9bp"
Feb 17 22:12:58 crc kubenswrapper[4793]: I0217 22:12:58.038762 4793 scope.go:117] "RemoveContainer" containerID="3d6b77c6371d60c9d00d81f8714e15b49c958090dea8fc654938baf942b02e1a"
Feb 17 22:12:58 crc kubenswrapper[4793]: I0217 22:12:58.076745 4793 scope.go:117] "RemoveContainer" containerID="e0f7756ba0ccbc7ec7d9da7a83522e666f3097d3848e064f484216df45ba4307"
Feb 17 22:12:58 crc kubenswrapper[4793]: I0217 22:12:58.088852 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fq9bp"]
Feb 17 22:12:58 crc kubenswrapper[4793]: I0217 22:12:58.128869 4793 scope.go:117] "RemoveContainer" containerID="8a9152e525e21d98dd01c1c74b98535bad746f32f98db08a0fe375fe9114f536"
Feb 17 22:12:58 crc kubenswrapper[4793]: I0217 22:12:58.138360 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fq9bp"]
Feb 17 22:12:58 crc kubenswrapper[4793]: I0217 22:12:58.248800 4793 scope.go:117] "RemoveContainer" containerID="3d6b77c6371d60c9d00d81f8714e15b49c958090dea8fc654938baf942b02e1a"
Feb 17 22:12:58 crc kubenswrapper[4793]: E0217 22:12:58.251838 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d6b77c6371d60c9d00d81f8714e15b49c958090dea8fc654938baf942b02e1a\": container with ID starting with 3d6b77c6371d60c9d00d81f8714e15b49c958090dea8fc654938baf942b02e1a not found: ID does not exist" containerID="3d6b77c6371d60c9d00d81f8714e15b49c958090dea8fc654938baf942b02e1a"
Feb 17 22:12:58 crc kubenswrapper[4793]: I0217 22:12:58.251882 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d6b77c6371d60c9d00d81f8714e15b49c958090dea8fc654938baf942b02e1a"} err="failed to get container status \"3d6b77c6371d60c9d00d81f8714e15b49c958090dea8fc654938baf942b02e1a\": rpc error: code = NotFound desc = could not find container \"3d6b77c6371d60c9d00d81f8714e15b49c958090dea8fc654938baf942b02e1a\": container with ID starting with 3d6b77c6371d60c9d00d81f8714e15b49c958090dea8fc654938baf942b02e1a not found: ID does not exist"
Feb 17 22:12:58 crc kubenswrapper[4793]: I0217 22:12:58.251910 4793 scope.go:117] "RemoveContainer" containerID="e0f7756ba0ccbc7ec7d9da7a83522e666f3097d3848e064f484216df45ba4307"
Feb 17 22:12:58 crc kubenswrapper[4793]: E0217 22:12:58.258829 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0f7756ba0ccbc7ec7d9da7a83522e666f3097d3848e064f484216df45ba4307\": container with ID starting with e0f7756ba0ccbc7ec7d9da7a83522e666f3097d3848e064f484216df45ba4307 not found: ID does not exist" containerID="e0f7756ba0ccbc7ec7d9da7a83522e666f3097d3848e064f484216df45ba4307"
Feb 17 22:12:58 crc kubenswrapper[4793]: I0217 22:12:58.258871 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0f7756ba0ccbc7ec7d9da7a83522e666f3097d3848e064f484216df45ba4307"} err="failed to get container status \"e0f7756ba0ccbc7ec7d9da7a83522e666f3097d3848e064f484216df45ba4307\": rpc error: code = NotFound desc = could not find container \"e0f7756ba0ccbc7ec7d9da7a83522e666f3097d3848e064f484216df45ba4307\": container with ID starting with e0f7756ba0ccbc7ec7d9da7a83522e666f3097d3848e064f484216df45ba4307 not found: ID does not exist"
Feb 17 22:12:58 crc kubenswrapper[4793]: I0217 22:12:58.258897 4793 scope.go:117] "RemoveContainer" containerID="8a9152e525e21d98dd01c1c74b98535bad746f32f98db08a0fe375fe9114f536"
Feb 17 22:12:58 crc kubenswrapper[4793]: E0217 22:12:58.265847 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a9152e525e21d98dd01c1c74b98535bad746f32f98db08a0fe375fe9114f536\": container with ID starting with 8a9152e525e21d98dd01c1c74b98535bad746f32f98db08a0fe375fe9114f536 not found: ID does not exist" containerID="8a9152e525e21d98dd01c1c74b98535bad746f32f98db08a0fe375fe9114f536"
Feb 17 22:12:58 crc kubenswrapper[4793]: I0217 22:12:58.265893 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a9152e525e21d98dd01c1c74b98535bad746f32f98db08a0fe375fe9114f536"} err="failed to get container status \"8a9152e525e21d98dd01c1c74b98535bad746f32f98db08a0fe375fe9114f536\": rpc error: code = NotFound desc = could not find container \"8a9152e525e21d98dd01c1c74b98535bad746f32f98db08a0fe375fe9114f536\": container with ID starting with 8a9152e525e21d98dd01c1c74b98535bad746f32f98db08a0fe375fe9114f536 not found: ID does not exist"
Feb 17 22:12:59 crc kubenswrapper[4793]: I0217 22:12:59.562509 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c" path="/var/lib/kubelet/pods/5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c/volumes"
Feb 17 22:13:01 crc kubenswrapper[4793]: I0217 22:13:01.539235 4793 scope.go:117] "RemoveContainer" containerID="d70f46adaeffa51a091d0106020fcc7b6682f081aafec3338c7586688366d839"
Feb 17 22:13:01 crc kubenswrapper[4793]: E0217 22:13:01.539701 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:13:16 crc kubenswrapper[4793]: I0217 22:13:16.538822 4793 scope.go:117] "RemoveContainer" containerID="d70f46adaeffa51a091d0106020fcc7b6682f081aafec3338c7586688366d839"
Feb 17 22:13:16 crc kubenswrapper[4793]: E0217 22:13:16.539634 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:13:20 crc kubenswrapper[4793]: I0217 22:13:20.102266 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 22:13:20 crc kubenswrapper[4793]: I0217 22:13:20.103648 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 22:13:29 crc kubenswrapper[4793]: I0217 22:13:29.538948 4793 scope.go:117] "RemoveContainer" containerID="d70f46adaeffa51a091d0106020fcc7b6682f081aafec3338c7586688366d839"
Feb 17 22:13:29 crc kubenswrapper[4793]: E0217 22:13:29.539769 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:13:44 crc kubenswrapper[4793]: I0217 22:13:44.539653 4793 scope.go:117] "RemoveContainer" containerID="d70f46adaeffa51a091d0106020fcc7b6682f081aafec3338c7586688366d839"
Feb 17 22:13:44 crc kubenswrapper[4793]: E0217 22:13:44.540430 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:13:50 crc kubenswrapper[4793]: I0217 22:13:50.102036 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 22:13:50 crc kubenswrapper[4793]: I0217 22:13:50.102654 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 22:13:50 crc kubenswrapper[4793]: I0217 22:13:50.102736 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf"
Feb 17 22:13:50 crc kubenswrapper[4793]: I0217 22:13:50.103930 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"63596aeee9e05e35658423cd2de38fad29750852a1574505bc810c7ee2a53c12"} pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 17 22:13:50 crc kubenswrapper[4793]: I0217 22:13:50.104027 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" containerID="cri-o://63596aeee9e05e35658423cd2de38fad29750852a1574505bc810c7ee2a53c12" gracePeriod=600
Feb 17 22:13:50 crc kubenswrapper[4793]: I0217 22:13:50.701274 4793 generic.go:334] "Generic (PLEG): container finished" podID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerID="63596aeee9e05e35658423cd2de38fad29750852a1574505bc810c7ee2a53c12" exitCode=0
Feb 17 22:13:50 crc kubenswrapper[4793]: I0217 22:13:50.701346 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerDied","Data":"63596aeee9e05e35658423cd2de38fad29750852a1574505bc810c7ee2a53c12"}
Feb 17 22:13:50 crc kubenswrapper[4793]: I0217 22:13:50.701974 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerStarted","Data":"e293a9c5da63ce3cfe5e784a5140977ecbb1e1291736b428eb806176b0fbe40f"}
Feb 17 22:13:50 crc kubenswrapper[4793]: I0217 22:13:50.702015 4793 scope.go:117] "RemoveContainer" containerID="5ed6acf5ccd1c96205f3ec1b1da4fb06c9c4a63f00f0514d2eccd40f24e4caea"
Feb 17 22:13:57 crc kubenswrapper[4793]: I0217 22:13:57.540157 4793 scope.go:117] "RemoveContainer" containerID="d70f46adaeffa51a091d0106020fcc7b6682f081aafec3338c7586688366d839"
Feb 17 22:13:57 crc kubenswrapper[4793]: E0217 22:13:57.541216 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:14:11 crc kubenswrapper[4793]: I0217 22:14:11.538585 4793 scope.go:117] "RemoveContainer" containerID="d70f46adaeffa51a091d0106020fcc7b6682f081aafec3338c7586688366d839"
Feb 17 22:14:11 crc kubenswrapper[4793]: E0217 22:14:11.539460 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:14:25 crc kubenswrapper[4793]: I0217 22:14:25.553945 4793 scope.go:117] "RemoveContainer" containerID="d70f46adaeffa51a091d0106020fcc7b6682f081aafec3338c7586688366d839"
Feb 17 22:14:25 crc kubenswrapper[4793]: E0217 22:14:25.555143 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:14:39 crc kubenswrapper[4793]: I0217 22:14:39.540463 4793 scope.go:117] "RemoveContainer" containerID="d70f46adaeffa51a091d0106020fcc7b6682f081aafec3338c7586688366d839"
Feb 17 22:14:39 crc kubenswrapper[4793]: E0217 22:14:39.541729 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:14:52 crc kubenswrapper[4793]: I0217 22:14:52.539992 4793 scope.go:117] "RemoveContainer" containerID="d70f46adaeffa51a091d0106020fcc7b6682f081aafec3338c7586688366d839"
Feb 17 22:14:52 crc kubenswrapper[4793]: E0217 22:14:52.541767 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:15:00 crc kubenswrapper[4793]: I0217 22:15:00.179361 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522775-zjf6q"]
Feb 17 22:15:00 crc kubenswrapper[4793]: E0217 22:15:00.182719 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c" containerName="extract-content"
Feb 17 22:15:00 crc kubenswrapper[4793]: I0217 22:15:00.182823 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c" containerName="extract-content"
Feb 17 22:15:00 crc kubenswrapper[4793]: E0217 22:15:00.182959 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c" containerName="registry-server"
Feb 17 22:15:00 crc kubenswrapper[4793]: I0217 22:15:00.183049 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c" containerName="registry-server"
Feb 17 22:15:00 crc kubenswrapper[4793]: E0217 22:15:00.183143 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c" containerName="extract-utilities"
Feb 17 22:15:00 crc kubenswrapper[4793]: I0217 22:15:00.183202 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c" containerName="extract-utilities"
Feb 17 22:15:00 crc kubenswrapper[4793]: I0217 22:15:00.183527 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="5efb96c4-0705-4eb0-b0b9-d9bf9cc4958c" containerName="registry-server"
Feb 17 22:15:00 crc kubenswrapper[4793]: I0217 22:15:00.184836 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522775-zjf6q"
Feb 17 22:15:00 crc kubenswrapper[4793]: I0217 22:15:00.188103 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 17 22:15:00 crc kubenswrapper[4793]: I0217 22:15:00.188265 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 17 22:15:00 crc kubenswrapper[4793]: I0217 22:15:00.191959 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522775-zjf6q"]
Feb 17 22:15:00 crc kubenswrapper[4793]: I0217 22:15:00.334719 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b9062b7-c0c8-4634-bed3-b79a5e3fcfdc-config-volume\") pod \"collect-profiles-29522775-zjf6q\" (UID: \"9b9062b7-c0c8-4634-bed3-b79a5e3fcfdc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522775-zjf6q"
Feb 17 22:15:00 crc kubenswrapper[4793]: I0217 22:15:00.334803 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b9062b7-c0c8-4634-bed3-b79a5e3fcfdc-secret-volume\") pod \"collect-profiles-29522775-zjf6q\" (UID: \"9b9062b7-c0c8-4634-bed3-b79a5e3fcfdc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522775-zjf6q"
Feb 17 22:15:00 crc kubenswrapper[4793]: I0217 22:15:00.335028 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbwn8\" (UniqueName: \"kubernetes.io/projected/9b9062b7-c0c8-4634-bed3-b79a5e3fcfdc-kube-api-access-wbwn8\") pod \"collect-profiles-29522775-zjf6q\" (UID: \"9b9062b7-c0c8-4634-bed3-b79a5e3fcfdc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522775-zjf6q"
Feb 17 22:15:00 crc kubenswrapper[4793]: I0217 22:15:00.437273 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b9062b7-c0c8-4634-bed3-b79a5e3fcfdc-config-volume\") pod \"collect-profiles-29522775-zjf6q\" (UID: \"9b9062b7-c0c8-4634-bed3-b79a5e3fcfdc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522775-zjf6q"
Feb 17 22:15:00 crc kubenswrapper[4793]: I0217 22:15:00.437339 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b9062b7-c0c8-4634-bed3-b79a5e3fcfdc-secret-volume\") pod \"collect-profiles-29522775-zjf6q\" (UID: \"9b9062b7-c0c8-4634-bed3-b79a5e3fcfdc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522775-zjf6q"
Feb 17 22:15:00 crc kubenswrapper[4793]: I0217 22:15:00.437378 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbwn8\" (UniqueName: \"kubernetes.io/projected/9b9062b7-c0c8-4634-bed3-b79a5e3fcfdc-kube-api-access-wbwn8\") pod \"collect-profiles-29522775-zjf6q\" (UID: \"9b9062b7-c0c8-4634-bed3-b79a5e3fcfdc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522775-zjf6q"
Feb 17 22:15:00 crc kubenswrapper[4793]: I0217 22:15:00.438488 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b9062b7-c0c8-4634-bed3-b79a5e3fcfdc-config-volume\") pod \"collect-profiles-29522775-zjf6q\" (UID: \"9b9062b7-c0c8-4634-bed3-b79a5e3fcfdc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522775-zjf6q"
Feb 17 22:15:00 crc kubenswrapper[4793]: I0217 22:15:00.448593 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b9062b7-c0c8-4634-bed3-b79a5e3fcfdc-secret-volume\") pod \"collect-profiles-29522775-zjf6q\" (UID: \"9b9062b7-c0c8-4634-bed3-b79a5e3fcfdc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522775-zjf6q"
Feb 17 22:15:00 crc kubenswrapper[4793]: I0217 22:15:00.455664 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbwn8\" (UniqueName: \"kubernetes.io/projected/9b9062b7-c0c8-4634-bed3-b79a5e3fcfdc-kube-api-access-wbwn8\") pod \"collect-profiles-29522775-zjf6q\" (UID: \"9b9062b7-c0c8-4634-bed3-b79a5e3fcfdc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522775-zjf6q"
Feb 17 22:15:00 crc kubenswrapper[4793]: I0217 22:15:00.539775 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522775-zjf6q"
Feb 17 22:15:00 crc kubenswrapper[4793]: I0217 22:15:00.976826 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522775-zjf6q"]
Feb 17 22:15:01 crc kubenswrapper[4793]: I0217 22:15:01.487949 4793 generic.go:334] "Generic (PLEG): container finished" podID="9b9062b7-c0c8-4634-bed3-b79a5e3fcfdc" containerID="54a26d4eae1e40742759e55999dab60891a3e4e6ffaf05ae02915eef645cf154" exitCode=0
Feb 17 22:15:01 crc kubenswrapper[4793]: I0217 22:15:01.487993 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522775-zjf6q" event={"ID":"9b9062b7-c0c8-4634-bed3-b79a5e3fcfdc","Type":"ContainerDied","Data":"54a26d4eae1e40742759e55999dab60891a3e4e6ffaf05ae02915eef645cf154"}
Feb 17 22:15:01 crc kubenswrapper[4793]: I0217 22:15:01.488268 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522775-zjf6q" event={"ID":"9b9062b7-c0c8-4634-bed3-b79a5e3fcfdc","Type":"ContainerStarted","Data":"300e624e55b5080eddadaa8c29fc559263cce04baf18f82bc328c688589cff6d"}
Feb 17 22:15:02 crc kubenswrapper[4793]: I0217 22:15:02.876456 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522775-zjf6q"
Feb 17 22:15:02 crc kubenswrapper[4793]: I0217 22:15:02.991020 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbwn8\" (UniqueName: \"kubernetes.io/projected/9b9062b7-c0c8-4634-bed3-b79a5e3fcfdc-kube-api-access-wbwn8\") pod \"9b9062b7-c0c8-4634-bed3-b79a5e3fcfdc\" (UID: \"9b9062b7-c0c8-4634-bed3-b79a5e3fcfdc\") "
Feb 17 22:15:02 crc kubenswrapper[4793]: I0217 22:15:02.991110 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b9062b7-c0c8-4634-bed3-b79a5e3fcfdc-config-volume\") pod \"9b9062b7-c0c8-4634-bed3-b79a5e3fcfdc\" (UID: \"9b9062b7-c0c8-4634-bed3-b79a5e3fcfdc\") "
Feb 17 22:15:02 crc kubenswrapper[4793]: I0217 22:15:02.991326 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b9062b7-c0c8-4634-bed3-b79a5e3fcfdc-secret-volume\") pod \"9b9062b7-c0c8-4634-bed3-b79a5e3fcfdc\" (UID: \"9b9062b7-c0c8-4634-bed3-b79a5e3fcfdc\") "
Feb 17 22:15:02 crc kubenswrapper[4793]: I0217 22:15:02.992214 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b9062b7-c0c8-4634-bed3-b79a5e3fcfdc-config-volume" (OuterVolumeSpecName: "config-volume") pod "9b9062b7-c0c8-4634-bed3-b79a5e3fcfdc" (UID: "9b9062b7-c0c8-4634-bed3-b79a5e3fcfdc"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 22:15:02 crc kubenswrapper[4793]: I0217 22:15:02.997301 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b9062b7-c0c8-4634-bed3-b79a5e3fcfdc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9b9062b7-c0c8-4634-bed3-b79a5e3fcfdc" (UID: "9b9062b7-c0c8-4634-bed3-b79a5e3fcfdc"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 22:15:02 crc kubenswrapper[4793]: I0217 22:15:02.997799 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b9062b7-c0c8-4634-bed3-b79a5e3fcfdc-kube-api-access-wbwn8" (OuterVolumeSpecName: "kube-api-access-wbwn8") pod "9b9062b7-c0c8-4634-bed3-b79a5e3fcfdc" (UID: "9b9062b7-c0c8-4634-bed3-b79a5e3fcfdc"). InnerVolumeSpecName "kube-api-access-wbwn8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 22:15:03 crc kubenswrapper[4793]: I0217 22:15:03.094359 4793 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b9062b7-c0c8-4634-bed3-b79a5e3fcfdc-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 17 22:15:03 crc kubenswrapper[4793]: I0217 22:15:03.094405 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbwn8\" (UniqueName: \"kubernetes.io/projected/9b9062b7-c0c8-4634-bed3-b79a5e3fcfdc-kube-api-access-wbwn8\") on node \"crc\" DevicePath \"\""
Feb 17 22:15:03 crc kubenswrapper[4793]: I0217 22:15:03.094420 4793 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b9062b7-c0c8-4634-bed3-b79a5e3fcfdc-config-volume\") on node \"crc\" DevicePath \"\""
Feb 17 22:15:03 crc kubenswrapper[4793]: I0217 22:15:03.523902 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522775-zjf6q" event={"ID":"9b9062b7-c0c8-4634-bed3-b79a5e3fcfdc","Type":"ContainerDied","Data":"300e624e55b5080eddadaa8c29fc559263cce04baf18f82bc328c688589cff6d"}
Feb 17 22:15:03 crc kubenswrapper[4793]: I0217 22:15:03.523985 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="300e624e55b5080eddadaa8c29fc559263cce04baf18f82bc328c688589cff6d"
Feb 17 22:15:03 crc kubenswrapper[4793]: I0217 22:15:03.524100 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522775-zjf6q"
Feb 17 22:15:03 crc kubenswrapper[4793]: I0217 22:15:03.981100 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522730-kc8k6"]
Feb 17 22:15:03 crc kubenswrapper[4793]: I0217 22:15:03.989780 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522730-kc8k6"]
Feb 17 22:15:05 crc kubenswrapper[4793]: I0217 22:15:05.547006 4793 scope.go:117] "RemoveContainer" containerID="d70f46adaeffa51a091d0106020fcc7b6682f081aafec3338c7586688366d839"
Feb 17 22:15:05 crc kubenswrapper[4793]: E0217 22:15:05.547513 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:15:05 crc kubenswrapper[4793]: I0217 22:15:05.554038 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1968df33-d74f-4502-9da2-ecfc097040fb" path="/var/lib/kubelet/pods/1968df33-d74f-4502-9da2-ecfc097040fb/volumes"
Feb 17 22:15:17 crc kubenswrapper[4793]: I0217 22:15:17.539830 4793 scope.go:117] "RemoveContainer" containerID="d70f46adaeffa51a091d0106020fcc7b6682f081aafec3338c7586688366d839"
Feb 17 22:15:17 crc kubenswrapper[4793]: E0217 22:15:17.540723 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:15:26 crc kubenswrapper[4793]: I0217 22:15:26.527837 4793 scope.go:117] "RemoveContainer" containerID="4d4027c0976f858d7670fb080f5586c92d9f3402b088880c3176fa05cb511a5b"
Feb 17 22:15:30 crc kubenswrapper[4793]: I0217 22:15:30.539991 4793 scope.go:117] "RemoveContainer" containerID="d70f46adaeffa51a091d0106020fcc7b6682f081aafec3338c7586688366d839"
Feb 17 22:15:30 crc kubenswrapper[4793]: E0217 22:15:30.541098 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:15:41 crc kubenswrapper[4793]: I0217 22:15:41.538581 4793 scope.go:117] "RemoveContainer" containerID="d70f46adaeffa51a091d0106020fcc7b6682f081aafec3338c7586688366d839"
Feb 17 22:15:41 crc kubenswrapper[4793]: E0217 22:15:41.539296 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:15:50 crc kubenswrapper[4793]: I0217 22:15:50.101775 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 22:15:50 crc kubenswrapper[4793]: I0217 22:15:50.102524 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 22:15:55 crc kubenswrapper[4793]: I0217 22:15:55.547822 4793 scope.go:117] "RemoveContainer" containerID="d70f46adaeffa51a091d0106020fcc7b6682f081aafec3338c7586688366d839"
Feb 17 22:15:55 crc kubenswrapper[4793]: E0217 22:15:55.548791 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:16:06 crc kubenswrapper[4793]: I0217 22:16:06.538683 4793 scope.go:117] "RemoveContainer" containerID="d70f46adaeffa51a091d0106020fcc7b6682f081aafec3338c7586688366d839"
Feb 17 22:16:06 crc kubenswrapper[4793]: E0217 22:16:06.539485 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:16:20 crc kubenswrapper[4793]: I0217 22:16:20.101655 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 22:16:20 crc kubenswrapper[4793]: I0217 22:16:20.102602 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 22:16:20 crc kubenswrapper[4793]: I0217 22:16:20.539816 4793 scope.go:117] "RemoveContainer" containerID="d70f46adaeffa51a091d0106020fcc7b6682f081aafec3338c7586688366d839"
Feb 17 22:16:20 crc kubenswrapper[4793]: E0217 22:16:20.540331 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:16:35 crc kubenswrapper[4793]: I0217 22:16:35.551529 4793 scope.go:117] "RemoveContainer" containerID="d70f46adaeffa51a091d0106020fcc7b6682f081aafec3338c7586688366d839"
Feb 17 22:16:35 crc kubenswrapper[4793]: E0217 22:16:35.552665 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:16:48 crc kubenswrapper[4793]: I0217 22:16:48.539665 4793 scope.go:117] "RemoveContainer" containerID="d70f46adaeffa51a091d0106020fcc7b6682f081aafec3338c7586688366d839"
Feb 17 22:16:49 crc kubenswrapper[4793]: I0217 22:16:49.774417 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0"
event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerStarted","Data":"0ae41249735de3a3bf3b373783b225bae30903e29f0c434704ed73e28711bfca"} Feb 17 22:16:50 crc kubenswrapper[4793]: I0217 22:16:50.102520 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 22:16:50 crc kubenswrapper[4793]: I0217 22:16:50.102975 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 22:16:50 crc kubenswrapper[4793]: I0217 22:16:50.103042 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" Feb 17 22:16:50 crc kubenswrapper[4793]: I0217 22:16:50.104231 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e293a9c5da63ce3cfe5e784a5140977ecbb1e1291736b428eb806176b0fbe40f"} pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 22:16:50 crc kubenswrapper[4793]: I0217 22:16:50.104340 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" containerID="cri-o://e293a9c5da63ce3cfe5e784a5140977ecbb1e1291736b428eb806176b0fbe40f" gracePeriod=600 Feb 17 22:16:50 crc kubenswrapper[4793]: E0217 22:16:50.229552 
4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:16:50 crc kubenswrapper[4793]: I0217 22:16:50.596242 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 17 22:16:50 crc kubenswrapper[4793]: I0217 22:16:50.798112 4793 generic.go:334] "Generic (PLEG): container finished" podID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerID="e293a9c5da63ce3cfe5e784a5140977ecbb1e1291736b428eb806176b0fbe40f" exitCode=0 Feb 17 22:16:50 crc kubenswrapper[4793]: I0217 22:16:50.798324 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerDied","Data":"e293a9c5da63ce3cfe5e784a5140977ecbb1e1291736b428eb806176b0fbe40f"} Feb 17 22:16:50 crc kubenswrapper[4793]: I0217 22:16:50.798407 4793 scope.go:117] "RemoveContainer" containerID="63596aeee9e05e35658423cd2de38fad29750852a1574505bc810c7ee2a53c12" Feb 17 22:16:50 crc kubenswrapper[4793]: I0217 22:16:50.800497 4793 scope.go:117] "RemoveContainer" containerID="e293a9c5da63ce3cfe5e784a5140977ecbb1e1291736b428eb806176b0fbe40f" Feb 17 22:16:50 crc kubenswrapper[4793]: E0217 22:16:50.807331 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" 
podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:16:51 crc kubenswrapper[4793]: I0217 22:16:51.810672 4793 generic.go:334] "Generic (PLEG): container finished" podID="02d26164-0fa4-4020-9224-b7760a490987" containerID="0ae41249735de3a3bf3b373783b225bae30903e29f0c434704ed73e28711bfca" exitCode=1 Feb 17 22:16:51 crc kubenswrapper[4793]: I0217 22:16:51.810773 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerDied","Data":"0ae41249735de3a3bf3b373783b225bae30903e29f0c434704ed73e28711bfca"} Feb 17 22:16:51 crc kubenswrapper[4793]: I0217 22:16:51.811212 4793 scope.go:117] "RemoveContainer" containerID="d70f46adaeffa51a091d0106020fcc7b6682f081aafec3338c7586688366d839" Feb 17 22:16:51 crc kubenswrapper[4793]: I0217 22:16:51.812146 4793 scope.go:117] "RemoveContainer" containerID="0ae41249735de3a3bf3b373783b225bae30903e29f0c434704ed73e28711bfca" Feb 17 22:16:51 crc kubenswrapper[4793]: E0217 22:16:51.812474 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:16:55 crc kubenswrapper[4793]: I0217 22:16:55.596841 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 22:16:55 crc kubenswrapper[4793]: I0217 22:16:55.597386 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 22:16:55 crc kubenswrapper[4793]: I0217 22:16:55.597397 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 22:16:55 crc kubenswrapper[4793]: I0217 22:16:55.598298 4793 scope.go:117] "RemoveContainer" 
containerID="0ae41249735de3a3bf3b373783b225bae30903e29f0c434704ed73e28711bfca" Feb 17 22:16:55 crc kubenswrapper[4793]: E0217 22:16:55.598586 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:17:02 crc kubenswrapper[4793]: I0217 22:17:02.540060 4793 scope.go:117] "RemoveContainer" containerID="e293a9c5da63ce3cfe5e784a5140977ecbb1e1291736b428eb806176b0fbe40f" Feb 17 22:17:02 crc kubenswrapper[4793]: E0217 22:17:02.541264 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:17:09 crc kubenswrapper[4793]: I0217 22:17:09.539257 4793 scope.go:117] "RemoveContainer" containerID="0ae41249735de3a3bf3b373783b225bae30903e29f0c434704ed73e28711bfca" Feb 17 22:17:09 crc kubenswrapper[4793]: E0217 22:17:09.539888 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:17:13 crc kubenswrapper[4793]: I0217 22:17:13.538888 4793 scope.go:117] "RemoveContainer" containerID="e293a9c5da63ce3cfe5e784a5140977ecbb1e1291736b428eb806176b0fbe40f" Feb 17 22:17:13 crc kubenswrapper[4793]: E0217 22:17:13.539599 
4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:17:24 crc kubenswrapper[4793]: I0217 22:17:24.538807 4793 scope.go:117] "RemoveContainer" containerID="0ae41249735de3a3bf3b373783b225bae30903e29f0c434704ed73e28711bfca" Feb 17 22:17:24 crc kubenswrapper[4793]: E0217 22:17:24.539396 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:17:24 crc kubenswrapper[4793]: I0217 22:17:24.539431 4793 scope.go:117] "RemoveContainer" containerID="e293a9c5da63ce3cfe5e784a5140977ecbb1e1291736b428eb806176b0fbe40f" Feb 17 22:17:24 crc kubenswrapper[4793]: E0217 22:17:24.539757 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:17:35 crc kubenswrapper[4793]: I0217 22:17:35.555000 4793 scope.go:117] "RemoveContainer" containerID="0ae41249735de3a3bf3b373783b225bae30903e29f0c434704ed73e28711bfca" Feb 17 22:17:35 crc kubenswrapper[4793]: E0217 22:17:35.556442 4793 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:17:38 crc kubenswrapper[4793]: I0217 22:17:38.539329 4793 scope.go:117] "RemoveContainer" containerID="e293a9c5da63ce3cfe5e784a5140977ecbb1e1291736b428eb806176b0fbe40f" Feb 17 22:17:38 crc kubenswrapper[4793]: E0217 22:17:38.540152 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:17:49 crc kubenswrapper[4793]: I0217 22:17:49.539711 4793 scope.go:117] "RemoveContainer" containerID="0ae41249735de3a3bf3b373783b225bae30903e29f0c434704ed73e28711bfca" Feb 17 22:17:49 crc kubenswrapper[4793]: E0217 22:17:49.540426 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:17:53 crc kubenswrapper[4793]: I0217 22:17:53.542011 4793 scope.go:117] "RemoveContainer" containerID="e293a9c5da63ce3cfe5e784a5140977ecbb1e1291736b428eb806176b0fbe40f" Feb 17 22:17:53 crc kubenswrapper[4793]: E0217 22:17:53.542528 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:18:00 crc kubenswrapper[4793]: I0217 22:18:00.539225 4793 scope.go:117] "RemoveContainer" containerID="0ae41249735de3a3bf3b373783b225bae30903e29f0c434704ed73e28711bfca" Feb 17 22:18:00 crc kubenswrapper[4793]: E0217 22:18:00.540359 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:18:06 crc kubenswrapper[4793]: I0217 22:18:06.539243 4793 scope.go:117] "RemoveContainer" containerID="e293a9c5da63ce3cfe5e784a5140977ecbb1e1291736b428eb806176b0fbe40f" Feb 17 22:18:06 crc kubenswrapper[4793]: E0217 22:18:06.540295 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:18:12 crc kubenswrapper[4793]: I0217 22:18:12.539494 4793 scope.go:117] "RemoveContainer" containerID="0ae41249735de3a3bf3b373783b225bae30903e29f0c434704ed73e28711bfca" Feb 17 22:18:12 crc kubenswrapper[4793]: E0217 22:18:12.540489 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier 
pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:18:17 crc kubenswrapper[4793]: I0217 22:18:17.028092 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pbmh8"] Feb 17 22:18:17 crc kubenswrapper[4793]: E0217 22:18:17.029621 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b9062b7-c0c8-4634-bed3-b79a5e3fcfdc" containerName="collect-profiles" Feb 17 22:18:17 crc kubenswrapper[4793]: I0217 22:18:17.029648 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b9062b7-c0c8-4634-bed3-b79a5e3fcfdc" containerName="collect-profiles" Feb 17 22:18:17 crc kubenswrapper[4793]: I0217 22:18:17.030141 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b9062b7-c0c8-4634-bed3-b79a5e3fcfdc" containerName="collect-profiles" Feb 17 22:18:17 crc kubenswrapper[4793]: I0217 22:18:17.033029 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pbmh8" Feb 17 22:18:17 crc kubenswrapper[4793]: I0217 22:18:17.050334 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pbmh8"] Feb 17 22:18:17 crc kubenswrapper[4793]: I0217 22:18:17.143526 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzzrm\" (UniqueName: \"kubernetes.io/projected/6391c7a9-27ec-49e0-8d70-f94cd3e76474-kube-api-access-wzzrm\") pod \"redhat-marketplace-pbmh8\" (UID: \"6391c7a9-27ec-49e0-8d70-f94cd3e76474\") " pod="openshift-marketplace/redhat-marketplace-pbmh8" Feb 17 22:18:17 crc kubenswrapper[4793]: I0217 22:18:17.143578 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6391c7a9-27ec-49e0-8d70-f94cd3e76474-catalog-content\") pod \"redhat-marketplace-pbmh8\" (UID: \"6391c7a9-27ec-49e0-8d70-f94cd3e76474\") " pod="openshift-marketplace/redhat-marketplace-pbmh8" Feb 17 22:18:17 crc kubenswrapper[4793]: I0217 22:18:17.143608 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6391c7a9-27ec-49e0-8d70-f94cd3e76474-utilities\") pod \"redhat-marketplace-pbmh8\" (UID: \"6391c7a9-27ec-49e0-8d70-f94cd3e76474\") " pod="openshift-marketplace/redhat-marketplace-pbmh8" Feb 17 22:18:17 crc kubenswrapper[4793]: I0217 22:18:17.245267 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzzrm\" (UniqueName: \"kubernetes.io/projected/6391c7a9-27ec-49e0-8d70-f94cd3e76474-kube-api-access-wzzrm\") pod \"redhat-marketplace-pbmh8\" (UID: \"6391c7a9-27ec-49e0-8d70-f94cd3e76474\") " pod="openshift-marketplace/redhat-marketplace-pbmh8" Feb 17 22:18:17 crc kubenswrapper[4793]: I0217 22:18:17.245312 4793 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6391c7a9-27ec-49e0-8d70-f94cd3e76474-catalog-content\") pod \"redhat-marketplace-pbmh8\" (UID: \"6391c7a9-27ec-49e0-8d70-f94cd3e76474\") " pod="openshift-marketplace/redhat-marketplace-pbmh8" Feb 17 22:18:17 crc kubenswrapper[4793]: I0217 22:18:17.245337 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6391c7a9-27ec-49e0-8d70-f94cd3e76474-utilities\") pod \"redhat-marketplace-pbmh8\" (UID: \"6391c7a9-27ec-49e0-8d70-f94cd3e76474\") " pod="openshift-marketplace/redhat-marketplace-pbmh8" Feb 17 22:18:17 crc kubenswrapper[4793]: I0217 22:18:17.246256 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6391c7a9-27ec-49e0-8d70-f94cd3e76474-catalog-content\") pod \"redhat-marketplace-pbmh8\" (UID: \"6391c7a9-27ec-49e0-8d70-f94cd3e76474\") " pod="openshift-marketplace/redhat-marketplace-pbmh8" Feb 17 22:18:17 crc kubenswrapper[4793]: I0217 22:18:17.246349 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6391c7a9-27ec-49e0-8d70-f94cd3e76474-utilities\") pod \"redhat-marketplace-pbmh8\" (UID: \"6391c7a9-27ec-49e0-8d70-f94cd3e76474\") " pod="openshift-marketplace/redhat-marketplace-pbmh8" Feb 17 22:18:17 crc kubenswrapper[4793]: I0217 22:18:17.274507 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzzrm\" (UniqueName: \"kubernetes.io/projected/6391c7a9-27ec-49e0-8d70-f94cd3e76474-kube-api-access-wzzrm\") pod \"redhat-marketplace-pbmh8\" (UID: \"6391c7a9-27ec-49e0-8d70-f94cd3e76474\") " pod="openshift-marketplace/redhat-marketplace-pbmh8" Feb 17 22:18:17 crc kubenswrapper[4793]: I0217 22:18:17.367436 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pbmh8" Feb 17 22:18:17 crc kubenswrapper[4793]: I0217 22:18:17.899871 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pbmh8"] Feb 17 22:18:18 crc kubenswrapper[4793]: I0217 22:18:18.196312 4793 generic.go:334] "Generic (PLEG): container finished" podID="6391c7a9-27ec-49e0-8d70-f94cd3e76474" containerID="962cf887e6203fd34515487f01b551d3cab3798a0f845742835d682102d982ac" exitCode=0 Feb 17 22:18:18 crc kubenswrapper[4793]: I0217 22:18:18.196362 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pbmh8" event={"ID":"6391c7a9-27ec-49e0-8d70-f94cd3e76474","Type":"ContainerDied","Data":"962cf887e6203fd34515487f01b551d3cab3798a0f845742835d682102d982ac"} Feb 17 22:18:18 crc kubenswrapper[4793]: I0217 22:18:18.196392 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pbmh8" event={"ID":"6391c7a9-27ec-49e0-8d70-f94cd3e76474","Type":"ContainerStarted","Data":"2a8121089a3191ab9cd28258e2ee3192d52f5744eadfd60446a35b111040f45c"} Feb 17 22:18:18 crc kubenswrapper[4793]: I0217 22:18:18.198644 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 22:18:19 crc kubenswrapper[4793]: I0217 22:18:19.209839 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pbmh8" event={"ID":"6391c7a9-27ec-49e0-8d70-f94cd3e76474","Type":"ContainerStarted","Data":"5b476fdf51506bbd68c2b2d55a0401dd29011a6aab2ca513a22dc8b5ed1cfe1f"} Feb 17 22:18:19 crc kubenswrapper[4793]: I0217 22:18:19.539714 4793 scope.go:117] "RemoveContainer" containerID="e293a9c5da63ce3cfe5e784a5140977ecbb1e1291736b428eb806176b0fbe40f" Feb 17 22:18:19 crc kubenswrapper[4793]: E0217 22:18:19.540061 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:18:20 crc kubenswrapper[4793]: I0217 22:18:20.223561 4793 generic.go:334] "Generic (PLEG): container finished" podID="6391c7a9-27ec-49e0-8d70-f94cd3e76474" containerID="5b476fdf51506bbd68c2b2d55a0401dd29011a6aab2ca513a22dc8b5ed1cfe1f" exitCode=0 Feb 17 22:18:20 crc kubenswrapper[4793]: I0217 22:18:20.223618 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pbmh8" event={"ID":"6391c7a9-27ec-49e0-8d70-f94cd3e76474","Type":"ContainerDied","Data":"5b476fdf51506bbd68c2b2d55a0401dd29011a6aab2ca513a22dc8b5ed1cfe1f"} Feb 17 22:18:21 crc kubenswrapper[4793]: I0217 22:18:21.239597 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pbmh8" event={"ID":"6391c7a9-27ec-49e0-8d70-f94cd3e76474","Type":"ContainerStarted","Data":"0b903fe70bef1cc0394c483454d07fea3d0eec2574548783ab4b8c63f69fe14d"} Feb 17 22:18:21 crc kubenswrapper[4793]: I0217 22:18:21.271326 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pbmh8" podStartSLOduration=2.8512406390000002 podStartE2EDuration="5.271298404s" podCreationTimestamp="2026-02-17 22:18:16 +0000 UTC" firstStartedPulling="2026-02-17 22:18:18.198216384 +0000 UTC m=+7773.489914725" lastFinishedPulling="2026-02-17 22:18:20.618274169 +0000 UTC m=+7775.909972490" observedRunningTime="2026-02-17 22:18:21.262386994 +0000 UTC m=+7776.554085325" watchObservedRunningTime="2026-02-17 22:18:21.271298404 +0000 UTC m=+7776.562996755" Feb 17 22:18:23 crc kubenswrapper[4793]: I0217 22:18:23.539521 4793 scope.go:117] "RemoveContainer" 
containerID="0ae41249735de3a3bf3b373783b225bae30903e29f0c434704ed73e28711bfca" Feb 17 22:18:23 crc kubenswrapper[4793]: E0217 22:18:23.540968 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:18:27 crc kubenswrapper[4793]: I0217 22:18:27.368113 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pbmh8" Feb 17 22:18:27 crc kubenswrapper[4793]: I0217 22:18:27.368855 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pbmh8" Feb 17 22:18:27 crc kubenswrapper[4793]: I0217 22:18:27.439739 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pbmh8" Feb 17 22:18:28 crc kubenswrapper[4793]: I0217 22:18:28.398937 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pbmh8" Feb 17 22:18:28 crc kubenswrapper[4793]: I0217 22:18:28.459908 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pbmh8"] Feb 17 22:18:30 crc kubenswrapper[4793]: I0217 22:18:30.328354 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pbmh8" podUID="6391c7a9-27ec-49e0-8d70-f94cd3e76474" containerName="registry-server" containerID="cri-o://0b903fe70bef1cc0394c483454d07fea3d0eec2574548783ab4b8c63f69fe14d" gracePeriod=2 Feb 17 22:18:30 crc kubenswrapper[4793]: I0217 22:18:30.550820 4793 scope.go:117] "RemoveContainer" containerID="e293a9c5da63ce3cfe5e784a5140977ecbb1e1291736b428eb806176b0fbe40f" Feb 17 22:18:30 crc 
kubenswrapper[4793]: E0217 22:18:30.551393 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:18:30 crc kubenswrapper[4793]: I0217 22:18:30.894398 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pbmh8" Feb 17 22:18:30 crc kubenswrapper[4793]: I0217 22:18:30.959960 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6391c7a9-27ec-49e0-8d70-f94cd3e76474-utilities\") pod \"6391c7a9-27ec-49e0-8d70-f94cd3e76474\" (UID: \"6391c7a9-27ec-49e0-8d70-f94cd3e76474\") " Feb 17 22:18:30 crc kubenswrapper[4793]: I0217 22:18:30.960158 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzzrm\" (UniqueName: \"kubernetes.io/projected/6391c7a9-27ec-49e0-8d70-f94cd3e76474-kube-api-access-wzzrm\") pod \"6391c7a9-27ec-49e0-8d70-f94cd3e76474\" (UID: \"6391c7a9-27ec-49e0-8d70-f94cd3e76474\") " Feb 17 22:18:30 crc kubenswrapper[4793]: I0217 22:18:30.960248 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6391c7a9-27ec-49e0-8d70-f94cd3e76474-catalog-content\") pod \"6391c7a9-27ec-49e0-8d70-f94cd3e76474\" (UID: \"6391c7a9-27ec-49e0-8d70-f94cd3e76474\") " Feb 17 22:18:30 crc kubenswrapper[4793]: I0217 22:18:30.960885 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6391c7a9-27ec-49e0-8d70-f94cd3e76474-utilities" (OuterVolumeSpecName: "utilities") pod 
"6391c7a9-27ec-49e0-8d70-f94cd3e76474" (UID: "6391c7a9-27ec-49e0-8d70-f94cd3e76474"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 22:18:30 crc kubenswrapper[4793]: I0217 22:18:30.981122 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6391c7a9-27ec-49e0-8d70-f94cd3e76474-kube-api-access-wzzrm" (OuterVolumeSpecName: "kube-api-access-wzzrm") pod "6391c7a9-27ec-49e0-8d70-f94cd3e76474" (UID: "6391c7a9-27ec-49e0-8d70-f94cd3e76474"). InnerVolumeSpecName "kube-api-access-wzzrm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 22:18:30 crc kubenswrapper[4793]: I0217 22:18:30.982136 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6391c7a9-27ec-49e0-8d70-f94cd3e76474-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6391c7a9-27ec-49e0-8d70-f94cd3e76474" (UID: "6391c7a9-27ec-49e0-8d70-f94cd3e76474"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 22:18:31 crc kubenswrapper[4793]: I0217 22:18:31.064507 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6391c7a9-27ec-49e0-8d70-f94cd3e76474-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 22:18:31 crc kubenswrapper[4793]: I0217 22:18:31.064988 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzzrm\" (UniqueName: \"kubernetes.io/projected/6391c7a9-27ec-49e0-8d70-f94cd3e76474-kube-api-access-wzzrm\") on node \"crc\" DevicePath \"\"" Feb 17 22:18:31 crc kubenswrapper[4793]: I0217 22:18:31.065024 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6391c7a9-27ec-49e0-8d70-f94cd3e76474-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 22:18:31 crc kubenswrapper[4793]: I0217 22:18:31.342358 4793 generic.go:334] "Generic (PLEG): container finished" podID="6391c7a9-27ec-49e0-8d70-f94cd3e76474" containerID="0b903fe70bef1cc0394c483454d07fea3d0eec2574548783ab4b8c63f69fe14d" exitCode=0 Feb 17 22:18:31 crc kubenswrapper[4793]: I0217 22:18:31.342424 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pbmh8" event={"ID":"6391c7a9-27ec-49e0-8d70-f94cd3e76474","Type":"ContainerDied","Data":"0b903fe70bef1cc0394c483454d07fea3d0eec2574548783ab4b8c63f69fe14d"} Feb 17 22:18:31 crc kubenswrapper[4793]: I0217 22:18:31.343620 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pbmh8" event={"ID":"6391c7a9-27ec-49e0-8d70-f94cd3e76474","Type":"ContainerDied","Data":"2a8121089a3191ab9cd28258e2ee3192d52f5744eadfd60446a35b111040f45c"} Feb 17 22:18:31 crc kubenswrapper[4793]: I0217 22:18:31.342443 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pbmh8" Feb 17 22:18:31 crc kubenswrapper[4793]: I0217 22:18:31.343680 4793 scope.go:117] "RemoveContainer" containerID="0b903fe70bef1cc0394c483454d07fea3d0eec2574548783ab4b8c63f69fe14d" Feb 17 22:18:31 crc kubenswrapper[4793]: I0217 22:18:31.380820 4793 scope.go:117] "RemoveContainer" containerID="5b476fdf51506bbd68c2b2d55a0401dd29011a6aab2ca513a22dc8b5ed1cfe1f" Feb 17 22:18:31 crc kubenswrapper[4793]: I0217 22:18:31.388206 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pbmh8"] Feb 17 22:18:31 crc kubenswrapper[4793]: I0217 22:18:31.411340 4793 scope.go:117] "RemoveContainer" containerID="962cf887e6203fd34515487f01b551d3cab3798a0f845742835d682102d982ac" Feb 17 22:18:31 crc kubenswrapper[4793]: I0217 22:18:31.412441 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pbmh8"] Feb 17 22:18:31 crc kubenswrapper[4793]: I0217 22:18:31.472389 4793 scope.go:117] "RemoveContainer" containerID="0b903fe70bef1cc0394c483454d07fea3d0eec2574548783ab4b8c63f69fe14d" Feb 17 22:18:31 crc kubenswrapper[4793]: E0217 22:18:31.473057 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b903fe70bef1cc0394c483454d07fea3d0eec2574548783ab4b8c63f69fe14d\": container with ID starting with 0b903fe70bef1cc0394c483454d07fea3d0eec2574548783ab4b8c63f69fe14d not found: ID does not exist" containerID="0b903fe70bef1cc0394c483454d07fea3d0eec2574548783ab4b8c63f69fe14d" Feb 17 22:18:31 crc kubenswrapper[4793]: I0217 22:18:31.473112 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b903fe70bef1cc0394c483454d07fea3d0eec2574548783ab4b8c63f69fe14d"} err="failed to get container status \"0b903fe70bef1cc0394c483454d07fea3d0eec2574548783ab4b8c63f69fe14d\": rpc error: code = NotFound desc = could not find container 
\"0b903fe70bef1cc0394c483454d07fea3d0eec2574548783ab4b8c63f69fe14d\": container with ID starting with 0b903fe70bef1cc0394c483454d07fea3d0eec2574548783ab4b8c63f69fe14d not found: ID does not exist" Feb 17 22:18:31 crc kubenswrapper[4793]: I0217 22:18:31.473145 4793 scope.go:117] "RemoveContainer" containerID="5b476fdf51506bbd68c2b2d55a0401dd29011a6aab2ca513a22dc8b5ed1cfe1f" Feb 17 22:18:31 crc kubenswrapper[4793]: E0217 22:18:31.473663 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b476fdf51506bbd68c2b2d55a0401dd29011a6aab2ca513a22dc8b5ed1cfe1f\": container with ID starting with 5b476fdf51506bbd68c2b2d55a0401dd29011a6aab2ca513a22dc8b5ed1cfe1f not found: ID does not exist" containerID="5b476fdf51506bbd68c2b2d55a0401dd29011a6aab2ca513a22dc8b5ed1cfe1f" Feb 17 22:18:31 crc kubenswrapper[4793]: I0217 22:18:31.473721 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b476fdf51506bbd68c2b2d55a0401dd29011a6aab2ca513a22dc8b5ed1cfe1f"} err="failed to get container status \"5b476fdf51506bbd68c2b2d55a0401dd29011a6aab2ca513a22dc8b5ed1cfe1f\": rpc error: code = NotFound desc = could not find container \"5b476fdf51506bbd68c2b2d55a0401dd29011a6aab2ca513a22dc8b5ed1cfe1f\": container with ID starting with 5b476fdf51506bbd68c2b2d55a0401dd29011a6aab2ca513a22dc8b5ed1cfe1f not found: ID does not exist" Feb 17 22:18:31 crc kubenswrapper[4793]: I0217 22:18:31.473750 4793 scope.go:117] "RemoveContainer" containerID="962cf887e6203fd34515487f01b551d3cab3798a0f845742835d682102d982ac" Feb 17 22:18:31 crc kubenswrapper[4793]: E0217 22:18:31.474159 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"962cf887e6203fd34515487f01b551d3cab3798a0f845742835d682102d982ac\": container with ID starting with 962cf887e6203fd34515487f01b551d3cab3798a0f845742835d682102d982ac not found: ID does not exist" 
containerID="962cf887e6203fd34515487f01b551d3cab3798a0f845742835d682102d982ac" Feb 17 22:18:31 crc kubenswrapper[4793]: I0217 22:18:31.474211 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"962cf887e6203fd34515487f01b551d3cab3798a0f845742835d682102d982ac"} err="failed to get container status \"962cf887e6203fd34515487f01b551d3cab3798a0f845742835d682102d982ac\": rpc error: code = NotFound desc = could not find container \"962cf887e6203fd34515487f01b551d3cab3798a0f845742835d682102d982ac\": container with ID starting with 962cf887e6203fd34515487f01b551d3cab3798a0f845742835d682102d982ac not found: ID does not exist" Feb 17 22:18:31 crc kubenswrapper[4793]: I0217 22:18:31.551262 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6391c7a9-27ec-49e0-8d70-f94cd3e76474" path="/var/lib/kubelet/pods/6391c7a9-27ec-49e0-8d70-f94cd3e76474/volumes" Feb 17 22:18:38 crc kubenswrapper[4793]: I0217 22:18:38.539368 4793 scope.go:117] "RemoveContainer" containerID="0ae41249735de3a3bf3b373783b225bae30903e29f0c434704ed73e28711bfca" Feb 17 22:18:38 crc kubenswrapper[4793]: E0217 22:18:38.540452 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:18:45 crc kubenswrapper[4793]: I0217 22:18:45.549397 4793 scope.go:117] "RemoveContainer" containerID="e293a9c5da63ce3cfe5e784a5140977ecbb1e1291736b428eb806176b0fbe40f" Feb 17 22:18:45 crc kubenswrapper[4793]: E0217 22:18:45.550617 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:18:49 crc kubenswrapper[4793]: I0217 22:18:49.538927 4793 scope.go:117] "RemoveContainer" containerID="0ae41249735de3a3bf3b373783b225bae30903e29f0c434704ed73e28711bfca" Feb 17 22:18:49 crc kubenswrapper[4793]: E0217 22:18:49.539628 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:18:56 crc kubenswrapper[4793]: I0217 22:18:56.539473 4793 scope.go:117] "RemoveContainer" containerID="e293a9c5da63ce3cfe5e784a5140977ecbb1e1291736b428eb806176b0fbe40f" Feb 17 22:18:56 crc kubenswrapper[4793]: E0217 22:18:56.540317 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:19:04 crc kubenswrapper[4793]: I0217 22:19:04.542786 4793 scope.go:117] "RemoveContainer" containerID="0ae41249735de3a3bf3b373783b225bae30903e29f0c434704ed73e28711bfca" Feb 17 22:19:04 crc kubenswrapper[4793]: E0217 22:19:04.543779 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" 
pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:19:08 crc kubenswrapper[4793]: I0217 22:19:08.539852 4793 scope.go:117] "RemoveContainer" containerID="e293a9c5da63ce3cfe5e784a5140977ecbb1e1291736b428eb806176b0fbe40f" Feb 17 22:19:08 crc kubenswrapper[4793]: E0217 22:19:08.540643 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:19:17 crc kubenswrapper[4793]: I0217 22:19:17.539238 4793 scope.go:117] "RemoveContainer" containerID="0ae41249735de3a3bf3b373783b225bae30903e29f0c434704ed73e28711bfca" Feb 17 22:19:17 crc kubenswrapper[4793]: E0217 22:19:17.540420 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:19:22 crc kubenswrapper[4793]: I0217 22:19:22.539120 4793 scope.go:117] "RemoveContainer" containerID="e293a9c5da63ce3cfe5e784a5140977ecbb1e1291736b428eb806176b0fbe40f" Feb 17 22:19:22 crc kubenswrapper[4793]: E0217 22:19:22.540179 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" 
podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:19:32 crc kubenswrapper[4793]: I0217 22:19:32.539186 4793 scope.go:117] "RemoveContainer" containerID="0ae41249735de3a3bf3b373783b225bae30903e29f0c434704ed73e28711bfca" Feb 17 22:19:32 crc kubenswrapper[4793]: E0217 22:19:32.540408 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:19:35 crc kubenswrapper[4793]: I0217 22:19:35.063118 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tt2hx"] Feb 17 22:19:35 crc kubenswrapper[4793]: E0217 22:19:35.064451 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6391c7a9-27ec-49e0-8d70-f94cd3e76474" containerName="registry-server" Feb 17 22:19:35 crc kubenswrapper[4793]: I0217 22:19:35.064472 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="6391c7a9-27ec-49e0-8d70-f94cd3e76474" containerName="registry-server" Feb 17 22:19:35 crc kubenswrapper[4793]: E0217 22:19:35.064510 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6391c7a9-27ec-49e0-8d70-f94cd3e76474" containerName="extract-content" Feb 17 22:19:35 crc kubenswrapper[4793]: I0217 22:19:35.064520 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="6391c7a9-27ec-49e0-8d70-f94cd3e76474" containerName="extract-content" Feb 17 22:19:35 crc kubenswrapper[4793]: E0217 22:19:35.064555 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6391c7a9-27ec-49e0-8d70-f94cd3e76474" containerName="extract-utilities" Feb 17 22:19:35 crc kubenswrapper[4793]: I0217 22:19:35.064567 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="6391c7a9-27ec-49e0-8d70-f94cd3e76474" 
containerName="extract-utilities" Feb 17 22:19:35 crc kubenswrapper[4793]: I0217 22:19:35.064881 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="6391c7a9-27ec-49e0-8d70-f94cd3e76474" containerName="registry-server" Feb 17 22:19:35 crc kubenswrapper[4793]: I0217 22:19:35.067225 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tt2hx" Feb 17 22:19:35 crc kubenswrapper[4793]: I0217 22:19:35.080446 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tt2hx"] Feb 17 22:19:35 crc kubenswrapper[4793]: I0217 22:19:35.181723 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e4d3e8b-8f53-40cb-a781-2973d93b6ef3-catalog-content\") pod \"community-operators-tt2hx\" (UID: \"5e4d3e8b-8f53-40cb-a781-2973d93b6ef3\") " pod="openshift-marketplace/community-operators-tt2hx" Feb 17 22:19:35 crc kubenswrapper[4793]: I0217 22:19:35.181881 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e4d3e8b-8f53-40cb-a781-2973d93b6ef3-utilities\") pod \"community-operators-tt2hx\" (UID: \"5e4d3e8b-8f53-40cb-a781-2973d93b6ef3\") " pod="openshift-marketplace/community-operators-tt2hx" Feb 17 22:19:35 crc kubenswrapper[4793]: I0217 22:19:35.182056 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67hhr\" (UniqueName: \"kubernetes.io/projected/5e4d3e8b-8f53-40cb-a781-2973d93b6ef3-kube-api-access-67hhr\") pod \"community-operators-tt2hx\" (UID: \"5e4d3e8b-8f53-40cb-a781-2973d93b6ef3\") " pod="openshift-marketplace/community-operators-tt2hx" Feb 17 22:19:35 crc kubenswrapper[4793]: I0217 22:19:35.283942 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-67hhr\" (UniqueName: \"kubernetes.io/projected/5e4d3e8b-8f53-40cb-a781-2973d93b6ef3-kube-api-access-67hhr\") pod \"community-operators-tt2hx\" (UID: \"5e4d3e8b-8f53-40cb-a781-2973d93b6ef3\") " pod="openshift-marketplace/community-operators-tt2hx" Feb 17 22:19:35 crc kubenswrapper[4793]: I0217 22:19:35.284126 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e4d3e8b-8f53-40cb-a781-2973d93b6ef3-catalog-content\") pod \"community-operators-tt2hx\" (UID: \"5e4d3e8b-8f53-40cb-a781-2973d93b6ef3\") " pod="openshift-marketplace/community-operators-tt2hx" Feb 17 22:19:35 crc kubenswrapper[4793]: I0217 22:19:35.284239 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e4d3e8b-8f53-40cb-a781-2973d93b6ef3-utilities\") pod \"community-operators-tt2hx\" (UID: \"5e4d3e8b-8f53-40cb-a781-2973d93b6ef3\") " pod="openshift-marketplace/community-operators-tt2hx" Feb 17 22:19:35 crc kubenswrapper[4793]: I0217 22:19:35.284775 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e4d3e8b-8f53-40cb-a781-2973d93b6ef3-catalog-content\") pod \"community-operators-tt2hx\" (UID: \"5e4d3e8b-8f53-40cb-a781-2973d93b6ef3\") " pod="openshift-marketplace/community-operators-tt2hx" Feb 17 22:19:35 crc kubenswrapper[4793]: I0217 22:19:35.285068 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e4d3e8b-8f53-40cb-a781-2973d93b6ef3-utilities\") pod \"community-operators-tt2hx\" (UID: \"5e4d3e8b-8f53-40cb-a781-2973d93b6ef3\") " pod="openshift-marketplace/community-operators-tt2hx" Feb 17 22:19:35 crc kubenswrapper[4793]: I0217 22:19:35.314312 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67hhr\" (UniqueName: 
\"kubernetes.io/projected/5e4d3e8b-8f53-40cb-a781-2973d93b6ef3-kube-api-access-67hhr\") pod \"community-operators-tt2hx\" (UID: \"5e4d3e8b-8f53-40cb-a781-2973d93b6ef3\") " pod="openshift-marketplace/community-operators-tt2hx" Feb 17 22:19:35 crc kubenswrapper[4793]: I0217 22:19:35.412258 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tt2hx" Feb 17 22:19:35 crc kubenswrapper[4793]: I0217 22:19:35.981224 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tt2hx"] Feb 17 22:19:35 crc kubenswrapper[4793]: W0217 22:19:35.985098 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e4d3e8b_8f53_40cb_a781_2973d93b6ef3.slice/crio-13b6325659e52943545af373024f5a03e52fcc7027918058fa6c25270d736403 WatchSource:0}: Error finding container 13b6325659e52943545af373024f5a03e52fcc7027918058fa6c25270d736403: Status 404 returned error can't find the container with id 13b6325659e52943545af373024f5a03e52fcc7027918058fa6c25270d736403 Feb 17 22:19:36 crc kubenswrapper[4793]: I0217 22:19:36.092059 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tt2hx" event={"ID":"5e4d3e8b-8f53-40cb-a781-2973d93b6ef3","Type":"ContainerStarted","Data":"13b6325659e52943545af373024f5a03e52fcc7027918058fa6c25270d736403"} Feb 17 22:19:36 crc kubenswrapper[4793]: I0217 22:19:36.540395 4793 scope.go:117] "RemoveContainer" containerID="e293a9c5da63ce3cfe5e784a5140977ecbb1e1291736b428eb806176b0fbe40f" Feb 17 22:19:36 crc kubenswrapper[4793]: E0217 22:19:36.541664 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:19:37 crc kubenswrapper[4793]: I0217 22:19:37.105408 4793 generic.go:334] "Generic (PLEG): container finished" podID="5e4d3e8b-8f53-40cb-a781-2973d93b6ef3" containerID="35bc1ff89fd04b00bab67cafb28c81299566d5819155064d023801d111b48931" exitCode=0 Feb 17 22:19:37 crc kubenswrapper[4793]: I0217 22:19:37.105887 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tt2hx" event={"ID":"5e4d3e8b-8f53-40cb-a781-2973d93b6ef3","Type":"ContainerDied","Data":"35bc1ff89fd04b00bab67cafb28c81299566d5819155064d023801d111b48931"} Feb 17 22:19:38 crc kubenswrapper[4793]: I0217 22:19:38.121116 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tt2hx" event={"ID":"5e4d3e8b-8f53-40cb-a781-2973d93b6ef3","Type":"ContainerStarted","Data":"2eff25277755b0a1d11fd1fd2044bbe6bead5b5874f880f03462fd7e6eb1e2db"} Feb 17 22:19:39 crc kubenswrapper[4793]: I0217 22:19:39.137249 4793 generic.go:334] "Generic (PLEG): container finished" podID="5e4d3e8b-8f53-40cb-a781-2973d93b6ef3" containerID="2eff25277755b0a1d11fd1fd2044bbe6bead5b5874f880f03462fd7e6eb1e2db" exitCode=0 Feb 17 22:19:39 crc kubenswrapper[4793]: I0217 22:19:39.137316 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tt2hx" event={"ID":"5e4d3e8b-8f53-40cb-a781-2973d93b6ef3","Type":"ContainerDied","Data":"2eff25277755b0a1d11fd1fd2044bbe6bead5b5874f880f03462fd7e6eb1e2db"} Feb 17 22:19:40 crc kubenswrapper[4793]: I0217 22:19:40.152900 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tt2hx" 
event={"ID":"5e4d3e8b-8f53-40cb-a781-2973d93b6ef3","Type":"ContainerStarted","Data":"02e648a062d27b3e65f24fe756ccf7aeeef62a4656c6f830ce0103f8ffca74fd"} Feb 17 22:19:40 crc kubenswrapper[4793]: I0217 22:19:40.196184 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tt2hx" podStartSLOduration=2.778583484 podStartE2EDuration="5.196158649s" podCreationTimestamp="2026-02-17 22:19:35 +0000 UTC" firstStartedPulling="2026-02-17 22:19:37.108427997 +0000 UTC m=+7852.400126348" lastFinishedPulling="2026-02-17 22:19:39.526003192 +0000 UTC m=+7854.817701513" observedRunningTime="2026-02-17 22:19:40.176538155 +0000 UTC m=+7855.468236476" watchObservedRunningTime="2026-02-17 22:19:40.196158649 +0000 UTC m=+7855.487857000" Feb 17 22:19:45 crc kubenswrapper[4793]: I0217 22:19:45.413154 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tt2hx" Feb 17 22:19:45 crc kubenswrapper[4793]: I0217 22:19:45.413842 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tt2hx" Feb 17 22:19:45 crc kubenswrapper[4793]: I0217 22:19:45.502661 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tt2hx" Feb 17 22:19:46 crc kubenswrapper[4793]: I0217 22:19:46.305985 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tt2hx" Feb 17 22:19:46 crc kubenswrapper[4793]: I0217 22:19:46.382806 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tt2hx"] Feb 17 22:19:46 crc kubenswrapper[4793]: I0217 22:19:46.539222 4793 scope.go:117] "RemoveContainer" containerID="0ae41249735de3a3bf3b373783b225bae30903e29f0c434704ed73e28711bfca" Feb 17 22:19:46 crc kubenswrapper[4793]: E0217 22:19:46.539657 4793 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:19:48 crc kubenswrapper[4793]: I0217 22:19:48.243013 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tt2hx" podUID="5e4d3e8b-8f53-40cb-a781-2973d93b6ef3" containerName="registry-server" containerID="cri-o://02e648a062d27b3e65f24fe756ccf7aeeef62a4656c6f830ce0103f8ffca74fd" gracePeriod=2 Feb 17 22:19:48 crc kubenswrapper[4793]: I0217 22:19:48.539224 4793 scope.go:117] "RemoveContainer" containerID="e293a9c5da63ce3cfe5e784a5140977ecbb1e1291736b428eb806176b0fbe40f" Feb 17 22:19:48 crc kubenswrapper[4793]: E0217 22:19:48.539754 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:19:48 crc kubenswrapper[4793]: I0217 22:19:48.711777 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tt2hx" Feb 17 22:19:48 crc kubenswrapper[4793]: I0217 22:19:48.819648 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e4d3e8b-8f53-40cb-a781-2973d93b6ef3-utilities\") pod \"5e4d3e8b-8f53-40cb-a781-2973d93b6ef3\" (UID: \"5e4d3e8b-8f53-40cb-a781-2973d93b6ef3\") " Feb 17 22:19:48 crc kubenswrapper[4793]: I0217 22:19:48.819729 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67hhr\" (UniqueName: \"kubernetes.io/projected/5e4d3e8b-8f53-40cb-a781-2973d93b6ef3-kube-api-access-67hhr\") pod \"5e4d3e8b-8f53-40cb-a781-2973d93b6ef3\" (UID: \"5e4d3e8b-8f53-40cb-a781-2973d93b6ef3\") " Feb 17 22:19:48 crc kubenswrapper[4793]: I0217 22:19:48.819810 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e4d3e8b-8f53-40cb-a781-2973d93b6ef3-catalog-content\") pod \"5e4d3e8b-8f53-40cb-a781-2973d93b6ef3\" (UID: \"5e4d3e8b-8f53-40cb-a781-2973d93b6ef3\") " Feb 17 22:19:48 crc kubenswrapper[4793]: I0217 22:19:48.820386 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e4d3e8b-8f53-40cb-a781-2973d93b6ef3-utilities" (OuterVolumeSpecName: "utilities") pod "5e4d3e8b-8f53-40cb-a781-2973d93b6ef3" (UID: "5e4d3e8b-8f53-40cb-a781-2973d93b6ef3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 22:19:48 crc kubenswrapper[4793]: I0217 22:19:48.829337 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e4d3e8b-8f53-40cb-a781-2973d93b6ef3-kube-api-access-67hhr" (OuterVolumeSpecName: "kube-api-access-67hhr") pod "5e4d3e8b-8f53-40cb-a781-2973d93b6ef3" (UID: "5e4d3e8b-8f53-40cb-a781-2973d93b6ef3"). InnerVolumeSpecName "kube-api-access-67hhr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 22:19:48 crc kubenswrapper[4793]: I0217 22:19:48.875497 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e4d3e8b-8f53-40cb-a781-2973d93b6ef3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5e4d3e8b-8f53-40cb-a781-2973d93b6ef3" (UID: "5e4d3e8b-8f53-40cb-a781-2973d93b6ef3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 22:19:48 crc kubenswrapper[4793]: I0217 22:19:48.922315 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67hhr\" (UniqueName: \"kubernetes.io/projected/5e4d3e8b-8f53-40cb-a781-2973d93b6ef3-kube-api-access-67hhr\") on node \"crc\" DevicePath \"\"" Feb 17 22:19:48 crc kubenswrapper[4793]: I0217 22:19:48.922351 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e4d3e8b-8f53-40cb-a781-2973d93b6ef3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 22:19:48 crc kubenswrapper[4793]: I0217 22:19:48.922366 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e4d3e8b-8f53-40cb-a781-2973d93b6ef3-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 22:19:49 crc kubenswrapper[4793]: I0217 22:19:49.258529 4793 generic.go:334] "Generic (PLEG): container finished" podID="5e4d3e8b-8f53-40cb-a781-2973d93b6ef3" containerID="02e648a062d27b3e65f24fe756ccf7aeeef62a4656c6f830ce0103f8ffca74fd" exitCode=0 Feb 17 22:19:49 crc kubenswrapper[4793]: I0217 22:19:49.258572 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tt2hx" Feb 17 22:19:49 crc kubenswrapper[4793]: I0217 22:19:49.258598 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tt2hx" event={"ID":"5e4d3e8b-8f53-40cb-a781-2973d93b6ef3","Type":"ContainerDied","Data":"02e648a062d27b3e65f24fe756ccf7aeeef62a4656c6f830ce0103f8ffca74fd"} Feb 17 22:19:49 crc kubenswrapper[4793]: I0217 22:19:49.258654 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tt2hx" event={"ID":"5e4d3e8b-8f53-40cb-a781-2973d93b6ef3","Type":"ContainerDied","Data":"13b6325659e52943545af373024f5a03e52fcc7027918058fa6c25270d736403"} Feb 17 22:19:49 crc kubenswrapper[4793]: I0217 22:19:49.258707 4793 scope.go:117] "RemoveContainer" containerID="02e648a062d27b3e65f24fe756ccf7aeeef62a4656c6f830ce0103f8ffca74fd" Feb 17 22:19:49 crc kubenswrapper[4793]: I0217 22:19:49.298148 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tt2hx"] Feb 17 22:19:49 crc kubenswrapper[4793]: I0217 22:19:49.300933 4793 scope.go:117] "RemoveContainer" containerID="2eff25277755b0a1d11fd1fd2044bbe6bead5b5874f880f03462fd7e6eb1e2db" Feb 17 22:19:49 crc kubenswrapper[4793]: I0217 22:19:49.309395 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tt2hx"] Feb 17 22:19:49 crc kubenswrapper[4793]: I0217 22:19:49.333466 4793 scope.go:117] "RemoveContainer" containerID="35bc1ff89fd04b00bab67cafb28c81299566d5819155064d023801d111b48931" Feb 17 22:19:49 crc kubenswrapper[4793]: I0217 22:19:49.369134 4793 scope.go:117] "RemoveContainer" containerID="02e648a062d27b3e65f24fe756ccf7aeeef62a4656c6f830ce0103f8ffca74fd" Feb 17 22:19:49 crc kubenswrapper[4793]: E0217 22:19:49.369913 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"02e648a062d27b3e65f24fe756ccf7aeeef62a4656c6f830ce0103f8ffca74fd\": container with ID starting with 02e648a062d27b3e65f24fe756ccf7aeeef62a4656c6f830ce0103f8ffca74fd not found: ID does not exist" containerID="02e648a062d27b3e65f24fe756ccf7aeeef62a4656c6f830ce0103f8ffca74fd" Feb 17 22:19:49 crc kubenswrapper[4793]: I0217 22:19:49.369944 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02e648a062d27b3e65f24fe756ccf7aeeef62a4656c6f830ce0103f8ffca74fd"} err="failed to get container status \"02e648a062d27b3e65f24fe756ccf7aeeef62a4656c6f830ce0103f8ffca74fd\": rpc error: code = NotFound desc = could not find container \"02e648a062d27b3e65f24fe756ccf7aeeef62a4656c6f830ce0103f8ffca74fd\": container with ID starting with 02e648a062d27b3e65f24fe756ccf7aeeef62a4656c6f830ce0103f8ffca74fd not found: ID does not exist" Feb 17 22:19:49 crc kubenswrapper[4793]: I0217 22:19:49.369967 4793 scope.go:117] "RemoveContainer" containerID="2eff25277755b0a1d11fd1fd2044bbe6bead5b5874f880f03462fd7e6eb1e2db" Feb 17 22:19:49 crc kubenswrapper[4793]: E0217 22:19:49.370349 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2eff25277755b0a1d11fd1fd2044bbe6bead5b5874f880f03462fd7e6eb1e2db\": container with ID starting with 2eff25277755b0a1d11fd1fd2044bbe6bead5b5874f880f03462fd7e6eb1e2db not found: ID does not exist" containerID="2eff25277755b0a1d11fd1fd2044bbe6bead5b5874f880f03462fd7e6eb1e2db" Feb 17 22:19:49 crc kubenswrapper[4793]: I0217 22:19:49.370380 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2eff25277755b0a1d11fd1fd2044bbe6bead5b5874f880f03462fd7e6eb1e2db"} err="failed to get container status \"2eff25277755b0a1d11fd1fd2044bbe6bead5b5874f880f03462fd7e6eb1e2db\": rpc error: code = NotFound desc = could not find container \"2eff25277755b0a1d11fd1fd2044bbe6bead5b5874f880f03462fd7e6eb1e2db\": container with ID 
starting with 2eff25277755b0a1d11fd1fd2044bbe6bead5b5874f880f03462fd7e6eb1e2db not found: ID does not exist" Feb 17 22:19:49 crc kubenswrapper[4793]: I0217 22:19:49.370394 4793 scope.go:117] "RemoveContainer" containerID="35bc1ff89fd04b00bab67cafb28c81299566d5819155064d023801d111b48931" Feb 17 22:19:49 crc kubenswrapper[4793]: E0217 22:19:49.370752 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35bc1ff89fd04b00bab67cafb28c81299566d5819155064d023801d111b48931\": container with ID starting with 35bc1ff89fd04b00bab67cafb28c81299566d5819155064d023801d111b48931 not found: ID does not exist" containerID="35bc1ff89fd04b00bab67cafb28c81299566d5819155064d023801d111b48931" Feb 17 22:19:49 crc kubenswrapper[4793]: I0217 22:19:49.370773 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35bc1ff89fd04b00bab67cafb28c81299566d5819155064d023801d111b48931"} err="failed to get container status \"35bc1ff89fd04b00bab67cafb28c81299566d5819155064d023801d111b48931\": rpc error: code = NotFound desc = could not find container \"35bc1ff89fd04b00bab67cafb28c81299566d5819155064d023801d111b48931\": container with ID starting with 35bc1ff89fd04b00bab67cafb28c81299566d5819155064d023801d111b48931 not found: ID does not exist" Feb 17 22:19:49 crc kubenswrapper[4793]: I0217 22:19:49.554726 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e4d3e8b-8f53-40cb-a781-2973d93b6ef3" path="/var/lib/kubelet/pods/5e4d3e8b-8f53-40cb-a781-2973d93b6ef3/volumes" Feb 17 22:19:59 crc kubenswrapper[4793]: I0217 22:19:59.538949 4793 scope.go:117] "RemoveContainer" containerID="0ae41249735de3a3bf3b373783b225bae30903e29f0c434704ed73e28711bfca" Feb 17 22:19:59 crc kubenswrapper[4793]: E0217 22:19:59.539923 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:19:59 crc kubenswrapper[4793]: I0217 22:19:59.540126 4793 scope.go:117] "RemoveContainer" containerID="e293a9c5da63ce3cfe5e784a5140977ecbb1e1291736b428eb806176b0fbe40f" Feb 17 22:19:59 crc kubenswrapper[4793]: E0217 22:19:59.540362 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:20:10 crc kubenswrapper[4793]: I0217 22:20:10.541213 4793 scope.go:117] "RemoveContainer" containerID="e293a9c5da63ce3cfe5e784a5140977ecbb1e1291736b428eb806176b0fbe40f" Feb 17 22:20:10 crc kubenswrapper[4793]: I0217 22:20:10.541951 4793 scope.go:117] "RemoveContainer" containerID="0ae41249735de3a3bf3b373783b225bae30903e29f0c434704ed73e28711bfca" Feb 17 22:20:10 crc kubenswrapper[4793]: E0217 22:20:10.542154 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:20:10 crc kubenswrapper[4793]: E0217 22:20:10.542275 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier 
pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:20:23 crc kubenswrapper[4793]: I0217 22:20:23.538710 4793 scope.go:117] "RemoveContainer" containerID="0ae41249735de3a3bf3b373783b225bae30903e29f0c434704ed73e28711bfca" Feb 17 22:20:23 crc kubenswrapper[4793]: E0217 22:20:23.539475 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:20:24 crc kubenswrapper[4793]: I0217 22:20:24.539520 4793 scope.go:117] "RemoveContainer" containerID="e293a9c5da63ce3cfe5e784a5140977ecbb1e1291736b428eb806176b0fbe40f" Feb 17 22:20:24 crc kubenswrapper[4793]: E0217 22:20:24.540276 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:20:35 crc kubenswrapper[4793]: I0217 22:20:35.551378 4793 scope.go:117] "RemoveContainer" containerID="0ae41249735de3a3bf3b373783b225bae30903e29f0c434704ed73e28711bfca" Feb 17 22:20:35 crc kubenswrapper[4793]: E0217 22:20:35.552134 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" 
podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:20:38 crc kubenswrapper[4793]: I0217 22:20:38.539140 4793 scope.go:117] "RemoveContainer" containerID="e293a9c5da63ce3cfe5e784a5140977ecbb1e1291736b428eb806176b0fbe40f" Feb 17 22:20:38 crc kubenswrapper[4793]: E0217 22:20:38.539981 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:20:47 crc kubenswrapper[4793]: I0217 22:20:47.539677 4793 scope.go:117] "RemoveContainer" containerID="0ae41249735de3a3bf3b373783b225bae30903e29f0c434704ed73e28711bfca" Feb 17 22:20:47 crc kubenswrapper[4793]: E0217 22:20:47.540716 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:20:50 crc kubenswrapper[4793]: I0217 22:20:50.539763 4793 scope.go:117] "RemoveContainer" containerID="e293a9c5da63ce3cfe5e784a5140977ecbb1e1291736b428eb806176b0fbe40f" Feb 17 22:20:50 crc kubenswrapper[4793]: E0217 22:20:50.541022 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:20:58 
crc kubenswrapper[4793]: I0217 22:20:58.539844 4793 scope.go:117] "RemoveContainer" containerID="0ae41249735de3a3bf3b373783b225bae30903e29f0c434704ed73e28711bfca" Feb 17 22:20:58 crc kubenswrapper[4793]: E0217 22:20:58.541359 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:21:01 crc kubenswrapper[4793]: I0217 22:21:01.539920 4793 scope.go:117] "RemoveContainer" containerID="e293a9c5da63ce3cfe5e784a5140977ecbb1e1291736b428eb806176b0fbe40f" Feb 17 22:21:01 crc kubenswrapper[4793]: E0217 22:21:01.541095 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:21:09 crc kubenswrapper[4793]: I0217 22:21:09.546433 4793 scope.go:117] "RemoveContainer" containerID="0ae41249735de3a3bf3b373783b225bae30903e29f0c434704ed73e28711bfca" Feb 17 22:21:09 crc kubenswrapper[4793]: E0217 22:21:09.547257 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:21:16 crc kubenswrapper[4793]: I0217 22:21:16.539166 4793 scope.go:117] "RemoveContainer" 
containerID="e293a9c5da63ce3cfe5e784a5140977ecbb1e1291736b428eb806176b0fbe40f" Feb 17 22:21:16 crc kubenswrapper[4793]: E0217 22:21:16.539953 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:21:23 crc kubenswrapper[4793]: I0217 22:21:23.539356 4793 scope.go:117] "RemoveContainer" containerID="0ae41249735de3a3bf3b373783b225bae30903e29f0c434704ed73e28711bfca" Feb 17 22:21:23 crc kubenswrapper[4793]: E0217 22:21:23.541088 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:21:28 crc kubenswrapper[4793]: I0217 22:21:28.538924 4793 scope.go:117] "RemoveContainer" containerID="e293a9c5da63ce3cfe5e784a5140977ecbb1e1291736b428eb806176b0fbe40f" Feb 17 22:21:28 crc kubenswrapper[4793]: E0217 22:21:28.541980 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:21:34 crc kubenswrapper[4793]: I0217 22:21:34.539156 4793 scope.go:117] "RemoveContainer" 
containerID="0ae41249735de3a3bf3b373783b225bae30903e29f0c434704ed73e28711bfca" Feb 17 22:21:34 crc kubenswrapper[4793]: E0217 22:21:34.540027 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:21:41 crc kubenswrapper[4793]: I0217 22:21:41.538630 4793 scope.go:117] "RemoveContainer" containerID="e293a9c5da63ce3cfe5e784a5140977ecbb1e1291736b428eb806176b0fbe40f" Feb 17 22:21:41 crc kubenswrapper[4793]: E0217 22:21:41.539463 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:21:48 crc kubenswrapper[4793]: I0217 22:21:48.539168 4793 scope.go:117] "RemoveContainer" containerID="0ae41249735de3a3bf3b373783b225bae30903e29f0c434704ed73e28711bfca" Feb 17 22:21:48 crc kubenswrapper[4793]: E0217 22:21:48.540034 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:21:54 crc kubenswrapper[4793]: I0217 22:21:54.539236 4793 scope.go:117] "RemoveContainer" containerID="e293a9c5da63ce3cfe5e784a5140977ecbb1e1291736b428eb806176b0fbe40f" Feb 17 22:21:55 crc kubenswrapper[4793]: I0217 22:21:55.771140 
4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerStarted","Data":"f4981ab52d850aaca9dc626601ec76dbebfd4955c3e8a2001dcd9e38e1c0e1fb"} Feb 17 22:21:59 crc kubenswrapper[4793]: I0217 22:21:59.542641 4793 scope.go:117] "RemoveContainer" containerID="0ae41249735de3a3bf3b373783b225bae30903e29f0c434704ed73e28711bfca" Feb 17 22:21:59 crc kubenswrapper[4793]: I0217 22:21:59.811292 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerStarted","Data":"dfdb0edbf6eb5766ad38a726016274395d778c03b670fe265e1537747f96b08d"} Feb 17 22:22:00 crc kubenswrapper[4793]: I0217 22:22:00.596931 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 17 22:22:02 crc kubenswrapper[4793]: I0217 22:22:02.857232 4793 generic.go:334] "Generic (PLEG): container finished" podID="02d26164-0fa4-4020-9224-b7760a490987" containerID="dfdb0edbf6eb5766ad38a726016274395d778c03b670fe265e1537747f96b08d" exitCode=1 Feb 17 22:22:02 crc kubenswrapper[4793]: I0217 22:22:02.857496 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerDied","Data":"dfdb0edbf6eb5766ad38a726016274395d778c03b670fe265e1537747f96b08d"} Feb 17 22:22:02 crc kubenswrapper[4793]: I0217 22:22:02.857818 4793 scope.go:117] "RemoveContainer" containerID="0ae41249735de3a3bf3b373783b225bae30903e29f0c434704ed73e28711bfca" Feb 17 22:22:02 crc kubenswrapper[4793]: I0217 22:22:02.862986 4793 scope.go:117] "RemoveContainer" containerID="dfdb0edbf6eb5766ad38a726016274395d778c03b670fe265e1537747f96b08d" Feb 17 22:22:02 crc kubenswrapper[4793]: E0217 22:22:02.864540 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:22:05 crc kubenswrapper[4793]: I0217 22:22:05.596822 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 22:22:05 crc kubenswrapper[4793]: I0217 22:22:05.597561 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 22:22:05 crc kubenswrapper[4793]: I0217 22:22:05.597586 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 22:22:05 crc kubenswrapper[4793]: I0217 22:22:05.598366 4793 scope.go:117] "RemoveContainer" containerID="dfdb0edbf6eb5766ad38a726016274395d778c03b670fe265e1537747f96b08d" Feb 17 22:22:05 crc kubenswrapper[4793]: E0217 22:22:05.598675 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:22:16 crc kubenswrapper[4793]: I0217 22:22:16.539536 4793 scope.go:117] "RemoveContainer" containerID="dfdb0edbf6eb5766ad38a726016274395d778c03b670fe265e1537747f96b08d" Feb 17 22:22:16 crc kubenswrapper[4793]: E0217 22:22:16.540488 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:22:31 crc 
kubenswrapper[4793]: I0217 22:22:31.539270 4793 scope.go:117] "RemoveContainer" containerID="dfdb0edbf6eb5766ad38a726016274395d778c03b670fe265e1537747f96b08d" Feb 17 22:22:31 crc kubenswrapper[4793]: E0217 22:22:31.540003 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:22:42 crc kubenswrapper[4793]: I0217 22:22:42.539680 4793 scope.go:117] "RemoveContainer" containerID="dfdb0edbf6eb5766ad38a726016274395d778c03b670fe265e1537747f96b08d" Feb 17 22:22:42 crc kubenswrapper[4793]: E0217 22:22:42.541019 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:22:56 crc kubenswrapper[4793]: I0217 22:22:56.538967 4793 scope.go:117] "RemoveContainer" containerID="dfdb0edbf6eb5766ad38a726016274395d778c03b670fe265e1537747f96b08d" Feb 17 22:22:56 crc kubenswrapper[4793]: E0217 22:22:56.539652 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:23:08 crc kubenswrapper[4793]: I0217 22:23:08.539881 4793 scope.go:117] "RemoveContainer" containerID="dfdb0edbf6eb5766ad38a726016274395d778c03b670fe265e1537747f96b08d" Feb 17 22:23:08 crc kubenswrapper[4793]: E0217 22:23:08.540964 
4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:23:23 crc kubenswrapper[4793]: I0217 22:23:23.538734 4793 scope.go:117] "RemoveContainer" containerID="dfdb0edbf6eb5766ad38a726016274395d778c03b670fe265e1537747f96b08d" Feb 17 22:23:23 crc kubenswrapper[4793]: E0217 22:23:23.539993 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:23:29 crc kubenswrapper[4793]: I0217 22:23:29.530482 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-s4sw4"] Feb 17 22:23:29 crc kubenswrapper[4793]: E0217 22:23:29.531494 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e4d3e8b-8f53-40cb-a781-2973d93b6ef3" containerName="registry-server" Feb 17 22:23:29 crc kubenswrapper[4793]: I0217 22:23:29.531506 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e4d3e8b-8f53-40cb-a781-2973d93b6ef3" containerName="registry-server" Feb 17 22:23:29 crc kubenswrapper[4793]: E0217 22:23:29.531547 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e4d3e8b-8f53-40cb-a781-2973d93b6ef3" containerName="extract-utilities" Feb 17 22:23:29 crc kubenswrapper[4793]: I0217 22:23:29.531554 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e4d3e8b-8f53-40cb-a781-2973d93b6ef3" containerName="extract-utilities" Feb 17 22:23:29 crc kubenswrapper[4793]: E0217 22:23:29.531575 4793 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="5e4d3e8b-8f53-40cb-a781-2973d93b6ef3" containerName="extract-content" Feb 17 22:23:29 crc kubenswrapper[4793]: I0217 22:23:29.531581 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e4d3e8b-8f53-40cb-a781-2973d93b6ef3" containerName="extract-content" Feb 17 22:23:29 crc kubenswrapper[4793]: I0217 22:23:29.531785 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e4d3e8b-8f53-40cb-a781-2973d93b6ef3" containerName="registry-server" Feb 17 22:23:29 crc kubenswrapper[4793]: I0217 22:23:29.533242 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s4sw4" Feb 17 22:23:29 crc kubenswrapper[4793]: I0217 22:23:29.560544 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s4sw4"] Feb 17 22:23:29 crc kubenswrapper[4793]: I0217 22:23:29.682782 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpczp\" (UniqueName: \"kubernetes.io/projected/6844504a-7138-4386-9253-4d16378374f3-kube-api-access-jpczp\") pod \"certified-operators-s4sw4\" (UID: \"6844504a-7138-4386-9253-4d16378374f3\") " pod="openshift-marketplace/certified-operators-s4sw4" Feb 17 22:23:29 crc kubenswrapper[4793]: I0217 22:23:29.683014 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6844504a-7138-4386-9253-4d16378374f3-catalog-content\") pod \"certified-operators-s4sw4\" (UID: \"6844504a-7138-4386-9253-4d16378374f3\") " pod="openshift-marketplace/certified-operators-s4sw4" Feb 17 22:23:29 crc kubenswrapper[4793]: I0217 22:23:29.683352 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6844504a-7138-4386-9253-4d16378374f3-utilities\") pod 
\"certified-operators-s4sw4\" (UID: \"6844504a-7138-4386-9253-4d16378374f3\") " pod="openshift-marketplace/certified-operators-s4sw4" Feb 17 22:23:29 crc kubenswrapper[4793]: I0217 22:23:29.785000 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpczp\" (UniqueName: \"kubernetes.io/projected/6844504a-7138-4386-9253-4d16378374f3-kube-api-access-jpczp\") pod \"certified-operators-s4sw4\" (UID: \"6844504a-7138-4386-9253-4d16378374f3\") " pod="openshift-marketplace/certified-operators-s4sw4" Feb 17 22:23:29 crc kubenswrapper[4793]: I0217 22:23:29.785071 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6844504a-7138-4386-9253-4d16378374f3-catalog-content\") pod \"certified-operators-s4sw4\" (UID: \"6844504a-7138-4386-9253-4d16378374f3\") " pod="openshift-marketplace/certified-operators-s4sw4" Feb 17 22:23:29 crc kubenswrapper[4793]: I0217 22:23:29.785124 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6844504a-7138-4386-9253-4d16378374f3-utilities\") pod \"certified-operators-s4sw4\" (UID: \"6844504a-7138-4386-9253-4d16378374f3\") " pod="openshift-marketplace/certified-operators-s4sw4" Feb 17 22:23:29 crc kubenswrapper[4793]: I0217 22:23:29.785651 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6844504a-7138-4386-9253-4d16378374f3-utilities\") pod \"certified-operators-s4sw4\" (UID: \"6844504a-7138-4386-9253-4d16378374f3\") " pod="openshift-marketplace/certified-operators-s4sw4" Feb 17 22:23:29 crc kubenswrapper[4793]: I0217 22:23:29.786052 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6844504a-7138-4386-9253-4d16378374f3-catalog-content\") pod \"certified-operators-s4sw4\" (UID: 
\"6844504a-7138-4386-9253-4d16378374f3\") " pod="openshift-marketplace/certified-operators-s4sw4" Feb 17 22:23:29 crc kubenswrapper[4793]: I0217 22:23:29.807150 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpczp\" (UniqueName: \"kubernetes.io/projected/6844504a-7138-4386-9253-4d16378374f3-kube-api-access-jpczp\") pod \"certified-operators-s4sw4\" (UID: \"6844504a-7138-4386-9253-4d16378374f3\") " pod="openshift-marketplace/certified-operators-s4sw4" Feb 17 22:23:29 crc kubenswrapper[4793]: I0217 22:23:29.862031 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s4sw4" Feb 17 22:23:30 crc kubenswrapper[4793]: I0217 22:23:30.443593 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s4sw4"] Feb 17 22:23:30 crc kubenswrapper[4793]: W0217 22:23:30.452016 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6844504a_7138_4386_9253_4d16378374f3.slice/crio-b97de1932660ffe63bd7681efec1ca29f5536e646c6d16ed2b1dc33cddf624c2 WatchSource:0}: Error finding container b97de1932660ffe63bd7681efec1ca29f5536e646c6d16ed2b1dc33cddf624c2: Status 404 returned error can't find the container with id b97de1932660ffe63bd7681efec1ca29f5536e646c6d16ed2b1dc33cddf624c2 Feb 17 22:23:30 crc kubenswrapper[4793]: I0217 22:23:30.843499 4793 generic.go:334] "Generic (PLEG): container finished" podID="6844504a-7138-4386-9253-4d16378374f3" containerID="8279474eafd8d7bcfce86bfb5cac617669a612a3b8c90e591da7d502a252c7fb" exitCode=0 Feb 17 22:23:30 crc kubenswrapper[4793]: I0217 22:23:30.843584 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s4sw4" event={"ID":"6844504a-7138-4386-9253-4d16378374f3","Type":"ContainerDied","Data":"8279474eafd8d7bcfce86bfb5cac617669a612a3b8c90e591da7d502a252c7fb"} Feb 17 22:23:30 
crc kubenswrapper[4793]: I0217 22:23:30.843998 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s4sw4" event={"ID":"6844504a-7138-4386-9253-4d16378374f3","Type":"ContainerStarted","Data":"b97de1932660ffe63bd7681efec1ca29f5536e646c6d16ed2b1dc33cddf624c2"} Feb 17 22:23:30 crc kubenswrapper[4793]: I0217 22:23:30.850289 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 22:23:31 crc kubenswrapper[4793]: I0217 22:23:31.856636 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s4sw4" event={"ID":"6844504a-7138-4386-9253-4d16378374f3","Type":"ContainerStarted","Data":"d3fa1c7a7af9084437b80437d488d546200f30e819048afb4260bfbf6a35778d"} Feb 17 22:23:32 crc kubenswrapper[4793]: I0217 22:23:32.868349 4793 generic.go:334] "Generic (PLEG): container finished" podID="6844504a-7138-4386-9253-4d16378374f3" containerID="d3fa1c7a7af9084437b80437d488d546200f30e819048afb4260bfbf6a35778d" exitCode=0 Feb 17 22:23:32 crc kubenswrapper[4793]: I0217 22:23:32.868417 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s4sw4" event={"ID":"6844504a-7138-4386-9253-4d16378374f3","Type":"ContainerDied","Data":"d3fa1c7a7af9084437b80437d488d546200f30e819048afb4260bfbf6a35778d"} Feb 17 22:23:33 crc kubenswrapper[4793]: I0217 22:23:33.881871 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s4sw4" event={"ID":"6844504a-7138-4386-9253-4d16378374f3","Type":"ContainerStarted","Data":"e261d3ed6099b947b3ff2c6dd94ed182607c4a64018ed24603819bd21264861d"} Feb 17 22:23:33 crc kubenswrapper[4793]: I0217 22:23:33.905602 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-s4sw4" podStartSLOduration=2.511465673 podStartE2EDuration="4.905575427s" podCreationTimestamp="2026-02-17 22:23:29 
+0000 UTC" firstStartedPulling="2026-02-17 22:23:30.850088701 +0000 UTC m=+8086.141787012" lastFinishedPulling="2026-02-17 22:23:33.244198415 +0000 UTC m=+8088.535896766" observedRunningTime="2026-02-17 22:23:33.902655685 +0000 UTC m=+8089.194354006" watchObservedRunningTime="2026-02-17 22:23:33.905575427 +0000 UTC m=+8089.197273768" Feb 17 22:23:38 crc kubenswrapper[4793]: I0217 22:23:38.538757 4793 scope.go:117] "RemoveContainer" containerID="dfdb0edbf6eb5766ad38a726016274395d778c03b670fe265e1537747f96b08d" Feb 17 22:23:38 crc kubenswrapper[4793]: E0217 22:23:38.539539 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:23:39 crc kubenswrapper[4793]: I0217 22:23:39.862765 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-s4sw4" Feb 17 22:23:39 crc kubenswrapper[4793]: I0217 22:23:39.863166 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-s4sw4" Feb 17 22:23:39 crc kubenswrapper[4793]: I0217 22:23:39.939940 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-s4sw4" Feb 17 22:23:40 crc kubenswrapper[4793]: I0217 22:23:40.013534 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-s4sw4" Feb 17 22:23:40 crc kubenswrapper[4793]: I0217 22:23:40.188904 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s4sw4"] Feb 17 22:23:41 crc kubenswrapper[4793]: I0217 22:23:41.965341 4793 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/certified-operators-s4sw4" podUID="6844504a-7138-4386-9253-4d16378374f3" containerName="registry-server" containerID="cri-o://e261d3ed6099b947b3ff2c6dd94ed182607c4a64018ed24603819bd21264861d" gracePeriod=2 Feb 17 22:23:42 crc kubenswrapper[4793]: I0217 22:23:42.437673 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s4sw4" Feb 17 22:23:42 crc kubenswrapper[4793]: I0217 22:23:42.563908 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6844504a-7138-4386-9253-4d16378374f3-catalog-content\") pod \"6844504a-7138-4386-9253-4d16378374f3\" (UID: \"6844504a-7138-4386-9253-4d16378374f3\") " Feb 17 22:23:42 crc kubenswrapper[4793]: I0217 22:23:42.563978 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpczp\" (UniqueName: \"kubernetes.io/projected/6844504a-7138-4386-9253-4d16378374f3-kube-api-access-jpczp\") pod \"6844504a-7138-4386-9253-4d16378374f3\" (UID: \"6844504a-7138-4386-9253-4d16378374f3\") " Feb 17 22:23:42 crc kubenswrapper[4793]: I0217 22:23:42.564104 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6844504a-7138-4386-9253-4d16378374f3-utilities\") pod \"6844504a-7138-4386-9253-4d16378374f3\" (UID: \"6844504a-7138-4386-9253-4d16378374f3\") " Feb 17 22:23:42 crc kubenswrapper[4793]: I0217 22:23:42.565236 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6844504a-7138-4386-9253-4d16378374f3-utilities" (OuterVolumeSpecName: "utilities") pod "6844504a-7138-4386-9253-4d16378374f3" (UID: "6844504a-7138-4386-9253-4d16378374f3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 22:23:42 crc kubenswrapper[4793]: I0217 22:23:42.565959 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6844504a-7138-4386-9253-4d16378374f3-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 22:23:42 crc kubenswrapper[4793]: I0217 22:23:42.572097 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6844504a-7138-4386-9253-4d16378374f3-kube-api-access-jpczp" (OuterVolumeSpecName: "kube-api-access-jpczp") pod "6844504a-7138-4386-9253-4d16378374f3" (UID: "6844504a-7138-4386-9253-4d16378374f3"). InnerVolumeSpecName "kube-api-access-jpczp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 22:23:42 crc kubenswrapper[4793]: I0217 22:23:42.621709 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6844504a-7138-4386-9253-4d16378374f3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6844504a-7138-4386-9253-4d16378374f3" (UID: "6844504a-7138-4386-9253-4d16378374f3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 22:23:42 crc kubenswrapper[4793]: I0217 22:23:42.668458 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6844504a-7138-4386-9253-4d16378374f3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 22:23:42 crc kubenswrapper[4793]: I0217 22:23:42.668817 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpczp\" (UniqueName: \"kubernetes.io/projected/6844504a-7138-4386-9253-4d16378374f3-kube-api-access-jpczp\") on node \"crc\" DevicePath \"\"" Feb 17 22:23:42 crc kubenswrapper[4793]: I0217 22:23:42.976344 4793 generic.go:334] "Generic (PLEG): container finished" podID="6844504a-7138-4386-9253-4d16378374f3" containerID="e261d3ed6099b947b3ff2c6dd94ed182607c4a64018ed24603819bd21264861d" exitCode=0 Feb 17 22:23:42 crc kubenswrapper[4793]: I0217 22:23:42.976394 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s4sw4" event={"ID":"6844504a-7138-4386-9253-4d16378374f3","Type":"ContainerDied","Data":"e261d3ed6099b947b3ff2c6dd94ed182607c4a64018ed24603819bd21264861d"} Feb 17 22:23:42 crc kubenswrapper[4793]: I0217 22:23:42.976427 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s4sw4" event={"ID":"6844504a-7138-4386-9253-4d16378374f3","Type":"ContainerDied","Data":"b97de1932660ffe63bd7681efec1ca29f5536e646c6d16ed2b1dc33cddf624c2"} Feb 17 22:23:42 crc kubenswrapper[4793]: I0217 22:23:42.976423 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s4sw4" Feb 17 22:23:42 crc kubenswrapper[4793]: I0217 22:23:42.976484 4793 scope.go:117] "RemoveContainer" containerID="e261d3ed6099b947b3ff2c6dd94ed182607c4a64018ed24603819bd21264861d" Feb 17 22:23:43 crc kubenswrapper[4793]: I0217 22:23:43.022948 4793 scope.go:117] "RemoveContainer" containerID="d3fa1c7a7af9084437b80437d488d546200f30e819048afb4260bfbf6a35778d" Feb 17 22:23:43 crc kubenswrapper[4793]: I0217 22:23:43.027960 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s4sw4"] Feb 17 22:23:43 crc kubenswrapper[4793]: I0217 22:23:43.038538 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-s4sw4"] Feb 17 22:23:43 crc kubenswrapper[4793]: I0217 22:23:43.051166 4793 scope.go:117] "RemoveContainer" containerID="8279474eafd8d7bcfce86bfb5cac617669a612a3b8c90e591da7d502a252c7fb" Feb 17 22:23:43 crc kubenswrapper[4793]: I0217 22:23:43.092401 4793 scope.go:117] "RemoveContainer" containerID="e261d3ed6099b947b3ff2c6dd94ed182607c4a64018ed24603819bd21264861d" Feb 17 22:23:43 crc kubenswrapper[4793]: E0217 22:23:43.093386 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e261d3ed6099b947b3ff2c6dd94ed182607c4a64018ed24603819bd21264861d\": container with ID starting with e261d3ed6099b947b3ff2c6dd94ed182607c4a64018ed24603819bd21264861d not found: ID does not exist" containerID="e261d3ed6099b947b3ff2c6dd94ed182607c4a64018ed24603819bd21264861d" Feb 17 22:23:43 crc kubenswrapper[4793]: I0217 22:23:43.093430 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e261d3ed6099b947b3ff2c6dd94ed182607c4a64018ed24603819bd21264861d"} err="failed to get container status \"e261d3ed6099b947b3ff2c6dd94ed182607c4a64018ed24603819bd21264861d\": rpc error: code = NotFound desc = could not find 
container \"e261d3ed6099b947b3ff2c6dd94ed182607c4a64018ed24603819bd21264861d\": container with ID starting with e261d3ed6099b947b3ff2c6dd94ed182607c4a64018ed24603819bd21264861d not found: ID does not exist" Feb 17 22:23:43 crc kubenswrapper[4793]: I0217 22:23:43.093458 4793 scope.go:117] "RemoveContainer" containerID="d3fa1c7a7af9084437b80437d488d546200f30e819048afb4260bfbf6a35778d" Feb 17 22:23:43 crc kubenswrapper[4793]: E0217 22:23:43.093925 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3fa1c7a7af9084437b80437d488d546200f30e819048afb4260bfbf6a35778d\": container with ID starting with d3fa1c7a7af9084437b80437d488d546200f30e819048afb4260bfbf6a35778d not found: ID does not exist" containerID="d3fa1c7a7af9084437b80437d488d546200f30e819048afb4260bfbf6a35778d" Feb 17 22:23:43 crc kubenswrapper[4793]: I0217 22:23:43.093952 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3fa1c7a7af9084437b80437d488d546200f30e819048afb4260bfbf6a35778d"} err="failed to get container status \"d3fa1c7a7af9084437b80437d488d546200f30e819048afb4260bfbf6a35778d\": rpc error: code = NotFound desc = could not find container \"d3fa1c7a7af9084437b80437d488d546200f30e819048afb4260bfbf6a35778d\": container with ID starting with d3fa1c7a7af9084437b80437d488d546200f30e819048afb4260bfbf6a35778d not found: ID does not exist" Feb 17 22:23:43 crc kubenswrapper[4793]: I0217 22:23:43.093970 4793 scope.go:117] "RemoveContainer" containerID="8279474eafd8d7bcfce86bfb5cac617669a612a3b8c90e591da7d502a252c7fb" Feb 17 22:23:43 crc kubenswrapper[4793]: E0217 22:23:43.094237 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8279474eafd8d7bcfce86bfb5cac617669a612a3b8c90e591da7d502a252c7fb\": container with ID starting with 8279474eafd8d7bcfce86bfb5cac617669a612a3b8c90e591da7d502a252c7fb not found: ID does 
not exist" containerID="8279474eafd8d7bcfce86bfb5cac617669a612a3b8c90e591da7d502a252c7fb" Feb 17 22:23:43 crc kubenswrapper[4793]: I0217 22:23:43.094260 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8279474eafd8d7bcfce86bfb5cac617669a612a3b8c90e591da7d502a252c7fb"} err="failed to get container status \"8279474eafd8d7bcfce86bfb5cac617669a612a3b8c90e591da7d502a252c7fb\": rpc error: code = NotFound desc = could not find container \"8279474eafd8d7bcfce86bfb5cac617669a612a3b8c90e591da7d502a252c7fb\": container with ID starting with 8279474eafd8d7bcfce86bfb5cac617669a612a3b8c90e591da7d502a252c7fb not found: ID does not exist" Feb 17 22:23:43 crc kubenswrapper[4793]: I0217 22:23:43.552300 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6844504a-7138-4386-9253-4d16378374f3" path="/var/lib/kubelet/pods/6844504a-7138-4386-9253-4d16378374f3/volumes" Feb 17 22:23:50 crc kubenswrapper[4793]: I0217 22:23:50.539783 4793 scope.go:117] "RemoveContainer" containerID="dfdb0edbf6eb5766ad38a726016274395d778c03b670fe265e1537747f96b08d" Feb 17 22:23:50 crc kubenswrapper[4793]: E0217 22:23:50.540985 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:24:04 crc kubenswrapper[4793]: I0217 22:24:04.539047 4793 scope.go:117] "RemoveContainer" containerID="dfdb0edbf6eb5766ad38a726016274395d778c03b670fe265e1537747f96b08d" Feb 17 22:24:04 crc kubenswrapper[4793]: E0217 22:24:04.540086 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier 
pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:24:18 crc kubenswrapper[4793]: I0217 22:24:18.539210 4793 scope.go:117] "RemoveContainer" containerID="dfdb0edbf6eb5766ad38a726016274395d778c03b670fe265e1537747f96b08d" Feb 17 22:24:18 crc kubenswrapper[4793]: E0217 22:24:18.540416 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:24:20 crc kubenswrapper[4793]: I0217 22:24:20.102179 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 22:24:20 crc kubenswrapper[4793]: I0217 22:24:20.102558 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 22:24:29 crc kubenswrapper[4793]: I0217 22:24:29.539055 4793 scope.go:117] "RemoveContainer" containerID="dfdb0edbf6eb5766ad38a726016274395d778c03b670fe265e1537747f96b08d" Feb 17 22:24:29 crc kubenswrapper[4793]: E0217 22:24:29.539841 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier 
pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:24:41 crc kubenswrapper[4793]: I0217 22:24:41.538903 4793 scope.go:117] "RemoveContainer" containerID="dfdb0edbf6eb5766ad38a726016274395d778c03b670fe265e1537747f96b08d" Feb 17 22:24:41 crc kubenswrapper[4793]: E0217 22:24:41.539793 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:24:50 crc kubenswrapper[4793]: I0217 22:24:50.101972 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 22:24:50 crc kubenswrapper[4793]: I0217 22:24:50.102647 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 22:24:56 crc kubenswrapper[4793]: I0217 22:24:56.539879 4793 scope.go:117] "RemoveContainer" containerID="dfdb0edbf6eb5766ad38a726016274395d778c03b670fe265e1537747f96b08d" Feb 17 22:24:56 crc kubenswrapper[4793]: E0217 22:24:56.541053 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier 
pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:25:08 crc kubenswrapper[4793]: I0217 22:25:08.540120 4793 scope.go:117] "RemoveContainer" containerID="dfdb0edbf6eb5766ad38a726016274395d778c03b670fe265e1537747f96b08d" Feb 17 22:25:08 crc kubenswrapper[4793]: E0217 22:25:08.541316 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:25:16 crc kubenswrapper[4793]: I0217 22:25:16.087924 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rtt75"] Feb 17 22:25:16 crc kubenswrapper[4793]: E0217 22:25:16.088921 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6844504a-7138-4386-9253-4d16378374f3" containerName="extract-content" Feb 17 22:25:16 crc kubenswrapper[4793]: I0217 22:25:16.088939 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="6844504a-7138-4386-9253-4d16378374f3" containerName="extract-content" Feb 17 22:25:16 crc kubenswrapper[4793]: E0217 22:25:16.088975 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6844504a-7138-4386-9253-4d16378374f3" containerName="registry-server" Feb 17 22:25:16 crc kubenswrapper[4793]: I0217 22:25:16.088985 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="6844504a-7138-4386-9253-4d16378374f3" containerName="registry-server" Feb 17 22:25:16 crc kubenswrapper[4793]: E0217 22:25:16.089003 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6844504a-7138-4386-9253-4d16378374f3" containerName="extract-utilities" Feb 17 22:25:16 crc kubenswrapper[4793]: I0217 22:25:16.089014 4793 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="6844504a-7138-4386-9253-4d16378374f3" containerName="extract-utilities" Feb 17 22:25:16 crc kubenswrapper[4793]: I0217 22:25:16.089283 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="6844504a-7138-4386-9253-4d16378374f3" containerName="registry-server" Feb 17 22:25:16 crc kubenswrapper[4793]: I0217 22:25:16.091012 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rtt75" Feb 17 22:25:16 crc kubenswrapper[4793]: I0217 22:25:16.114091 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rtt75"] Feb 17 22:25:16 crc kubenswrapper[4793]: I0217 22:25:16.171684 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8-utilities\") pod \"redhat-operators-rtt75\" (UID: \"eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8\") " pod="openshift-marketplace/redhat-operators-rtt75" Feb 17 22:25:16 crc kubenswrapper[4793]: I0217 22:25:16.171825 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8-catalog-content\") pod \"redhat-operators-rtt75\" (UID: \"eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8\") " pod="openshift-marketplace/redhat-operators-rtt75" Feb 17 22:25:16 crc kubenswrapper[4793]: I0217 22:25:16.171881 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gxst\" (UniqueName: \"kubernetes.io/projected/eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8-kube-api-access-8gxst\") pod \"redhat-operators-rtt75\" (UID: \"eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8\") " pod="openshift-marketplace/redhat-operators-rtt75" Feb 17 22:25:16 crc kubenswrapper[4793]: I0217 22:25:16.273230 4793 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8-utilities\") pod \"redhat-operators-rtt75\" (UID: \"eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8\") " pod="openshift-marketplace/redhat-operators-rtt75" Feb 17 22:25:16 crc kubenswrapper[4793]: I0217 22:25:16.273333 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8-catalog-content\") pod \"redhat-operators-rtt75\" (UID: \"eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8\") " pod="openshift-marketplace/redhat-operators-rtt75" Feb 17 22:25:16 crc kubenswrapper[4793]: I0217 22:25:16.273385 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gxst\" (UniqueName: \"kubernetes.io/projected/eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8-kube-api-access-8gxst\") pod \"redhat-operators-rtt75\" (UID: \"eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8\") " pod="openshift-marketplace/redhat-operators-rtt75" Feb 17 22:25:16 crc kubenswrapper[4793]: I0217 22:25:16.274274 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8-utilities\") pod \"redhat-operators-rtt75\" (UID: \"eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8\") " pod="openshift-marketplace/redhat-operators-rtt75" Feb 17 22:25:16 crc kubenswrapper[4793]: I0217 22:25:16.274551 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8-catalog-content\") pod \"redhat-operators-rtt75\" (UID: \"eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8\") " pod="openshift-marketplace/redhat-operators-rtt75" Feb 17 22:25:16 crc kubenswrapper[4793]: I0217 22:25:16.292816 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gxst\" 
(UniqueName: \"kubernetes.io/projected/eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8-kube-api-access-8gxst\") pod \"redhat-operators-rtt75\" (UID: \"eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8\") " pod="openshift-marketplace/redhat-operators-rtt75" Feb 17 22:25:16 crc kubenswrapper[4793]: I0217 22:25:16.476045 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rtt75" Feb 17 22:25:16 crc kubenswrapper[4793]: I0217 22:25:16.970345 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rtt75"] Feb 17 22:25:17 crc kubenswrapper[4793]: I0217 22:25:17.101263 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rtt75" event={"ID":"eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8","Type":"ContainerStarted","Data":"aa7ef3c2f05c75c6681759b20ffd06dbfddd89178c867992b00587ab891108a8"} Feb 17 22:25:18 crc kubenswrapper[4793]: I0217 22:25:18.115127 4793 generic.go:334] "Generic (PLEG): container finished" podID="eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8" containerID="0454e6fe46834959bb468cdf6893889c287b9e4a630ed817a3178d0938ab7e15" exitCode=0 Feb 17 22:25:18 crc kubenswrapper[4793]: I0217 22:25:18.115178 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rtt75" event={"ID":"eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8","Type":"ContainerDied","Data":"0454e6fe46834959bb468cdf6893889c287b9e4a630ed817a3178d0938ab7e15"} Feb 17 22:25:20 crc kubenswrapper[4793]: I0217 22:25:20.102312 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 22:25:20 crc kubenswrapper[4793]: I0217 22:25:20.102904 4793 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 22:25:20 crc kubenswrapper[4793]: I0217 22:25:20.102952 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" Feb 17 22:25:20 crc kubenswrapper[4793]: I0217 22:25:20.103559 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f4981ab52d850aaca9dc626601ec76dbebfd4955c3e8a2001dcd9e38e1c0e1fb"} pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 22:25:20 crc kubenswrapper[4793]: I0217 22:25:20.103611 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" containerID="cri-o://f4981ab52d850aaca9dc626601ec76dbebfd4955c3e8a2001dcd9e38e1c0e1fb" gracePeriod=600 Feb 17 22:25:20 crc kubenswrapper[4793]: I0217 22:25:20.134931 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rtt75" event={"ID":"eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8","Type":"ContainerStarted","Data":"730de286a19d6425024bf6e16e1dd4feeccd4c200ac8757c2a1a59ae786c8c2b"} Feb 17 22:25:20 crc kubenswrapper[4793]: I0217 22:25:20.539177 4793 scope.go:117] "RemoveContainer" containerID="dfdb0edbf6eb5766ad38a726016274395d778c03b670fe265e1537747f96b08d" Feb 17 22:25:20 crc kubenswrapper[4793]: E0217 22:25:20.539654 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:25:22 crc kubenswrapper[4793]: I0217 22:25:22.162580 4793 generic.go:334] "Generic (PLEG): container finished" podID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerID="f4981ab52d850aaca9dc626601ec76dbebfd4955c3e8a2001dcd9e38e1c0e1fb" exitCode=0 Feb 17 22:25:22 crc kubenswrapper[4793]: I0217 22:25:22.162990 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerDied","Data":"f4981ab52d850aaca9dc626601ec76dbebfd4955c3e8a2001dcd9e38e1c0e1fb"} Feb 17 22:25:22 crc kubenswrapper[4793]: I0217 22:25:22.163028 4793 scope.go:117] "RemoveContainer" containerID="e293a9c5da63ce3cfe5e784a5140977ecbb1e1291736b428eb806176b0fbe40f" Feb 17 22:25:22 crc kubenswrapper[4793]: I0217 22:25:22.168677 4793 generic.go:334] "Generic (PLEG): container finished" podID="eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8" containerID="730de286a19d6425024bf6e16e1dd4feeccd4c200ac8757c2a1a59ae786c8c2b" exitCode=0 Feb 17 22:25:22 crc kubenswrapper[4793]: I0217 22:25:22.168749 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rtt75" event={"ID":"eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8","Type":"ContainerDied","Data":"730de286a19d6425024bf6e16e1dd4feeccd4c200ac8757c2a1a59ae786c8c2b"} Feb 17 22:25:23 crc kubenswrapper[4793]: I0217 22:25:23.184017 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerStarted","Data":"ebd203a3aacb1d24c26bf0360750132c759499b33eb9544d7559e8073681ee65"} Feb 17 22:25:23 crc kubenswrapper[4793]: I0217 22:25:23.189895 4793 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rtt75" event={"ID":"eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8","Type":"ContainerStarted","Data":"c70a01a10e98c3b7ff67c89a00d623a7b1a267c1255dfcd2461dca8709424fb5"} Feb 17 22:25:23 crc kubenswrapper[4793]: I0217 22:25:23.233356 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rtt75" podStartSLOduration=2.78741475 podStartE2EDuration="7.233329408s" podCreationTimestamp="2026-02-17 22:25:16 +0000 UTC" firstStartedPulling="2026-02-17 22:25:18.11796024 +0000 UTC m=+8193.409658601" lastFinishedPulling="2026-02-17 22:25:22.563874948 +0000 UTC m=+8197.855573259" observedRunningTime="2026-02-17 22:25:23.222718066 +0000 UTC m=+8198.514416377" watchObservedRunningTime="2026-02-17 22:25:23.233329408 +0000 UTC m=+8198.525027719" Feb 17 22:25:26 crc kubenswrapper[4793]: I0217 22:25:26.476338 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rtt75" Feb 17 22:25:26 crc kubenswrapper[4793]: I0217 22:25:26.476955 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rtt75" Feb 17 22:25:27 crc kubenswrapper[4793]: I0217 22:25:27.549619 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rtt75" podUID="eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8" containerName="registry-server" probeResult="failure" output=< Feb 17 22:25:27 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Feb 17 22:25:27 crc kubenswrapper[4793]: > Feb 17 22:25:33 crc kubenswrapper[4793]: I0217 22:25:33.539170 4793 scope.go:117] "RemoveContainer" containerID="dfdb0edbf6eb5766ad38a726016274395d778c03b670fe265e1537747f96b08d" Feb 17 22:25:33 crc kubenswrapper[4793]: E0217 22:25:33.540092 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:25:36 crc kubenswrapper[4793]: I0217 22:25:36.539809 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rtt75" Feb 17 22:25:36 crc kubenswrapper[4793]: I0217 22:25:36.590740 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rtt75" Feb 17 22:25:36 crc kubenswrapper[4793]: I0217 22:25:36.786460 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rtt75"] Feb 17 22:25:38 crc kubenswrapper[4793]: I0217 22:25:38.404650 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rtt75" podUID="eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8" containerName="registry-server" containerID="cri-o://c70a01a10e98c3b7ff67c89a00d623a7b1a267c1255dfcd2461dca8709424fb5" gracePeriod=2 Feb 17 22:25:39 crc kubenswrapper[4793]: I0217 22:25:39.003262 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rtt75" Feb 17 22:25:39 crc kubenswrapper[4793]: I0217 22:25:39.060626 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gxst\" (UniqueName: \"kubernetes.io/projected/eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8-kube-api-access-8gxst\") pod \"eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8\" (UID: \"eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8\") " Feb 17 22:25:39 crc kubenswrapper[4793]: I0217 22:25:39.060738 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8-utilities\") pod \"eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8\" (UID: \"eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8\") " Feb 17 22:25:39 crc kubenswrapper[4793]: I0217 22:25:39.060806 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8-catalog-content\") pod \"eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8\" (UID: \"eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8\") " Feb 17 22:25:39 crc kubenswrapper[4793]: I0217 22:25:39.061904 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8-utilities" (OuterVolumeSpecName: "utilities") pod "eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8" (UID: "eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 22:25:39 crc kubenswrapper[4793]: I0217 22:25:39.073136 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8-kube-api-access-8gxst" (OuterVolumeSpecName: "kube-api-access-8gxst") pod "eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8" (UID: "eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8"). InnerVolumeSpecName "kube-api-access-8gxst". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 22:25:39 crc kubenswrapper[4793]: I0217 22:25:39.162911 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gxst\" (UniqueName: \"kubernetes.io/projected/eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8-kube-api-access-8gxst\") on node \"crc\" DevicePath \"\"" Feb 17 22:25:39 crc kubenswrapper[4793]: I0217 22:25:39.162961 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 22:25:39 crc kubenswrapper[4793]: I0217 22:25:39.198779 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8" (UID: "eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 22:25:39 crc kubenswrapper[4793]: I0217 22:25:39.266090 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 22:25:39 crc kubenswrapper[4793]: I0217 22:25:39.421501 4793 generic.go:334] "Generic (PLEG): container finished" podID="eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8" containerID="c70a01a10e98c3b7ff67c89a00d623a7b1a267c1255dfcd2461dca8709424fb5" exitCode=0 Feb 17 22:25:39 crc kubenswrapper[4793]: I0217 22:25:39.421554 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rtt75" event={"ID":"eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8","Type":"ContainerDied","Data":"c70a01a10e98c3b7ff67c89a00d623a7b1a267c1255dfcd2461dca8709424fb5"} Feb 17 22:25:39 crc kubenswrapper[4793]: I0217 22:25:39.421610 4793 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-rtt75" event={"ID":"eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8","Type":"ContainerDied","Data":"aa7ef3c2f05c75c6681759b20ffd06dbfddd89178c867992b00587ab891108a8"} Feb 17 22:25:39 crc kubenswrapper[4793]: I0217 22:25:39.421606 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rtt75" Feb 17 22:25:39 crc kubenswrapper[4793]: I0217 22:25:39.421651 4793 scope.go:117] "RemoveContainer" containerID="c70a01a10e98c3b7ff67c89a00d623a7b1a267c1255dfcd2461dca8709424fb5" Feb 17 22:25:39 crc kubenswrapper[4793]: I0217 22:25:39.456188 4793 scope.go:117] "RemoveContainer" containerID="730de286a19d6425024bf6e16e1dd4feeccd4c200ac8757c2a1a59ae786c8c2b" Feb 17 22:25:39 crc kubenswrapper[4793]: I0217 22:25:39.480476 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rtt75"] Feb 17 22:25:39 crc kubenswrapper[4793]: I0217 22:25:39.494991 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rtt75"] Feb 17 22:25:39 crc kubenswrapper[4793]: I0217 22:25:39.503206 4793 scope.go:117] "RemoveContainer" containerID="0454e6fe46834959bb468cdf6893889c287b9e4a630ed817a3178d0938ab7e15" Feb 17 22:25:39 crc kubenswrapper[4793]: I0217 22:25:39.550802 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8" path="/var/lib/kubelet/pods/eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8/volumes" Feb 17 22:25:39 crc kubenswrapper[4793]: I0217 22:25:39.587509 4793 scope.go:117] "RemoveContainer" containerID="c70a01a10e98c3b7ff67c89a00d623a7b1a267c1255dfcd2461dca8709424fb5" Feb 17 22:25:39 crc kubenswrapper[4793]: E0217 22:25:39.588182 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c70a01a10e98c3b7ff67c89a00d623a7b1a267c1255dfcd2461dca8709424fb5\": container with ID starting with 
c70a01a10e98c3b7ff67c89a00d623a7b1a267c1255dfcd2461dca8709424fb5 not found: ID does not exist" containerID="c70a01a10e98c3b7ff67c89a00d623a7b1a267c1255dfcd2461dca8709424fb5" Feb 17 22:25:39 crc kubenswrapper[4793]: I0217 22:25:39.588211 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c70a01a10e98c3b7ff67c89a00d623a7b1a267c1255dfcd2461dca8709424fb5"} err="failed to get container status \"c70a01a10e98c3b7ff67c89a00d623a7b1a267c1255dfcd2461dca8709424fb5\": rpc error: code = NotFound desc = could not find container \"c70a01a10e98c3b7ff67c89a00d623a7b1a267c1255dfcd2461dca8709424fb5\": container with ID starting with c70a01a10e98c3b7ff67c89a00d623a7b1a267c1255dfcd2461dca8709424fb5 not found: ID does not exist" Feb 17 22:25:39 crc kubenswrapper[4793]: I0217 22:25:39.588230 4793 scope.go:117] "RemoveContainer" containerID="730de286a19d6425024bf6e16e1dd4feeccd4c200ac8757c2a1a59ae786c8c2b" Feb 17 22:25:39 crc kubenswrapper[4793]: E0217 22:25:39.589127 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"730de286a19d6425024bf6e16e1dd4feeccd4c200ac8757c2a1a59ae786c8c2b\": container with ID starting with 730de286a19d6425024bf6e16e1dd4feeccd4c200ac8757c2a1a59ae786c8c2b not found: ID does not exist" containerID="730de286a19d6425024bf6e16e1dd4feeccd4c200ac8757c2a1a59ae786c8c2b" Feb 17 22:25:39 crc kubenswrapper[4793]: I0217 22:25:39.589168 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"730de286a19d6425024bf6e16e1dd4feeccd4c200ac8757c2a1a59ae786c8c2b"} err="failed to get container status \"730de286a19d6425024bf6e16e1dd4feeccd4c200ac8757c2a1a59ae786c8c2b\": rpc error: code = NotFound desc = could not find container \"730de286a19d6425024bf6e16e1dd4feeccd4c200ac8757c2a1a59ae786c8c2b\": container with ID starting with 730de286a19d6425024bf6e16e1dd4feeccd4c200ac8757c2a1a59ae786c8c2b not found: ID does not 
exist" Feb 17 22:25:39 crc kubenswrapper[4793]: I0217 22:25:39.589203 4793 scope.go:117] "RemoveContainer" containerID="0454e6fe46834959bb468cdf6893889c287b9e4a630ed817a3178d0938ab7e15" Feb 17 22:25:39 crc kubenswrapper[4793]: E0217 22:25:39.589723 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0454e6fe46834959bb468cdf6893889c287b9e4a630ed817a3178d0938ab7e15\": container with ID starting with 0454e6fe46834959bb468cdf6893889c287b9e4a630ed817a3178d0938ab7e15 not found: ID does not exist" containerID="0454e6fe46834959bb468cdf6893889c287b9e4a630ed817a3178d0938ab7e15" Feb 17 22:25:39 crc kubenswrapper[4793]: I0217 22:25:39.589744 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0454e6fe46834959bb468cdf6893889c287b9e4a630ed817a3178d0938ab7e15"} err="failed to get container status \"0454e6fe46834959bb468cdf6893889c287b9e4a630ed817a3178d0938ab7e15\": rpc error: code = NotFound desc = could not find container \"0454e6fe46834959bb468cdf6893889c287b9e4a630ed817a3178d0938ab7e15\": container with ID starting with 0454e6fe46834959bb468cdf6893889c287b9e4a630ed817a3178d0938ab7e15 not found: ID does not exist" Feb 17 22:25:47 crc kubenswrapper[4793]: I0217 22:25:47.538598 4793 scope.go:117] "RemoveContainer" containerID="dfdb0edbf6eb5766ad38a726016274395d778c03b670fe265e1537747f96b08d" Feb 17 22:25:47 crc kubenswrapper[4793]: E0217 22:25:47.539793 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:26:00 crc kubenswrapper[4793]: I0217 22:26:00.539360 4793 scope.go:117] "RemoveContainer" 
containerID="dfdb0edbf6eb5766ad38a726016274395d778c03b670fe265e1537747f96b08d" Feb 17 22:26:00 crc kubenswrapper[4793]: E0217 22:26:00.540234 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:26:13 crc kubenswrapper[4793]: I0217 22:26:13.538614 4793 scope.go:117] "RemoveContainer" containerID="dfdb0edbf6eb5766ad38a726016274395d778c03b670fe265e1537747f96b08d" Feb 17 22:26:13 crc kubenswrapper[4793]: E0217 22:26:13.539592 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:26:27 crc kubenswrapper[4793]: I0217 22:26:27.539442 4793 scope.go:117] "RemoveContainer" containerID="dfdb0edbf6eb5766ad38a726016274395d778c03b670fe265e1537747f96b08d" Feb 17 22:26:27 crc kubenswrapper[4793]: E0217 22:26:27.540251 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:26:38 crc kubenswrapper[4793]: I0217 22:26:38.539484 4793 scope.go:117] "RemoveContainer" containerID="dfdb0edbf6eb5766ad38a726016274395d778c03b670fe265e1537747f96b08d" Feb 17 22:26:38 crc kubenswrapper[4793]: E0217 22:26:38.540749 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:26:51 crc kubenswrapper[4793]: I0217 22:26:51.538788 4793 scope.go:117] "RemoveContainer" containerID="dfdb0edbf6eb5766ad38a726016274395d778c03b670fe265e1537747f96b08d" Feb 17 22:26:51 crc kubenswrapper[4793]: E0217 22:26:51.539568 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:27:04 crc kubenswrapper[4793]: I0217 22:27:04.538824 4793 scope.go:117] "RemoveContainer" containerID="dfdb0edbf6eb5766ad38a726016274395d778c03b670fe265e1537747f96b08d" Feb 17 22:27:05 crc kubenswrapper[4793]: I0217 22:27:05.535899 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerStarted","Data":"51889aa5f0c36bc84e040f1560fcf3326a7169e6cdd8e68014582bda0b804063"} Feb 17 22:27:05 crc kubenswrapper[4793]: I0217 22:27:05.596321 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 22:27:05 crc kubenswrapper[4793]: I0217 22:27:05.596391 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 17 22:27:05 crc kubenswrapper[4793]: I0217 22:27:05.651897 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Feb 17 22:27:06 crc kubenswrapper[4793]: I0217 22:27:06.600271 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/watcher-applier-0" Feb 17 22:27:07 crc kubenswrapper[4793]: I0217 22:27:07.605216 4793 generic.go:334] "Generic (PLEG): container finished" podID="02d26164-0fa4-4020-9224-b7760a490987" containerID="51889aa5f0c36bc84e040f1560fcf3326a7169e6cdd8e68014582bda0b804063" exitCode=1 Feb 17 22:27:07 crc kubenswrapper[4793]: I0217 22:27:07.605584 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerDied","Data":"51889aa5f0c36bc84e040f1560fcf3326a7169e6cdd8e68014582bda0b804063"} Feb 17 22:27:07 crc kubenswrapper[4793]: I0217 22:27:07.605625 4793 scope.go:117] "RemoveContainer" containerID="dfdb0edbf6eb5766ad38a726016274395d778c03b670fe265e1537747f96b08d" Feb 17 22:27:07 crc kubenswrapper[4793]: I0217 22:27:07.606608 4793 scope.go:117] "RemoveContainer" containerID="51889aa5f0c36bc84e040f1560fcf3326a7169e6cdd8e68014582bda0b804063" Feb 17 22:27:07 crc kubenswrapper[4793]: E0217 22:27:07.606903 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:27:08 crc kubenswrapper[4793]: I0217 22:27:08.623352 4793 scope.go:117] "RemoveContainer" containerID="51889aa5f0c36bc84e040f1560fcf3326a7169e6cdd8e68014582bda0b804063" Feb 17 22:27:08 crc kubenswrapper[4793]: E0217 22:27:08.623814 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:27:10 crc 
kubenswrapper[4793]: I0217 22:27:10.595801 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 17 22:27:10 crc kubenswrapper[4793]: I0217 22:27:10.597333 4793 scope.go:117] "RemoveContainer" containerID="51889aa5f0c36bc84e040f1560fcf3326a7169e6cdd8e68014582bda0b804063" Feb 17 22:27:10 crc kubenswrapper[4793]: E0217 22:27:10.598413 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:27:15 crc kubenswrapper[4793]: I0217 22:27:15.596130 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 22:27:15 crc kubenswrapper[4793]: I0217 22:27:15.597460 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 22:27:15 crc kubenswrapper[4793]: I0217 22:27:15.598259 4793 scope.go:117] "RemoveContainer" containerID="51889aa5f0c36bc84e040f1560fcf3326a7169e6cdd8e68014582bda0b804063" Feb 17 22:27:15 crc kubenswrapper[4793]: E0217 22:27:15.599027 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:27:26 crc kubenswrapper[4793]: I0217 22:27:26.539033 4793 scope.go:117] "RemoveContainer" containerID="51889aa5f0c36bc84e040f1560fcf3326a7169e6cdd8e68014582bda0b804063" Feb 17 22:27:26 crc kubenswrapper[4793]: E0217 22:27:26.540243 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:27:40 crc kubenswrapper[4793]: I0217 22:27:40.539701 4793 scope.go:117] "RemoveContainer" containerID="51889aa5f0c36bc84e040f1560fcf3326a7169e6cdd8e68014582bda0b804063" Feb 17 22:27:40 crc kubenswrapper[4793]: E0217 22:27:40.540343 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:27:44 crc kubenswrapper[4793]: I0217 22:27:44.849646 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 22:27:44 crc kubenswrapper[4793]: I0217 22:27:44.850794 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="f41a37ae-4155-4b06-ad0b-46cfe53de634" containerName="prometheus" containerID="cri-o://d0268907a70a8bad9d57750f7d0ad47eebe8a552f0a8c4bd7cfe1d692c245b58" gracePeriod=600 Feb 17 22:27:44 crc kubenswrapper[4793]: I0217 22:27:44.851278 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="f41a37ae-4155-4b06-ad0b-46cfe53de634" containerName="config-reloader" containerID="cri-o://b2feaae4275e2e0d141351268d38c63c2709e4516883b8ed35f4ba14e21b7eb3" gracePeriod=600 Feb 17 22:27:44 crc kubenswrapper[4793]: I0217 22:27:44.851302 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="f41a37ae-4155-4b06-ad0b-46cfe53de634" 
containerName="thanos-sidecar" containerID="cri-o://36b635d0e7aef42543c4d3c3bc97321b326e5bacc195f1823a591c65d32ec688" gracePeriod=600 Feb 17 22:27:44 crc kubenswrapper[4793]: I0217 22:27:44.940794 4793 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="f41a37ae-4155-4b06-ad0b-46cfe53de634" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.137:9090/-/ready\": read tcp 10.217.0.2:47122->10.217.0.137:9090: read: connection reset by peer" Feb 17 22:27:45 crc kubenswrapper[4793]: I0217 22:27:45.076679 4793 generic.go:334] "Generic (PLEG): container finished" podID="f41a37ae-4155-4b06-ad0b-46cfe53de634" containerID="36b635d0e7aef42543c4d3c3bc97321b326e5bacc195f1823a591c65d32ec688" exitCode=0 Feb 17 22:27:45 crc kubenswrapper[4793]: I0217 22:27:45.076723 4793 generic.go:334] "Generic (PLEG): container finished" podID="f41a37ae-4155-4b06-ad0b-46cfe53de634" containerID="d0268907a70a8bad9d57750f7d0ad47eebe8a552f0a8c4bd7cfe1d692c245b58" exitCode=0 Feb 17 22:27:45 crc kubenswrapper[4793]: I0217 22:27:45.076743 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f41a37ae-4155-4b06-ad0b-46cfe53de634","Type":"ContainerDied","Data":"36b635d0e7aef42543c4d3c3bc97321b326e5bacc195f1823a591c65d32ec688"} Feb 17 22:27:45 crc kubenswrapper[4793]: I0217 22:27:45.076769 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f41a37ae-4155-4b06-ad0b-46cfe53de634","Type":"ContainerDied","Data":"d0268907a70a8bad9d57750f7d0ad47eebe8a552f0a8c4bd7cfe1d692c245b58"} Feb 17 22:27:45 crc kubenswrapper[4793]: I0217 22:27:45.895197 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.034524 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f41a37ae-4155-4b06-ad0b-46cfe53de634-tls-assets\") pod \"f41a37ae-4155-4b06-ad0b-46cfe53de634\" (UID: \"f41a37ae-4155-4b06-ad0b-46cfe53de634\") " Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.034561 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f41a37ae-4155-4b06-ad0b-46cfe53de634-thanos-prometheus-http-client-file\") pod \"f41a37ae-4155-4b06-ad0b-46cfe53de634\" (UID: \"f41a37ae-4155-4b06-ad0b-46cfe53de634\") " Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.034624 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f41a37ae-4155-4b06-ad0b-46cfe53de634-web-config\") pod \"f41a37ae-4155-4b06-ad0b-46cfe53de634\" (UID: \"f41a37ae-4155-4b06-ad0b-46cfe53de634\") " Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.034679 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f41a37ae-4155-4b06-ad0b-46cfe53de634-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"f41a37ae-4155-4b06-ad0b-46cfe53de634\" (UID: \"f41a37ae-4155-4b06-ad0b-46cfe53de634\") " Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.035323 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/f41a37ae-4155-4b06-ad0b-46cfe53de634-prometheus-metric-storage-rulefiles-2\") pod \"f41a37ae-4155-4b06-ad0b-46cfe53de634\" (UID: \"f41a37ae-4155-4b06-ad0b-46cfe53de634\") " 
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.035894 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f41a37ae-4155-4b06-ad0b-46cfe53de634-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "f41a37ae-4155-4b06-ad0b-46cfe53de634" (UID: "f41a37ae-4155-4b06-ad0b-46cfe53de634"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.035920 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e06ea14b-3004-4867-b11f-b167457cc525\") pod \"f41a37ae-4155-4b06-ad0b-46cfe53de634\" (UID: \"f41a37ae-4155-4b06-ad0b-46cfe53de634\") " Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.036032 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f41a37ae-4155-4b06-ad0b-46cfe53de634-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"f41a37ae-4155-4b06-ad0b-46cfe53de634\" (UID: \"f41a37ae-4155-4b06-ad0b-46cfe53de634\") " Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.036137 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f41a37ae-4155-4b06-ad0b-46cfe53de634-prometheus-metric-storage-rulefiles-0\") pod \"f41a37ae-4155-4b06-ad0b-46cfe53de634\" (UID: \"f41a37ae-4155-4b06-ad0b-46cfe53de634\") " Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.036253 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: 
\"kubernetes.io/configmap/f41a37ae-4155-4b06-ad0b-46cfe53de634-prometheus-metric-storage-rulefiles-1\") pod \"f41a37ae-4155-4b06-ad0b-46cfe53de634\" (UID: \"f41a37ae-4155-4b06-ad0b-46cfe53de634\") " Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.036302 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hs6m\" (UniqueName: \"kubernetes.io/projected/f41a37ae-4155-4b06-ad0b-46cfe53de634-kube-api-access-5hs6m\") pod \"f41a37ae-4155-4b06-ad0b-46cfe53de634\" (UID: \"f41a37ae-4155-4b06-ad0b-46cfe53de634\") " Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.036330 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f41a37ae-4155-4b06-ad0b-46cfe53de634-config-out\") pod \"f41a37ae-4155-4b06-ad0b-46cfe53de634\" (UID: \"f41a37ae-4155-4b06-ad0b-46cfe53de634\") " Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.036352 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f41a37ae-4155-4b06-ad0b-46cfe53de634-secret-combined-ca-bundle\") pod \"f41a37ae-4155-4b06-ad0b-46cfe53de634\" (UID: \"f41a37ae-4155-4b06-ad0b-46cfe53de634\") " Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.036421 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f41a37ae-4155-4b06-ad0b-46cfe53de634-config\") pod \"f41a37ae-4155-4b06-ad0b-46cfe53de634\" (UID: \"f41a37ae-4155-4b06-ad0b-46cfe53de634\") " Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.036662 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f41a37ae-4155-4b06-ad0b-46cfe53de634-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "f41a37ae-4155-4b06-ad0b-46cfe53de634" (UID: 
"f41a37ae-4155-4b06-ad0b-46cfe53de634"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.036838 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f41a37ae-4155-4b06-ad0b-46cfe53de634-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "f41a37ae-4155-4b06-ad0b-46cfe53de634" (UID: "f41a37ae-4155-4b06-ad0b-46cfe53de634"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.037486 4793 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/f41a37ae-4155-4b06-ad0b-46cfe53de634-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.037513 4793 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/f41a37ae-4155-4b06-ad0b-46cfe53de634-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.037527 4793 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f41a37ae-4155-4b06-ad0b-46cfe53de634-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.041924 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f41a37ae-4155-4b06-ad0b-46cfe53de634-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod 
"f41a37ae-4155-4b06-ad0b-46cfe53de634" (UID: "f41a37ae-4155-4b06-ad0b-46cfe53de634"). InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.043473 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f41a37ae-4155-4b06-ad0b-46cfe53de634-config" (OuterVolumeSpecName: "config") pod "f41a37ae-4155-4b06-ad0b-46cfe53de634" (UID: "f41a37ae-4155-4b06-ad0b-46cfe53de634"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.043509 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f41a37ae-4155-4b06-ad0b-46cfe53de634-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "f41a37ae-4155-4b06-ad0b-46cfe53de634" (UID: "f41a37ae-4155-4b06-ad0b-46cfe53de634"). InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.043704 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f41a37ae-4155-4b06-ad0b-46cfe53de634-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "f41a37ae-4155-4b06-ad0b-46cfe53de634" (UID: "f41a37ae-4155-4b06-ad0b-46cfe53de634"). InnerVolumeSpecName "thanos-prometheus-http-client-file". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.044009 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f41a37ae-4155-4b06-ad0b-46cfe53de634-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "f41a37ae-4155-4b06-ad0b-46cfe53de634" (UID: "f41a37ae-4155-4b06-ad0b-46cfe53de634"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.044090 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f41a37ae-4155-4b06-ad0b-46cfe53de634-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "f41a37ae-4155-4b06-ad0b-46cfe53de634" (UID: "f41a37ae-4155-4b06-ad0b-46cfe53de634"). InnerVolumeSpecName "secret-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.058544 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f41a37ae-4155-4b06-ad0b-46cfe53de634-config-out" (OuterVolumeSpecName: "config-out") pod "f41a37ae-4155-4b06-ad0b-46cfe53de634" (UID: "f41a37ae-4155-4b06-ad0b-46cfe53de634"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.077924 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f41a37ae-4155-4b06-ad0b-46cfe53de634-kube-api-access-5hs6m" (OuterVolumeSpecName: "kube-api-access-5hs6m") pod "f41a37ae-4155-4b06-ad0b-46cfe53de634" (UID: "f41a37ae-4155-4b06-ad0b-46cfe53de634"). InnerVolumeSpecName "kube-api-access-5hs6m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.078135 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e06ea14b-3004-4867-b11f-b167457cc525" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "f41a37ae-4155-4b06-ad0b-46cfe53de634" (UID: "f41a37ae-4155-4b06-ad0b-46cfe53de634"). InnerVolumeSpecName "pvc-e06ea14b-3004-4867-b11f-b167457cc525". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.097411 4793 generic.go:334] "Generic (PLEG): container finished" podID="f41a37ae-4155-4b06-ad0b-46cfe53de634" containerID="b2feaae4275e2e0d141351268d38c63c2709e4516883b8ed35f4ba14e21b7eb3" exitCode=0 Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.097451 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f41a37ae-4155-4b06-ad0b-46cfe53de634","Type":"ContainerDied","Data":"b2feaae4275e2e0d141351268d38c63c2709e4516883b8ed35f4ba14e21b7eb3"} Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.097477 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f41a37ae-4155-4b06-ad0b-46cfe53de634","Type":"ContainerDied","Data":"aedbc5bc83a6abb4fdcc78c65427fca2e61ce0b6eb24c0eb46ca3a70e60e5ec4"} Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.097493 4793 scope.go:117] "RemoveContainer" containerID="36b635d0e7aef42543c4d3c3bc97321b326e5bacc195f1823a591c65d32ec688" Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.097611 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.142355 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f41a37ae-4155-4b06-ad0b-46cfe53de634-web-config" (OuterVolumeSpecName: "web-config") pod "f41a37ae-4155-4b06-ad0b-46cfe53de634" (UID: "f41a37ae-4155-4b06-ad0b-46cfe53de634"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.143065 4793 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f41a37ae-4155-4b06-ad0b-46cfe53de634-tls-assets\") on node \"crc\" DevicePath \"\""
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.143175 4793 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f41a37ae-4155-4b06-ad0b-46cfe53de634-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\""
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.143274 4793 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f41a37ae-4155-4b06-ad0b-46cfe53de634-web-config\") on node \"crc\" DevicePath \"\""
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.143370 4793 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f41a37ae-4155-4b06-ad0b-46cfe53de634-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\""
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.143474 4793 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-e06ea14b-3004-4867-b11f-b167457cc525\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e06ea14b-3004-4867-b11f-b167457cc525\") on node \"crc\" "
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.143566 4793 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f41a37ae-4155-4b06-ad0b-46cfe53de634-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" DevicePath \"\""
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.143647 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hs6m\" (UniqueName: \"kubernetes.io/projected/f41a37ae-4155-4b06-ad0b-46cfe53de634-kube-api-access-5hs6m\") on node \"crc\" DevicePath \"\""
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.143747 4793 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f41a37ae-4155-4b06-ad0b-46cfe53de634-config-out\") on node \"crc\" DevicePath \"\""
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.143850 4793 reconciler_common.go:293] "Volume detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f41a37ae-4155-4b06-ad0b-46cfe53de634-secret-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.143934 4793 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/f41a37ae-4155-4b06-ad0b-46cfe53de634-config\") on node \"crc\" DevicePath \"\""
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.180966 4793 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.181752 4793 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-e06ea14b-3004-4867-b11f-b167457cc525" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e06ea14b-3004-4867-b11f-b167457cc525") on node "crc"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.201583 4793 scope.go:117] "RemoveContainer" containerID="b2feaae4275e2e0d141351268d38c63c2709e4516883b8ed35f4ba14e21b7eb3"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.227444 4793 scope.go:117] "RemoveContainer" containerID="d0268907a70a8bad9d57750f7d0ad47eebe8a552f0a8c4bd7cfe1d692c245b58"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.246007 4793 reconciler_common.go:293] "Volume detached for volume \"pvc-e06ea14b-3004-4867-b11f-b167457cc525\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e06ea14b-3004-4867-b11f-b167457cc525\") on node \"crc\" DevicePath \"\""
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.252533 4793 scope.go:117] "RemoveContainer" containerID="1a3bfe788a89854a7a9ae225b19e70a66cdb4ed467d895eb9dc3143345c97b03"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.294164 4793 scope.go:117] "RemoveContainer" containerID="36b635d0e7aef42543c4d3c3bc97321b326e5bacc195f1823a591c65d32ec688"
Feb 17 22:27:46 crc kubenswrapper[4793]: E0217 22:27:46.294711 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36b635d0e7aef42543c4d3c3bc97321b326e5bacc195f1823a591c65d32ec688\": container with ID starting with 36b635d0e7aef42543c4d3c3bc97321b326e5bacc195f1823a591c65d32ec688 not found: ID does not exist" containerID="36b635d0e7aef42543c4d3c3bc97321b326e5bacc195f1823a591c65d32ec688"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.294766 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36b635d0e7aef42543c4d3c3bc97321b326e5bacc195f1823a591c65d32ec688"} err="failed to get container status \"36b635d0e7aef42543c4d3c3bc97321b326e5bacc195f1823a591c65d32ec688\": rpc error: code = NotFound desc = could not find container \"36b635d0e7aef42543c4d3c3bc97321b326e5bacc195f1823a591c65d32ec688\": container with ID starting with 36b635d0e7aef42543c4d3c3bc97321b326e5bacc195f1823a591c65d32ec688 not found: ID does not exist"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.294797 4793 scope.go:117] "RemoveContainer" containerID="b2feaae4275e2e0d141351268d38c63c2709e4516883b8ed35f4ba14e21b7eb3"
Feb 17 22:27:46 crc kubenswrapper[4793]: E0217 22:27:46.295170 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2feaae4275e2e0d141351268d38c63c2709e4516883b8ed35f4ba14e21b7eb3\": container with ID starting with b2feaae4275e2e0d141351268d38c63c2709e4516883b8ed35f4ba14e21b7eb3 not found: ID does not exist" containerID="b2feaae4275e2e0d141351268d38c63c2709e4516883b8ed35f4ba14e21b7eb3"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.295212 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2feaae4275e2e0d141351268d38c63c2709e4516883b8ed35f4ba14e21b7eb3"} err="failed to get container status \"b2feaae4275e2e0d141351268d38c63c2709e4516883b8ed35f4ba14e21b7eb3\": rpc error: code = NotFound desc = could not find container \"b2feaae4275e2e0d141351268d38c63c2709e4516883b8ed35f4ba14e21b7eb3\": container with ID starting with b2feaae4275e2e0d141351268d38c63c2709e4516883b8ed35f4ba14e21b7eb3 not found: ID does not exist"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.295236 4793 scope.go:117] "RemoveContainer" containerID="d0268907a70a8bad9d57750f7d0ad47eebe8a552f0a8c4bd7cfe1d692c245b58"
Feb 17 22:27:46 crc kubenswrapper[4793]: E0217 22:27:46.295735 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0268907a70a8bad9d57750f7d0ad47eebe8a552f0a8c4bd7cfe1d692c245b58\": container with ID starting with d0268907a70a8bad9d57750f7d0ad47eebe8a552f0a8c4bd7cfe1d692c245b58 not found: ID does not exist" containerID="d0268907a70a8bad9d57750f7d0ad47eebe8a552f0a8c4bd7cfe1d692c245b58"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.295768 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0268907a70a8bad9d57750f7d0ad47eebe8a552f0a8c4bd7cfe1d692c245b58"} err="failed to get container status \"d0268907a70a8bad9d57750f7d0ad47eebe8a552f0a8c4bd7cfe1d692c245b58\": rpc error: code = NotFound desc = could not find container \"d0268907a70a8bad9d57750f7d0ad47eebe8a552f0a8c4bd7cfe1d692c245b58\": container with ID starting with d0268907a70a8bad9d57750f7d0ad47eebe8a552f0a8c4bd7cfe1d692c245b58 not found: ID does not exist"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.295796 4793 scope.go:117] "RemoveContainer" containerID="1a3bfe788a89854a7a9ae225b19e70a66cdb4ed467d895eb9dc3143345c97b03"
Feb 17 22:27:46 crc kubenswrapper[4793]: E0217 22:27:46.296045 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a3bfe788a89854a7a9ae225b19e70a66cdb4ed467d895eb9dc3143345c97b03\": container with ID starting with 1a3bfe788a89854a7a9ae225b19e70a66cdb4ed467d895eb9dc3143345c97b03 not found: ID does not exist" containerID="1a3bfe788a89854a7a9ae225b19e70a66cdb4ed467d895eb9dc3143345c97b03"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.296065 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a3bfe788a89854a7a9ae225b19e70a66cdb4ed467d895eb9dc3143345c97b03"} err="failed to get container status \"1a3bfe788a89854a7a9ae225b19e70a66cdb4ed467d895eb9dc3143345c97b03\": rpc error: code = NotFound desc = could not find container \"1a3bfe788a89854a7a9ae225b19e70a66cdb4ed467d895eb9dc3143345c97b03\": container with ID starting with 1a3bfe788a89854a7a9ae225b19e70a66cdb4ed467d895eb9dc3143345c97b03 not found: ID does not exist"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.450256 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.458849 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.475480 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 17 22:27:46 crc kubenswrapper[4793]: E0217 22:27:46.475874 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8" containerName="registry-server"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.475891 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8" containerName="registry-server"
Feb 17 22:27:46 crc kubenswrapper[4793]: E0217 22:27:46.475905 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8" containerName="extract-content"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.475912 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8" containerName="extract-content"
Feb 17 22:27:46 crc kubenswrapper[4793]: E0217 22:27:46.475944 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f41a37ae-4155-4b06-ad0b-46cfe53de634" containerName="config-reloader"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.475950 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f41a37ae-4155-4b06-ad0b-46cfe53de634" containerName="config-reloader"
Feb 17 22:27:46 crc kubenswrapper[4793]: E0217 22:27:46.475965 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f41a37ae-4155-4b06-ad0b-46cfe53de634" containerName="init-config-reloader"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.475971 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f41a37ae-4155-4b06-ad0b-46cfe53de634" containerName="init-config-reloader"
Feb 17 22:27:46 crc kubenswrapper[4793]: E0217 22:27:46.475981 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f41a37ae-4155-4b06-ad0b-46cfe53de634" containerName="thanos-sidecar"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.475987 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f41a37ae-4155-4b06-ad0b-46cfe53de634" containerName="thanos-sidecar"
Feb 17 22:27:46 crc kubenswrapper[4793]: E0217 22:27:46.476010 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8" containerName="extract-utilities"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.476017 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8" containerName="extract-utilities"
Feb 17 22:27:46 crc kubenswrapper[4793]: E0217 22:27:46.476027 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f41a37ae-4155-4b06-ad0b-46cfe53de634" containerName="prometheus"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.476034 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f41a37ae-4155-4b06-ad0b-46cfe53de634" containerName="prometheus"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.476210 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f41a37ae-4155-4b06-ad0b-46cfe53de634" containerName="prometheus"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.476225 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f41a37ae-4155-4b06-ad0b-46cfe53de634" containerName="config-reloader"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.476243 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb4a5e08-a87c-42e2-9b6f-5da804bbe5e8" containerName="registry-server"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.476255 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f41a37ae-4155-4b06-ad0b-46cfe53de634" containerName="thanos-sidecar"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.481242 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.485422 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.485572 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-8pbn7"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.485966 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.489928 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.490161 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.490284 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.490390 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.495826 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.498159 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.552291 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3a335ec7-14e6-40aa-8dfd-56687eed9b84-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"3a335ec7-14e6-40aa-8dfd-56687eed9b84\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.552667 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3a335ec7-14e6-40aa-8dfd-56687eed9b84-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"3a335ec7-14e6-40aa-8dfd-56687eed9b84\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.552720 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nthwt\" (UniqueName: \"kubernetes.io/projected/3a335ec7-14e6-40aa-8dfd-56687eed9b84-kube-api-access-nthwt\") pod \"prometheus-metric-storage-0\" (UID: \"3a335ec7-14e6-40aa-8dfd-56687eed9b84\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.552763 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3a335ec7-14e6-40aa-8dfd-56687eed9b84-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"3a335ec7-14e6-40aa-8dfd-56687eed9b84\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.552786 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3a335ec7-14e6-40aa-8dfd-56687eed9b84-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3a335ec7-14e6-40aa-8dfd-56687eed9b84\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.552879 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e06ea14b-3004-4867-b11f-b167457cc525\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e06ea14b-3004-4867-b11f-b167457cc525\") pod \"prometheus-metric-storage-0\" (UID: \"3a335ec7-14e6-40aa-8dfd-56687eed9b84\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.552900 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3a335ec7-14e6-40aa-8dfd-56687eed9b84-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"3a335ec7-14e6-40aa-8dfd-56687eed9b84\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.552924 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3a335ec7-14e6-40aa-8dfd-56687eed9b84-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"3a335ec7-14e6-40aa-8dfd-56687eed9b84\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.552948 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3a335ec7-14e6-40aa-8dfd-56687eed9b84-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3a335ec7-14e6-40aa-8dfd-56687eed9b84\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.552969 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3a335ec7-14e6-40aa-8dfd-56687eed9b84-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"3a335ec7-14e6-40aa-8dfd-56687eed9b84\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.552987 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3a335ec7-14e6-40aa-8dfd-56687eed9b84-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"3a335ec7-14e6-40aa-8dfd-56687eed9b84\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.553017 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3a335ec7-14e6-40aa-8dfd-56687eed9b84-config\") pod \"prometheus-metric-storage-0\" (UID: \"3a335ec7-14e6-40aa-8dfd-56687eed9b84\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.553052 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a335ec7-14e6-40aa-8dfd-56687eed9b84-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"3a335ec7-14e6-40aa-8dfd-56687eed9b84\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.655027 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a335ec7-14e6-40aa-8dfd-56687eed9b84-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"3a335ec7-14e6-40aa-8dfd-56687eed9b84\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.655313 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3a335ec7-14e6-40aa-8dfd-56687eed9b84-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"3a335ec7-14e6-40aa-8dfd-56687eed9b84\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.655387 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3a335ec7-14e6-40aa-8dfd-56687eed9b84-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"3a335ec7-14e6-40aa-8dfd-56687eed9b84\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.655431 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nthwt\" (UniqueName: \"kubernetes.io/projected/3a335ec7-14e6-40aa-8dfd-56687eed9b84-kube-api-access-nthwt\") pod \"prometheus-metric-storage-0\" (UID: \"3a335ec7-14e6-40aa-8dfd-56687eed9b84\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.655506 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3a335ec7-14e6-40aa-8dfd-56687eed9b84-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"3a335ec7-14e6-40aa-8dfd-56687eed9b84\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.655548 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3a335ec7-14e6-40aa-8dfd-56687eed9b84-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3a335ec7-14e6-40aa-8dfd-56687eed9b84\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.655595 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e06ea14b-3004-4867-b11f-b167457cc525\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e06ea14b-3004-4867-b11f-b167457cc525\") pod \"prometheus-metric-storage-0\" (UID: \"3a335ec7-14e6-40aa-8dfd-56687eed9b84\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.655629 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3a335ec7-14e6-40aa-8dfd-56687eed9b84-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"3a335ec7-14e6-40aa-8dfd-56687eed9b84\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.655667 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3a335ec7-14e6-40aa-8dfd-56687eed9b84-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"3a335ec7-14e6-40aa-8dfd-56687eed9b84\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.655768 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3a335ec7-14e6-40aa-8dfd-56687eed9b84-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3a335ec7-14e6-40aa-8dfd-56687eed9b84\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.655805 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3a335ec7-14e6-40aa-8dfd-56687eed9b84-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"3a335ec7-14e6-40aa-8dfd-56687eed9b84\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.655835 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3a335ec7-14e6-40aa-8dfd-56687eed9b84-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"3a335ec7-14e6-40aa-8dfd-56687eed9b84\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.655903 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3a335ec7-14e6-40aa-8dfd-56687eed9b84-config\") pod \"prometheus-metric-storage-0\" (UID: \"3a335ec7-14e6-40aa-8dfd-56687eed9b84\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.656959 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3a335ec7-14e6-40aa-8dfd-56687eed9b84-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"3a335ec7-14e6-40aa-8dfd-56687eed9b84\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.657130 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3a335ec7-14e6-40aa-8dfd-56687eed9b84-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"3a335ec7-14e6-40aa-8dfd-56687eed9b84\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.657799 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3a335ec7-14e6-40aa-8dfd-56687eed9b84-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"3a335ec7-14e6-40aa-8dfd-56687eed9b84\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.661115 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3a335ec7-14e6-40aa-8dfd-56687eed9b84-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"3a335ec7-14e6-40aa-8dfd-56687eed9b84\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.661171 4793 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.661197 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e06ea14b-3004-4867-b11f-b167457cc525\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e06ea14b-3004-4867-b11f-b167457cc525\") pod \"prometheus-metric-storage-0\" (UID: \"3a335ec7-14e6-40aa-8dfd-56687eed9b84\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3174b7bcfd494c95de787fa7079d37ded4941cf895f579caff106d0384cba7de/globalmount\"" pod="openstack/prometheus-metric-storage-0"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.662786 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3a335ec7-14e6-40aa-8dfd-56687eed9b84-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3a335ec7-14e6-40aa-8dfd-56687eed9b84\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.662855 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3a335ec7-14e6-40aa-8dfd-56687eed9b84-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"3a335ec7-14e6-40aa-8dfd-56687eed9b84\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.663311 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3a335ec7-14e6-40aa-8dfd-56687eed9b84-config\") pod \"prometheus-metric-storage-0\" (UID: \"3a335ec7-14e6-40aa-8dfd-56687eed9b84\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.663843 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3a335ec7-14e6-40aa-8dfd-56687eed9b84-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"3a335ec7-14e6-40aa-8dfd-56687eed9b84\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.663898 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3a335ec7-14e6-40aa-8dfd-56687eed9b84-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3a335ec7-14e6-40aa-8dfd-56687eed9b84\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.672071 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a335ec7-14e6-40aa-8dfd-56687eed9b84-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"3a335ec7-14e6-40aa-8dfd-56687eed9b84\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.672086 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3a335ec7-14e6-40aa-8dfd-56687eed9b84-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"3a335ec7-14e6-40aa-8dfd-56687eed9b84\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.682987 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nthwt\" (UniqueName: \"kubernetes.io/projected/3a335ec7-14e6-40aa-8dfd-56687eed9b84-kube-api-access-nthwt\") pod \"prometheus-metric-storage-0\" (UID: \"3a335ec7-14e6-40aa-8dfd-56687eed9b84\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.731210 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e06ea14b-3004-4867-b11f-b167457cc525\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e06ea14b-3004-4867-b11f-b167457cc525\") pod \"prometheus-metric-storage-0\" (UID: \"3a335ec7-14e6-40aa-8dfd-56687eed9b84\") " pod="openstack/prometheus-metric-storage-0"
Feb 17 22:27:46 crc kubenswrapper[4793]: I0217 22:27:46.797230 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 17 22:27:47 crc kubenswrapper[4793]: I0217 22:27:47.298348 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 17 22:27:47 crc kubenswrapper[4793]: I0217 22:27:47.551615 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f41a37ae-4155-4b06-ad0b-46cfe53de634" path="/var/lib/kubelet/pods/f41a37ae-4155-4b06-ad0b-46cfe53de634/volumes"
Feb 17 22:27:48 crc kubenswrapper[4793]: I0217 22:27:48.125720 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3a335ec7-14e6-40aa-8dfd-56687eed9b84","Type":"ContainerStarted","Data":"15ba92cadac4598c8f2bf14b78fe03ed9e103034758c7e8239683c2da31b08ae"}
Feb 17 22:27:50 crc kubenswrapper[4793]: I0217 22:27:50.101823 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 22:27:50 crc kubenswrapper[4793]: I0217 22:27:50.102207 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 22:27:52 crc kubenswrapper[4793]: I0217 22:27:52.171006 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3a335ec7-14e6-40aa-8dfd-56687eed9b84","Type":"ContainerStarted","Data":"3b40824d19a5fae73634c949508711138cd50b9fcb2f1bbe2ea1e3f32bf6c6d1"}
Feb 17 22:27:52 crc kubenswrapper[4793]: I0217 22:27:52.540021 4793 scope.go:117] "RemoveContainer" containerID="51889aa5f0c36bc84e040f1560fcf3326a7169e6cdd8e68014582bda0b804063"
Feb 17 22:27:52 crc kubenswrapper[4793]: E0217 22:27:52.540317 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:28:00 crc kubenswrapper[4793]: I0217 22:28:00.266967 4793 generic.go:334] "Generic (PLEG): container finished" podID="3a335ec7-14e6-40aa-8dfd-56687eed9b84" containerID="3b40824d19a5fae73634c949508711138cd50b9fcb2f1bbe2ea1e3f32bf6c6d1" exitCode=0
Feb 17 22:28:00 crc kubenswrapper[4793]: I0217 22:28:00.267077 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3a335ec7-14e6-40aa-8dfd-56687eed9b84","Type":"ContainerDied","Data":"3b40824d19a5fae73634c949508711138cd50b9fcb2f1bbe2ea1e3f32bf6c6d1"}
Feb 17 22:28:01 crc kubenswrapper[4793]: I0217 22:28:01.283255 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3a335ec7-14e6-40aa-8dfd-56687eed9b84","Type":"ContainerStarted","Data":"8d7dc2619c60b7ba354cdc025afcc4ecac0536b79428bd38866d3850675ff4bd"}
Feb 17 22:28:03 crc kubenswrapper[4793]: I0217 22:28:03.546511 4793 scope.go:117] "RemoveContainer" containerID="51889aa5f0c36bc84e040f1560fcf3326a7169e6cdd8e68014582bda0b804063"
Feb 17 22:28:03 crc kubenswrapper[4793]: E0217 22:28:03.547577 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:28:05 crc kubenswrapper[4793]: I0217 22:28:05.333630 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3a335ec7-14e6-40aa-8dfd-56687eed9b84","Type":"ContainerStarted","Data":"f04edc23a2c1cd1b3e674ac16bacd216927361fa211ef16111e5b00b44b92368"}
Feb 17 22:28:06 crc kubenswrapper[4793]: I0217 22:28:06.349759 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3a335ec7-14e6-40aa-8dfd-56687eed9b84","Type":"ContainerStarted","Data":"fa16e8cceaa67f80195acecb2763cbf3c77d74c89ec90f71a2666ae2787d468f"}
Feb 17 22:28:06 crc kubenswrapper[4793]: I0217 22:28:06.407837 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=20.407806969 podStartE2EDuration="20.407806969s" podCreationTimestamp="2026-02-17 22:27:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 22:28:06.39201295 +0000 UTC m=+8361.683711301" watchObservedRunningTime="2026-02-17 22:28:06.407806969 +0000 UTC m=+8361.699505320"
Feb 17 22:28:06 crc kubenswrapper[4793]: I0217 22:28:06.798150 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Feb 17 22:28:16 crc kubenswrapper[4793]: I0217 22:28:16.539998 4793 scope.go:117] "RemoveContainer" containerID="51889aa5f0c36bc84e040f1560fcf3326a7169e6cdd8e68014582bda0b804063"
Feb 17 22:28:16 crc kubenswrapper[4793]: E0217 22:28:16.541304 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:28:16 crc kubenswrapper[4793]: I0217 22:28:16.797960 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0"
Feb 17 22:28:16 crc kubenswrapper[4793]: I0217 22:28:16.824049 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0"
Feb 17 22:28:17 crc kubenswrapper[4793]: I0217 22:28:17.515282 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Feb 17 22:28:20 crc kubenswrapper[4793]: I0217 22:28:20.101794 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 22:28:20 crc kubenswrapper[4793]: I0217 22:28:20.102197 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 22:28:28 crc kubenswrapper[4793]: I0217 22:28:28.539345 4793 scope.go:117] "RemoveContainer" containerID="51889aa5f0c36bc84e040f1560fcf3326a7169e6cdd8e68014582bda0b804063"
Feb 17 22:28:28 crc kubenswrapper[4793]: E0217 22:28:28.540272 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with
CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:28:39 crc kubenswrapper[4793]: I0217 22:28:39.048983 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 17 22:28:39 crc kubenswrapper[4793]: I0217 22:28:39.050859 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 17 22:28:39 crc kubenswrapper[4793]: I0217 22:28:39.053795 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-gsqr7" Feb 17 22:28:39 crc kubenswrapper[4793]: I0217 22:28:39.053954 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 17 22:28:39 crc kubenswrapper[4793]: I0217 22:28:39.054199 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 17 22:28:39 crc kubenswrapper[4793]: I0217 22:28:39.057980 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 17 22:28:39 crc kubenswrapper[4793]: I0217 22:28:39.105818 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/75a21cff-8e4b-4844-8717-b4f483fa282b-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"75a21cff-8e4b-4844-8717-b4f483fa282b\") " pod="openstack/tempest-tests-tempest" Feb 17 22:28:39 crc kubenswrapper[4793]: I0217 22:28:39.105863 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/75a21cff-8e4b-4844-8717-b4f483fa282b-ssh-key\") pod \"tempest-tests-tempest\" (UID: 
\"75a21cff-8e4b-4844-8717-b4f483fa282b\") " pod="openstack/tempest-tests-tempest" Feb 17 22:28:39 crc kubenswrapper[4793]: I0217 22:28:39.105889 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/75a21cff-8e4b-4844-8717-b4f483fa282b-config-data\") pod \"tempest-tests-tempest\" (UID: \"75a21cff-8e4b-4844-8717-b4f483fa282b\") " pod="openstack/tempest-tests-tempest" Feb 17 22:28:39 crc kubenswrapper[4793]: I0217 22:28:39.106050 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf4md\" (UniqueName: \"kubernetes.io/projected/75a21cff-8e4b-4844-8717-b4f483fa282b-kube-api-access-bf4md\") pod \"tempest-tests-tempest\" (UID: \"75a21cff-8e4b-4844-8717-b4f483fa282b\") " pod="openstack/tempest-tests-tempest" Feb 17 22:28:39 crc kubenswrapper[4793]: I0217 22:28:39.106078 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/75a21cff-8e4b-4844-8717-b4f483fa282b-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"75a21cff-8e4b-4844-8717-b4f483fa282b\") " pod="openstack/tempest-tests-tempest" Feb 17 22:28:39 crc kubenswrapper[4793]: I0217 22:28:39.106104 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/75a21cff-8e4b-4844-8717-b4f483fa282b-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"75a21cff-8e4b-4844-8717-b4f483fa282b\") " pod="openstack/tempest-tests-tempest" Feb 17 22:28:39 crc kubenswrapper[4793]: I0217 22:28:39.106502 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/75a21cff-8e4b-4844-8717-b4f483fa282b-ca-certs\") pod 
\"tempest-tests-tempest\" (UID: \"75a21cff-8e4b-4844-8717-b4f483fa282b\") " pod="openstack/tempest-tests-tempest" Feb 17 22:28:39 crc kubenswrapper[4793]: I0217 22:28:39.106571 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"75a21cff-8e4b-4844-8717-b4f483fa282b\") " pod="openstack/tempest-tests-tempest" Feb 17 22:28:39 crc kubenswrapper[4793]: I0217 22:28:39.106620 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 17 22:28:39 crc kubenswrapper[4793]: I0217 22:28:39.106723 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/75a21cff-8e4b-4844-8717-b4f483fa282b-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"75a21cff-8e4b-4844-8717-b4f483fa282b\") " pod="openstack/tempest-tests-tempest" Feb 17 22:28:39 crc kubenswrapper[4793]: I0217 22:28:39.208196 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/75a21cff-8e4b-4844-8717-b4f483fa282b-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"75a21cff-8e4b-4844-8717-b4f483fa282b\") " pod="openstack/tempest-tests-tempest" Feb 17 22:28:39 crc kubenswrapper[4793]: I0217 22:28:39.208248 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"75a21cff-8e4b-4844-8717-b4f483fa282b\") " pod="openstack/tempest-tests-tempest" Feb 17 22:28:39 crc kubenswrapper[4793]: I0217 22:28:39.208265 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/75a21cff-8e4b-4844-8717-b4f483fa282b-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"75a21cff-8e4b-4844-8717-b4f483fa282b\") " pod="openstack/tempest-tests-tempest" Feb 17 22:28:39 crc kubenswrapper[4793]: I0217 22:28:39.208648 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/75a21cff-8e4b-4844-8717-b4f483fa282b-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"75a21cff-8e4b-4844-8717-b4f483fa282b\") " pod="openstack/tempest-tests-tempest" Feb 17 22:28:39 crc kubenswrapper[4793]: I0217 22:28:39.208668 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/75a21cff-8e4b-4844-8717-b4f483fa282b-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"75a21cff-8e4b-4844-8717-b4f483fa282b\") " pod="openstack/tempest-tests-tempest" Feb 17 22:28:39 crc kubenswrapper[4793]: I0217 22:28:39.208704 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/75a21cff-8e4b-4844-8717-b4f483fa282b-config-data\") pod \"tempest-tests-tempest\" (UID: \"75a21cff-8e4b-4844-8717-b4f483fa282b\") " pod="openstack/tempest-tests-tempest" Feb 17 22:28:39 crc kubenswrapper[4793]: I0217 22:28:39.208746 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf4md\" (UniqueName: \"kubernetes.io/projected/75a21cff-8e4b-4844-8717-b4f483fa282b-kube-api-access-bf4md\") pod \"tempest-tests-tempest\" (UID: \"75a21cff-8e4b-4844-8717-b4f483fa282b\") " pod="openstack/tempest-tests-tempest" Feb 17 22:28:39 crc kubenswrapper[4793]: I0217 22:28:39.208772 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/75a21cff-8e4b-4844-8717-b4f483fa282b-openstack-config-secret\") pod 
\"tempest-tests-tempest\" (UID: \"75a21cff-8e4b-4844-8717-b4f483fa282b\") " pod="openstack/tempest-tests-tempest" Feb 17 22:28:39 crc kubenswrapper[4793]: I0217 22:28:39.208795 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/75a21cff-8e4b-4844-8717-b4f483fa282b-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"75a21cff-8e4b-4844-8717-b4f483fa282b\") " pod="openstack/tempest-tests-tempest" Feb 17 22:28:39 crc kubenswrapper[4793]: I0217 22:28:39.208890 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/75a21cff-8e4b-4844-8717-b4f483fa282b-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"75a21cff-8e4b-4844-8717-b4f483fa282b\") " pod="openstack/tempest-tests-tempest" Feb 17 22:28:39 crc kubenswrapper[4793]: I0217 22:28:39.209299 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/75a21cff-8e4b-4844-8717-b4f483fa282b-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"75a21cff-8e4b-4844-8717-b4f483fa282b\") " pod="openstack/tempest-tests-tempest" Feb 17 22:28:39 crc kubenswrapper[4793]: I0217 22:28:39.209397 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"75a21cff-8e4b-4844-8717-b4f483fa282b\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/tempest-tests-tempest" Feb 17 22:28:39 crc kubenswrapper[4793]: I0217 22:28:39.209656 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/75a21cff-8e4b-4844-8717-b4f483fa282b-openstack-config\") pod 
\"tempest-tests-tempest\" (UID: \"75a21cff-8e4b-4844-8717-b4f483fa282b\") " pod="openstack/tempest-tests-tempest" Feb 17 22:28:39 crc kubenswrapper[4793]: I0217 22:28:39.210110 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/75a21cff-8e4b-4844-8717-b4f483fa282b-config-data\") pod \"tempest-tests-tempest\" (UID: \"75a21cff-8e4b-4844-8717-b4f483fa282b\") " pod="openstack/tempest-tests-tempest" Feb 17 22:28:39 crc kubenswrapper[4793]: I0217 22:28:39.219070 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/75a21cff-8e4b-4844-8717-b4f483fa282b-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"75a21cff-8e4b-4844-8717-b4f483fa282b\") " pod="openstack/tempest-tests-tempest" Feb 17 22:28:39 crc kubenswrapper[4793]: I0217 22:28:39.228101 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/75a21cff-8e4b-4844-8717-b4f483fa282b-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"75a21cff-8e4b-4844-8717-b4f483fa282b\") " pod="openstack/tempest-tests-tempest" Feb 17 22:28:39 crc kubenswrapper[4793]: I0217 22:28:39.228443 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/75a21cff-8e4b-4844-8717-b4f483fa282b-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"75a21cff-8e4b-4844-8717-b4f483fa282b\") " pod="openstack/tempest-tests-tempest" Feb 17 22:28:39 crc kubenswrapper[4793]: I0217 22:28:39.238421 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf4md\" (UniqueName: \"kubernetes.io/projected/75a21cff-8e4b-4844-8717-b4f483fa282b-kube-api-access-bf4md\") pod \"tempest-tests-tempest\" (UID: \"75a21cff-8e4b-4844-8717-b4f483fa282b\") " pod="openstack/tempest-tests-tempest" Feb 17 22:28:39 crc kubenswrapper[4793]: I0217 
22:28:39.276203 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"75a21cff-8e4b-4844-8717-b4f483fa282b\") " pod="openstack/tempest-tests-tempest" Feb 17 22:28:39 crc kubenswrapper[4793]: I0217 22:28:39.368949 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 17 22:28:39 crc kubenswrapper[4793]: I0217 22:28:39.824910 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 17 22:28:39 crc kubenswrapper[4793]: I0217 22:28:39.828928 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 22:28:40 crc kubenswrapper[4793]: I0217 22:28:40.799160 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"75a21cff-8e4b-4844-8717-b4f483fa282b","Type":"ContainerStarted","Data":"24b2059af5c998ded8fb1a7e4fb8d862361f433ff118dbff602035f53ce60a68"} Feb 17 22:28:41 crc kubenswrapper[4793]: I0217 22:28:41.543651 4793 scope.go:117] "RemoveContainer" containerID="51889aa5f0c36bc84e040f1560fcf3326a7169e6cdd8e68014582bda0b804063" Feb 17 22:28:41 crc kubenswrapper[4793]: E0217 22:28:41.544240 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:28:49 crc kubenswrapper[4793]: I0217 22:28:49.652258 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 17 22:28:50 crc kubenswrapper[4793]: I0217 22:28:50.102343 4793 patch_prober.go:28] interesting 
pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 22:28:50 crc kubenswrapper[4793]: I0217 22:28:50.102417 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 22:28:50 crc kubenswrapper[4793]: I0217 22:28:50.102469 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" Feb 17 22:28:50 crc kubenswrapper[4793]: I0217 22:28:50.103184 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ebd203a3aacb1d24c26bf0360750132c759499b33eb9544d7559e8073681ee65"} pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 22:28:50 crc kubenswrapper[4793]: I0217 22:28:50.103287 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" containerID="cri-o://ebd203a3aacb1d24c26bf0360750132c759499b33eb9544d7559e8073681ee65" gracePeriod=600 Feb 17 22:28:50 crc kubenswrapper[4793]: E0217 22:28:50.235920 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:28:50 crc kubenswrapper[4793]: I0217 22:28:50.957482 4793 generic.go:334] "Generic (PLEG): container finished" podID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerID="ebd203a3aacb1d24c26bf0360750132c759499b33eb9544d7559e8073681ee65" exitCode=0 Feb 17 22:28:50 crc kubenswrapper[4793]: I0217 22:28:50.957521 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerDied","Data":"ebd203a3aacb1d24c26bf0360750132c759499b33eb9544d7559e8073681ee65"} Feb 17 22:28:50 crc kubenswrapper[4793]: I0217 22:28:50.958010 4793 scope.go:117] "RemoveContainer" containerID="f4981ab52d850aaca9dc626601ec76dbebfd4955c3e8a2001dcd9e38e1c0e1fb" Feb 17 22:28:50 crc kubenswrapper[4793]: I0217 22:28:50.958741 4793 scope.go:117] "RemoveContainer" containerID="ebd203a3aacb1d24c26bf0360750132c759499b33eb9544d7559e8073681ee65" Feb 17 22:28:50 crc kubenswrapper[4793]: E0217 22:28:50.959037 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:28:50 crc kubenswrapper[4793]: I0217 22:28:50.959956 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"75a21cff-8e4b-4844-8717-b4f483fa282b","Type":"ContainerStarted","Data":"1f1738d6ab1b7ab24ef565bdaa13cd8e56a856baeca65fa3e237b4fe1045021a"} Feb 17 22:28:56 crc 
kubenswrapper[4793]: I0217 22:28:56.539901 4793 scope.go:117] "RemoveContainer" containerID="51889aa5f0c36bc84e040f1560fcf3326a7169e6cdd8e68014582bda0b804063" Feb 17 22:28:56 crc kubenswrapper[4793]: E0217 22:28:56.541088 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:29:03 crc kubenswrapper[4793]: I0217 22:29:03.539359 4793 scope.go:117] "RemoveContainer" containerID="ebd203a3aacb1d24c26bf0360750132c759499b33eb9544d7559e8073681ee65" Feb 17 22:29:03 crc kubenswrapper[4793]: E0217 22:29:03.540278 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:29:07 crc kubenswrapper[4793]: I0217 22:29:07.538397 4793 scope.go:117] "RemoveContainer" containerID="51889aa5f0c36bc84e040f1560fcf3326a7169e6cdd8e68014582bda0b804063" Feb 17 22:29:07 crc kubenswrapper[4793]: E0217 22:29:07.539220 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:29:18 crc kubenswrapper[4793]: I0217 22:29:18.540011 4793 scope.go:117] "RemoveContainer" 
containerID="ebd203a3aacb1d24c26bf0360750132c759499b33eb9544d7559e8073681ee65" Feb 17 22:29:18 crc kubenswrapper[4793]: I0217 22:29:18.542371 4793 scope.go:117] "RemoveContainer" containerID="51889aa5f0c36bc84e040f1560fcf3326a7169e6cdd8e68014582bda0b804063" Feb 17 22:29:18 crc kubenswrapper[4793]: E0217 22:29:18.542738 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:29:18 crc kubenswrapper[4793]: E0217 22:29:18.543343 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:29:31 crc kubenswrapper[4793]: I0217 22:29:31.539722 4793 scope.go:117] "RemoveContainer" containerID="51889aa5f0c36bc84e040f1560fcf3326a7169e6cdd8e68014582bda0b804063" Feb 17 22:29:31 crc kubenswrapper[4793]: E0217 22:29:31.540735 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:29:31 crc kubenswrapper[4793]: I0217 22:29:31.541210 4793 scope.go:117] "RemoveContainer" containerID="ebd203a3aacb1d24c26bf0360750132c759499b33eb9544d7559e8073681ee65" Feb 17 22:29:31 crc kubenswrapper[4793]: E0217 22:29:31.541753 
4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:29:33 crc kubenswrapper[4793]: I0217 22:29:33.703248 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=45.88863764 podStartE2EDuration="55.703222084s" podCreationTimestamp="2026-02-17 22:28:38 +0000 UTC" firstStartedPulling="2026-02-17 22:28:39.828738998 +0000 UTC m=+8395.120437309" lastFinishedPulling="2026-02-17 22:28:49.643323442 +0000 UTC m=+8404.935021753" observedRunningTime="2026-02-17 22:28:51.022575317 +0000 UTC m=+8406.314273628" watchObservedRunningTime="2026-02-17 22:29:33.703222084 +0000 UTC m=+8448.994920415" Feb 17 22:29:33 crc kubenswrapper[4793]: I0217 22:29:33.719064 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-srnnv"] Feb 17 22:29:33 crc kubenswrapper[4793]: I0217 22:29:33.723433 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-srnnv" Feb 17 22:29:33 crc kubenswrapper[4793]: I0217 22:29:33.745357 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-srnnv"] Feb 17 22:29:33 crc kubenswrapper[4793]: I0217 22:29:33.780178 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44489fbb-0f72-46da-9021-ed6049554f9a-catalog-content\") pod \"redhat-marketplace-srnnv\" (UID: \"44489fbb-0f72-46da-9021-ed6049554f9a\") " pod="openshift-marketplace/redhat-marketplace-srnnv" Feb 17 22:29:33 crc kubenswrapper[4793]: I0217 22:29:33.780438 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7p7h\" (UniqueName: \"kubernetes.io/projected/44489fbb-0f72-46da-9021-ed6049554f9a-kube-api-access-b7p7h\") pod \"redhat-marketplace-srnnv\" (UID: \"44489fbb-0f72-46da-9021-ed6049554f9a\") " pod="openshift-marketplace/redhat-marketplace-srnnv" Feb 17 22:29:33 crc kubenswrapper[4793]: I0217 22:29:33.780535 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44489fbb-0f72-46da-9021-ed6049554f9a-utilities\") pod \"redhat-marketplace-srnnv\" (UID: \"44489fbb-0f72-46da-9021-ed6049554f9a\") " pod="openshift-marketplace/redhat-marketplace-srnnv" Feb 17 22:29:33 crc kubenswrapper[4793]: I0217 22:29:33.881946 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44489fbb-0f72-46da-9021-ed6049554f9a-catalog-content\") pod \"redhat-marketplace-srnnv\" (UID: \"44489fbb-0f72-46da-9021-ed6049554f9a\") " pod="openshift-marketplace/redhat-marketplace-srnnv" Feb 17 22:29:33 crc kubenswrapper[4793]: I0217 22:29:33.882017 4793 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-b7p7h\" (UniqueName: \"kubernetes.io/projected/44489fbb-0f72-46da-9021-ed6049554f9a-kube-api-access-b7p7h\") pod \"redhat-marketplace-srnnv\" (UID: \"44489fbb-0f72-46da-9021-ed6049554f9a\") " pod="openshift-marketplace/redhat-marketplace-srnnv" Feb 17 22:29:33 crc kubenswrapper[4793]: I0217 22:29:33.882041 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44489fbb-0f72-46da-9021-ed6049554f9a-utilities\") pod \"redhat-marketplace-srnnv\" (UID: \"44489fbb-0f72-46da-9021-ed6049554f9a\") " pod="openshift-marketplace/redhat-marketplace-srnnv" Feb 17 22:29:33 crc kubenswrapper[4793]: I0217 22:29:33.882552 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44489fbb-0f72-46da-9021-ed6049554f9a-utilities\") pod \"redhat-marketplace-srnnv\" (UID: \"44489fbb-0f72-46da-9021-ed6049554f9a\") " pod="openshift-marketplace/redhat-marketplace-srnnv" Feb 17 22:29:33 crc kubenswrapper[4793]: I0217 22:29:33.882753 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44489fbb-0f72-46da-9021-ed6049554f9a-catalog-content\") pod \"redhat-marketplace-srnnv\" (UID: \"44489fbb-0f72-46da-9021-ed6049554f9a\") " pod="openshift-marketplace/redhat-marketplace-srnnv" Feb 17 22:29:33 crc kubenswrapper[4793]: I0217 22:29:33.907042 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7p7h\" (UniqueName: \"kubernetes.io/projected/44489fbb-0f72-46da-9021-ed6049554f9a-kube-api-access-b7p7h\") pod \"redhat-marketplace-srnnv\" (UID: \"44489fbb-0f72-46da-9021-ed6049554f9a\") " pod="openshift-marketplace/redhat-marketplace-srnnv" Feb 17 22:29:34 crc kubenswrapper[4793]: I0217 22:29:34.059856 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-srnnv" Feb 17 22:29:34 crc kubenswrapper[4793]: I0217 22:29:34.410829 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-srnnv"] Feb 17 22:29:35 crc kubenswrapper[4793]: I0217 22:29:35.424772 4793 generic.go:334] "Generic (PLEG): container finished" podID="44489fbb-0f72-46da-9021-ed6049554f9a" containerID="12e9977953f253fb15f34da2eecf9e232f1e1350bc891410970612d66a82bd65" exitCode=0 Feb 17 22:29:35 crc kubenswrapper[4793]: I0217 22:29:35.424848 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-srnnv" event={"ID":"44489fbb-0f72-46da-9021-ed6049554f9a","Type":"ContainerDied","Data":"12e9977953f253fb15f34da2eecf9e232f1e1350bc891410970612d66a82bd65"} Feb 17 22:29:35 crc kubenswrapper[4793]: I0217 22:29:35.425203 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-srnnv" event={"ID":"44489fbb-0f72-46da-9021-ed6049554f9a","Type":"ContainerStarted","Data":"595a0e620430444ab80a1818093473366e3c4e918f18f96f0ac226a6dbbc1bb3"} Feb 17 22:29:36 crc kubenswrapper[4793]: I0217 22:29:36.436279 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-srnnv" event={"ID":"44489fbb-0f72-46da-9021-ed6049554f9a","Type":"ContainerStarted","Data":"872c7ac379ae072046cc70fe764eecbb8a6868299fb00929f6870e5925ba27cb"} Feb 17 22:29:37 crc kubenswrapper[4793]: I0217 22:29:37.449093 4793 generic.go:334] "Generic (PLEG): container finished" podID="44489fbb-0f72-46da-9021-ed6049554f9a" containerID="872c7ac379ae072046cc70fe764eecbb8a6868299fb00929f6870e5925ba27cb" exitCode=0 Feb 17 22:29:37 crc kubenswrapper[4793]: I0217 22:29:37.449211 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-srnnv" 
event={"ID":"44489fbb-0f72-46da-9021-ed6049554f9a","Type":"ContainerDied","Data":"872c7ac379ae072046cc70fe764eecbb8a6868299fb00929f6870e5925ba27cb"} Feb 17 22:29:38 crc kubenswrapper[4793]: I0217 22:29:38.465876 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-srnnv" event={"ID":"44489fbb-0f72-46da-9021-ed6049554f9a","Type":"ContainerStarted","Data":"ffa948fcd06945097d20105bf21a4d1c12cc1b1c3cf89ecc73387376248fa0c3"} Feb 17 22:29:38 crc kubenswrapper[4793]: I0217 22:29:38.495373 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-srnnv" podStartSLOduration=3.075548562 podStartE2EDuration="5.495351621s" podCreationTimestamp="2026-02-17 22:29:33 +0000 UTC" firstStartedPulling="2026-02-17 22:29:35.427491249 +0000 UTC m=+8450.719189570" lastFinishedPulling="2026-02-17 22:29:37.847294318 +0000 UTC m=+8453.138992629" observedRunningTime="2026-02-17 22:29:38.488544703 +0000 UTC m=+8453.780243024" watchObservedRunningTime="2026-02-17 22:29:38.495351621 +0000 UTC m=+8453.787049932" Feb 17 22:29:42 crc kubenswrapper[4793]: I0217 22:29:42.539832 4793 scope.go:117] "RemoveContainer" containerID="51889aa5f0c36bc84e040f1560fcf3326a7169e6cdd8e68014582bda0b804063" Feb 17 22:29:42 crc kubenswrapper[4793]: E0217 22:29:42.540965 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:29:44 crc kubenswrapper[4793]: I0217 22:29:44.060119 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-srnnv" Feb 17 22:29:44 crc kubenswrapper[4793]: I0217 22:29:44.060228 4793 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-srnnv" Feb 17 22:29:44 crc kubenswrapper[4793]: I0217 22:29:44.139947 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-srnnv" Feb 17 22:29:44 crc kubenswrapper[4793]: I0217 22:29:44.538808 4793 scope.go:117] "RemoveContainer" containerID="ebd203a3aacb1d24c26bf0360750132c759499b33eb9544d7559e8073681ee65" Feb 17 22:29:44 crc kubenswrapper[4793]: E0217 22:29:44.539339 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:29:44 crc kubenswrapper[4793]: I0217 22:29:44.595366 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-srnnv" Feb 17 22:29:44 crc kubenswrapper[4793]: I0217 22:29:44.652892 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-srnnv"] Feb 17 22:29:46 crc kubenswrapper[4793]: I0217 22:29:46.547914 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-srnnv" podUID="44489fbb-0f72-46da-9021-ed6049554f9a" containerName="registry-server" containerID="cri-o://ffa948fcd06945097d20105bf21a4d1c12cc1b1c3cf89ecc73387376248fa0c3" gracePeriod=2 Feb 17 22:29:47 crc kubenswrapper[4793]: I0217 22:29:47.093432 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-srnnv" Feb 17 22:29:47 crc kubenswrapper[4793]: I0217 22:29:47.213263 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44489fbb-0f72-46da-9021-ed6049554f9a-utilities\") pod \"44489fbb-0f72-46da-9021-ed6049554f9a\" (UID: \"44489fbb-0f72-46da-9021-ed6049554f9a\") " Feb 17 22:29:47 crc kubenswrapper[4793]: I0217 22:29:47.213493 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7p7h\" (UniqueName: \"kubernetes.io/projected/44489fbb-0f72-46da-9021-ed6049554f9a-kube-api-access-b7p7h\") pod \"44489fbb-0f72-46da-9021-ed6049554f9a\" (UID: \"44489fbb-0f72-46da-9021-ed6049554f9a\") " Feb 17 22:29:47 crc kubenswrapper[4793]: I0217 22:29:47.213573 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44489fbb-0f72-46da-9021-ed6049554f9a-catalog-content\") pod \"44489fbb-0f72-46da-9021-ed6049554f9a\" (UID: \"44489fbb-0f72-46da-9021-ed6049554f9a\") " Feb 17 22:29:47 crc kubenswrapper[4793]: I0217 22:29:47.214136 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44489fbb-0f72-46da-9021-ed6049554f9a-utilities" (OuterVolumeSpecName: "utilities") pod "44489fbb-0f72-46da-9021-ed6049554f9a" (UID: "44489fbb-0f72-46da-9021-ed6049554f9a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 22:29:47 crc kubenswrapper[4793]: I0217 22:29:47.223058 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44489fbb-0f72-46da-9021-ed6049554f9a-kube-api-access-b7p7h" (OuterVolumeSpecName: "kube-api-access-b7p7h") pod "44489fbb-0f72-46da-9021-ed6049554f9a" (UID: "44489fbb-0f72-46da-9021-ed6049554f9a"). InnerVolumeSpecName "kube-api-access-b7p7h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 22:29:47 crc kubenswrapper[4793]: I0217 22:29:47.248709 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44489fbb-0f72-46da-9021-ed6049554f9a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "44489fbb-0f72-46da-9021-ed6049554f9a" (UID: "44489fbb-0f72-46da-9021-ed6049554f9a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 22:29:47 crc kubenswrapper[4793]: I0217 22:29:47.316206 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7p7h\" (UniqueName: \"kubernetes.io/projected/44489fbb-0f72-46da-9021-ed6049554f9a-kube-api-access-b7p7h\") on node \"crc\" DevicePath \"\"" Feb 17 22:29:47 crc kubenswrapper[4793]: I0217 22:29:47.316262 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44489fbb-0f72-46da-9021-ed6049554f9a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 22:29:47 crc kubenswrapper[4793]: I0217 22:29:47.316289 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44489fbb-0f72-46da-9021-ed6049554f9a-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 22:29:47 crc kubenswrapper[4793]: I0217 22:29:47.558439 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-srnnv" Feb 17 22:29:47 crc kubenswrapper[4793]: I0217 22:29:47.558480 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-srnnv" event={"ID":"44489fbb-0f72-46da-9021-ed6049554f9a","Type":"ContainerDied","Data":"ffa948fcd06945097d20105bf21a4d1c12cc1b1c3cf89ecc73387376248fa0c3"} Feb 17 22:29:47 crc kubenswrapper[4793]: I0217 22:29:47.558529 4793 scope.go:117] "RemoveContainer" containerID="ffa948fcd06945097d20105bf21a4d1c12cc1b1c3cf89ecc73387376248fa0c3" Feb 17 22:29:47 crc kubenswrapper[4793]: I0217 22:29:47.560807 4793 generic.go:334] "Generic (PLEG): container finished" podID="44489fbb-0f72-46da-9021-ed6049554f9a" containerID="ffa948fcd06945097d20105bf21a4d1c12cc1b1c3cf89ecc73387376248fa0c3" exitCode=0 Feb 17 22:29:47 crc kubenswrapper[4793]: I0217 22:29:47.560861 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-srnnv" event={"ID":"44489fbb-0f72-46da-9021-ed6049554f9a","Type":"ContainerDied","Data":"595a0e620430444ab80a1818093473366e3c4e918f18f96f0ac226a6dbbc1bb3"} Feb 17 22:29:47 crc kubenswrapper[4793]: I0217 22:29:47.600336 4793 scope.go:117] "RemoveContainer" containerID="872c7ac379ae072046cc70fe764eecbb8a6868299fb00929f6870e5925ba27cb" Feb 17 22:29:47 crc kubenswrapper[4793]: I0217 22:29:47.619927 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-srnnv"] Feb 17 22:29:47 crc kubenswrapper[4793]: I0217 22:29:47.681969 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-srnnv"] Feb 17 22:29:47 crc kubenswrapper[4793]: I0217 22:29:47.691946 4793 scope.go:117] "RemoveContainer" containerID="12e9977953f253fb15f34da2eecf9e232f1e1350bc891410970612d66a82bd65" Feb 17 22:29:47 crc kubenswrapper[4793]: I0217 22:29:47.730839 4793 scope.go:117] "RemoveContainer" 
containerID="ffa948fcd06945097d20105bf21a4d1c12cc1b1c3cf89ecc73387376248fa0c3" Feb 17 22:29:47 crc kubenswrapper[4793]: E0217 22:29:47.738980 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffa948fcd06945097d20105bf21a4d1c12cc1b1c3cf89ecc73387376248fa0c3\": container with ID starting with ffa948fcd06945097d20105bf21a4d1c12cc1b1c3cf89ecc73387376248fa0c3 not found: ID does not exist" containerID="ffa948fcd06945097d20105bf21a4d1c12cc1b1c3cf89ecc73387376248fa0c3" Feb 17 22:29:47 crc kubenswrapper[4793]: I0217 22:29:47.739027 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffa948fcd06945097d20105bf21a4d1c12cc1b1c3cf89ecc73387376248fa0c3"} err="failed to get container status \"ffa948fcd06945097d20105bf21a4d1c12cc1b1c3cf89ecc73387376248fa0c3\": rpc error: code = NotFound desc = could not find container \"ffa948fcd06945097d20105bf21a4d1c12cc1b1c3cf89ecc73387376248fa0c3\": container with ID starting with ffa948fcd06945097d20105bf21a4d1c12cc1b1c3cf89ecc73387376248fa0c3 not found: ID does not exist" Feb 17 22:29:47 crc kubenswrapper[4793]: I0217 22:29:47.739054 4793 scope.go:117] "RemoveContainer" containerID="872c7ac379ae072046cc70fe764eecbb8a6868299fb00929f6870e5925ba27cb" Feb 17 22:29:47 crc kubenswrapper[4793]: E0217 22:29:47.740070 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"872c7ac379ae072046cc70fe764eecbb8a6868299fb00929f6870e5925ba27cb\": container with ID starting with 872c7ac379ae072046cc70fe764eecbb8a6868299fb00929f6870e5925ba27cb not found: ID does not exist" containerID="872c7ac379ae072046cc70fe764eecbb8a6868299fb00929f6870e5925ba27cb" Feb 17 22:29:47 crc kubenswrapper[4793]: I0217 22:29:47.740121 4793 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"872c7ac379ae072046cc70fe764eecbb8a6868299fb00929f6870e5925ba27cb"} err="failed to get container status \"872c7ac379ae072046cc70fe764eecbb8a6868299fb00929f6870e5925ba27cb\": rpc error: code = NotFound desc = could not find container \"872c7ac379ae072046cc70fe764eecbb8a6868299fb00929f6870e5925ba27cb\": container with ID starting with 872c7ac379ae072046cc70fe764eecbb8a6868299fb00929f6870e5925ba27cb not found: ID does not exist" Feb 17 22:29:47 crc kubenswrapper[4793]: I0217 22:29:47.740155 4793 scope.go:117] "RemoveContainer" containerID="12e9977953f253fb15f34da2eecf9e232f1e1350bc891410970612d66a82bd65" Feb 17 22:29:47 crc kubenswrapper[4793]: E0217 22:29:47.740539 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12e9977953f253fb15f34da2eecf9e232f1e1350bc891410970612d66a82bd65\": container with ID starting with 12e9977953f253fb15f34da2eecf9e232f1e1350bc891410970612d66a82bd65 not found: ID does not exist" containerID="12e9977953f253fb15f34da2eecf9e232f1e1350bc891410970612d66a82bd65" Feb 17 22:29:47 crc kubenswrapper[4793]: I0217 22:29:47.740562 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12e9977953f253fb15f34da2eecf9e232f1e1350bc891410970612d66a82bd65"} err="failed to get container status \"12e9977953f253fb15f34da2eecf9e232f1e1350bc891410970612d66a82bd65\": rpc error: code = NotFound desc = could not find container \"12e9977953f253fb15f34da2eecf9e232f1e1350bc891410970612d66a82bd65\": container with ID starting with 12e9977953f253fb15f34da2eecf9e232f1e1350bc891410970612d66a82bd65 not found: ID does not exist" Feb 17 22:29:49 crc kubenswrapper[4793]: I0217 22:29:49.553552 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44489fbb-0f72-46da-9021-ed6049554f9a" path="/var/lib/kubelet/pods/44489fbb-0f72-46da-9021-ed6049554f9a/volumes" Feb 17 22:29:55 crc kubenswrapper[4793]: I0217 
22:29:55.546676 4793 scope.go:117] "RemoveContainer" containerID="ebd203a3aacb1d24c26bf0360750132c759499b33eb9544d7559e8073681ee65" Feb 17 22:29:55 crc kubenswrapper[4793]: E0217 22:29:55.547558 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:29:56 crc kubenswrapper[4793]: I0217 22:29:56.539796 4793 scope.go:117] "RemoveContainer" containerID="51889aa5f0c36bc84e040f1560fcf3326a7169e6cdd8e68014582bda0b804063" Feb 17 22:29:56 crc kubenswrapper[4793]: E0217 22:29:56.540126 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:30:00 crc kubenswrapper[4793]: I0217 22:30:00.177257 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522790-kwtn2"] Feb 17 22:30:00 crc kubenswrapper[4793]: E0217 22:30:00.178859 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44489fbb-0f72-46da-9021-ed6049554f9a" containerName="extract-utilities" Feb 17 22:30:00 crc kubenswrapper[4793]: I0217 22:30:00.178893 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="44489fbb-0f72-46da-9021-ed6049554f9a" containerName="extract-utilities" Feb 17 22:30:00 crc kubenswrapper[4793]: E0217 22:30:00.178928 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44489fbb-0f72-46da-9021-ed6049554f9a" 
containerName="extract-content" Feb 17 22:30:00 crc kubenswrapper[4793]: I0217 22:30:00.178945 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="44489fbb-0f72-46da-9021-ed6049554f9a" containerName="extract-content" Feb 17 22:30:00 crc kubenswrapper[4793]: E0217 22:30:00.179005 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44489fbb-0f72-46da-9021-ed6049554f9a" containerName="registry-server" Feb 17 22:30:00 crc kubenswrapper[4793]: I0217 22:30:00.179021 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="44489fbb-0f72-46da-9021-ed6049554f9a" containerName="registry-server" Feb 17 22:30:00 crc kubenswrapper[4793]: I0217 22:30:00.179454 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="44489fbb-0f72-46da-9021-ed6049554f9a" containerName="registry-server" Feb 17 22:30:00 crc kubenswrapper[4793]: I0217 22:30:00.180819 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522790-kwtn2" Feb 17 22:30:00 crc kubenswrapper[4793]: I0217 22:30:00.183698 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 22:30:00 crc kubenswrapper[4793]: I0217 22:30:00.183820 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 22:30:00 crc kubenswrapper[4793]: I0217 22:30:00.194305 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522790-kwtn2"] Feb 17 22:30:00 crc kubenswrapper[4793]: I0217 22:30:00.329388 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpgxq\" (UniqueName: \"kubernetes.io/projected/a2d51e2d-55fc-4dc0-8124-11ac089f5d80-kube-api-access-fpgxq\") pod \"collect-profiles-29522790-kwtn2\" (UID: 
\"a2d51e2d-55fc-4dc0-8124-11ac089f5d80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522790-kwtn2" Feb 17 22:30:00 crc kubenswrapper[4793]: I0217 22:30:00.329730 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2d51e2d-55fc-4dc0-8124-11ac089f5d80-config-volume\") pod \"collect-profiles-29522790-kwtn2\" (UID: \"a2d51e2d-55fc-4dc0-8124-11ac089f5d80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522790-kwtn2" Feb 17 22:30:00 crc kubenswrapper[4793]: I0217 22:30:00.329931 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a2d51e2d-55fc-4dc0-8124-11ac089f5d80-secret-volume\") pod \"collect-profiles-29522790-kwtn2\" (UID: \"a2d51e2d-55fc-4dc0-8124-11ac089f5d80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522790-kwtn2" Feb 17 22:30:00 crc kubenswrapper[4793]: I0217 22:30:00.432658 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpgxq\" (UniqueName: \"kubernetes.io/projected/a2d51e2d-55fc-4dc0-8124-11ac089f5d80-kube-api-access-fpgxq\") pod \"collect-profiles-29522790-kwtn2\" (UID: \"a2d51e2d-55fc-4dc0-8124-11ac089f5d80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522790-kwtn2" Feb 17 22:30:00 crc kubenswrapper[4793]: I0217 22:30:00.432728 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2d51e2d-55fc-4dc0-8124-11ac089f5d80-config-volume\") pod \"collect-profiles-29522790-kwtn2\" (UID: \"a2d51e2d-55fc-4dc0-8124-11ac089f5d80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522790-kwtn2" Feb 17 22:30:00 crc kubenswrapper[4793]: I0217 22:30:00.432756 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-volume\" (UniqueName: \"kubernetes.io/secret/a2d51e2d-55fc-4dc0-8124-11ac089f5d80-secret-volume\") pod \"collect-profiles-29522790-kwtn2\" (UID: \"a2d51e2d-55fc-4dc0-8124-11ac089f5d80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522790-kwtn2" Feb 17 22:30:00 crc kubenswrapper[4793]: I0217 22:30:00.433723 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2d51e2d-55fc-4dc0-8124-11ac089f5d80-config-volume\") pod \"collect-profiles-29522790-kwtn2\" (UID: \"a2d51e2d-55fc-4dc0-8124-11ac089f5d80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522790-kwtn2" Feb 17 22:30:00 crc kubenswrapper[4793]: I0217 22:30:00.439184 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a2d51e2d-55fc-4dc0-8124-11ac089f5d80-secret-volume\") pod \"collect-profiles-29522790-kwtn2\" (UID: \"a2d51e2d-55fc-4dc0-8124-11ac089f5d80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522790-kwtn2" Feb 17 22:30:00 crc kubenswrapper[4793]: I0217 22:30:00.464327 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpgxq\" (UniqueName: \"kubernetes.io/projected/a2d51e2d-55fc-4dc0-8124-11ac089f5d80-kube-api-access-fpgxq\") pod \"collect-profiles-29522790-kwtn2\" (UID: \"a2d51e2d-55fc-4dc0-8124-11ac089f5d80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522790-kwtn2" Feb 17 22:30:00 crc kubenswrapper[4793]: I0217 22:30:00.506229 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522790-kwtn2" Feb 17 22:30:01 crc kubenswrapper[4793]: I0217 22:30:01.009509 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522790-kwtn2"] Feb 17 22:30:01 crc kubenswrapper[4793]: I0217 22:30:01.736623 4793 generic.go:334] "Generic (PLEG): container finished" podID="a2d51e2d-55fc-4dc0-8124-11ac089f5d80" containerID="81ed534bae28c738effb185acf33b22ca65f553986df37eeb144daf0d28bd910" exitCode=0 Feb 17 22:30:01 crc kubenswrapper[4793]: I0217 22:30:01.737053 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522790-kwtn2" event={"ID":"a2d51e2d-55fc-4dc0-8124-11ac089f5d80","Type":"ContainerDied","Data":"81ed534bae28c738effb185acf33b22ca65f553986df37eeb144daf0d28bd910"} Feb 17 22:30:01 crc kubenswrapper[4793]: I0217 22:30:01.737095 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522790-kwtn2" event={"ID":"a2d51e2d-55fc-4dc0-8124-11ac089f5d80","Type":"ContainerStarted","Data":"9ba7d31b02b745a6112407b3049e2fb5ece7b79a88ae8607393a4e5288870dfb"} Feb 17 22:30:03 crc kubenswrapper[4793]: I0217 22:30:03.101099 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522790-kwtn2" Feb 17 22:30:03 crc kubenswrapper[4793]: I0217 22:30:03.207861 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpgxq\" (UniqueName: \"kubernetes.io/projected/a2d51e2d-55fc-4dc0-8124-11ac089f5d80-kube-api-access-fpgxq\") pod \"a2d51e2d-55fc-4dc0-8124-11ac089f5d80\" (UID: \"a2d51e2d-55fc-4dc0-8124-11ac089f5d80\") " Feb 17 22:30:03 crc kubenswrapper[4793]: I0217 22:30:03.208083 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2d51e2d-55fc-4dc0-8124-11ac089f5d80-config-volume\") pod \"a2d51e2d-55fc-4dc0-8124-11ac089f5d80\" (UID: \"a2d51e2d-55fc-4dc0-8124-11ac089f5d80\") " Feb 17 22:30:03 crc kubenswrapper[4793]: I0217 22:30:03.208191 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a2d51e2d-55fc-4dc0-8124-11ac089f5d80-secret-volume\") pod \"a2d51e2d-55fc-4dc0-8124-11ac089f5d80\" (UID: \"a2d51e2d-55fc-4dc0-8124-11ac089f5d80\") " Feb 17 22:30:03 crc kubenswrapper[4793]: I0217 22:30:03.208824 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2d51e2d-55fc-4dc0-8124-11ac089f5d80-config-volume" (OuterVolumeSpecName: "config-volume") pod "a2d51e2d-55fc-4dc0-8124-11ac089f5d80" (UID: "a2d51e2d-55fc-4dc0-8124-11ac089f5d80"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 22:30:03 crc kubenswrapper[4793]: I0217 22:30:03.215361 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2d51e2d-55fc-4dc0-8124-11ac089f5d80-kube-api-access-fpgxq" (OuterVolumeSpecName: "kube-api-access-fpgxq") pod "a2d51e2d-55fc-4dc0-8124-11ac089f5d80" (UID: "a2d51e2d-55fc-4dc0-8124-11ac089f5d80"). 
InnerVolumeSpecName "kube-api-access-fpgxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 22:30:03 crc kubenswrapper[4793]: I0217 22:30:03.217843 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2d51e2d-55fc-4dc0-8124-11ac089f5d80-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a2d51e2d-55fc-4dc0-8124-11ac089f5d80" (UID: "a2d51e2d-55fc-4dc0-8124-11ac089f5d80"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 22:30:03 crc kubenswrapper[4793]: I0217 22:30:03.310551 4793 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a2d51e2d-55fc-4dc0-8124-11ac089f5d80-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 22:30:03 crc kubenswrapper[4793]: I0217 22:30:03.310588 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpgxq\" (UniqueName: \"kubernetes.io/projected/a2d51e2d-55fc-4dc0-8124-11ac089f5d80-kube-api-access-fpgxq\") on node \"crc\" DevicePath \"\"" Feb 17 22:30:03 crc kubenswrapper[4793]: I0217 22:30:03.310604 4793 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2d51e2d-55fc-4dc0-8124-11ac089f5d80-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 22:30:03 crc kubenswrapper[4793]: I0217 22:30:03.761239 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522790-kwtn2" event={"ID":"a2d51e2d-55fc-4dc0-8124-11ac089f5d80","Type":"ContainerDied","Data":"9ba7d31b02b745a6112407b3049e2fb5ece7b79a88ae8607393a4e5288870dfb"} Feb 17 22:30:03 crc kubenswrapper[4793]: I0217 22:30:03.761297 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ba7d31b02b745a6112407b3049e2fb5ece7b79a88ae8607393a4e5288870dfb" Feb 17 22:30:03 crc kubenswrapper[4793]: I0217 22:30:03.761365 4793 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522790-kwtn2" Feb 17 22:30:04 crc kubenswrapper[4793]: I0217 22:30:04.112190 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5jpz5"] Feb 17 22:30:04 crc kubenswrapper[4793]: E0217 22:30:04.115672 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2d51e2d-55fc-4dc0-8124-11ac089f5d80" containerName="collect-profiles" Feb 17 22:30:04 crc kubenswrapper[4793]: I0217 22:30:04.115713 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2d51e2d-55fc-4dc0-8124-11ac089f5d80" containerName="collect-profiles" Feb 17 22:30:04 crc kubenswrapper[4793]: I0217 22:30:04.116026 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2d51e2d-55fc-4dc0-8124-11ac089f5d80" containerName="collect-profiles" Feb 17 22:30:04 crc kubenswrapper[4793]: I0217 22:30:04.117739 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5jpz5" Feb 17 22:30:04 crc kubenswrapper[4793]: I0217 22:30:04.130470 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbfql\" (UniqueName: \"kubernetes.io/projected/53df06a1-3857-4139-a122-e6380e600fc7-kube-api-access-gbfql\") pod \"community-operators-5jpz5\" (UID: \"53df06a1-3857-4139-a122-e6380e600fc7\") " pod="openshift-marketplace/community-operators-5jpz5" Feb 17 22:30:04 crc kubenswrapper[4793]: I0217 22:30:04.130786 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53df06a1-3857-4139-a122-e6380e600fc7-utilities\") pod \"community-operators-5jpz5\" (UID: \"53df06a1-3857-4139-a122-e6380e600fc7\") " pod="openshift-marketplace/community-operators-5jpz5" Feb 17 22:30:04 crc kubenswrapper[4793]: I0217 22:30:04.130913 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53df06a1-3857-4139-a122-e6380e600fc7-catalog-content\") pod \"community-operators-5jpz5\" (UID: \"53df06a1-3857-4139-a122-e6380e600fc7\") " pod="openshift-marketplace/community-operators-5jpz5" Feb 17 22:30:04 crc kubenswrapper[4793]: I0217 22:30:04.147438 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5jpz5"] Feb 17 22:30:04 crc kubenswrapper[4793]: I0217 22:30:04.217171 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522745-scvkn"] Feb 17 22:30:04 crc kubenswrapper[4793]: I0217 22:30:04.224391 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522745-scvkn"] Feb 17 22:30:04 crc kubenswrapper[4793]: I0217 22:30:04.233394 4793 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-gbfql\" (UniqueName: \"kubernetes.io/projected/53df06a1-3857-4139-a122-e6380e600fc7-kube-api-access-gbfql\") pod \"community-operators-5jpz5\" (UID: \"53df06a1-3857-4139-a122-e6380e600fc7\") " pod="openshift-marketplace/community-operators-5jpz5" Feb 17 22:30:04 crc kubenswrapper[4793]: I0217 22:30:04.233885 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53df06a1-3857-4139-a122-e6380e600fc7-utilities\") pod \"community-operators-5jpz5\" (UID: \"53df06a1-3857-4139-a122-e6380e600fc7\") " pod="openshift-marketplace/community-operators-5jpz5" Feb 17 22:30:04 crc kubenswrapper[4793]: I0217 22:30:04.234058 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53df06a1-3857-4139-a122-e6380e600fc7-catalog-content\") pod \"community-operators-5jpz5\" (UID: \"53df06a1-3857-4139-a122-e6380e600fc7\") " pod="openshift-marketplace/community-operators-5jpz5" Feb 17 22:30:04 crc kubenswrapper[4793]: I0217 22:30:04.235035 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53df06a1-3857-4139-a122-e6380e600fc7-catalog-content\") pod \"community-operators-5jpz5\" (UID: \"53df06a1-3857-4139-a122-e6380e600fc7\") " pod="openshift-marketplace/community-operators-5jpz5" Feb 17 22:30:04 crc kubenswrapper[4793]: I0217 22:30:04.235762 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53df06a1-3857-4139-a122-e6380e600fc7-utilities\") pod \"community-operators-5jpz5\" (UID: \"53df06a1-3857-4139-a122-e6380e600fc7\") " pod="openshift-marketplace/community-operators-5jpz5" Feb 17 22:30:04 crc kubenswrapper[4793]: I0217 22:30:04.255449 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-gbfql\" (UniqueName: \"kubernetes.io/projected/53df06a1-3857-4139-a122-e6380e600fc7-kube-api-access-gbfql\") pod \"community-operators-5jpz5\" (UID: \"53df06a1-3857-4139-a122-e6380e600fc7\") " pod="openshift-marketplace/community-operators-5jpz5" Feb 17 22:30:04 crc kubenswrapper[4793]: I0217 22:30:04.451742 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5jpz5" Feb 17 22:30:04 crc kubenswrapper[4793]: I0217 22:30:04.990740 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5jpz5"] Feb 17 22:30:05 crc kubenswrapper[4793]: W0217 22:30:05.007001 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53df06a1_3857_4139_a122_e6380e600fc7.slice/crio-ea06db5027754c8a24d79b66180a3144107bed174c397788c875a82186f39086 WatchSource:0}: Error finding container ea06db5027754c8a24d79b66180a3144107bed174c397788c875a82186f39086: Status 404 returned error can't find the container with id ea06db5027754c8a24d79b66180a3144107bed174c397788c875a82186f39086 Feb 17 22:30:05 crc kubenswrapper[4793]: I0217 22:30:05.557772 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="729cca98-7dbc-4621-a0e0-5ffbf3a59ffc" path="/var/lib/kubelet/pods/729cca98-7dbc-4621-a0e0-5ffbf3a59ffc/volumes" Feb 17 22:30:05 crc kubenswrapper[4793]: I0217 22:30:05.778836 4793 generic.go:334] "Generic (PLEG): container finished" podID="53df06a1-3857-4139-a122-e6380e600fc7" containerID="070c8ad2b094a1a2f930211e8a2269134ef05b5e6624f249b81f08e9c673a021" exitCode=0 Feb 17 22:30:05 crc kubenswrapper[4793]: I0217 22:30:05.778887 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5jpz5" event={"ID":"53df06a1-3857-4139-a122-e6380e600fc7","Type":"ContainerDied","Data":"070c8ad2b094a1a2f930211e8a2269134ef05b5e6624f249b81f08e9c673a021"} 
Feb 17 22:30:05 crc kubenswrapper[4793]: I0217 22:30:05.778919 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5jpz5" event={"ID":"53df06a1-3857-4139-a122-e6380e600fc7","Type":"ContainerStarted","Data":"ea06db5027754c8a24d79b66180a3144107bed174c397788c875a82186f39086"} Feb 17 22:30:07 crc kubenswrapper[4793]: I0217 22:30:07.539708 4793 scope.go:117] "RemoveContainer" containerID="ebd203a3aacb1d24c26bf0360750132c759499b33eb9544d7559e8073681ee65" Feb 17 22:30:07 crc kubenswrapper[4793]: E0217 22:30:07.541214 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:30:07 crc kubenswrapper[4793]: I0217 22:30:07.804335 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5jpz5" event={"ID":"53df06a1-3857-4139-a122-e6380e600fc7","Type":"ContainerStarted","Data":"847e61064dc5da138cd943a8235c60fb99dcda3c6f775f0509699ea77e5e5f92"} Feb 17 22:30:08 crc kubenswrapper[4793]: I0217 22:30:08.539757 4793 scope.go:117] "RemoveContainer" containerID="51889aa5f0c36bc84e040f1560fcf3326a7169e6cdd8e68014582bda0b804063" Feb 17 22:30:08 crc kubenswrapper[4793]: E0217 22:30:08.540393 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:30:08 crc kubenswrapper[4793]: I0217 22:30:08.818991 4793 generic.go:334] 
"Generic (PLEG): container finished" podID="53df06a1-3857-4139-a122-e6380e600fc7" containerID="847e61064dc5da138cd943a8235c60fb99dcda3c6f775f0509699ea77e5e5f92" exitCode=0 Feb 17 22:30:08 crc kubenswrapper[4793]: I0217 22:30:08.819092 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5jpz5" event={"ID":"53df06a1-3857-4139-a122-e6380e600fc7","Type":"ContainerDied","Data":"847e61064dc5da138cd943a8235c60fb99dcda3c6f775f0509699ea77e5e5f92"} Feb 17 22:30:09 crc kubenswrapper[4793]: I0217 22:30:09.829795 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5jpz5" event={"ID":"53df06a1-3857-4139-a122-e6380e600fc7","Type":"ContainerStarted","Data":"b636e5f695cd51678c8402386e573e5484cb77237cea958f4fd3de1782543ca5"} Feb 17 22:30:14 crc kubenswrapper[4793]: I0217 22:30:14.452729 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5jpz5" Feb 17 22:30:14 crc kubenswrapper[4793]: I0217 22:30:14.453070 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5jpz5" Feb 17 22:30:14 crc kubenswrapper[4793]: I0217 22:30:14.517306 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5jpz5" Feb 17 22:30:14 crc kubenswrapper[4793]: I0217 22:30:14.537615 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5jpz5" podStartSLOduration=7.053251174 podStartE2EDuration="10.537600847s" podCreationTimestamp="2026-02-17 22:30:04 +0000 UTC" firstStartedPulling="2026-02-17 22:30:05.780767201 +0000 UTC m=+8481.072465512" lastFinishedPulling="2026-02-17 22:30:09.265116834 +0000 UTC m=+8484.556815185" observedRunningTime="2026-02-17 22:30:09.848486091 +0000 UTC m=+8485.140184402" watchObservedRunningTime="2026-02-17 22:30:14.537600847 +0000 
UTC m=+8489.829299158" Feb 17 22:30:14 crc kubenswrapper[4793]: I0217 22:30:14.926582 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5jpz5" Feb 17 22:30:14 crc kubenswrapper[4793]: I0217 22:30:14.976837 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5jpz5"] Feb 17 22:30:16 crc kubenswrapper[4793]: I0217 22:30:16.903388 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5jpz5" podUID="53df06a1-3857-4139-a122-e6380e600fc7" containerName="registry-server" containerID="cri-o://b636e5f695cd51678c8402386e573e5484cb77237cea958f4fd3de1782543ca5" gracePeriod=2 Feb 17 22:30:17 crc kubenswrapper[4793]: I0217 22:30:17.402344 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5jpz5" Feb 17 22:30:17 crc kubenswrapper[4793]: I0217 22:30:17.462955 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53df06a1-3857-4139-a122-e6380e600fc7-utilities\") pod \"53df06a1-3857-4139-a122-e6380e600fc7\" (UID: \"53df06a1-3857-4139-a122-e6380e600fc7\") " Feb 17 22:30:17 crc kubenswrapper[4793]: I0217 22:30:17.463177 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbfql\" (UniqueName: \"kubernetes.io/projected/53df06a1-3857-4139-a122-e6380e600fc7-kube-api-access-gbfql\") pod \"53df06a1-3857-4139-a122-e6380e600fc7\" (UID: \"53df06a1-3857-4139-a122-e6380e600fc7\") " Feb 17 22:30:17 crc kubenswrapper[4793]: I0217 22:30:17.463286 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53df06a1-3857-4139-a122-e6380e600fc7-catalog-content\") pod \"53df06a1-3857-4139-a122-e6380e600fc7\" (UID: 
\"53df06a1-3857-4139-a122-e6380e600fc7\") " Feb 17 22:30:17 crc kubenswrapper[4793]: I0217 22:30:17.465523 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53df06a1-3857-4139-a122-e6380e600fc7-utilities" (OuterVolumeSpecName: "utilities") pod "53df06a1-3857-4139-a122-e6380e600fc7" (UID: "53df06a1-3857-4139-a122-e6380e600fc7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 22:30:17 crc kubenswrapper[4793]: I0217 22:30:17.469984 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53df06a1-3857-4139-a122-e6380e600fc7-kube-api-access-gbfql" (OuterVolumeSpecName: "kube-api-access-gbfql") pod "53df06a1-3857-4139-a122-e6380e600fc7" (UID: "53df06a1-3857-4139-a122-e6380e600fc7"). InnerVolumeSpecName "kube-api-access-gbfql". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 22:30:17 crc kubenswrapper[4793]: I0217 22:30:17.540922 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53df06a1-3857-4139-a122-e6380e600fc7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "53df06a1-3857-4139-a122-e6380e600fc7" (UID: "53df06a1-3857-4139-a122-e6380e600fc7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 22:30:17 crc kubenswrapper[4793]: I0217 22:30:17.567367 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53df06a1-3857-4139-a122-e6380e600fc7-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 22:30:17 crc kubenswrapper[4793]: I0217 22:30:17.567398 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbfql\" (UniqueName: \"kubernetes.io/projected/53df06a1-3857-4139-a122-e6380e600fc7-kube-api-access-gbfql\") on node \"crc\" DevicePath \"\"" Feb 17 22:30:17 crc kubenswrapper[4793]: I0217 22:30:17.567409 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53df06a1-3857-4139-a122-e6380e600fc7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 22:30:17 crc kubenswrapper[4793]: I0217 22:30:17.917178 4793 generic.go:334] "Generic (PLEG): container finished" podID="53df06a1-3857-4139-a122-e6380e600fc7" containerID="b636e5f695cd51678c8402386e573e5484cb77237cea958f4fd3de1782543ca5" exitCode=0 Feb 17 22:30:17 crc kubenswrapper[4793]: I0217 22:30:17.917564 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5jpz5" event={"ID":"53df06a1-3857-4139-a122-e6380e600fc7","Type":"ContainerDied","Data":"b636e5f695cd51678c8402386e573e5484cb77237cea958f4fd3de1782543ca5"} Feb 17 22:30:17 crc kubenswrapper[4793]: I0217 22:30:17.917605 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5jpz5" event={"ID":"53df06a1-3857-4139-a122-e6380e600fc7","Type":"ContainerDied","Data":"ea06db5027754c8a24d79b66180a3144107bed174c397788c875a82186f39086"} Feb 17 22:30:17 crc kubenswrapper[4793]: I0217 22:30:17.917635 4793 scope.go:117] "RemoveContainer" containerID="b636e5f695cd51678c8402386e573e5484cb77237cea958f4fd3de1782543ca5" Feb 17 22:30:17 crc kubenswrapper[4793]: I0217 
22:30:17.917838 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5jpz5" Feb 17 22:30:17 crc kubenswrapper[4793]: I0217 22:30:17.947832 4793 scope.go:117] "RemoveContainer" containerID="847e61064dc5da138cd943a8235c60fb99dcda3c6f775f0509699ea77e5e5f92" Feb 17 22:30:17 crc kubenswrapper[4793]: I0217 22:30:17.950005 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5jpz5"] Feb 17 22:30:17 crc kubenswrapper[4793]: I0217 22:30:17.960999 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5jpz5"] Feb 17 22:30:17 crc kubenswrapper[4793]: I0217 22:30:17.996041 4793 scope.go:117] "RemoveContainer" containerID="070c8ad2b094a1a2f930211e8a2269134ef05b5e6624f249b81f08e9c673a021" Feb 17 22:30:18 crc kubenswrapper[4793]: I0217 22:30:18.037374 4793 scope.go:117] "RemoveContainer" containerID="b636e5f695cd51678c8402386e573e5484cb77237cea958f4fd3de1782543ca5" Feb 17 22:30:18 crc kubenswrapper[4793]: E0217 22:30:18.037926 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b636e5f695cd51678c8402386e573e5484cb77237cea958f4fd3de1782543ca5\": container with ID starting with b636e5f695cd51678c8402386e573e5484cb77237cea958f4fd3de1782543ca5 not found: ID does not exist" containerID="b636e5f695cd51678c8402386e573e5484cb77237cea958f4fd3de1782543ca5" Feb 17 22:30:18 crc kubenswrapper[4793]: I0217 22:30:18.037964 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b636e5f695cd51678c8402386e573e5484cb77237cea958f4fd3de1782543ca5"} err="failed to get container status \"b636e5f695cd51678c8402386e573e5484cb77237cea958f4fd3de1782543ca5\": rpc error: code = NotFound desc = could not find container \"b636e5f695cd51678c8402386e573e5484cb77237cea958f4fd3de1782543ca5\": container with ID starting with 
b636e5f695cd51678c8402386e573e5484cb77237cea958f4fd3de1782543ca5 not found: ID does not exist" Feb 17 22:30:18 crc kubenswrapper[4793]: I0217 22:30:18.037986 4793 scope.go:117] "RemoveContainer" containerID="847e61064dc5da138cd943a8235c60fb99dcda3c6f775f0509699ea77e5e5f92" Feb 17 22:30:18 crc kubenswrapper[4793]: E0217 22:30:18.038464 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"847e61064dc5da138cd943a8235c60fb99dcda3c6f775f0509699ea77e5e5f92\": container with ID starting with 847e61064dc5da138cd943a8235c60fb99dcda3c6f775f0509699ea77e5e5f92 not found: ID does not exist" containerID="847e61064dc5da138cd943a8235c60fb99dcda3c6f775f0509699ea77e5e5f92" Feb 17 22:30:18 crc kubenswrapper[4793]: I0217 22:30:18.038488 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"847e61064dc5da138cd943a8235c60fb99dcda3c6f775f0509699ea77e5e5f92"} err="failed to get container status \"847e61064dc5da138cd943a8235c60fb99dcda3c6f775f0509699ea77e5e5f92\": rpc error: code = NotFound desc = could not find container \"847e61064dc5da138cd943a8235c60fb99dcda3c6f775f0509699ea77e5e5f92\": container with ID starting with 847e61064dc5da138cd943a8235c60fb99dcda3c6f775f0509699ea77e5e5f92 not found: ID does not exist" Feb 17 22:30:18 crc kubenswrapper[4793]: I0217 22:30:18.038506 4793 scope.go:117] "RemoveContainer" containerID="070c8ad2b094a1a2f930211e8a2269134ef05b5e6624f249b81f08e9c673a021" Feb 17 22:30:18 crc kubenswrapper[4793]: E0217 22:30:18.038770 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"070c8ad2b094a1a2f930211e8a2269134ef05b5e6624f249b81f08e9c673a021\": container with ID starting with 070c8ad2b094a1a2f930211e8a2269134ef05b5e6624f249b81f08e9c673a021 not found: ID does not exist" containerID="070c8ad2b094a1a2f930211e8a2269134ef05b5e6624f249b81f08e9c673a021" Feb 17 22:30:18 crc 
kubenswrapper[4793]: I0217 22:30:18.038831 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"070c8ad2b094a1a2f930211e8a2269134ef05b5e6624f249b81f08e9c673a021"} err="failed to get container status \"070c8ad2b094a1a2f930211e8a2269134ef05b5e6624f249b81f08e9c673a021\": rpc error: code = NotFound desc = could not find container \"070c8ad2b094a1a2f930211e8a2269134ef05b5e6624f249b81f08e9c673a021\": container with ID starting with 070c8ad2b094a1a2f930211e8a2269134ef05b5e6624f249b81f08e9c673a021 not found: ID does not exist" Feb 17 22:30:19 crc kubenswrapper[4793]: I0217 22:30:19.563338 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53df06a1-3857-4139-a122-e6380e600fc7" path="/var/lib/kubelet/pods/53df06a1-3857-4139-a122-e6380e600fc7/volumes" Feb 17 22:30:22 crc kubenswrapper[4793]: I0217 22:30:22.538659 4793 scope.go:117] "RemoveContainer" containerID="51889aa5f0c36bc84e040f1560fcf3326a7169e6cdd8e68014582bda0b804063" Feb 17 22:30:22 crc kubenswrapper[4793]: I0217 22:30:22.539344 4793 scope.go:117] "RemoveContainer" containerID="ebd203a3aacb1d24c26bf0360750132c759499b33eb9544d7559e8073681ee65" Feb 17 22:30:22 crc kubenswrapper[4793]: E0217 22:30:22.539595 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:30:22 crc kubenswrapper[4793]: E0217 22:30:22.539660 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:30:27 crc kubenswrapper[4793]: I0217 22:30:27.063051 4793 scope.go:117] "RemoveContainer" containerID="8820ff16144d26a1ac6405a5f678667ac577a3f1476d70952dadb3929f1b3a10" Feb 17 22:30:33 crc kubenswrapper[4793]: I0217 22:30:33.539561 4793 scope.go:117] "RemoveContainer" containerID="ebd203a3aacb1d24c26bf0360750132c759499b33eb9544d7559e8073681ee65" Feb 17 22:30:33 crc kubenswrapper[4793]: E0217 22:30:33.540452 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:30:36 crc kubenswrapper[4793]: I0217 22:30:36.540134 4793 scope.go:117] "RemoveContainer" containerID="51889aa5f0c36bc84e040f1560fcf3326a7169e6cdd8e68014582bda0b804063" Feb 17 22:30:36 crc kubenswrapper[4793]: E0217 22:30:36.540985 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:30:46 crc kubenswrapper[4793]: I0217 22:30:46.539590 4793 scope.go:117] "RemoveContainer" containerID="ebd203a3aacb1d24c26bf0360750132c759499b33eb9544d7559e8073681ee65" Feb 17 22:30:46 crc kubenswrapper[4793]: E0217 22:30:46.541996 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:30:51 crc kubenswrapper[4793]: I0217 22:30:51.539806 4793 scope.go:117] "RemoveContainer" containerID="51889aa5f0c36bc84e040f1560fcf3326a7169e6cdd8e68014582bda0b804063" Feb 17 22:30:51 crc kubenswrapper[4793]: E0217 22:30:51.541060 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:30:59 crc kubenswrapper[4793]: I0217 22:30:59.539810 4793 scope.go:117] "RemoveContainer" containerID="ebd203a3aacb1d24c26bf0360750132c759499b33eb9544d7559e8073681ee65" Feb 17 22:30:59 crc kubenswrapper[4793]: E0217 22:30:59.541360 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:31:04 crc kubenswrapper[4793]: I0217 22:31:04.538911 4793 scope.go:117] "RemoveContainer" containerID="51889aa5f0c36bc84e040f1560fcf3326a7169e6cdd8e68014582bda0b804063" Feb 17 22:31:04 crc kubenswrapper[4793]: E0217 22:31:04.539924 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier 
pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:31:10 crc kubenswrapper[4793]: I0217 22:31:10.539765 4793 scope.go:117] "RemoveContainer" containerID="ebd203a3aacb1d24c26bf0360750132c759499b33eb9544d7559e8073681ee65" Feb 17 22:31:10 crc kubenswrapper[4793]: E0217 22:31:10.541124 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:31:19 crc kubenswrapper[4793]: I0217 22:31:19.539036 4793 scope.go:117] "RemoveContainer" containerID="51889aa5f0c36bc84e040f1560fcf3326a7169e6cdd8e68014582bda0b804063" Feb 17 22:31:19 crc kubenswrapper[4793]: E0217 22:31:19.539771 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:31:23 crc kubenswrapper[4793]: I0217 22:31:23.539981 4793 scope.go:117] "RemoveContainer" containerID="ebd203a3aacb1d24c26bf0360750132c759499b33eb9544d7559e8073681ee65" Feb 17 22:31:23 crc kubenswrapper[4793]: E0217 22:31:23.541267 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:31:30 crc kubenswrapper[4793]: I0217 22:31:30.541036 4793 scope.go:117] "RemoveContainer" containerID="51889aa5f0c36bc84e040f1560fcf3326a7169e6cdd8e68014582bda0b804063" Feb 17 22:31:30 crc kubenswrapper[4793]: E0217 22:31:30.542039 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:31:35 crc kubenswrapper[4793]: I0217 22:31:35.547142 4793 scope.go:117] "RemoveContainer" containerID="ebd203a3aacb1d24c26bf0360750132c759499b33eb9544d7559e8073681ee65" Feb 17 22:31:35 crc kubenswrapper[4793]: E0217 22:31:35.548291 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:31:43 crc kubenswrapper[4793]: I0217 22:31:43.539848 4793 scope.go:117] "RemoveContainer" containerID="51889aa5f0c36bc84e040f1560fcf3326a7169e6cdd8e68014582bda0b804063" Feb 17 22:31:43 crc kubenswrapper[4793]: E0217 22:31:43.540615 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:31:50 crc 
kubenswrapper[4793]: I0217 22:31:50.539868 4793 scope.go:117] "RemoveContainer" containerID="ebd203a3aacb1d24c26bf0360750132c759499b33eb9544d7559e8073681ee65" Feb 17 22:31:50 crc kubenswrapper[4793]: E0217 22:31:50.541207 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:31:54 crc kubenswrapper[4793]: I0217 22:31:54.539324 4793 scope.go:117] "RemoveContainer" containerID="51889aa5f0c36bc84e040f1560fcf3326a7169e6cdd8e68014582bda0b804063" Feb 17 22:31:54 crc kubenswrapper[4793]: E0217 22:31:54.540434 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:32:02 crc kubenswrapper[4793]: I0217 22:32:02.542393 4793 scope.go:117] "RemoveContainer" containerID="ebd203a3aacb1d24c26bf0360750132c759499b33eb9544d7559e8073681ee65" Feb 17 22:32:02 crc kubenswrapper[4793]: E0217 22:32:02.543768 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:32:05 crc kubenswrapper[4793]: I0217 22:32:05.544842 4793 scope.go:117] 
"RemoveContainer" containerID="51889aa5f0c36bc84e040f1560fcf3326a7169e6cdd8e68014582bda0b804063" Feb 17 22:32:05 crc kubenswrapper[4793]: E0217 22:32:05.545290 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:32:15 crc kubenswrapper[4793]: I0217 22:32:15.539588 4793 scope.go:117] "RemoveContainer" containerID="ebd203a3aacb1d24c26bf0360750132c759499b33eb9544d7559e8073681ee65" Feb 17 22:32:15 crc kubenswrapper[4793]: E0217 22:32:15.541139 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:32:20 crc kubenswrapper[4793]: I0217 22:32:20.538953 4793 scope.go:117] "RemoveContainer" containerID="51889aa5f0c36bc84e040f1560fcf3326a7169e6cdd8e68014582bda0b804063" Feb 17 22:32:21 crc kubenswrapper[4793]: I0217 22:32:21.487854 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerStarted","Data":"af958e6b67e728b1e46c3822aa11f37476214447fb35e8eef1a2a3e3f5dfbed0"} Feb 17 22:32:23 crc kubenswrapper[4793]: E0217 22:32:23.280128 4793 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02d26164_0fa4_4020_9224_b7760a490987.slice/crio-af958e6b67e728b1e46c3822aa11f37476214447fb35e8eef1a2a3e3f5dfbed0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02d26164_0fa4_4020_9224_b7760a490987.slice/crio-conmon-af958e6b67e728b1e46c3822aa11f37476214447fb35e8eef1a2a3e3f5dfbed0.scope\": RecentStats: unable to find data in memory cache]" Feb 17 22:32:23 crc kubenswrapper[4793]: I0217 22:32:23.513150 4793 generic.go:334] "Generic (PLEG): container finished" podID="02d26164-0fa4-4020-9224-b7760a490987" containerID="af958e6b67e728b1e46c3822aa11f37476214447fb35e8eef1a2a3e3f5dfbed0" exitCode=1 Feb 17 22:32:23 crc kubenswrapper[4793]: I0217 22:32:23.513200 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerDied","Data":"af958e6b67e728b1e46c3822aa11f37476214447fb35e8eef1a2a3e3f5dfbed0"} Feb 17 22:32:23 crc kubenswrapper[4793]: I0217 22:32:23.513239 4793 scope.go:117] "RemoveContainer" containerID="51889aa5f0c36bc84e040f1560fcf3326a7169e6cdd8e68014582bda0b804063" Feb 17 22:32:23 crc kubenswrapper[4793]: I0217 22:32:23.514573 4793 scope.go:117] "RemoveContainer" containerID="af958e6b67e728b1e46c3822aa11f37476214447fb35e8eef1a2a3e3f5dfbed0" Feb 17 22:32:23 crc kubenswrapper[4793]: E0217 22:32:23.515227 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:32:25 crc kubenswrapper[4793]: I0217 22:32:25.596561 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 
22:32:25 crc kubenswrapper[4793]: I0217 22:32:25.597445 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0"
Feb 17 22:32:25 crc kubenswrapper[4793]: I0217 22:32:25.597484 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0"
Feb 17 22:32:25 crc kubenswrapper[4793]: I0217 22:32:25.597515 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0"
Feb 17 22:32:25 crc kubenswrapper[4793]: I0217 22:32:25.598518 4793 scope.go:117] "RemoveContainer" containerID="af958e6b67e728b1e46c3822aa11f37476214447fb35e8eef1a2a3e3f5dfbed0"
Feb 17 22:32:25 crc kubenswrapper[4793]: E0217 22:32:25.598975 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:32:26 crc kubenswrapper[4793]: I0217 22:32:26.547422 4793 scope.go:117] "RemoveContainer" containerID="af958e6b67e728b1e46c3822aa11f37476214447fb35e8eef1a2a3e3f5dfbed0"
Feb 17 22:32:26 crc kubenswrapper[4793]: E0217 22:32:26.547901 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:32:29 crc kubenswrapper[4793]: I0217 22:32:29.539522 4793 scope.go:117] "RemoveContainer" containerID="ebd203a3aacb1d24c26bf0360750132c759499b33eb9544d7559e8073681ee65"
Feb 17 22:32:29 crc kubenswrapper[4793]: E0217 22:32:29.540299 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 22:32:41 crc kubenswrapper[4793]: I0217 22:32:41.540047 4793 scope.go:117] "RemoveContainer" containerID="ebd203a3aacb1d24c26bf0360750132c759499b33eb9544d7559e8073681ee65"
Feb 17 22:32:41 crc kubenswrapper[4793]: I0217 22:32:41.540874 4793 scope.go:117] "RemoveContainer" containerID="af958e6b67e728b1e46c3822aa11f37476214447fb35e8eef1a2a3e3f5dfbed0"
Feb 17 22:32:41 crc kubenswrapper[4793]: E0217 22:32:41.541151 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 22:32:41 crc kubenswrapper[4793]: E0217 22:32:41.541408 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:32:52 crc kubenswrapper[4793]: I0217 22:32:52.539956 4793 scope.go:117] "RemoveContainer" containerID="af958e6b67e728b1e46c3822aa11f37476214447fb35e8eef1a2a3e3f5dfbed0"
Feb 17 22:32:52 crc kubenswrapper[4793]: E0217 22:32:52.540738 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:32:56 crc kubenswrapper[4793]: I0217 22:32:56.540141 4793 scope.go:117] "RemoveContainer" containerID="ebd203a3aacb1d24c26bf0360750132c759499b33eb9544d7559e8073681ee65"
Feb 17 22:32:56 crc kubenswrapper[4793]: E0217 22:32:56.541562 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 22:33:03 crc kubenswrapper[4793]: I0217 22:33:03.539861 4793 scope.go:117] "RemoveContainer" containerID="af958e6b67e728b1e46c3822aa11f37476214447fb35e8eef1a2a3e3f5dfbed0"
Feb 17 22:33:03 crc kubenswrapper[4793]: E0217 22:33:03.540834 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:33:07 crc kubenswrapper[4793]: I0217 22:33:07.546471 4793 scope.go:117] "RemoveContainer" containerID="ebd203a3aacb1d24c26bf0360750132c759499b33eb9544d7559e8073681ee65"
Feb 17 22:33:07 crc kubenswrapper[4793]: E0217 22:33:07.547414 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 22:33:18 crc kubenswrapper[4793]: I0217 22:33:18.538928 4793 scope.go:117] "RemoveContainer" containerID="af958e6b67e728b1e46c3822aa11f37476214447fb35e8eef1a2a3e3f5dfbed0"
Feb 17 22:33:18 crc kubenswrapper[4793]: E0217 22:33:18.539745 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:33:19 crc kubenswrapper[4793]: I0217 22:33:19.539306 4793 scope.go:117] "RemoveContainer" containerID="ebd203a3aacb1d24c26bf0360750132c759499b33eb9544d7559e8073681ee65"
Feb 17 22:33:19 crc kubenswrapper[4793]: E0217 22:33:19.539965 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 22:33:31 crc kubenswrapper[4793]: I0217 22:33:31.539285 4793 scope.go:117] "RemoveContainer" containerID="af958e6b67e728b1e46c3822aa11f37476214447fb35e8eef1a2a3e3f5dfbed0"
Feb 17 22:33:31 crc kubenswrapper[4793]: E0217 22:33:31.539958 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:33:32 crc kubenswrapper[4793]: I0217 22:33:32.539320 4793 scope.go:117] "RemoveContainer" containerID="ebd203a3aacb1d24c26bf0360750132c759499b33eb9544d7559e8073681ee65"
Feb 17 22:33:32 crc kubenswrapper[4793]: E0217 22:33:32.540086 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 22:33:43 crc kubenswrapper[4793]: I0217 22:33:43.540016 4793 scope.go:117] "RemoveContainer" containerID="ebd203a3aacb1d24c26bf0360750132c759499b33eb9544d7559e8073681ee65"
Feb 17 22:33:43 crc kubenswrapper[4793]: E0217 22:33:43.541479 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 22:33:46 crc kubenswrapper[4793]: I0217 22:33:46.546332 4793 scope.go:117] "RemoveContainer" containerID="af958e6b67e728b1e46c3822aa11f37476214447fb35e8eef1a2a3e3f5dfbed0"
Feb 17 22:33:46 crc kubenswrapper[4793]: E0217 22:33:46.547237 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:33:54 crc kubenswrapper[4793]: I0217 22:33:54.539513 4793 scope.go:117] "RemoveContainer" containerID="ebd203a3aacb1d24c26bf0360750132c759499b33eb9544d7559e8073681ee65"
Feb 17 22:33:55 crc kubenswrapper[4793]: I0217 22:33:55.685189 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerStarted","Data":"ce726a8004526b4f7abac12c57d681f15f1df5245447e71e8edaf94aba7f88a7"}
Feb 17 22:33:57 crc kubenswrapper[4793]: I0217 22:33:57.542481 4793 scope.go:117] "RemoveContainer" containerID="af958e6b67e728b1e46c3822aa11f37476214447fb35e8eef1a2a3e3f5dfbed0"
Feb 17 22:33:57 crc kubenswrapper[4793]: E0217 22:33:57.543326 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:34:09 crc kubenswrapper[4793]: I0217 22:34:09.539092 4793 scope.go:117] "RemoveContainer" containerID="af958e6b67e728b1e46c3822aa11f37476214447fb35e8eef1a2a3e3f5dfbed0"
Feb 17 22:34:09 crc kubenswrapper[4793]: E0217 22:34:09.539847 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:34:22 crc kubenswrapper[4793]: I0217 22:34:22.538842 4793 scope.go:117] "RemoveContainer" containerID="af958e6b67e728b1e46c3822aa11f37476214447fb35e8eef1a2a3e3f5dfbed0"
Feb 17 22:34:22 crc kubenswrapper[4793]: E0217 22:34:22.539681 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:34:37 crc kubenswrapper[4793]: I0217 22:34:37.544401 4793 scope.go:117] "RemoveContainer" containerID="af958e6b67e728b1e46c3822aa11f37476214447fb35e8eef1a2a3e3f5dfbed0"
Feb 17 22:34:37 crc kubenswrapper[4793]: E0217 22:34:37.545167 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:34:42 crc kubenswrapper[4793]: I0217 22:34:42.334014 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-866cm"]
Feb 17 22:34:42 crc kubenswrapper[4793]: E0217 22:34:42.335037 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53df06a1-3857-4139-a122-e6380e600fc7" containerName="registry-server"
Feb 17 22:34:42 crc kubenswrapper[4793]: I0217 22:34:42.335051 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="53df06a1-3857-4139-a122-e6380e600fc7" containerName="registry-server"
Feb 17 22:34:42 crc kubenswrapper[4793]: E0217 22:34:42.335099 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53df06a1-3857-4139-a122-e6380e600fc7" containerName="extract-content"
Feb 17 22:34:42 crc kubenswrapper[4793]: I0217 22:34:42.335107 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="53df06a1-3857-4139-a122-e6380e600fc7" containerName="extract-content"
Feb 17 22:34:42 crc kubenswrapper[4793]: E0217 22:34:42.335130 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53df06a1-3857-4139-a122-e6380e600fc7" containerName="extract-utilities"
Feb 17 22:34:42 crc kubenswrapper[4793]: I0217 22:34:42.335140 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="53df06a1-3857-4139-a122-e6380e600fc7" containerName="extract-utilities"
Feb 17 22:34:42 crc kubenswrapper[4793]: I0217 22:34:42.335378 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="53df06a1-3857-4139-a122-e6380e600fc7" containerName="registry-server"
Feb 17 22:34:42 crc kubenswrapper[4793]: I0217 22:34:42.337146 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-866cm"
Feb 17 22:34:42 crc kubenswrapper[4793]: I0217 22:34:42.363298 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-866cm"]
Feb 17 22:34:42 crc kubenswrapper[4793]: I0217 22:34:42.462389 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd-catalog-content\") pod \"certified-operators-866cm\" (UID: \"7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd\") " pod="openshift-marketplace/certified-operators-866cm"
Feb 17 22:34:42 crc kubenswrapper[4793]: I0217 22:34:42.462517 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj9vh\" (UniqueName: \"kubernetes.io/projected/7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd-kube-api-access-bj9vh\") pod \"certified-operators-866cm\" (UID: \"7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd\") " pod="openshift-marketplace/certified-operators-866cm"
Feb 17 22:34:42 crc kubenswrapper[4793]: I0217 22:34:42.462648 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd-utilities\") pod \"certified-operators-866cm\" (UID: \"7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd\") " pod="openshift-marketplace/certified-operators-866cm"
Feb 17 22:34:42 crc kubenswrapper[4793]: I0217 22:34:42.565560 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj9vh\" (UniqueName: \"kubernetes.io/projected/7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd-kube-api-access-bj9vh\") pod \"certified-operators-866cm\" (UID: \"7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd\") " pod="openshift-marketplace/certified-operators-866cm"
Feb 17 22:34:42 crc kubenswrapper[4793]: I0217 22:34:42.565920 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd-utilities\") pod \"certified-operators-866cm\" (UID: \"7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd\") " pod="openshift-marketplace/certified-operators-866cm"
Feb 17 22:34:42 crc kubenswrapper[4793]: I0217 22:34:42.566057 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd-catalog-content\") pod \"certified-operators-866cm\" (UID: \"7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd\") " pod="openshift-marketplace/certified-operators-866cm"
Feb 17 22:34:42 crc kubenswrapper[4793]: I0217 22:34:42.566601 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd-catalog-content\") pod \"certified-operators-866cm\" (UID: \"7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd\") " pod="openshift-marketplace/certified-operators-866cm"
Feb 17 22:34:42 crc kubenswrapper[4793]: I0217 22:34:42.567269 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd-utilities\") pod \"certified-operators-866cm\" (UID: \"7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd\") " pod="openshift-marketplace/certified-operators-866cm"
Feb 17 22:34:42 crc kubenswrapper[4793]: I0217 22:34:42.588941 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj9vh\" (UniqueName: \"kubernetes.io/projected/7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd-kube-api-access-bj9vh\") pod \"certified-operators-866cm\" (UID: \"7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd\") " pod="openshift-marketplace/certified-operators-866cm"
Feb 17 22:34:42 crc kubenswrapper[4793]: I0217 22:34:42.712930 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-866cm"
Feb 17 22:34:43 crc kubenswrapper[4793]: I0217 22:34:43.312399 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-866cm"]
Feb 17 22:34:43 crc kubenswrapper[4793]: W0217 22:34:43.327476 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e6c9ff4_368c_4da6_aed8_b43f3a2cc8cd.slice/crio-cd9f7f76265ccf2d6eb236f08b40e9aa2e9160eb6d223097c9c819f693493f21 WatchSource:0}: Error finding container cd9f7f76265ccf2d6eb236f08b40e9aa2e9160eb6d223097c9c819f693493f21: Status 404 returned error can't find the container with id cd9f7f76265ccf2d6eb236f08b40e9aa2e9160eb6d223097c9c819f693493f21
Feb 17 22:34:44 crc kubenswrapper[4793]: I0217 22:34:44.228866 4793 generic.go:334] "Generic (PLEG): container finished" podID="7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd" containerID="48e4be71d65d61d18ad330ad4934966ec89a5d78a32f29f4512e450bf3e3d624" exitCode=0
Feb 17 22:34:44 crc kubenswrapper[4793]: I0217 22:34:44.228918 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-866cm" event={"ID":"7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd","Type":"ContainerDied","Data":"48e4be71d65d61d18ad330ad4934966ec89a5d78a32f29f4512e450bf3e3d624"}
Feb 17 22:34:44 crc kubenswrapper[4793]: I0217 22:34:44.230417 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-866cm" event={"ID":"7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd","Type":"ContainerStarted","Data":"cd9f7f76265ccf2d6eb236f08b40e9aa2e9160eb6d223097c9c819f693493f21"}
Feb 17 22:34:44 crc kubenswrapper[4793]: I0217 22:34:44.231269 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 17 22:34:45 crc kubenswrapper[4793]: I0217 22:34:45.246108 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-866cm" event={"ID":"7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd","Type":"ContainerStarted","Data":"95e90d17f38e2f557aeedb5b714c8f7b8e1aaa6dee31c9dac626b5bbf4de0085"}
Feb 17 22:34:46 crc kubenswrapper[4793]: I0217 22:34:46.263125 4793 generic.go:334] "Generic (PLEG): container finished" podID="7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd" containerID="95e90d17f38e2f557aeedb5b714c8f7b8e1aaa6dee31c9dac626b5bbf4de0085" exitCode=0
Feb 17 22:34:46 crc kubenswrapper[4793]: I0217 22:34:46.263232 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-866cm" event={"ID":"7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd","Type":"ContainerDied","Data":"95e90d17f38e2f557aeedb5b714c8f7b8e1aaa6dee31c9dac626b5bbf4de0085"}
Feb 17 22:34:47 crc kubenswrapper[4793]: I0217 22:34:47.280114 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-866cm" event={"ID":"7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd","Type":"ContainerStarted","Data":"2f1e119f909d1c6da79fee4f928a26bb23cb864be15bccfad8a843c039a86173"}
Feb 17 22:34:47 crc kubenswrapper[4793]: I0217 22:34:47.320680 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-866cm" podStartSLOduration=2.793624905 podStartE2EDuration="5.320645766s" podCreationTimestamp="2026-02-17 22:34:42 +0000 UTC" firstStartedPulling="2026-02-17 22:34:44.230928355 +0000 UTC m=+8759.522626666" lastFinishedPulling="2026-02-17 22:34:46.757949176 +0000 UTC m=+8762.049647527" observedRunningTime="2026-02-17 22:34:47.299292719 +0000 UTC m=+8762.590991070" watchObservedRunningTime="2026-02-17 22:34:47.320645766 +0000 UTC m=+8762.612344117"
Feb 17 22:34:50 crc kubenswrapper[4793]: I0217 22:34:50.539329 4793 scope.go:117] "RemoveContainer" containerID="af958e6b67e728b1e46c3822aa11f37476214447fb35e8eef1a2a3e3f5dfbed0"
Feb 17 22:34:50 crc kubenswrapper[4793]: E0217 22:34:50.540472 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:34:52 crc kubenswrapper[4793]: I0217 22:34:52.714257 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-866cm"
Feb 17 22:34:52 crc kubenswrapper[4793]: I0217 22:34:52.714845 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-866cm"
Feb 17 22:34:52 crc kubenswrapper[4793]: I0217 22:34:52.785788 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-866cm"
Feb 17 22:34:53 crc kubenswrapper[4793]: I0217 22:34:53.403152 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-866cm"
Feb 17 22:34:53 crc kubenswrapper[4793]: I0217 22:34:53.458140 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-866cm"]
Feb 17 22:34:55 crc kubenswrapper[4793]: I0217 22:34:55.367402 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-866cm" podUID="7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd" containerName="registry-server" containerID="cri-o://2f1e119f909d1c6da79fee4f928a26bb23cb864be15bccfad8a843c039a86173" gracePeriod=2
Feb 17 22:34:56 crc kubenswrapper[4793]: I0217 22:34:56.315884 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-866cm"
Feb 17 22:34:56 crc kubenswrapper[4793]: I0217 22:34:56.376827 4793 generic.go:334] "Generic (PLEG): container finished" podID="7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd" containerID="2f1e119f909d1c6da79fee4f928a26bb23cb864be15bccfad8a843c039a86173" exitCode=0
Feb 17 22:34:56 crc kubenswrapper[4793]: I0217 22:34:56.376866 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-866cm" event={"ID":"7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd","Type":"ContainerDied","Data":"2f1e119f909d1c6da79fee4f928a26bb23cb864be15bccfad8a843c039a86173"}
Feb 17 22:34:56 crc kubenswrapper[4793]: I0217 22:34:56.376890 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-866cm" event={"ID":"7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd","Type":"ContainerDied","Data":"cd9f7f76265ccf2d6eb236f08b40e9aa2e9160eb6d223097c9c819f693493f21"}
Feb 17 22:34:56 crc kubenswrapper[4793]: I0217 22:34:56.376907 4793 scope.go:117] "RemoveContainer" containerID="2f1e119f909d1c6da79fee4f928a26bb23cb864be15bccfad8a843c039a86173"
Feb 17 22:34:56 crc kubenswrapper[4793]: I0217 22:34:56.377022 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-866cm"
Feb 17 22:34:56 crc kubenswrapper[4793]: I0217 22:34:56.407891 4793 scope.go:117] "RemoveContainer" containerID="95e90d17f38e2f557aeedb5b714c8f7b8e1aaa6dee31c9dac626b5bbf4de0085"
Feb 17 22:34:56 crc kubenswrapper[4793]: I0217 22:34:56.432280 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd-catalog-content\") pod \"7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd\" (UID: \"7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd\") "
Feb 17 22:34:56 crc kubenswrapper[4793]: I0217 22:34:56.432365 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd-utilities\") pod \"7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd\" (UID: \"7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd\") "
Feb 17 22:34:56 crc kubenswrapper[4793]: I0217 22:34:56.432392 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bj9vh\" (UniqueName: \"kubernetes.io/projected/7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd-kube-api-access-bj9vh\") pod \"7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd\" (UID: \"7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd\") "
Feb 17 22:34:56 crc kubenswrapper[4793]: I0217 22:34:56.439361 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd-utilities" (OuterVolumeSpecName: "utilities") pod "7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd" (UID: "7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 22:34:56 crc kubenswrapper[4793]: I0217 22:34:56.442204 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd-kube-api-access-bj9vh" (OuterVolumeSpecName: "kube-api-access-bj9vh") pod "7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd" (UID: "7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd"). InnerVolumeSpecName "kube-api-access-bj9vh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 22:34:56 crc kubenswrapper[4793]: I0217 22:34:56.447358 4793 scope.go:117] "RemoveContainer" containerID="48e4be71d65d61d18ad330ad4934966ec89a5d78a32f29f4512e450bf3e3d624"
Feb 17 22:34:56 crc kubenswrapper[4793]: I0217 22:34:56.502445 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd" (UID: "7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 22:34:56 crc kubenswrapper[4793]: I0217 22:34:56.527937 4793 scope.go:117] "RemoveContainer" containerID="2f1e119f909d1c6da79fee4f928a26bb23cb864be15bccfad8a843c039a86173"
Feb 17 22:34:56 crc kubenswrapper[4793]: E0217 22:34:56.528449 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f1e119f909d1c6da79fee4f928a26bb23cb864be15bccfad8a843c039a86173\": container with ID starting with 2f1e119f909d1c6da79fee4f928a26bb23cb864be15bccfad8a843c039a86173 not found: ID does not exist" containerID="2f1e119f909d1c6da79fee4f928a26bb23cb864be15bccfad8a843c039a86173"
Feb 17 22:34:56 crc kubenswrapper[4793]: I0217 22:34:56.528513 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f1e119f909d1c6da79fee4f928a26bb23cb864be15bccfad8a843c039a86173"} err="failed to get container status \"2f1e119f909d1c6da79fee4f928a26bb23cb864be15bccfad8a843c039a86173\": rpc error: code = NotFound desc = could not find container \"2f1e119f909d1c6da79fee4f928a26bb23cb864be15bccfad8a843c039a86173\": container with ID starting with 2f1e119f909d1c6da79fee4f928a26bb23cb864be15bccfad8a843c039a86173 not found: ID does not exist"
Feb 17 22:34:56 crc kubenswrapper[4793]: I0217 22:34:56.528553 4793 scope.go:117] "RemoveContainer" containerID="95e90d17f38e2f557aeedb5b714c8f7b8e1aaa6dee31c9dac626b5bbf4de0085"
Feb 17 22:34:56 crc kubenswrapper[4793]: E0217 22:34:56.528942 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95e90d17f38e2f557aeedb5b714c8f7b8e1aaa6dee31c9dac626b5bbf4de0085\": container with ID starting with 95e90d17f38e2f557aeedb5b714c8f7b8e1aaa6dee31c9dac626b5bbf4de0085 not found: ID does not exist" containerID="95e90d17f38e2f557aeedb5b714c8f7b8e1aaa6dee31c9dac626b5bbf4de0085"
Feb 17 22:34:56 crc kubenswrapper[4793]: I0217 22:34:56.528972 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95e90d17f38e2f557aeedb5b714c8f7b8e1aaa6dee31c9dac626b5bbf4de0085"} err="failed to get container status \"95e90d17f38e2f557aeedb5b714c8f7b8e1aaa6dee31c9dac626b5bbf4de0085\": rpc error: code = NotFound desc = could not find container \"95e90d17f38e2f557aeedb5b714c8f7b8e1aaa6dee31c9dac626b5bbf4de0085\": container with ID starting with 95e90d17f38e2f557aeedb5b714c8f7b8e1aaa6dee31c9dac626b5bbf4de0085 not found: ID does not exist"
Feb 17 22:34:56 crc kubenswrapper[4793]: I0217 22:34:56.528996 4793 scope.go:117] "RemoveContainer" containerID="48e4be71d65d61d18ad330ad4934966ec89a5d78a32f29f4512e450bf3e3d624"
Feb 17 22:34:56 crc kubenswrapper[4793]: E0217 22:34:56.529287 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48e4be71d65d61d18ad330ad4934966ec89a5d78a32f29f4512e450bf3e3d624\": container with ID starting with 48e4be71d65d61d18ad330ad4934966ec89a5d78a32f29f4512e450bf3e3d624 not found: ID does not exist" containerID="48e4be71d65d61d18ad330ad4934966ec89a5d78a32f29f4512e450bf3e3d624"
Feb 17 22:34:56 crc kubenswrapper[4793]: I0217 22:34:56.529311 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48e4be71d65d61d18ad330ad4934966ec89a5d78a32f29f4512e450bf3e3d624"} err="failed to get container status \"48e4be71d65d61d18ad330ad4934966ec89a5d78a32f29f4512e450bf3e3d624\": rpc error: code = NotFound desc = could not find container \"48e4be71d65d61d18ad330ad4934966ec89a5d78a32f29f4512e450bf3e3d624\": container with ID starting with 48e4be71d65d61d18ad330ad4934966ec89a5d78a32f29f4512e450bf3e3d624 not found: ID does not exist"
Feb 17 22:34:56 crc kubenswrapper[4793]: I0217 22:34:56.534591 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 22:34:56 crc kubenswrapper[4793]: I0217 22:34:56.534612 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 22:34:56 crc kubenswrapper[4793]: I0217 22:34:56.534622 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bj9vh\" (UniqueName: \"kubernetes.io/projected/7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd-kube-api-access-bj9vh\") on node \"crc\" DevicePath \"\""
Feb 17 22:34:56 crc kubenswrapper[4793]: I0217 22:34:56.716920 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-866cm"]
Feb 17 22:34:56 crc kubenswrapper[4793]: I0217 22:34:56.726203 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-866cm"]
Feb 17 22:34:57 crc kubenswrapper[4793]: I0217 22:34:57.571362 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd" path="/var/lib/kubelet/pods/7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd/volumes"
Feb 17 22:35:04 crc kubenswrapper[4793]: I0217 22:35:04.539216 4793 scope.go:117] "RemoveContainer" containerID="af958e6b67e728b1e46c3822aa11f37476214447fb35e8eef1a2a3e3f5dfbed0"
Feb 17 22:35:04 crc kubenswrapper[4793]: E0217 22:35:04.539977 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:35:18 crc kubenswrapper[4793]: I0217 22:35:18.539396 4793 scope.go:117] "RemoveContainer" containerID="af958e6b67e728b1e46c3822aa11f37476214447fb35e8eef1a2a3e3f5dfbed0"
Feb 17 22:35:18 crc kubenswrapper[4793]: E0217 22:35:18.540490 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:35:33 crc kubenswrapper[4793]: I0217 22:35:33.539896 4793 scope.go:117] "RemoveContainer" containerID="af958e6b67e728b1e46c3822aa11f37476214447fb35e8eef1a2a3e3f5dfbed0"
Feb 17 22:35:33 crc kubenswrapper[4793]: E0217 22:35:33.540995 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:35:44 crc kubenswrapper[4793]: I0217 22:35:44.539455 4793 scope.go:117] "RemoveContainer" containerID="af958e6b67e728b1e46c3822aa11f37476214447fb35e8eef1a2a3e3f5dfbed0"
Feb 17 22:35:44 crc kubenswrapper[4793]: E0217 22:35:44.540250 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:35:55 crc kubenswrapper[4793]: I0217 22:35:55.549321 4793 scope.go:117] "RemoveContainer" containerID="af958e6b67e728b1e46c3822aa11f37476214447fb35e8eef1a2a3e3f5dfbed0"
Feb 17 22:35:55 crc kubenswrapper[4793]: E0217 22:35:55.550388 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:36:09 crc kubenswrapper[4793]: I0217 22:36:09.539537 4793 scope.go:117] "RemoveContainer" containerID="af958e6b67e728b1e46c3822aa11f37476214447fb35e8eef1a2a3e3f5dfbed0"
Feb 17 22:36:09 crc kubenswrapper[4793]: E0217 22:36:09.540608 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:36:20 crc kubenswrapper[4793]: I0217 22:36:20.101498 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 22:36:20 crc kubenswrapper[4793]: I0217 22:36:20.102193 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 22:36:23 crc kubenswrapper[4793]: I0217 22:36:23.539675 4793 scope.go:117] "RemoveContainer" containerID="af958e6b67e728b1e46c3822aa11f37476214447fb35e8eef1a2a3e3f5dfbed0"
Feb 17 22:36:23 crc kubenswrapper[4793]: E0217 22:36:23.540498 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:36:38 crc kubenswrapper[4793]: I0217 22:36:38.538438 4793 scope.go:117] "RemoveContainer" containerID="af958e6b67e728b1e46c3822aa11f37476214447fb35e8eef1a2a3e3f5dfbed0"
Feb 17 22:36:38 crc kubenswrapper[4793]: E0217 22:36:38.539484 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:36:50 crc kubenswrapper[4793]: I0217 22:36:50.102249 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 22:36:50 crc kubenswrapper[4793]: I0217 22:36:50.103072 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 22:36:51 crc kubenswrapper[4793]: I0217 22:36:51.539565 4793 scope.go:117] "RemoveContainer" containerID="af958e6b67e728b1e46c3822aa11f37476214447fb35e8eef1a2a3e3f5dfbed0"
Feb 17 22:36:51 crc kubenswrapper[4793]: E0217 22:36:51.540399 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier
pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:37:06 crc kubenswrapper[4793]: I0217 22:37:06.539181 4793 scope.go:117] "RemoveContainer" containerID="af958e6b67e728b1e46c3822aa11f37476214447fb35e8eef1a2a3e3f5dfbed0" Feb 17 22:37:06 crc kubenswrapper[4793]: E0217 22:37:06.540179 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:37:19 crc kubenswrapper[4793]: I0217 22:37:19.542715 4793 scope.go:117] "RemoveContainer" containerID="af958e6b67e728b1e46c3822aa11f37476214447fb35e8eef1a2a3e3f5dfbed0" Feb 17 22:37:19 crc kubenswrapper[4793]: E0217 22:37:19.543504 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:37:20 crc kubenswrapper[4793]: I0217 22:37:20.101652 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 22:37:20 crc kubenswrapper[4793]: I0217 22:37:20.101735 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 22:37:20 crc kubenswrapper[4793]: I0217 22:37:20.101782 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" Feb 17 22:37:20 crc kubenswrapper[4793]: I0217 22:37:20.102633 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ce726a8004526b4f7abac12c57d681f15f1df5245447e71e8edaf94aba7f88a7"} pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 22:37:20 crc kubenswrapper[4793]: I0217 22:37:20.102731 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" containerID="cri-o://ce726a8004526b4f7abac12c57d681f15f1df5245447e71e8edaf94aba7f88a7" gracePeriod=600 Feb 17 22:37:21 crc kubenswrapper[4793]: I0217 22:37:21.082758 4793 generic.go:334] "Generic (PLEG): container finished" podID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerID="ce726a8004526b4f7abac12c57d681f15f1df5245447e71e8edaf94aba7f88a7" exitCode=0 Feb 17 22:37:21 crc kubenswrapper[4793]: I0217 22:37:21.082845 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerDied","Data":"ce726a8004526b4f7abac12c57d681f15f1df5245447e71e8edaf94aba7f88a7"} Feb 17 22:37:21 crc kubenswrapper[4793]: I0217 22:37:21.083288 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" 
event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerStarted","Data":"0c74a9423aa40029070fceeebe682dd9638e72ccd82a62797ae836d88e125688"} Feb 17 22:37:21 crc kubenswrapper[4793]: I0217 22:37:21.083314 4793 scope.go:117] "RemoveContainer" containerID="ebd203a3aacb1d24c26bf0360750132c759499b33eb9544d7559e8073681ee65" Feb 17 22:37:31 crc kubenswrapper[4793]: I0217 22:37:31.546239 4793 scope.go:117] "RemoveContainer" containerID="af958e6b67e728b1e46c3822aa11f37476214447fb35e8eef1a2a3e3f5dfbed0" Feb 17 22:37:32 crc kubenswrapper[4793]: I0217 22:37:32.200335 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerStarted","Data":"d7c71f900202075a85e0222cfb488ef3ec38d5f62c66c52486088404df8de48e"} Feb 17 22:37:34 crc kubenswrapper[4793]: I0217 22:37:34.229093 4793 generic.go:334] "Generic (PLEG): container finished" podID="02d26164-0fa4-4020-9224-b7760a490987" containerID="d7c71f900202075a85e0222cfb488ef3ec38d5f62c66c52486088404df8de48e" exitCode=1 Feb 17 22:37:34 crc kubenswrapper[4793]: I0217 22:37:34.229184 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerDied","Data":"d7c71f900202075a85e0222cfb488ef3ec38d5f62c66c52486088404df8de48e"} Feb 17 22:37:34 crc kubenswrapper[4793]: I0217 22:37:34.229422 4793 scope.go:117] "RemoveContainer" containerID="af958e6b67e728b1e46c3822aa11f37476214447fb35e8eef1a2a3e3f5dfbed0" Feb 17 22:37:34 crc kubenswrapper[4793]: I0217 22:37:34.231174 4793 scope.go:117] "RemoveContainer" containerID="d7c71f900202075a85e0222cfb488ef3ec38d5f62c66c52486088404df8de48e" Feb 17 22:37:34 crc kubenswrapper[4793]: E0217 22:37:34.231758 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier 
pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:37:35 crc kubenswrapper[4793]: I0217 22:37:35.596767 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 22:37:35 crc kubenswrapper[4793]: I0217 22:37:35.597197 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 22:37:35 crc kubenswrapper[4793]: I0217 22:37:35.597214 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 17 22:37:35 crc kubenswrapper[4793]: I0217 22:37:35.597232 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 22:37:35 crc kubenswrapper[4793]: I0217 22:37:35.598090 4793 scope.go:117] "RemoveContainer" containerID="d7c71f900202075a85e0222cfb488ef3ec38d5f62c66c52486088404df8de48e" Feb 17 22:37:35 crc kubenswrapper[4793]: E0217 22:37:35.598372 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:37:47 crc kubenswrapper[4793]: I0217 22:37:47.538953 4793 scope.go:117] "RemoveContainer" containerID="d7c71f900202075a85e0222cfb488ef3ec38d5f62c66c52486088404df8de48e" Feb 17 22:37:47 crc kubenswrapper[4793]: E0217 22:37:47.539749 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" 
podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:37:59 crc kubenswrapper[4793]: I0217 22:37:59.539821 4793 scope.go:117] "RemoveContainer" containerID="d7c71f900202075a85e0222cfb488ef3ec38d5f62c66c52486088404df8de48e" Feb 17 22:37:59 crc kubenswrapper[4793]: E0217 22:37:59.540941 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:38:13 crc kubenswrapper[4793]: I0217 22:38:13.539681 4793 scope.go:117] "RemoveContainer" containerID="d7c71f900202075a85e0222cfb488ef3ec38d5f62c66c52486088404df8de48e" Feb 17 22:38:13 crc kubenswrapper[4793]: E0217 22:38:13.540395 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:38:28 crc kubenswrapper[4793]: I0217 22:38:28.539828 4793 scope.go:117] "RemoveContainer" containerID="d7c71f900202075a85e0222cfb488ef3ec38d5f62c66c52486088404df8de48e" Feb 17 22:38:28 crc kubenswrapper[4793]: E0217 22:38:28.541013 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:38:42 crc kubenswrapper[4793]: I0217 22:38:42.538340 4793 scope.go:117] "RemoveContainer" 
containerID="d7c71f900202075a85e0222cfb488ef3ec38d5f62c66c52486088404df8de48e" Feb 17 22:38:42 crc kubenswrapper[4793]: E0217 22:38:42.539109 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:38:55 crc kubenswrapper[4793]: I0217 22:38:55.547128 4793 scope.go:117] "RemoveContainer" containerID="d7c71f900202075a85e0222cfb488ef3ec38d5f62c66c52486088404df8de48e" Feb 17 22:38:55 crc kubenswrapper[4793]: E0217 22:38:55.548064 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:39:08 crc kubenswrapper[4793]: I0217 22:39:08.539107 4793 scope.go:117] "RemoveContainer" containerID="d7c71f900202075a85e0222cfb488ef3ec38d5f62c66c52486088404df8de48e" Feb 17 22:39:08 crc kubenswrapper[4793]: E0217 22:39:08.540397 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:39:20 crc kubenswrapper[4793]: I0217 22:39:20.102356 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Feb 17 22:39:20 crc kubenswrapper[4793]: I0217 22:39:20.103002 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 22:39:23 crc kubenswrapper[4793]: I0217 22:39:23.539724 4793 scope.go:117] "RemoveContainer" containerID="d7c71f900202075a85e0222cfb488ef3ec38d5f62c66c52486088404df8de48e" Feb 17 22:39:23 crc kubenswrapper[4793]: E0217 22:39:23.540255 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:39:34 crc kubenswrapper[4793]: I0217 22:39:34.539264 4793 scope.go:117] "RemoveContainer" containerID="d7c71f900202075a85e0222cfb488ef3ec38d5f62c66c52486088404df8de48e" Feb 17 22:39:34 crc kubenswrapper[4793]: E0217 22:39:34.540445 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:39:45 crc kubenswrapper[4793]: I0217 22:39:45.547229 4793 scope.go:117] "RemoveContainer" containerID="d7c71f900202075a85e0222cfb488ef3ec38d5f62c66c52486088404df8de48e" Feb 17 22:39:45 crc kubenswrapper[4793]: E0217 22:39:45.548056 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:39:50 crc kubenswrapper[4793]: I0217 22:39:50.102494 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 22:39:50 crc kubenswrapper[4793]: I0217 22:39:50.103585 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 22:39:58 crc kubenswrapper[4793]: I0217 22:39:58.610342 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4zbf4"] Feb 17 22:39:58 crc kubenswrapper[4793]: E0217 22:39:58.612298 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd" containerName="extract-content" Feb 17 22:39:58 crc kubenswrapper[4793]: I0217 22:39:58.612332 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd" containerName="extract-content" Feb 17 22:39:58 crc kubenswrapper[4793]: E0217 22:39:58.612371 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd" containerName="registry-server" Feb 17 22:39:58 crc kubenswrapper[4793]: I0217 22:39:58.612391 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd" containerName="registry-server" Feb 17 22:39:58 crc kubenswrapper[4793]: E0217 22:39:58.612434 4793 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd" containerName="extract-utilities" Feb 17 22:39:58 crc kubenswrapper[4793]: I0217 22:39:58.612452 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd" containerName="extract-utilities" Feb 17 22:39:58 crc kubenswrapper[4793]: I0217 22:39:58.612857 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e6c9ff4-368c-4da6-aed8-b43f3a2cc8cd" containerName="registry-server" Feb 17 22:39:58 crc kubenswrapper[4793]: I0217 22:39:58.616390 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4zbf4" Feb 17 22:39:58 crc kubenswrapper[4793]: I0217 22:39:58.637855 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4zbf4"] Feb 17 22:39:58 crc kubenswrapper[4793]: I0217 22:39:58.716538 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9-catalog-content\") pod \"redhat-marketplace-4zbf4\" (UID: \"c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9\") " pod="openshift-marketplace/redhat-marketplace-4zbf4" Feb 17 22:39:58 crc kubenswrapper[4793]: I0217 22:39:58.716585 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9-utilities\") pod \"redhat-marketplace-4zbf4\" (UID: \"c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9\") " pod="openshift-marketplace/redhat-marketplace-4zbf4" Feb 17 22:39:58 crc kubenswrapper[4793]: I0217 22:39:58.716636 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhps4\" (UniqueName: 
\"kubernetes.io/projected/c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9-kube-api-access-hhps4\") pod \"redhat-marketplace-4zbf4\" (UID: \"c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9\") " pod="openshift-marketplace/redhat-marketplace-4zbf4" Feb 17 22:39:58 crc kubenswrapper[4793]: I0217 22:39:58.819225 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9-catalog-content\") pod \"redhat-marketplace-4zbf4\" (UID: \"c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9\") " pod="openshift-marketplace/redhat-marketplace-4zbf4" Feb 17 22:39:58 crc kubenswrapper[4793]: I0217 22:39:58.819282 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9-utilities\") pod \"redhat-marketplace-4zbf4\" (UID: \"c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9\") " pod="openshift-marketplace/redhat-marketplace-4zbf4" Feb 17 22:39:58 crc kubenswrapper[4793]: I0217 22:39:58.819320 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhps4\" (UniqueName: \"kubernetes.io/projected/c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9-kube-api-access-hhps4\") pod \"redhat-marketplace-4zbf4\" (UID: \"c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9\") " pod="openshift-marketplace/redhat-marketplace-4zbf4" Feb 17 22:39:58 crc kubenswrapper[4793]: I0217 22:39:58.819836 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9-catalog-content\") pod \"redhat-marketplace-4zbf4\" (UID: \"c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9\") " pod="openshift-marketplace/redhat-marketplace-4zbf4" Feb 17 22:39:58 crc kubenswrapper[4793]: I0217 22:39:58.819929 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9-utilities\") pod \"redhat-marketplace-4zbf4\" (UID: \"c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9\") " pod="openshift-marketplace/redhat-marketplace-4zbf4" Feb 17 22:39:58 crc kubenswrapper[4793]: I0217 22:39:58.851927 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhps4\" (UniqueName: \"kubernetes.io/projected/c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9-kube-api-access-hhps4\") pod \"redhat-marketplace-4zbf4\" (UID: \"c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9\") " pod="openshift-marketplace/redhat-marketplace-4zbf4" Feb 17 22:39:58 crc kubenswrapper[4793]: I0217 22:39:58.945457 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4zbf4" Feb 17 22:39:59 crc kubenswrapper[4793]: I0217 22:39:59.443078 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4zbf4"] Feb 17 22:40:00 crc kubenswrapper[4793]: I0217 22:40:00.096594 4793 generic.go:334] "Generic (PLEG): container finished" podID="c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9" containerID="974bc4b0fc122d2dcd9a06778e39fe5eeb2d31ea7437f796dcca15f659be96ae" exitCode=0 Feb 17 22:40:00 crc kubenswrapper[4793]: I0217 22:40:00.096732 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4zbf4" event={"ID":"c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9","Type":"ContainerDied","Data":"974bc4b0fc122d2dcd9a06778e39fe5eeb2d31ea7437f796dcca15f659be96ae"} Feb 17 22:40:00 crc kubenswrapper[4793]: I0217 22:40:00.097364 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4zbf4" event={"ID":"c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9","Type":"ContainerStarted","Data":"bc62a1f82b14c5ac230ff8fca734eb0677a2d18ef8ddbedf8e945952fc84966c"} Feb 17 22:40:00 crc kubenswrapper[4793]: I0217 22:40:00.117774 4793 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Feb 17 22:40:00 crc kubenswrapper[4793]: I0217 22:40:00.539372 4793 scope.go:117] "RemoveContainer" containerID="d7c71f900202075a85e0222cfb488ef3ec38d5f62c66c52486088404df8de48e" Feb 17 22:40:00 crc kubenswrapper[4793]: E0217 22:40:00.539870 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:40:01 crc kubenswrapper[4793]: I0217 22:40:01.111801 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4zbf4" event={"ID":"c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9","Type":"ContainerStarted","Data":"3b47b6ba8d991add3e002cd33aa9308242315df7439412cb694171b36ffe273f"} Feb 17 22:40:02 crc kubenswrapper[4793]: I0217 22:40:02.140988 4793 generic.go:334] "Generic (PLEG): container finished" podID="c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9" containerID="3b47b6ba8d991add3e002cd33aa9308242315df7439412cb694171b36ffe273f" exitCode=0 Feb 17 22:40:02 crc kubenswrapper[4793]: I0217 22:40:02.141030 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4zbf4" event={"ID":"c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9","Type":"ContainerDied","Data":"3b47b6ba8d991add3e002cd33aa9308242315df7439412cb694171b36ffe273f"} Feb 17 22:40:03 crc kubenswrapper[4793]: I0217 22:40:03.156641 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4zbf4" event={"ID":"c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9","Type":"ContainerStarted","Data":"d6111ed6d0a087a9ec1a1b9760341afc9f97bed7afdb53d85ca9289948a5f8c2"} Feb 17 22:40:03 crc kubenswrapper[4793]: I0217 22:40:03.193136 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-4zbf4" podStartSLOduration=2.724951689 podStartE2EDuration="5.193104478s" podCreationTimestamp="2026-02-17 22:39:58 +0000 UTC" firstStartedPulling="2026-02-17 22:40:00.101392826 +0000 UTC m=+9075.393091187" lastFinishedPulling="2026-02-17 22:40:02.569545645 +0000 UTC m=+9077.861243976" observedRunningTime="2026-02-17 22:40:03.181275376 +0000 UTC m=+9078.472973687" watchObservedRunningTime="2026-02-17 22:40:03.193104478 +0000 UTC m=+9078.484802829" Feb 17 22:40:08 crc kubenswrapper[4793]: I0217 22:40:08.946352 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4zbf4" Feb 17 22:40:08 crc kubenswrapper[4793]: I0217 22:40:08.947206 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4zbf4" Feb 17 22:40:09 crc kubenswrapper[4793]: I0217 22:40:09.006632 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4zbf4" Feb 17 22:40:09 crc kubenswrapper[4793]: I0217 22:40:09.313907 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4zbf4" Feb 17 22:40:09 crc kubenswrapper[4793]: I0217 22:40:09.384917 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4zbf4"] Feb 17 22:40:11 crc kubenswrapper[4793]: I0217 22:40:11.269392 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4zbf4" podUID="c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9" containerName="registry-server" containerID="cri-o://d6111ed6d0a087a9ec1a1b9760341afc9f97bed7afdb53d85ca9289948a5f8c2" gracePeriod=2 Feb 17 22:40:11 crc kubenswrapper[4793]: I0217 22:40:11.912934 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4zbf4"
Feb 17 22:40:12 crc kubenswrapper[4793]: I0217 22:40:12.031096 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9-utilities\") pod \"c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9\" (UID: \"c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9\") "
Feb 17 22:40:12 crc kubenswrapper[4793]: I0217 22:40:12.031729 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9-catalog-content\") pod \"c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9\" (UID: \"c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9\") "
Feb 17 22:40:12 crc kubenswrapper[4793]: I0217 22:40:12.031915 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhps4\" (UniqueName: \"kubernetes.io/projected/c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9-kube-api-access-hhps4\") pod \"c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9\" (UID: \"c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9\") "
Feb 17 22:40:12 crc kubenswrapper[4793]: I0217 22:40:12.032623 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9-utilities" (OuterVolumeSpecName: "utilities") pod "c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9" (UID: "c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 22:40:12 crc kubenswrapper[4793]: I0217 22:40:12.032869 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 22:40:12 crc kubenswrapper[4793]: I0217 22:40:12.045245 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9-kube-api-access-hhps4" (OuterVolumeSpecName: "kube-api-access-hhps4") pod "c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9" (UID: "c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9"). InnerVolumeSpecName "kube-api-access-hhps4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 22:40:12 crc kubenswrapper[4793]: I0217 22:40:12.074044 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9" (UID: "c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 22:40:12 crc kubenswrapper[4793]: I0217 22:40:12.135631 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 22:40:12 crc kubenswrapper[4793]: I0217 22:40:12.135776 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhps4\" (UniqueName: \"kubernetes.io/projected/c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9-kube-api-access-hhps4\") on node \"crc\" DevicePath \"\""
Feb 17 22:40:12 crc kubenswrapper[4793]: I0217 22:40:12.284597 4793 generic.go:334] "Generic (PLEG): container finished" podID="c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9" containerID="d6111ed6d0a087a9ec1a1b9760341afc9f97bed7afdb53d85ca9289948a5f8c2" exitCode=0
Feb 17 22:40:12 crc kubenswrapper[4793]: I0217 22:40:12.284716 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4zbf4" event={"ID":"c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9","Type":"ContainerDied","Data":"d6111ed6d0a087a9ec1a1b9760341afc9f97bed7afdb53d85ca9289948a5f8c2"}
Feb 17 22:40:12 crc kubenswrapper[4793]: I0217 22:40:12.284796 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4zbf4" event={"ID":"c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9","Type":"ContainerDied","Data":"bc62a1f82b14c5ac230ff8fca734eb0677a2d18ef8ddbedf8e945952fc84966c"}
Feb 17 22:40:12 crc kubenswrapper[4793]: I0217 22:40:12.284836 4793 scope.go:117] "RemoveContainer" containerID="d6111ed6d0a087a9ec1a1b9760341afc9f97bed7afdb53d85ca9289948a5f8c2"
Feb 17 22:40:12 crc kubenswrapper[4793]: I0217 22:40:12.284653 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4zbf4"
Feb 17 22:40:12 crc kubenswrapper[4793]: I0217 22:40:12.341194 4793 scope.go:117] "RemoveContainer" containerID="3b47b6ba8d991add3e002cd33aa9308242315df7439412cb694171b36ffe273f"
Feb 17 22:40:12 crc kubenswrapper[4793]: I0217 22:40:12.370033 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4zbf4"]
Feb 17 22:40:12 crc kubenswrapper[4793]: I0217 22:40:12.383905 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4zbf4"]
Feb 17 22:40:12 crc kubenswrapper[4793]: I0217 22:40:12.402779 4793 scope.go:117] "RemoveContainer" containerID="974bc4b0fc122d2dcd9a06778e39fe5eeb2d31ea7437f796dcca15f659be96ae"
Feb 17 22:40:12 crc kubenswrapper[4793]: I0217 22:40:12.448047 4793 scope.go:117] "RemoveContainer" containerID="d6111ed6d0a087a9ec1a1b9760341afc9f97bed7afdb53d85ca9289948a5f8c2"
Feb 17 22:40:12 crc kubenswrapper[4793]: E0217 22:40:12.448748 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6111ed6d0a087a9ec1a1b9760341afc9f97bed7afdb53d85ca9289948a5f8c2\": container with ID starting with d6111ed6d0a087a9ec1a1b9760341afc9f97bed7afdb53d85ca9289948a5f8c2 not found: ID does not exist" containerID="d6111ed6d0a087a9ec1a1b9760341afc9f97bed7afdb53d85ca9289948a5f8c2"
Feb 17 22:40:12 crc kubenswrapper[4793]: I0217 22:40:12.448807 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6111ed6d0a087a9ec1a1b9760341afc9f97bed7afdb53d85ca9289948a5f8c2"} err="failed to get container status \"d6111ed6d0a087a9ec1a1b9760341afc9f97bed7afdb53d85ca9289948a5f8c2\": rpc error: code = NotFound desc = could not find container \"d6111ed6d0a087a9ec1a1b9760341afc9f97bed7afdb53d85ca9289948a5f8c2\": container with ID starting with d6111ed6d0a087a9ec1a1b9760341afc9f97bed7afdb53d85ca9289948a5f8c2 not found: ID does not exist"
Feb 17 22:40:12 crc kubenswrapper[4793]: I0217 22:40:12.448843 4793 scope.go:117] "RemoveContainer" containerID="3b47b6ba8d991add3e002cd33aa9308242315df7439412cb694171b36ffe273f"
Feb 17 22:40:12 crc kubenswrapper[4793]: E0217 22:40:12.449362 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b47b6ba8d991add3e002cd33aa9308242315df7439412cb694171b36ffe273f\": container with ID starting with 3b47b6ba8d991add3e002cd33aa9308242315df7439412cb694171b36ffe273f not found: ID does not exist" containerID="3b47b6ba8d991add3e002cd33aa9308242315df7439412cb694171b36ffe273f"
Feb 17 22:40:12 crc kubenswrapper[4793]: I0217 22:40:12.449405 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b47b6ba8d991add3e002cd33aa9308242315df7439412cb694171b36ffe273f"} err="failed to get container status \"3b47b6ba8d991add3e002cd33aa9308242315df7439412cb694171b36ffe273f\": rpc error: code = NotFound desc = could not find container \"3b47b6ba8d991add3e002cd33aa9308242315df7439412cb694171b36ffe273f\": container with ID starting with 3b47b6ba8d991add3e002cd33aa9308242315df7439412cb694171b36ffe273f not found: ID does not exist"
Feb 17 22:40:12 crc kubenswrapper[4793]: I0217 22:40:12.449458 4793 scope.go:117] "RemoveContainer" containerID="974bc4b0fc122d2dcd9a06778e39fe5eeb2d31ea7437f796dcca15f659be96ae"
Feb 17 22:40:12 crc kubenswrapper[4793]: E0217 22:40:12.450252 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"974bc4b0fc122d2dcd9a06778e39fe5eeb2d31ea7437f796dcca15f659be96ae\": container with ID starting with 974bc4b0fc122d2dcd9a06778e39fe5eeb2d31ea7437f796dcca15f659be96ae not found: ID does not exist" containerID="974bc4b0fc122d2dcd9a06778e39fe5eeb2d31ea7437f796dcca15f659be96ae"
Feb 17 22:40:12 crc kubenswrapper[4793]: I0217 22:40:12.450316 4793 pod_container_deletor.go:53]
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"974bc4b0fc122d2dcd9a06778e39fe5eeb2d31ea7437f796dcca15f659be96ae"} err="failed to get container status \"974bc4b0fc122d2dcd9a06778e39fe5eeb2d31ea7437f796dcca15f659be96ae\": rpc error: code = NotFound desc = could not find container \"974bc4b0fc122d2dcd9a06778e39fe5eeb2d31ea7437f796dcca15f659be96ae\": container with ID starting with 974bc4b0fc122d2dcd9a06778e39fe5eeb2d31ea7437f796dcca15f659be96ae not found: ID does not exist"
Feb 17 22:40:13 crc kubenswrapper[4793]: I0217 22:40:13.539883 4793 scope.go:117] "RemoveContainer" containerID="d7c71f900202075a85e0222cfb488ef3ec38d5f62c66c52486088404df8de48e"
Feb 17 22:40:13 crc kubenswrapper[4793]: E0217 22:40:13.540645 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:40:13 crc kubenswrapper[4793]: I0217 22:40:13.560775 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9" path="/var/lib/kubelet/pods/c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9/volumes"
Feb 17 22:40:20 crc kubenswrapper[4793]: I0217 22:40:20.102771 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 22:40:20 crc kubenswrapper[4793]: I0217 22:40:20.103578 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 22:40:20 crc kubenswrapper[4793]: I0217 22:40:20.103650 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf"
Feb 17 22:40:20 crc kubenswrapper[4793]: I0217 22:40:20.104953 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0c74a9423aa40029070fceeebe682dd9638e72ccd82a62797ae836d88e125688"} pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 17 22:40:20 crc kubenswrapper[4793]: I0217 22:40:20.105069 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" containerID="cri-o://0c74a9423aa40029070fceeebe682dd9638e72ccd82a62797ae836d88e125688" gracePeriod=600
Feb 17 22:40:20 crc kubenswrapper[4793]: E0217 22:40:20.240770 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 22:40:20 crc kubenswrapper[4793]: I0217 22:40:20.413371 4793 generic.go:334] "Generic (PLEG): container finished" podID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerID="0c74a9423aa40029070fceeebe682dd9638e72ccd82a62797ae836d88e125688" exitCode=0
Feb 17 22:40:20 crc kubenswrapper[4793]: I0217 22:40:20.413441 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerDied","Data":"0c74a9423aa40029070fceeebe682dd9638e72ccd82a62797ae836d88e125688"}
Feb 17 22:40:20 crc kubenswrapper[4793]: I0217 22:40:20.413500 4793 scope.go:117] "RemoveContainer" containerID="ce726a8004526b4f7abac12c57d681f15f1df5245447e71e8edaf94aba7f88a7"
Feb 17 22:40:20 crc kubenswrapper[4793]: I0217 22:40:20.414535 4793 scope.go:117] "RemoveContainer" containerID="0c74a9423aa40029070fceeebe682dd9638e72ccd82a62797ae836d88e125688"
Feb 17 22:40:20 crc kubenswrapper[4793]: E0217 22:40:20.415671 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 22:40:28 crc kubenswrapper[4793]: I0217 22:40:28.539770 4793 scope.go:117] "RemoveContainer" containerID="d7c71f900202075a85e0222cfb488ef3ec38d5f62c66c52486088404df8de48e"
Feb 17 22:40:28 crc kubenswrapper[4793]: E0217 22:40:28.542860 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:40:31 crc kubenswrapper[4793]: I0217 22:40:31.538832 4793 scope.go:117] "RemoveContainer" containerID="0c74a9423aa40029070fceeebe682dd9638e72ccd82a62797ae836d88e125688"
Feb 17 22:40:31 crc kubenswrapper[4793]: E0217 22:40:31.540361 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 22:40:33 crc kubenswrapper[4793]: I0217 22:40:33.180338 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hwf52"]
Feb 17 22:40:33 crc kubenswrapper[4793]: E0217 22:40:33.181846 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9" containerName="extract-utilities"
Feb 17 22:40:33 crc kubenswrapper[4793]: I0217 22:40:33.181888 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9" containerName="extract-utilities"
Feb 17 22:40:33 crc kubenswrapper[4793]: E0217 22:40:33.181953 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9" containerName="extract-content"
Feb 17 22:40:33 crc kubenswrapper[4793]: I0217 22:40:33.181971 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9" containerName="extract-content"
Feb 17 22:40:33 crc kubenswrapper[4793]: E0217 22:40:33.181993 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9" containerName="registry-server"
Feb 17 22:40:33 crc kubenswrapper[4793]: I0217 22:40:33.182011 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9" containerName="registry-server"
Feb 17 22:40:33 crc kubenswrapper[4793]: I0217 22:40:33.182581 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8b8d52f-b62f-4157-9a8e-68bfd8ea46c9" containerName="registry-server"
Feb 17 22:40:33 crc kubenswrapper[4793]: I0217 22:40:33.184949 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hwf52"
Feb 17 22:40:33 crc kubenswrapper[4793]: I0217 22:40:33.209533 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hwf52"]
Feb 17 22:40:33 crc kubenswrapper[4793]: I0217 22:40:33.299758 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5c789ed-a003-41ee-a02e-0df5202113aa-catalog-content\") pod \"community-operators-hwf52\" (UID: \"e5c789ed-a003-41ee-a02e-0df5202113aa\") " pod="openshift-marketplace/community-operators-hwf52"
Feb 17 22:40:33 crc kubenswrapper[4793]: I0217 22:40:33.299882 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mlxk\" (UniqueName: \"kubernetes.io/projected/e5c789ed-a003-41ee-a02e-0df5202113aa-kube-api-access-6mlxk\") pod \"community-operators-hwf52\" (UID: \"e5c789ed-a003-41ee-a02e-0df5202113aa\") " pod="openshift-marketplace/community-operators-hwf52"
Feb 17 22:40:33 crc kubenswrapper[4793]: I0217 22:40:33.299961 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5c789ed-a003-41ee-a02e-0df5202113aa-utilities\") pod \"community-operators-hwf52\" (UID: \"e5c789ed-a003-41ee-a02e-0df5202113aa\") " pod="openshift-marketplace/community-operators-hwf52"
Feb 17 22:40:33 crc kubenswrapper[4793]: I0217 22:40:33.401928 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5c789ed-a003-41ee-a02e-0df5202113aa-catalog-content\") pod \"community-operators-hwf52\" (UID: \"e5c789ed-a003-41ee-a02e-0df5202113aa\") " pod="openshift-marketplace/community-operators-hwf52"
Feb 17 22:40:33 crc kubenswrapper[4793]: I0217 22:40:33.402043 4793
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mlxk\" (UniqueName: \"kubernetes.io/projected/e5c789ed-a003-41ee-a02e-0df5202113aa-kube-api-access-6mlxk\") pod \"community-operators-hwf52\" (UID: \"e5c789ed-a003-41ee-a02e-0df5202113aa\") " pod="openshift-marketplace/community-operators-hwf52"
Feb 17 22:40:33 crc kubenswrapper[4793]: I0217 22:40:33.402124 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5c789ed-a003-41ee-a02e-0df5202113aa-utilities\") pod \"community-operators-hwf52\" (UID: \"e5c789ed-a003-41ee-a02e-0df5202113aa\") " pod="openshift-marketplace/community-operators-hwf52"
Feb 17 22:40:33 crc kubenswrapper[4793]: I0217 22:40:33.402851 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5c789ed-a003-41ee-a02e-0df5202113aa-utilities\") pod \"community-operators-hwf52\" (UID: \"e5c789ed-a003-41ee-a02e-0df5202113aa\") " pod="openshift-marketplace/community-operators-hwf52"
Feb 17 22:40:33 crc kubenswrapper[4793]: I0217 22:40:33.402857 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5c789ed-a003-41ee-a02e-0df5202113aa-catalog-content\") pod \"community-operators-hwf52\" (UID: \"e5c789ed-a003-41ee-a02e-0df5202113aa\") " pod="openshift-marketplace/community-operators-hwf52"
Feb 17 22:40:33 crc kubenswrapper[4793]: I0217 22:40:33.428402 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mlxk\" (UniqueName: \"kubernetes.io/projected/e5c789ed-a003-41ee-a02e-0df5202113aa-kube-api-access-6mlxk\") pod \"community-operators-hwf52\" (UID: \"e5c789ed-a003-41ee-a02e-0df5202113aa\") " pod="openshift-marketplace/community-operators-hwf52"
Feb 17 22:40:33 crc kubenswrapper[4793]: I0217 22:40:33.522744 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hwf52"
Feb 17 22:40:34 crc kubenswrapper[4793]: I0217 22:40:34.047359 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hwf52"]
Feb 17 22:40:34 crc kubenswrapper[4793]: I0217 22:40:34.631672 4793 generic.go:334] "Generic (PLEG): container finished" podID="e5c789ed-a003-41ee-a02e-0df5202113aa" containerID="92fec39c6758d54e77792dad580ad434cfcb9f804934a2dd7e437d2bf37c9e54" exitCode=0
Feb 17 22:40:34 crc kubenswrapper[4793]: I0217 22:40:34.631807 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hwf52" event={"ID":"e5c789ed-a003-41ee-a02e-0df5202113aa","Type":"ContainerDied","Data":"92fec39c6758d54e77792dad580ad434cfcb9f804934a2dd7e437d2bf37c9e54"}
Feb 17 22:40:34 crc kubenswrapper[4793]: I0217 22:40:34.634848 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hwf52" event={"ID":"e5c789ed-a003-41ee-a02e-0df5202113aa","Type":"ContainerStarted","Data":"732fbde793280b3baa1cbc8dd11e71b63889bec3d293787ff96d0bce0d468c73"}
Feb 17 22:40:35 crc kubenswrapper[4793]: I0217 22:40:35.646314 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hwf52" event={"ID":"e5c789ed-a003-41ee-a02e-0df5202113aa","Type":"ContainerStarted","Data":"c77955648b163fda7a8e043db0d608df751d4df381885e2f98bac36dc187da81"}
Feb 17 22:40:37 crc kubenswrapper[4793]: I0217 22:40:37.672979 4793 generic.go:334] "Generic (PLEG): container finished" podID="e5c789ed-a003-41ee-a02e-0df5202113aa" containerID="c77955648b163fda7a8e043db0d608df751d4df381885e2f98bac36dc187da81" exitCode=0
Feb 17 22:40:37 crc kubenswrapper[4793]: I0217 22:40:37.673085 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hwf52" event={"ID":"e5c789ed-a003-41ee-a02e-0df5202113aa","Type":"ContainerDied","Data":"c77955648b163fda7a8e043db0d608df751d4df381885e2f98bac36dc187da81"}
Feb 17 22:40:38 crc kubenswrapper[4793]: I0217 22:40:38.690814 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hwf52" event={"ID":"e5c789ed-a003-41ee-a02e-0df5202113aa","Type":"ContainerStarted","Data":"12088c017a295827ffd8bbf6012c6d5e5b5e0a0e9cc16504be8be14d4bce30b0"}
Feb 17 22:40:38 crc kubenswrapper[4793]: I0217 22:40:38.732869 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hwf52" podStartSLOduration=2.289646909 podStartE2EDuration="5.732844661s" podCreationTimestamp="2026-02-17 22:40:33 +0000 UTC" firstStartedPulling="2026-02-17 22:40:34.633828109 +0000 UTC m=+9109.925526440" lastFinishedPulling="2026-02-17 22:40:38.077025871 +0000 UTC m=+9113.368724192" observedRunningTime="2026-02-17 22:40:38.719325828 +0000 UTC m=+9114.011024179" watchObservedRunningTime="2026-02-17 22:40:38.732844661 +0000 UTC m=+9114.024542982"
Feb 17 22:40:39 crc kubenswrapper[4793]: I0217 22:40:39.938853 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wkmtx"]
Feb 17 22:40:39 crc kubenswrapper[4793]: I0217 22:40:39.942282 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wkmtx"
Feb 17 22:40:39 crc kubenswrapper[4793]: I0217 22:40:39.964851 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wkmtx"]
Feb 17 22:40:40 crc kubenswrapper[4793]: I0217 22:40:40.064148 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sbmf\" (UniqueName: \"kubernetes.io/projected/925014bc-f4bf-4589-92de-cdd9abb18bd0-kube-api-access-8sbmf\") pod \"redhat-operators-wkmtx\" (UID: \"925014bc-f4bf-4589-92de-cdd9abb18bd0\") " pod="openshift-marketplace/redhat-operators-wkmtx"
Feb 17 22:40:40 crc kubenswrapper[4793]: I0217 22:40:40.064519 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/925014bc-f4bf-4589-92de-cdd9abb18bd0-catalog-content\") pod \"redhat-operators-wkmtx\" (UID: \"925014bc-f4bf-4589-92de-cdd9abb18bd0\") " pod="openshift-marketplace/redhat-operators-wkmtx"
Feb 17 22:40:40 crc kubenswrapper[4793]: I0217 22:40:40.064554 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/925014bc-f4bf-4589-92de-cdd9abb18bd0-utilities\") pod \"redhat-operators-wkmtx\" (UID: \"925014bc-f4bf-4589-92de-cdd9abb18bd0\") " pod="openshift-marketplace/redhat-operators-wkmtx"
Feb 17 22:40:40 crc kubenswrapper[4793]: I0217 22:40:40.166469 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sbmf\" (UniqueName: \"kubernetes.io/projected/925014bc-f4bf-4589-92de-cdd9abb18bd0-kube-api-access-8sbmf\") pod \"redhat-operators-wkmtx\" (UID: \"925014bc-f4bf-4589-92de-cdd9abb18bd0\") " pod="openshift-marketplace/redhat-operators-wkmtx"
Feb 17 22:40:40 crc kubenswrapper[4793]: I0217 22:40:40.166535 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/925014bc-f4bf-4589-92de-cdd9abb18bd0-catalog-content\") pod \"redhat-operators-wkmtx\" (UID: \"925014bc-f4bf-4589-92de-cdd9abb18bd0\") " pod="openshift-marketplace/redhat-operators-wkmtx"
Feb 17 22:40:40 crc kubenswrapper[4793]: I0217 22:40:40.166586 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/925014bc-f4bf-4589-92de-cdd9abb18bd0-utilities\") pod \"redhat-operators-wkmtx\" (UID: \"925014bc-f4bf-4589-92de-cdd9abb18bd0\") " pod="openshift-marketplace/redhat-operators-wkmtx"
Feb 17 22:40:40 crc kubenswrapper[4793]: I0217 22:40:40.167110 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/925014bc-f4bf-4589-92de-cdd9abb18bd0-catalog-content\") pod \"redhat-operators-wkmtx\" (UID: \"925014bc-f4bf-4589-92de-cdd9abb18bd0\") " pod="openshift-marketplace/redhat-operators-wkmtx"
Feb 17 22:40:40 crc kubenswrapper[4793]: I0217 22:40:40.167418 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/925014bc-f4bf-4589-92de-cdd9abb18bd0-utilities\") pod \"redhat-operators-wkmtx\" (UID: \"925014bc-f4bf-4589-92de-cdd9abb18bd0\") " pod="openshift-marketplace/redhat-operators-wkmtx"
Feb 17 22:40:40 crc kubenswrapper[4793]: I0217 22:40:40.197389 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sbmf\" (UniqueName: \"kubernetes.io/projected/925014bc-f4bf-4589-92de-cdd9abb18bd0-kube-api-access-8sbmf\") pod \"redhat-operators-wkmtx\" (UID: \"925014bc-f4bf-4589-92de-cdd9abb18bd0\") " pod="openshift-marketplace/redhat-operators-wkmtx"
Feb 17 22:40:40 crc kubenswrapper[4793]: I0217 22:40:40.272299 4793 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-wkmtx"
Feb 17 22:40:40 crc kubenswrapper[4793]: I0217 22:40:40.802104 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wkmtx"]
Feb 17 22:40:41 crc kubenswrapper[4793]: I0217 22:40:41.722315 4793 generic.go:334] "Generic (PLEG): container finished" podID="925014bc-f4bf-4589-92de-cdd9abb18bd0" containerID="21b34a4e2fdda820ec1ae67f1030d947e1cb01865f5d43832a1ac22930b500f9" exitCode=0
Feb 17 22:40:41 crc kubenswrapper[4793]: I0217 22:40:41.722423 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wkmtx" event={"ID":"925014bc-f4bf-4589-92de-cdd9abb18bd0","Type":"ContainerDied","Data":"21b34a4e2fdda820ec1ae67f1030d947e1cb01865f5d43832a1ac22930b500f9"}
Feb 17 22:40:41 crc kubenswrapper[4793]: I0217 22:40:41.722670 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wkmtx" event={"ID":"925014bc-f4bf-4589-92de-cdd9abb18bd0","Type":"ContainerStarted","Data":"cbf565d5d66a4b23e78cc293cde519f931481b7f295621944a22232d94e98418"}
Feb 17 22:40:42 crc kubenswrapper[4793]: I0217 22:40:42.735796 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wkmtx" event={"ID":"925014bc-f4bf-4589-92de-cdd9abb18bd0","Type":"ContainerStarted","Data":"bc70712906cffdfdd6bd1abb1bebbeceaff3496d7af26613a302e3a670a6fa49"}
Feb 17 22:40:43 crc kubenswrapper[4793]: I0217 22:40:43.523825 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hwf52"
Feb 17 22:40:43 crc kubenswrapper[4793]: I0217 22:40:43.524274 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hwf52"
Feb 17 22:40:43 crc kubenswrapper[4793]: I0217 22:40:43.539307 4793 scope.go:117] "RemoveContainer" containerID="d7c71f900202075a85e0222cfb488ef3ec38d5f62c66c52486088404df8de48e"
Feb 17 22:40:43 crc kubenswrapper[4793]: I0217 22:40:43.539554 4793 scope.go:117] "RemoveContainer" containerID="0c74a9423aa40029070fceeebe682dd9638e72ccd82a62797ae836d88e125688"
Feb 17 22:40:43 crc kubenswrapper[4793]: E0217 22:40:43.539681 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:40:43 crc kubenswrapper[4793]: E0217 22:40:43.540233 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 22:40:43 crc kubenswrapper[4793]: I0217 22:40:43.601436 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hwf52"
Feb 17 22:40:43 crc kubenswrapper[4793]: I0217 22:40:43.839637 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hwf52"
Feb 17 22:40:45 crc kubenswrapper[4793]: I0217 22:40:45.138322 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hwf52"]
Feb 17 22:40:45 crc kubenswrapper[4793]: I0217 22:40:45.775402 4793 generic.go:334] "Generic (PLEG): container finished" podID="925014bc-f4bf-4589-92de-cdd9abb18bd0" containerID="bc70712906cffdfdd6bd1abb1bebbeceaff3496d7af26613a302e3a670a6fa49" exitCode=0
Feb 17 22:40:45 crc kubenswrapper[4793]: I0217 22:40:45.775504 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wkmtx" event={"ID":"925014bc-f4bf-4589-92de-cdd9abb18bd0","Type":"ContainerDied","Data":"bc70712906cffdfdd6bd1abb1bebbeceaff3496d7af26613a302e3a670a6fa49"}
Feb 17 22:40:45 crc kubenswrapper[4793]: I0217 22:40:45.775757 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hwf52" podUID="e5c789ed-a003-41ee-a02e-0df5202113aa" containerName="registry-server" containerID="cri-o://12088c017a295827ffd8bbf6012c6d5e5b5e0a0e9cc16504be8be14d4bce30b0" gracePeriod=2
Feb 17 22:40:46 crc kubenswrapper[4793]: I0217 22:40:46.327238 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hwf52"
Feb 17 22:40:46 crc kubenswrapper[4793]: I0217 22:40:46.428648 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5c789ed-a003-41ee-a02e-0df5202113aa-utilities\") pod \"e5c789ed-a003-41ee-a02e-0df5202113aa\" (UID: \"e5c789ed-a003-41ee-a02e-0df5202113aa\") "
Feb 17 22:40:46 crc kubenswrapper[4793]: I0217 22:40:46.428932 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5c789ed-a003-41ee-a02e-0df5202113aa-catalog-content\") pod \"e5c789ed-a003-41ee-a02e-0df5202113aa\" (UID: \"e5c789ed-a003-41ee-a02e-0df5202113aa\") "
Feb 17 22:40:46 crc kubenswrapper[4793]: I0217 22:40:46.429367 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mlxk\" (UniqueName: \"kubernetes.io/projected/e5c789ed-a003-41ee-a02e-0df5202113aa-kube-api-access-6mlxk\") pod \"e5c789ed-a003-41ee-a02e-0df5202113aa\" (UID: \"e5c789ed-a003-41ee-a02e-0df5202113aa\") "
Feb 17 22:40:46 crc kubenswrapper[4793]: I0217 22:40:46.430380 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5c789ed-a003-41ee-a02e-0df5202113aa-utilities" (OuterVolumeSpecName: "utilities") pod "e5c789ed-a003-41ee-a02e-0df5202113aa" (UID: "e5c789ed-a003-41ee-a02e-0df5202113aa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 22:40:46 crc kubenswrapper[4793]: I0217 22:40:46.430738 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5c789ed-a003-41ee-a02e-0df5202113aa-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 22:40:46 crc kubenswrapper[4793]: I0217 22:40:46.437446 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5c789ed-a003-41ee-a02e-0df5202113aa-kube-api-access-6mlxk" (OuterVolumeSpecName: "kube-api-access-6mlxk") pod "e5c789ed-a003-41ee-a02e-0df5202113aa" (UID: "e5c789ed-a003-41ee-a02e-0df5202113aa"). InnerVolumeSpecName "kube-api-access-6mlxk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 22:40:46 crc kubenswrapper[4793]: I0217 22:40:46.476505 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5c789ed-a003-41ee-a02e-0df5202113aa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e5c789ed-a003-41ee-a02e-0df5202113aa" (UID: "e5c789ed-a003-41ee-a02e-0df5202113aa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 22:40:46 crc kubenswrapper[4793]: I0217 22:40:46.532229 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5c789ed-a003-41ee-a02e-0df5202113aa-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 22:40:46 crc kubenswrapper[4793]: I0217 22:40:46.532265 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mlxk\" (UniqueName: \"kubernetes.io/projected/e5c789ed-a003-41ee-a02e-0df5202113aa-kube-api-access-6mlxk\") on node \"crc\" DevicePath \"\""
Feb 17 22:40:46 crc kubenswrapper[4793]: I0217 22:40:46.790312 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wkmtx" event={"ID":"925014bc-f4bf-4589-92de-cdd9abb18bd0","Type":"ContainerStarted","Data":"d10232cd06dd83eed7fc12cb136f6905a7fc36a0413f7045b02f25dc7f876d05"}
Feb 17 22:40:46 crc kubenswrapper[4793]: I0217 22:40:46.793999 4793 generic.go:334] "Generic (PLEG): container finished" podID="e5c789ed-a003-41ee-a02e-0df5202113aa" containerID="12088c017a295827ffd8bbf6012c6d5e5b5e0a0e9cc16504be8be14d4bce30b0" exitCode=0
Feb 17 22:40:46 crc kubenswrapper[4793]: I0217 22:40:46.794044 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hwf52" event={"ID":"e5c789ed-a003-41ee-a02e-0df5202113aa","Type":"ContainerDied","Data":"12088c017a295827ffd8bbf6012c6d5e5b5e0a0e9cc16504be8be14d4bce30b0"}
Feb 17 22:40:46 crc kubenswrapper[4793]: I0217 22:40:46.794070 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hwf52" event={"ID":"e5c789ed-a003-41ee-a02e-0df5202113aa","Type":"ContainerDied","Data":"732fbde793280b3baa1cbc8dd11e71b63889bec3d293787ff96d0bce0d468c73"}
Feb 17 22:40:46 crc kubenswrapper[4793]: I0217 22:40:46.794090 4793 scope.go:117] "RemoveContainer" containerID="12088c017a295827ffd8bbf6012c6d5e5b5e0a0e9cc16504be8be14d4bce30b0"
Feb 17 22:40:46 crc kubenswrapper[4793]: I0217 22:40:46.794209 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hwf52"
Feb 17 22:40:46 crc kubenswrapper[4793]: I0217 22:40:46.828624 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wkmtx" podStartSLOduration=3.385394405 podStartE2EDuration="7.82859434s" podCreationTimestamp="2026-02-17 22:40:39 +0000 UTC" firstStartedPulling="2026-02-17 22:40:41.725237639 +0000 UTC m=+9117.016935980" lastFinishedPulling="2026-02-17 22:40:46.168437604 +0000 UTC m=+9121.460135915" observedRunningTime="2026-02-17 22:40:46.817931226 +0000 UTC m=+9122.109629527" watchObservedRunningTime="2026-02-17 22:40:46.82859434 +0000 UTC m=+9122.120292681"
Feb 17 22:40:46 crc kubenswrapper[4793]: I0217 22:40:46.852774 4793 scope.go:117] "RemoveContainer" containerID="c77955648b163fda7a8e043db0d608df751d4df381885e2f98bac36dc187da81"
Feb 17 22:40:46 crc kubenswrapper[4793]: I0217 22:40:46.853980 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hwf52"]
Feb 17 22:40:46 crc kubenswrapper[4793]: I0217 22:40:46.862340 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hwf52"]
Feb 17 22:40:46 crc kubenswrapper[4793]: I0217 22:40:46.873541 4793 scope.go:117] "RemoveContainer" containerID="92fec39c6758d54e77792dad580ad434cfcb9f804934a2dd7e437d2bf37c9e54"
Feb 17 22:40:46 crc kubenswrapper[4793]: I0217 22:40:46.924961 4793 scope.go:117] "RemoveContainer" containerID="12088c017a295827ffd8bbf6012c6d5e5b5e0a0e9cc16504be8be14d4bce30b0"
Feb 17 22:40:46 crc kubenswrapper[4793]: E0217 22:40:46.925846 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container
\"12088c017a295827ffd8bbf6012c6d5e5b5e0a0e9cc16504be8be14d4bce30b0\": container with ID starting with 12088c017a295827ffd8bbf6012c6d5e5b5e0a0e9cc16504be8be14d4bce30b0 not found: ID does not exist" containerID="12088c017a295827ffd8bbf6012c6d5e5b5e0a0e9cc16504be8be14d4bce30b0" Feb 17 22:40:46 crc kubenswrapper[4793]: I0217 22:40:46.925898 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12088c017a295827ffd8bbf6012c6d5e5b5e0a0e9cc16504be8be14d4bce30b0"} err="failed to get container status \"12088c017a295827ffd8bbf6012c6d5e5b5e0a0e9cc16504be8be14d4bce30b0\": rpc error: code = NotFound desc = could not find container \"12088c017a295827ffd8bbf6012c6d5e5b5e0a0e9cc16504be8be14d4bce30b0\": container with ID starting with 12088c017a295827ffd8bbf6012c6d5e5b5e0a0e9cc16504be8be14d4bce30b0 not found: ID does not exist" Feb 17 22:40:46 crc kubenswrapper[4793]: I0217 22:40:46.925926 4793 scope.go:117] "RemoveContainer" containerID="c77955648b163fda7a8e043db0d608df751d4df381885e2f98bac36dc187da81" Feb 17 22:40:46 crc kubenswrapper[4793]: E0217 22:40:46.926345 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c77955648b163fda7a8e043db0d608df751d4df381885e2f98bac36dc187da81\": container with ID starting with c77955648b163fda7a8e043db0d608df751d4df381885e2f98bac36dc187da81 not found: ID does not exist" containerID="c77955648b163fda7a8e043db0d608df751d4df381885e2f98bac36dc187da81" Feb 17 22:40:46 crc kubenswrapper[4793]: I0217 22:40:46.926386 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c77955648b163fda7a8e043db0d608df751d4df381885e2f98bac36dc187da81"} err="failed to get container status \"c77955648b163fda7a8e043db0d608df751d4df381885e2f98bac36dc187da81\": rpc error: code = NotFound desc = could not find container \"c77955648b163fda7a8e043db0d608df751d4df381885e2f98bac36dc187da81\": container with ID 
starting with c77955648b163fda7a8e043db0d608df751d4df381885e2f98bac36dc187da81 not found: ID does not exist" Feb 17 22:40:46 crc kubenswrapper[4793]: I0217 22:40:46.926412 4793 scope.go:117] "RemoveContainer" containerID="92fec39c6758d54e77792dad580ad434cfcb9f804934a2dd7e437d2bf37c9e54" Feb 17 22:40:46 crc kubenswrapper[4793]: E0217 22:40:46.926850 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92fec39c6758d54e77792dad580ad434cfcb9f804934a2dd7e437d2bf37c9e54\": container with ID starting with 92fec39c6758d54e77792dad580ad434cfcb9f804934a2dd7e437d2bf37c9e54 not found: ID does not exist" containerID="92fec39c6758d54e77792dad580ad434cfcb9f804934a2dd7e437d2bf37c9e54" Feb 17 22:40:46 crc kubenswrapper[4793]: I0217 22:40:46.926886 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92fec39c6758d54e77792dad580ad434cfcb9f804934a2dd7e437d2bf37c9e54"} err="failed to get container status \"92fec39c6758d54e77792dad580ad434cfcb9f804934a2dd7e437d2bf37c9e54\": rpc error: code = NotFound desc = could not find container \"92fec39c6758d54e77792dad580ad434cfcb9f804934a2dd7e437d2bf37c9e54\": container with ID starting with 92fec39c6758d54e77792dad580ad434cfcb9f804934a2dd7e437d2bf37c9e54 not found: ID does not exist" Feb 17 22:40:47 crc kubenswrapper[4793]: I0217 22:40:47.558434 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5c789ed-a003-41ee-a02e-0df5202113aa" path="/var/lib/kubelet/pods/e5c789ed-a003-41ee-a02e-0df5202113aa/volumes" Feb 17 22:40:50 crc kubenswrapper[4793]: I0217 22:40:50.272774 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wkmtx" Feb 17 22:40:50 crc kubenswrapper[4793]: I0217 22:40:50.273350 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wkmtx" Feb 17 22:40:51 crc 
kubenswrapper[4793]: I0217 22:40:51.359021 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wkmtx" podUID="925014bc-f4bf-4589-92de-cdd9abb18bd0" containerName="registry-server" probeResult="failure" output=< Feb 17 22:40:51 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Feb 17 22:40:51 crc kubenswrapper[4793]: > Feb 17 22:40:58 crc kubenswrapper[4793]: I0217 22:40:58.540529 4793 scope.go:117] "RemoveContainer" containerID="d7c71f900202075a85e0222cfb488ef3ec38d5f62c66c52486088404df8de48e" Feb 17 22:40:58 crc kubenswrapper[4793]: E0217 22:40:58.541521 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:40:58 crc kubenswrapper[4793]: I0217 22:40:58.541575 4793 scope.go:117] "RemoveContainer" containerID="0c74a9423aa40029070fceeebe682dd9638e72ccd82a62797ae836d88e125688" Feb 17 22:40:58 crc kubenswrapper[4793]: E0217 22:40:58.542150 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:41:00 crc kubenswrapper[4793]: I0217 22:41:00.345865 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wkmtx" Feb 17 22:41:00 crc kubenswrapper[4793]: I0217 22:41:00.437083 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-wkmtx" Feb 17 22:41:00 crc kubenswrapper[4793]: I0217 22:41:00.611934 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wkmtx"] Feb 17 22:41:02 crc kubenswrapper[4793]: I0217 22:41:02.010628 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wkmtx" podUID="925014bc-f4bf-4589-92de-cdd9abb18bd0" containerName="registry-server" containerID="cri-o://d10232cd06dd83eed7fc12cb136f6905a7fc36a0413f7045b02f25dc7f876d05" gracePeriod=2 Feb 17 22:41:02 crc kubenswrapper[4793]: I0217 22:41:02.547154 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wkmtx" Feb 17 22:41:02 crc kubenswrapper[4793]: I0217 22:41:02.598228 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/925014bc-f4bf-4589-92de-cdd9abb18bd0-utilities\") pod \"925014bc-f4bf-4589-92de-cdd9abb18bd0\" (UID: \"925014bc-f4bf-4589-92de-cdd9abb18bd0\") " Feb 17 22:41:02 crc kubenswrapper[4793]: I0217 22:41:02.598571 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/925014bc-f4bf-4589-92de-cdd9abb18bd0-catalog-content\") pod \"925014bc-f4bf-4589-92de-cdd9abb18bd0\" (UID: \"925014bc-f4bf-4589-92de-cdd9abb18bd0\") " Feb 17 22:41:02 crc kubenswrapper[4793]: I0217 22:41:02.598645 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8sbmf\" (UniqueName: \"kubernetes.io/projected/925014bc-f4bf-4589-92de-cdd9abb18bd0-kube-api-access-8sbmf\") pod \"925014bc-f4bf-4589-92de-cdd9abb18bd0\" (UID: \"925014bc-f4bf-4589-92de-cdd9abb18bd0\") " Feb 17 22:41:02 crc kubenswrapper[4793]: I0217 22:41:02.601046 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/925014bc-f4bf-4589-92de-cdd9abb18bd0-utilities" (OuterVolumeSpecName: "utilities") pod "925014bc-f4bf-4589-92de-cdd9abb18bd0" (UID: "925014bc-f4bf-4589-92de-cdd9abb18bd0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 22:41:02 crc kubenswrapper[4793]: I0217 22:41:02.604558 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925014bc-f4bf-4589-92de-cdd9abb18bd0-kube-api-access-8sbmf" (OuterVolumeSpecName: "kube-api-access-8sbmf") pod "925014bc-f4bf-4589-92de-cdd9abb18bd0" (UID: "925014bc-f4bf-4589-92de-cdd9abb18bd0"). InnerVolumeSpecName "kube-api-access-8sbmf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 22:41:02 crc kubenswrapper[4793]: I0217 22:41:02.701082 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8sbmf\" (UniqueName: \"kubernetes.io/projected/925014bc-f4bf-4589-92de-cdd9abb18bd0-kube-api-access-8sbmf\") on node \"crc\" DevicePath \"\"" Feb 17 22:41:02 crc kubenswrapper[4793]: I0217 22:41:02.701119 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/925014bc-f4bf-4589-92de-cdd9abb18bd0-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 22:41:02 crc kubenswrapper[4793]: I0217 22:41:02.756664 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/925014bc-f4bf-4589-92de-cdd9abb18bd0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "925014bc-f4bf-4589-92de-cdd9abb18bd0" (UID: "925014bc-f4bf-4589-92de-cdd9abb18bd0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 22:41:02 crc kubenswrapper[4793]: I0217 22:41:02.802630 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/925014bc-f4bf-4589-92de-cdd9abb18bd0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 22:41:03 crc kubenswrapper[4793]: I0217 22:41:03.027460 4793 generic.go:334] "Generic (PLEG): container finished" podID="925014bc-f4bf-4589-92de-cdd9abb18bd0" containerID="d10232cd06dd83eed7fc12cb136f6905a7fc36a0413f7045b02f25dc7f876d05" exitCode=0 Feb 17 22:41:03 crc kubenswrapper[4793]: I0217 22:41:03.027518 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wkmtx" event={"ID":"925014bc-f4bf-4589-92de-cdd9abb18bd0","Type":"ContainerDied","Data":"d10232cd06dd83eed7fc12cb136f6905a7fc36a0413f7045b02f25dc7f876d05"} Feb 17 22:41:03 crc kubenswrapper[4793]: I0217 22:41:03.027527 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wkmtx" Feb 17 22:41:03 crc kubenswrapper[4793]: I0217 22:41:03.027558 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wkmtx" event={"ID":"925014bc-f4bf-4589-92de-cdd9abb18bd0","Type":"ContainerDied","Data":"cbf565d5d66a4b23e78cc293cde519f931481b7f295621944a22232d94e98418"} Feb 17 22:41:03 crc kubenswrapper[4793]: I0217 22:41:03.027592 4793 scope.go:117] "RemoveContainer" containerID="d10232cd06dd83eed7fc12cb136f6905a7fc36a0413f7045b02f25dc7f876d05" Feb 17 22:41:03 crc kubenswrapper[4793]: I0217 22:41:03.075976 4793 scope.go:117] "RemoveContainer" containerID="bc70712906cffdfdd6bd1abb1bebbeceaff3496d7af26613a302e3a670a6fa49" Feb 17 22:41:03 crc kubenswrapper[4793]: I0217 22:41:03.079358 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wkmtx"] Feb 17 22:41:03 crc kubenswrapper[4793]: I0217 22:41:03.098663 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wkmtx"] Feb 17 22:41:03 crc kubenswrapper[4793]: I0217 22:41:03.109170 4793 scope.go:117] "RemoveContainer" containerID="21b34a4e2fdda820ec1ae67f1030d947e1cb01865f5d43832a1ac22930b500f9" Feb 17 22:41:03 crc kubenswrapper[4793]: I0217 22:41:03.184159 4793 scope.go:117] "RemoveContainer" containerID="d10232cd06dd83eed7fc12cb136f6905a7fc36a0413f7045b02f25dc7f876d05" Feb 17 22:41:03 crc kubenswrapper[4793]: E0217 22:41:03.184761 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d10232cd06dd83eed7fc12cb136f6905a7fc36a0413f7045b02f25dc7f876d05\": container with ID starting with d10232cd06dd83eed7fc12cb136f6905a7fc36a0413f7045b02f25dc7f876d05 not found: ID does not exist" containerID="d10232cd06dd83eed7fc12cb136f6905a7fc36a0413f7045b02f25dc7f876d05" Feb 17 22:41:03 crc kubenswrapper[4793]: I0217 22:41:03.184834 4793 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d10232cd06dd83eed7fc12cb136f6905a7fc36a0413f7045b02f25dc7f876d05"} err="failed to get container status \"d10232cd06dd83eed7fc12cb136f6905a7fc36a0413f7045b02f25dc7f876d05\": rpc error: code = NotFound desc = could not find container \"d10232cd06dd83eed7fc12cb136f6905a7fc36a0413f7045b02f25dc7f876d05\": container with ID starting with d10232cd06dd83eed7fc12cb136f6905a7fc36a0413f7045b02f25dc7f876d05 not found: ID does not exist" Feb 17 22:41:03 crc kubenswrapper[4793]: I0217 22:41:03.184874 4793 scope.go:117] "RemoveContainer" containerID="bc70712906cffdfdd6bd1abb1bebbeceaff3496d7af26613a302e3a670a6fa49" Feb 17 22:41:03 crc kubenswrapper[4793]: E0217 22:41:03.185401 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc70712906cffdfdd6bd1abb1bebbeceaff3496d7af26613a302e3a670a6fa49\": container with ID starting with bc70712906cffdfdd6bd1abb1bebbeceaff3496d7af26613a302e3a670a6fa49 not found: ID does not exist" containerID="bc70712906cffdfdd6bd1abb1bebbeceaff3496d7af26613a302e3a670a6fa49" Feb 17 22:41:03 crc kubenswrapper[4793]: I0217 22:41:03.185447 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc70712906cffdfdd6bd1abb1bebbeceaff3496d7af26613a302e3a670a6fa49"} err="failed to get container status \"bc70712906cffdfdd6bd1abb1bebbeceaff3496d7af26613a302e3a670a6fa49\": rpc error: code = NotFound desc = could not find container \"bc70712906cffdfdd6bd1abb1bebbeceaff3496d7af26613a302e3a670a6fa49\": container with ID starting with bc70712906cffdfdd6bd1abb1bebbeceaff3496d7af26613a302e3a670a6fa49 not found: ID does not exist" Feb 17 22:41:03 crc kubenswrapper[4793]: I0217 22:41:03.185475 4793 scope.go:117] "RemoveContainer" containerID="21b34a4e2fdda820ec1ae67f1030d947e1cb01865f5d43832a1ac22930b500f9" Feb 17 22:41:03 crc kubenswrapper[4793]: E0217 
22:41:03.186387 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21b34a4e2fdda820ec1ae67f1030d947e1cb01865f5d43832a1ac22930b500f9\": container with ID starting with 21b34a4e2fdda820ec1ae67f1030d947e1cb01865f5d43832a1ac22930b500f9 not found: ID does not exist" containerID="21b34a4e2fdda820ec1ae67f1030d947e1cb01865f5d43832a1ac22930b500f9" Feb 17 22:41:03 crc kubenswrapper[4793]: I0217 22:41:03.186476 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21b34a4e2fdda820ec1ae67f1030d947e1cb01865f5d43832a1ac22930b500f9"} err="failed to get container status \"21b34a4e2fdda820ec1ae67f1030d947e1cb01865f5d43832a1ac22930b500f9\": rpc error: code = NotFound desc = could not find container \"21b34a4e2fdda820ec1ae67f1030d947e1cb01865f5d43832a1ac22930b500f9\": container with ID starting with 21b34a4e2fdda820ec1ae67f1030d947e1cb01865f5d43832a1ac22930b500f9 not found: ID does not exist" Feb 17 22:41:03 crc kubenswrapper[4793]: I0217 22:41:03.558241 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925014bc-f4bf-4589-92de-cdd9abb18bd0" path="/var/lib/kubelet/pods/925014bc-f4bf-4589-92de-cdd9abb18bd0/volumes" Feb 17 22:41:12 crc kubenswrapper[4793]: I0217 22:41:12.538913 4793 scope.go:117] "RemoveContainer" containerID="0c74a9423aa40029070fceeebe682dd9638e72ccd82a62797ae836d88e125688" Feb 17 22:41:12 crc kubenswrapper[4793]: E0217 22:41:12.540318 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:41:13 crc kubenswrapper[4793]: I0217 22:41:13.540904 
4793 scope.go:117] "RemoveContainer" containerID="d7c71f900202075a85e0222cfb488ef3ec38d5f62c66c52486088404df8de48e" Feb 17 22:41:13 crc kubenswrapper[4793]: E0217 22:41:13.541452 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:41:27 crc kubenswrapper[4793]: I0217 22:41:27.540272 4793 scope.go:117] "RemoveContainer" containerID="d7c71f900202075a85e0222cfb488ef3ec38d5f62c66c52486088404df8de48e" Feb 17 22:41:27 crc kubenswrapper[4793]: E0217 22:41:27.541236 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:41:27 crc kubenswrapper[4793]: I0217 22:41:27.543912 4793 scope.go:117] "RemoveContainer" containerID="0c74a9423aa40029070fceeebe682dd9638e72ccd82a62797ae836d88e125688" Feb 17 22:41:27 crc kubenswrapper[4793]: E0217 22:41:27.545027 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:41:40 crc kubenswrapper[4793]: I0217 22:41:40.539210 4793 scope.go:117] "RemoveContainer" containerID="0c74a9423aa40029070fceeebe682dd9638e72ccd82a62797ae836d88e125688" Feb 17 22:41:40 crc 
kubenswrapper[4793]: E0217 22:41:40.540542 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:41:41 crc kubenswrapper[4793]: I0217 22:41:41.538359 4793 scope.go:117] "RemoveContainer" containerID="d7c71f900202075a85e0222cfb488ef3ec38d5f62c66c52486088404df8de48e" Feb 17 22:41:41 crc kubenswrapper[4793]: E0217 22:41:41.538676 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:41:52 crc kubenswrapper[4793]: I0217 22:41:52.539080 4793 scope.go:117] "RemoveContainer" containerID="0c74a9423aa40029070fceeebe682dd9638e72ccd82a62797ae836d88e125688" Feb 17 22:41:52 crc kubenswrapper[4793]: E0217 22:41:52.539744 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:41:55 crc kubenswrapper[4793]: I0217 22:41:55.545297 4793 scope.go:117] "RemoveContainer" containerID="d7c71f900202075a85e0222cfb488ef3ec38d5f62c66c52486088404df8de48e" Feb 17 22:41:55 crc kubenswrapper[4793]: E0217 22:41:55.546993 4793 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:42:03 crc kubenswrapper[4793]: I0217 22:42:03.539050 4793 scope.go:117] "RemoveContainer" containerID="0c74a9423aa40029070fceeebe682dd9638e72ccd82a62797ae836d88e125688" Feb 17 22:42:03 crc kubenswrapper[4793]: E0217 22:42:03.539937 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:42:06 crc kubenswrapper[4793]: I0217 22:42:06.539784 4793 scope.go:117] "RemoveContainer" containerID="d7c71f900202075a85e0222cfb488ef3ec38d5f62c66c52486088404df8de48e" Feb 17 22:42:06 crc kubenswrapper[4793]: E0217 22:42:06.540734 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:42:15 crc kubenswrapper[4793]: I0217 22:42:15.551084 4793 scope.go:117] "RemoveContainer" containerID="0c74a9423aa40029070fceeebe682dd9638e72ccd82a62797ae836d88e125688" Feb 17 22:42:15 crc kubenswrapper[4793]: E0217 22:42:15.552036 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:42:18 crc kubenswrapper[4793]: I0217 22:42:18.539929 4793 scope.go:117] "RemoveContainer" containerID="d7c71f900202075a85e0222cfb488ef3ec38d5f62c66c52486088404df8de48e" Feb 17 22:42:18 crc kubenswrapper[4793]: E0217 22:42:18.540681 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:42:29 crc kubenswrapper[4793]: I0217 22:42:29.539591 4793 scope.go:117] "RemoveContainer" containerID="0c74a9423aa40029070fceeebe682dd9638e72ccd82a62797ae836d88e125688" Feb 17 22:42:29 crc kubenswrapper[4793]: E0217 22:42:29.540241 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:42:32 crc kubenswrapper[4793]: I0217 22:42:32.540290 4793 scope.go:117] "RemoveContainer" containerID="d7c71f900202075a85e0222cfb488ef3ec38d5f62c66c52486088404df8de48e" Feb 17 22:42:32 crc kubenswrapper[4793]: E0217 22:42:32.541907 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier 
pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:42:43 crc kubenswrapper[4793]: I0217 22:42:43.539345 4793 scope.go:117] "RemoveContainer" containerID="0c74a9423aa40029070fceeebe682dd9638e72ccd82a62797ae836d88e125688" Feb 17 22:42:43 crc kubenswrapper[4793]: E0217 22:42:43.540184 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:42:44 crc kubenswrapper[4793]: I0217 22:42:44.538936 4793 scope.go:117] "RemoveContainer" containerID="d7c71f900202075a85e0222cfb488ef3ec38d5f62c66c52486088404df8de48e" Feb 17 22:42:45 crc kubenswrapper[4793]: I0217 22:42:45.280991 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerStarted","Data":"7effb428b37a85fa107ddb95c517f4a4da7a81b8a457012cb359afa8a628e83a"} Feb 17 22:42:45 crc kubenswrapper[4793]: I0217 22:42:45.596601 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 17 22:42:45 crc kubenswrapper[4793]: I0217 22:42:45.596941 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 22:42:45 crc kubenswrapper[4793]: I0217 22:42:45.635107 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Feb 17 22:42:46 crc kubenswrapper[4793]: I0217 22:42:46.345425 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Feb 17 
22:42:47 crc kubenswrapper[4793]: I0217 22:42:47.310216 4793 generic.go:334] "Generic (PLEG): container finished" podID="02d26164-0fa4-4020-9224-b7760a490987" containerID="7effb428b37a85fa107ddb95c517f4a4da7a81b8a457012cb359afa8a628e83a" exitCode=1 Feb 17 22:42:47 crc kubenswrapper[4793]: I0217 22:42:47.310314 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerDied","Data":"7effb428b37a85fa107ddb95c517f4a4da7a81b8a457012cb359afa8a628e83a"} Feb 17 22:42:47 crc kubenswrapper[4793]: I0217 22:42:47.310588 4793 scope.go:117] "RemoveContainer" containerID="d7c71f900202075a85e0222cfb488ef3ec38d5f62c66c52486088404df8de48e" Feb 17 22:42:47 crc kubenswrapper[4793]: I0217 22:42:47.311276 4793 scope.go:117] "RemoveContainer" containerID="7effb428b37a85fa107ddb95c517f4a4da7a81b8a457012cb359afa8a628e83a" Feb 17 22:42:47 crc kubenswrapper[4793]: E0217 22:42:47.311879 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:42:48 crc kubenswrapper[4793]: I0217 22:42:48.327459 4793 scope.go:117] "RemoveContainer" containerID="7effb428b37a85fa107ddb95c517f4a4da7a81b8a457012cb359afa8a628e83a" Feb 17 22:42:48 crc kubenswrapper[4793]: E0217 22:42:48.328080 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:42:50 crc kubenswrapper[4793]: I0217 22:42:50.596794 4793 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 17 22:42:50 crc kubenswrapper[4793]: I0217 22:42:50.598234 4793 scope.go:117] "RemoveContainer" containerID="7effb428b37a85fa107ddb95c517f4a4da7a81b8a457012cb359afa8a628e83a" Feb 17 22:42:50 crc kubenswrapper[4793]: E0217 22:42:50.598625 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:42:55 crc kubenswrapper[4793]: I0217 22:42:55.549369 4793 scope.go:117] "RemoveContainer" containerID="0c74a9423aa40029070fceeebe682dd9638e72ccd82a62797ae836d88e125688" Feb 17 22:42:55 crc kubenswrapper[4793]: E0217 22:42:55.550531 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:42:55 crc kubenswrapper[4793]: I0217 22:42:55.596184 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 22:42:55 crc kubenswrapper[4793]: I0217 22:42:55.596422 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 22:42:55 crc kubenswrapper[4793]: I0217 22:42:55.597593 4793 scope.go:117] "RemoveContainer" containerID="7effb428b37a85fa107ddb95c517f4a4da7a81b8a457012cb359afa8a628e83a" Feb 17 22:42:55 crc kubenswrapper[4793]: E0217 22:42:55.598063 4793 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:42:56 crc kubenswrapper[4793]: I0217 22:42:56.415593 4793 scope.go:117] "RemoveContainer" containerID="7effb428b37a85fa107ddb95c517f4a4da7a81b8a457012cb359afa8a628e83a" Feb 17 22:42:56 crc kubenswrapper[4793]: E0217 22:42:56.416279 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:43:08 crc kubenswrapper[4793]: I0217 22:43:08.539250 4793 scope.go:117] "RemoveContainer" containerID="7effb428b37a85fa107ddb95c517f4a4da7a81b8a457012cb359afa8a628e83a" Feb 17 22:43:08 crc kubenswrapper[4793]: I0217 22:43:08.539902 4793 scope.go:117] "RemoveContainer" containerID="0c74a9423aa40029070fceeebe682dd9638e72ccd82a62797ae836d88e125688" Feb 17 22:43:08 crc kubenswrapper[4793]: E0217 22:43:08.540124 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:43:08 crc kubenswrapper[4793]: E0217 22:43:08.540219 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier 
pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:43:20 crc kubenswrapper[4793]: I0217 22:43:20.543903 4793 scope.go:117] "RemoveContainer" containerID="7effb428b37a85fa107ddb95c517f4a4da7a81b8a457012cb359afa8a628e83a" Feb 17 22:43:20 crc kubenswrapper[4793]: E0217 22:43:20.545943 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:43:21 crc kubenswrapper[4793]: I0217 22:43:21.539252 4793 scope.go:117] "RemoveContainer" containerID="0c74a9423aa40029070fceeebe682dd9638e72ccd82a62797ae836d88e125688" Feb 17 22:43:21 crc kubenswrapper[4793]: E0217 22:43:21.539856 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:43:31 crc kubenswrapper[4793]: I0217 22:43:31.538794 4793 scope.go:117] "RemoveContainer" containerID="7effb428b37a85fa107ddb95c517f4a4da7a81b8a457012cb359afa8a628e83a" Feb 17 22:43:31 crc kubenswrapper[4793]: E0217 22:43:31.539656 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" 
podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:43:35 crc kubenswrapper[4793]: I0217 22:43:35.553430 4793 scope.go:117] "RemoveContainer" containerID="0c74a9423aa40029070fceeebe682dd9638e72ccd82a62797ae836d88e125688" Feb 17 22:43:35 crc kubenswrapper[4793]: E0217 22:43:35.554454 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:43:44 crc kubenswrapper[4793]: I0217 22:43:44.540121 4793 scope.go:117] "RemoveContainer" containerID="7effb428b37a85fa107ddb95c517f4a4da7a81b8a457012cb359afa8a628e83a" Feb 17 22:43:44 crc kubenswrapper[4793]: E0217 22:43:44.540847 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:43:46 crc kubenswrapper[4793]: I0217 22:43:46.539021 4793 scope.go:117] "RemoveContainer" containerID="0c74a9423aa40029070fceeebe682dd9638e72ccd82a62797ae836d88e125688" Feb 17 22:43:46 crc kubenswrapper[4793]: E0217 22:43:46.541006 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:43:56 
crc kubenswrapper[4793]: I0217 22:43:56.539764 4793 scope.go:117] "RemoveContainer" containerID="7effb428b37a85fa107ddb95c517f4a4da7a81b8a457012cb359afa8a628e83a" Feb 17 22:43:56 crc kubenswrapper[4793]: E0217 22:43:56.540537 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:43:57 crc kubenswrapper[4793]: I0217 22:43:57.538795 4793 scope.go:117] "RemoveContainer" containerID="0c74a9423aa40029070fceeebe682dd9638e72ccd82a62797ae836d88e125688" Feb 17 22:43:57 crc kubenswrapper[4793]: E0217 22:43:57.539511 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:44:09 crc kubenswrapper[4793]: I0217 22:44:09.539268 4793 scope.go:117] "RemoveContainer" containerID="7effb428b37a85fa107ddb95c517f4a4da7a81b8a457012cb359afa8a628e83a" Feb 17 22:44:09 crc kubenswrapper[4793]: E0217 22:44:09.540365 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:44:10 crc kubenswrapper[4793]: I0217 22:44:10.540580 4793 scope.go:117] "RemoveContainer" 
containerID="0c74a9423aa40029070fceeebe682dd9638e72ccd82a62797ae836d88e125688" Feb 17 22:44:10 crc kubenswrapper[4793]: E0217 22:44:10.541658 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:44:20 crc kubenswrapper[4793]: I0217 22:44:20.539847 4793 scope.go:117] "RemoveContainer" containerID="7effb428b37a85fa107ddb95c517f4a4da7a81b8a457012cb359afa8a628e83a" Feb 17 22:44:20 crc kubenswrapper[4793]: E0217 22:44:20.540933 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:44:24 crc kubenswrapper[4793]: I0217 22:44:24.539996 4793 scope.go:117] "RemoveContainer" containerID="0c74a9423aa40029070fceeebe682dd9638e72ccd82a62797ae836d88e125688" Feb 17 22:44:24 crc kubenswrapper[4793]: E0217 22:44:24.541004 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:44:32 crc kubenswrapper[4793]: I0217 22:44:32.539327 4793 scope.go:117] "RemoveContainer" 
containerID="7effb428b37a85fa107ddb95c517f4a4da7a81b8a457012cb359afa8a628e83a" Feb 17 22:44:32 crc kubenswrapper[4793]: E0217 22:44:32.540379 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:44:35 crc kubenswrapper[4793]: I0217 22:44:35.546843 4793 scope.go:117] "RemoveContainer" containerID="0c74a9423aa40029070fceeebe682dd9638e72ccd82a62797ae836d88e125688" Feb 17 22:44:35 crc kubenswrapper[4793]: E0217 22:44:35.547488 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:44:45 crc kubenswrapper[4793]: I0217 22:44:45.560255 4793 scope.go:117] "RemoveContainer" containerID="7effb428b37a85fa107ddb95c517f4a4da7a81b8a457012cb359afa8a628e83a" Feb 17 22:44:45 crc kubenswrapper[4793]: E0217 22:44:45.561274 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:44:48 crc kubenswrapper[4793]: I0217 22:44:48.540219 4793 scope.go:117] "RemoveContainer" containerID="0c74a9423aa40029070fceeebe682dd9638e72ccd82a62797ae836d88e125688" Feb 17 22:44:48 crc kubenswrapper[4793]: E0217 22:44:48.541105 
4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:44:59 crc kubenswrapper[4793]: I0217 22:44:59.540026 4793 scope.go:117] "RemoveContainer" containerID="0c74a9423aa40029070fceeebe682dd9638e72ccd82a62797ae836d88e125688" Feb 17 22:44:59 crc kubenswrapper[4793]: E0217 22:44:59.540828 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:45:00 crc kubenswrapper[4793]: I0217 22:45:00.174496 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522805-wxn8h"] Feb 17 22:45:00 crc kubenswrapper[4793]: E0217 22:45:00.175996 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5c789ed-a003-41ee-a02e-0df5202113aa" containerName="registry-server" Feb 17 22:45:00 crc kubenswrapper[4793]: I0217 22:45:00.176067 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5c789ed-a003-41ee-a02e-0df5202113aa" containerName="registry-server" Feb 17 22:45:00 crc kubenswrapper[4793]: E0217 22:45:00.176135 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="925014bc-f4bf-4589-92de-cdd9abb18bd0" containerName="registry-server" Feb 17 22:45:00 crc kubenswrapper[4793]: I0217 22:45:00.176149 4793 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="925014bc-f4bf-4589-92de-cdd9abb18bd0" containerName="registry-server" Feb 17 22:45:00 crc kubenswrapper[4793]: E0217 22:45:00.176222 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5c789ed-a003-41ee-a02e-0df5202113aa" containerName="extract-content" Feb 17 22:45:00 crc kubenswrapper[4793]: I0217 22:45:00.176236 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5c789ed-a003-41ee-a02e-0df5202113aa" containerName="extract-content" Feb 17 22:45:00 crc kubenswrapper[4793]: E0217 22:45:00.176293 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="925014bc-f4bf-4589-92de-cdd9abb18bd0" containerName="extract-content" Feb 17 22:45:00 crc kubenswrapper[4793]: I0217 22:45:00.176306 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="925014bc-f4bf-4589-92de-cdd9abb18bd0" containerName="extract-content" Feb 17 22:45:00 crc kubenswrapper[4793]: E0217 22:45:00.176350 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5c789ed-a003-41ee-a02e-0df5202113aa" containerName="extract-utilities" Feb 17 22:45:00 crc kubenswrapper[4793]: I0217 22:45:00.176365 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5c789ed-a003-41ee-a02e-0df5202113aa" containerName="extract-utilities" Feb 17 22:45:00 crc kubenswrapper[4793]: E0217 22:45:00.176403 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="925014bc-f4bf-4589-92de-cdd9abb18bd0" containerName="extract-utilities" Feb 17 22:45:00 crc kubenswrapper[4793]: I0217 22:45:00.176417 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="925014bc-f4bf-4589-92de-cdd9abb18bd0" containerName="extract-utilities" Feb 17 22:45:00 crc kubenswrapper[4793]: I0217 22:45:00.177736 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="925014bc-f4bf-4589-92de-cdd9abb18bd0" containerName="registry-server" Feb 17 22:45:00 crc kubenswrapper[4793]: I0217 22:45:00.177784 4793 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="e5c789ed-a003-41ee-a02e-0df5202113aa" containerName="registry-server" Feb 17 22:45:00 crc kubenswrapper[4793]: I0217 22:45:00.180147 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522805-wxn8h" Feb 17 22:45:00 crc kubenswrapper[4793]: I0217 22:45:00.184218 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 22:45:00 crc kubenswrapper[4793]: I0217 22:45:00.184235 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 22:45:00 crc kubenswrapper[4793]: I0217 22:45:00.204780 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522805-wxn8h"] Feb 17 22:45:00 crc kubenswrapper[4793]: I0217 22:45:00.330320 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc894\" (UniqueName: \"kubernetes.io/projected/74cb66a3-a4f9-444a-929d-e1b43431d7fa-kube-api-access-fc894\") pod \"collect-profiles-29522805-wxn8h\" (UID: \"74cb66a3-a4f9-444a-929d-e1b43431d7fa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522805-wxn8h" Feb 17 22:45:00 crc kubenswrapper[4793]: I0217 22:45:00.330840 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/74cb66a3-a4f9-444a-929d-e1b43431d7fa-secret-volume\") pod \"collect-profiles-29522805-wxn8h\" (UID: \"74cb66a3-a4f9-444a-929d-e1b43431d7fa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522805-wxn8h" Feb 17 22:45:00 crc kubenswrapper[4793]: I0217 22:45:00.330883 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/74cb66a3-a4f9-444a-929d-e1b43431d7fa-config-volume\") pod \"collect-profiles-29522805-wxn8h\" (UID: \"74cb66a3-a4f9-444a-929d-e1b43431d7fa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522805-wxn8h" Feb 17 22:45:00 crc kubenswrapper[4793]: I0217 22:45:00.433572 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fc894\" (UniqueName: \"kubernetes.io/projected/74cb66a3-a4f9-444a-929d-e1b43431d7fa-kube-api-access-fc894\") pod \"collect-profiles-29522805-wxn8h\" (UID: \"74cb66a3-a4f9-444a-929d-e1b43431d7fa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522805-wxn8h" Feb 17 22:45:00 crc kubenswrapper[4793]: I0217 22:45:00.433850 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/74cb66a3-a4f9-444a-929d-e1b43431d7fa-secret-volume\") pod \"collect-profiles-29522805-wxn8h\" (UID: \"74cb66a3-a4f9-444a-929d-e1b43431d7fa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522805-wxn8h" Feb 17 22:45:00 crc kubenswrapper[4793]: I0217 22:45:00.433922 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74cb66a3-a4f9-444a-929d-e1b43431d7fa-config-volume\") pod \"collect-profiles-29522805-wxn8h\" (UID: \"74cb66a3-a4f9-444a-929d-e1b43431d7fa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522805-wxn8h" Feb 17 22:45:00 crc kubenswrapper[4793]: I0217 22:45:00.435431 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74cb66a3-a4f9-444a-929d-e1b43431d7fa-config-volume\") pod \"collect-profiles-29522805-wxn8h\" (UID: \"74cb66a3-a4f9-444a-929d-e1b43431d7fa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522805-wxn8h" Feb 17 22:45:00 crc kubenswrapper[4793]: I0217 22:45:00.446223 4793 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/74cb66a3-a4f9-444a-929d-e1b43431d7fa-secret-volume\") pod \"collect-profiles-29522805-wxn8h\" (UID: \"74cb66a3-a4f9-444a-929d-e1b43431d7fa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522805-wxn8h" Feb 17 22:45:00 crc kubenswrapper[4793]: I0217 22:45:00.468985 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc894\" (UniqueName: \"kubernetes.io/projected/74cb66a3-a4f9-444a-929d-e1b43431d7fa-kube-api-access-fc894\") pod \"collect-profiles-29522805-wxn8h\" (UID: \"74cb66a3-a4f9-444a-929d-e1b43431d7fa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522805-wxn8h" Feb 17 22:45:00 crc kubenswrapper[4793]: I0217 22:45:00.539265 4793 scope.go:117] "RemoveContainer" containerID="7effb428b37a85fa107ddb95c517f4a4da7a81b8a457012cb359afa8a628e83a" Feb 17 22:45:00 crc kubenswrapper[4793]: E0217 22:45:00.540040 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:45:00 crc kubenswrapper[4793]: I0217 22:45:00.545465 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522805-wxn8h" Feb 17 22:45:01 crc kubenswrapper[4793]: I0217 22:45:01.036470 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522805-wxn8h"] Feb 17 22:45:01 crc kubenswrapper[4793]: I0217 22:45:01.899068 4793 generic.go:334] "Generic (PLEG): container finished" podID="74cb66a3-a4f9-444a-929d-e1b43431d7fa" containerID="20fe3242dcfe50045eed7cd3eb71e27bf8d05e3688927a2fbccb67cdc19bdc62" exitCode=0 Feb 17 22:45:01 crc kubenswrapper[4793]: I0217 22:45:01.899276 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522805-wxn8h" event={"ID":"74cb66a3-a4f9-444a-929d-e1b43431d7fa","Type":"ContainerDied","Data":"20fe3242dcfe50045eed7cd3eb71e27bf8d05e3688927a2fbccb67cdc19bdc62"} Feb 17 22:45:01 crc kubenswrapper[4793]: I0217 22:45:01.899435 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522805-wxn8h" event={"ID":"74cb66a3-a4f9-444a-929d-e1b43431d7fa","Type":"ContainerStarted","Data":"ae7b155be57edfb1a8f0c4efe765b20081bcc618b27a3d5877ca6a1d728d6e5e"} Feb 17 22:45:03 crc kubenswrapper[4793]: I0217 22:45:03.349846 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522805-wxn8h" Feb 17 22:45:03 crc kubenswrapper[4793]: I0217 22:45:03.401090 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/74cb66a3-a4f9-444a-929d-e1b43431d7fa-secret-volume\") pod \"74cb66a3-a4f9-444a-929d-e1b43431d7fa\" (UID: \"74cb66a3-a4f9-444a-929d-e1b43431d7fa\") " Feb 17 22:45:03 crc kubenswrapper[4793]: I0217 22:45:03.401203 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74cb66a3-a4f9-444a-929d-e1b43431d7fa-config-volume\") pod \"74cb66a3-a4f9-444a-929d-e1b43431d7fa\" (UID: \"74cb66a3-a4f9-444a-929d-e1b43431d7fa\") " Feb 17 22:45:03 crc kubenswrapper[4793]: I0217 22:45:03.401278 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fc894\" (UniqueName: \"kubernetes.io/projected/74cb66a3-a4f9-444a-929d-e1b43431d7fa-kube-api-access-fc894\") pod \"74cb66a3-a4f9-444a-929d-e1b43431d7fa\" (UID: \"74cb66a3-a4f9-444a-929d-e1b43431d7fa\") " Feb 17 22:45:03 crc kubenswrapper[4793]: I0217 22:45:03.402990 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74cb66a3-a4f9-444a-929d-e1b43431d7fa-config-volume" (OuterVolumeSpecName: "config-volume") pod "74cb66a3-a4f9-444a-929d-e1b43431d7fa" (UID: "74cb66a3-a4f9-444a-929d-e1b43431d7fa"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 22:45:03 crc kubenswrapper[4793]: I0217 22:45:03.407253 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74cb66a3-a4f9-444a-929d-e1b43431d7fa-kube-api-access-fc894" (OuterVolumeSpecName: "kube-api-access-fc894") pod "74cb66a3-a4f9-444a-929d-e1b43431d7fa" (UID: "74cb66a3-a4f9-444a-929d-e1b43431d7fa"). 
InnerVolumeSpecName "kube-api-access-fc894". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 22:45:03 crc kubenswrapper[4793]: I0217 22:45:03.416846 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74cb66a3-a4f9-444a-929d-e1b43431d7fa-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "74cb66a3-a4f9-444a-929d-e1b43431d7fa" (UID: "74cb66a3-a4f9-444a-929d-e1b43431d7fa"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 22:45:03 crc kubenswrapper[4793]: I0217 22:45:03.502921 4793 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/74cb66a3-a4f9-444a-929d-e1b43431d7fa-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 22:45:03 crc kubenswrapper[4793]: I0217 22:45:03.502951 4793 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74cb66a3-a4f9-444a-929d-e1b43431d7fa-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 22:45:03 crc kubenswrapper[4793]: I0217 22:45:03.502962 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fc894\" (UniqueName: \"kubernetes.io/projected/74cb66a3-a4f9-444a-929d-e1b43431d7fa-kube-api-access-fc894\") on node \"crc\" DevicePath \"\"" Feb 17 22:45:03 crc kubenswrapper[4793]: I0217 22:45:03.924893 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522805-wxn8h" event={"ID":"74cb66a3-a4f9-444a-929d-e1b43431d7fa","Type":"ContainerDied","Data":"ae7b155be57edfb1a8f0c4efe765b20081bcc618b27a3d5877ca6a1d728d6e5e"} Feb 17 22:45:03 crc kubenswrapper[4793]: I0217 22:45:03.924957 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae7b155be57edfb1a8f0c4efe765b20081bcc618b27a3d5877ca6a1d728d6e5e" Feb 17 22:45:03 crc kubenswrapper[4793]: I0217 22:45:03.924996 4793 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522805-wxn8h" Feb 17 22:45:04 crc kubenswrapper[4793]: I0217 22:45:04.432471 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522760-p9f5s"] Feb 17 22:45:04 crc kubenswrapper[4793]: I0217 22:45:04.440953 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522760-p9f5s"] Feb 17 22:45:05 crc kubenswrapper[4793]: I0217 22:45:05.554414 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b9ee751-ab7e-454d-9b2c-5d87483286cb" path="/var/lib/kubelet/pods/2b9ee751-ab7e-454d-9b2c-5d87483286cb/volumes" Feb 17 22:45:11 crc kubenswrapper[4793]: I0217 22:45:11.538980 4793 scope.go:117] "RemoveContainer" containerID="7effb428b37a85fa107ddb95c517f4a4da7a81b8a457012cb359afa8a628e83a" Feb 17 22:45:11 crc kubenswrapper[4793]: E0217 22:45:11.541052 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:45:12 crc kubenswrapper[4793]: I0217 22:45:12.539145 4793 scope.go:117] "RemoveContainer" containerID="0c74a9423aa40029070fceeebe682dd9638e72ccd82a62797ae836d88e125688" Feb 17 22:45:12 crc kubenswrapper[4793]: E0217 22:45:12.539894 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" 
podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:45:18 crc kubenswrapper[4793]: I0217 22:45:18.784576 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-njk4p"] Feb 17 22:45:18 crc kubenswrapper[4793]: E0217 22:45:18.785552 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74cb66a3-a4f9-444a-929d-e1b43431d7fa" containerName="collect-profiles" Feb 17 22:45:18 crc kubenswrapper[4793]: I0217 22:45:18.785567 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="74cb66a3-a4f9-444a-929d-e1b43431d7fa" containerName="collect-profiles" Feb 17 22:45:18 crc kubenswrapper[4793]: I0217 22:45:18.785887 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="74cb66a3-a4f9-444a-929d-e1b43431d7fa" containerName="collect-profiles" Feb 17 22:45:18 crc kubenswrapper[4793]: I0217 22:45:18.795386 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-njk4p" Feb 17 22:45:18 crc kubenswrapper[4793]: I0217 22:45:18.804814 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-njk4p"] Feb 17 22:45:18 crc kubenswrapper[4793]: I0217 22:45:18.888725 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ff908ea-c18b-42a8-a27b-1096edd90567-utilities\") pod \"certified-operators-njk4p\" (UID: \"8ff908ea-c18b-42a8-a27b-1096edd90567\") " pod="openshift-marketplace/certified-operators-njk4p" Feb 17 22:45:18 crc kubenswrapper[4793]: I0217 22:45:18.889080 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ff908ea-c18b-42a8-a27b-1096edd90567-catalog-content\") pod \"certified-operators-njk4p\" (UID: \"8ff908ea-c18b-42a8-a27b-1096edd90567\") " 
pod="openshift-marketplace/certified-operators-njk4p" Feb 17 22:45:18 crc kubenswrapper[4793]: I0217 22:45:18.889148 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnw6q\" (UniqueName: \"kubernetes.io/projected/8ff908ea-c18b-42a8-a27b-1096edd90567-kube-api-access-mnw6q\") pod \"certified-operators-njk4p\" (UID: \"8ff908ea-c18b-42a8-a27b-1096edd90567\") " pod="openshift-marketplace/certified-operators-njk4p" Feb 17 22:45:18 crc kubenswrapper[4793]: I0217 22:45:18.991202 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ff908ea-c18b-42a8-a27b-1096edd90567-catalog-content\") pod \"certified-operators-njk4p\" (UID: \"8ff908ea-c18b-42a8-a27b-1096edd90567\") " pod="openshift-marketplace/certified-operators-njk4p" Feb 17 22:45:18 crc kubenswrapper[4793]: I0217 22:45:18.991271 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnw6q\" (UniqueName: \"kubernetes.io/projected/8ff908ea-c18b-42a8-a27b-1096edd90567-kube-api-access-mnw6q\") pod \"certified-operators-njk4p\" (UID: \"8ff908ea-c18b-42a8-a27b-1096edd90567\") " pod="openshift-marketplace/certified-operators-njk4p" Feb 17 22:45:18 crc kubenswrapper[4793]: I0217 22:45:18.991341 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ff908ea-c18b-42a8-a27b-1096edd90567-utilities\") pod \"certified-operators-njk4p\" (UID: \"8ff908ea-c18b-42a8-a27b-1096edd90567\") " pod="openshift-marketplace/certified-operators-njk4p" Feb 17 22:45:18 crc kubenswrapper[4793]: I0217 22:45:18.991956 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ff908ea-c18b-42a8-a27b-1096edd90567-catalog-content\") pod \"certified-operators-njk4p\" (UID: 
\"8ff908ea-c18b-42a8-a27b-1096edd90567\") " pod="openshift-marketplace/certified-operators-njk4p" Feb 17 22:45:18 crc kubenswrapper[4793]: I0217 22:45:18.991979 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ff908ea-c18b-42a8-a27b-1096edd90567-utilities\") pod \"certified-operators-njk4p\" (UID: \"8ff908ea-c18b-42a8-a27b-1096edd90567\") " pod="openshift-marketplace/certified-operators-njk4p" Feb 17 22:45:19 crc kubenswrapper[4793]: I0217 22:45:19.015653 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnw6q\" (UniqueName: \"kubernetes.io/projected/8ff908ea-c18b-42a8-a27b-1096edd90567-kube-api-access-mnw6q\") pod \"certified-operators-njk4p\" (UID: \"8ff908ea-c18b-42a8-a27b-1096edd90567\") " pod="openshift-marketplace/certified-operators-njk4p" Feb 17 22:45:19 crc kubenswrapper[4793]: I0217 22:45:19.122194 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-njk4p" Feb 17 22:45:19 crc kubenswrapper[4793]: I0217 22:45:19.657625 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-njk4p"] Feb 17 22:45:20 crc kubenswrapper[4793]: I0217 22:45:20.123839 4793 generic.go:334] "Generic (PLEG): container finished" podID="8ff908ea-c18b-42a8-a27b-1096edd90567" containerID="a8d495d6ce9aadedc1116e4b4a80815e2ec4c3630f86634b3d954ab0f6d0c573" exitCode=0 Feb 17 22:45:20 crc kubenswrapper[4793]: I0217 22:45:20.124010 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njk4p" event={"ID":"8ff908ea-c18b-42a8-a27b-1096edd90567","Type":"ContainerDied","Data":"a8d495d6ce9aadedc1116e4b4a80815e2ec4c3630f86634b3d954ab0f6d0c573"} Feb 17 22:45:20 crc kubenswrapper[4793]: I0217 22:45:20.124248 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njk4p" 
event={"ID":"8ff908ea-c18b-42a8-a27b-1096edd90567","Type":"ContainerStarted","Data":"e640aed91ee3bb76eb88c68e2a62a43f2b2b56a49e5c6a8ab068fa276a0408db"} Feb 17 22:45:20 crc kubenswrapper[4793]: I0217 22:45:20.126744 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 22:45:21 crc kubenswrapper[4793]: I0217 22:45:21.140307 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njk4p" event={"ID":"8ff908ea-c18b-42a8-a27b-1096edd90567","Type":"ContainerStarted","Data":"4ce5e6c1d1c35c57cee8afec0fe79b5f2a13fba42983130b0bb88b691deac55a"} Feb 17 22:45:22 crc kubenswrapper[4793]: I0217 22:45:22.157070 4793 generic.go:334] "Generic (PLEG): container finished" podID="8ff908ea-c18b-42a8-a27b-1096edd90567" containerID="4ce5e6c1d1c35c57cee8afec0fe79b5f2a13fba42983130b0bb88b691deac55a" exitCode=0 Feb 17 22:45:22 crc kubenswrapper[4793]: I0217 22:45:22.157199 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njk4p" event={"ID":"8ff908ea-c18b-42a8-a27b-1096edd90567","Type":"ContainerDied","Data":"4ce5e6c1d1c35c57cee8afec0fe79b5f2a13fba42983130b0bb88b691deac55a"} Feb 17 22:45:23 crc kubenswrapper[4793]: I0217 22:45:23.170325 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njk4p" event={"ID":"8ff908ea-c18b-42a8-a27b-1096edd90567","Type":"ContainerStarted","Data":"8990ee404733d72288f2ebf1faccdc6d4fe1de67a6869cc7ca3b43d66e372349"} Feb 17 22:45:23 crc kubenswrapper[4793]: I0217 22:45:23.213210 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-njk4p" podStartSLOduration=2.770269376 podStartE2EDuration="5.21319278s" podCreationTimestamp="2026-02-17 22:45:18 +0000 UTC" firstStartedPulling="2026-02-17 22:45:20.126345579 +0000 UTC m=+9395.418043930" lastFinishedPulling="2026-02-17 22:45:22.569268993 +0000 UTC 
m=+9397.860967334" observedRunningTime="2026-02-17 22:45:23.196574459 +0000 UTC m=+9398.488272810" watchObservedRunningTime="2026-02-17 22:45:23.21319278 +0000 UTC m=+9398.504891091" Feb 17 22:45:26 crc kubenswrapper[4793]: I0217 22:45:26.538640 4793 scope.go:117] "RemoveContainer" containerID="7effb428b37a85fa107ddb95c517f4a4da7a81b8a457012cb359afa8a628e83a" Feb 17 22:45:26 crc kubenswrapper[4793]: I0217 22:45:26.539156 4793 scope.go:117] "RemoveContainer" containerID="0c74a9423aa40029070fceeebe682dd9638e72ccd82a62797ae836d88e125688" Feb 17 22:45:26 crc kubenswrapper[4793]: E0217 22:45:26.539389 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:45:27 crc kubenswrapper[4793]: I0217 22:45:27.218298 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerStarted","Data":"58da4d84549c51f12bb43efcca617dbf8f3f6da534d72c98982539a31c802d41"} Feb 17 22:45:27 crc kubenswrapper[4793]: I0217 22:45:27.663554 4793 scope.go:117] "RemoveContainer" containerID="4227f383a2b7df4a92c54e6b8a121533513b8f35ec7fbb83d73a6b12777f11ee" Feb 17 22:45:29 crc kubenswrapper[4793]: I0217 22:45:29.122569 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-njk4p" Feb 17 22:45:29 crc kubenswrapper[4793]: I0217 22:45:29.123244 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-njk4p" Feb 17 22:45:29 crc kubenswrapper[4793]: I0217 22:45:29.181966 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-njk4p" Feb 17 22:45:29 crc kubenswrapper[4793]: I0217 22:45:29.309284 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-njk4p" Feb 17 22:45:29 crc kubenswrapper[4793]: I0217 22:45:29.434485 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-njk4p"] Feb 17 22:45:31 crc kubenswrapper[4793]: I0217 22:45:31.277458 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-njk4p" podUID="8ff908ea-c18b-42a8-a27b-1096edd90567" containerName="registry-server" containerID="cri-o://8990ee404733d72288f2ebf1faccdc6d4fe1de67a6869cc7ca3b43d66e372349" gracePeriod=2 Feb 17 22:45:31 crc kubenswrapper[4793]: I0217 22:45:31.790684 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-njk4p" Feb 17 22:45:31 crc kubenswrapper[4793]: I0217 22:45:31.919154 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ff908ea-c18b-42a8-a27b-1096edd90567-catalog-content\") pod \"8ff908ea-c18b-42a8-a27b-1096edd90567\" (UID: \"8ff908ea-c18b-42a8-a27b-1096edd90567\") " Feb 17 22:45:31 crc kubenswrapper[4793]: I0217 22:45:31.920494 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ff908ea-c18b-42a8-a27b-1096edd90567-utilities\") pod \"8ff908ea-c18b-42a8-a27b-1096edd90567\" (UID: \"8ff908ea-c18b-42a8-a27b-1096edd90567\") " Feb 17 22:45:31 crc kubenswrapper[4793]: I0217 22:45:31.920719 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnw6q\" (UniqueName: \"kubernetes.io/projected/8ff908ea-c18b-42a8-a27b-1096edd90567-kube-api-access-mnw6q\") pod 
\"8ff908ea-c18b-42a8-a27b-1096edd90567\" (UID: \"8ff908ea-c18b-42a8-a27b-1096edd90567\") " Feb 17 22:45:31 crc kubenswrapper[4793]: I0217 22:45:31.921248 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ff908ea-c18b-42a8-a27b-1096edd90567-utilities" (OuterVolumeSpecName: "utilities") pod "8ff908ea-c18b-42a8-a27b-1096edd90567" (UID: "8ff908ea-c18b-42a8-a27b-1096edd90567"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 22:45:31 crc kubenswrapper[4793]: I0217 22:45:31.921646 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ff908ea-c18b-42a8-a27b-1096edd90567-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 22:45:31 crc kubenswrapper[4793]: I0217 22:45:31.932717 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ff908ea-c18b-42a8-a27b-1096edd90567-kube-api-access-mnw6q" (OuterVolumeSpecName: "kube-api-access-mnw6q") pod "8ff908ea-c18b-42a8-a27b-1096edd90567" (UID: "8ff908ea-c18b-42a8-a27b-1096edd90567"). InnerVolumeSpecName "kube-api-access-mnw6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 22:45:31 crc kubenswrapper[4793]: I0217 22:45:31.979022 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ff908ea-c18b-42a8-a27b-1096edd90567-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8ff908ea-c18b-42a8-a27b-1096edd90567" (UID: "8ff908ea-c18b-42a8-a27b-1096edd90567"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 22:45:32 crc kubenswrapper[4793]: I0217 22:45:32.023820 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ff908ea-c18b-42a8-a27b-1096edd90567-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 22:45:32 crc kubenswrapper[4793]: I0217 22:45:32.023855 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnw6q\" (UniqueName: \"kubernetes.io/projected/8ff908ea-c18b-42a8-a27b-1096edd90567-kube-api-access-mnw6q\") on node \"crc\" DevicePath \"\"" Feb 17 22:45:32 crc kubenswrapper[4793]: I0217 22:45:32.289355 4793 generic.go:334] "Generic (PLEG): container finished" podID="8ff908ea-c18b-42a8-a27b-1096edd90567" containerID="8990ee404733d72288f2ebf1faccdc6d4fe1de67a6869cc7ca3b43d66e372349" exitCode=0 Feb 17 22:45:32 crc kubenswrapper[4793]: I0217 22:45:32.289412 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-njk4p" Feb 17 22:45:32 crc kubenswrapper[4793]: I0217 22:45:32.289414 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njk4p" event={"ID":"8ff908ea-c18b-42a8-a27b-1096edd90567","Type":"ContainerDied","Data":"8990ee404733d72288f2ebf1faccdc6d4fe1de67a6869cc7ca3b43d66e372349"} Feb 17 22:45:32 crc kubenswrapper[4793]: I0217 22:45:32.289543 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njk4p" event={"ID":"8ff908ea-c18b-42a8-a27b-1096edd90567","Type":"ContainerDied","Data":"e640aed91ee3bb76eb88c68e2a62a43f2b2b56a49e5c6a8ab068fa276a0408db"} Feb 17 22:45:32 crc kubenswrapper[4793]: I0217 22:45:32.289571 4793 scope.go:117] "RemoveContainer" containerID="8990ee404733d72288f2ebf1faccdc6d4fe1de67a6869cc7ca3b43d66e372349" Feb 17 22:45:32 crc kubenswrapper[4793]: I0217 22:45:32.322246 4793 scope.go:117] "RemoveContainer" 
containerID="4ce5e6c1d1c35c57cee8afec0fe79b5f2a13fba42983130b0bb88b691deac55a" Feb 17 22:45:32 crc kubenswrapper[4793]: I0217 22:45:32.332395 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-njk4p"] Feb 17 22:45:32 crc kubenswrapper[4793]: I0217 22:45:32.341628 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-njk4p"] Feb 17 22:45:32 crc kubenswrapper[4793]: I0217 22:45:32.357772 4793 scope.go:117] "RemoveContainer" containerID="a8d495d6ce9aadedc1116e4b4a80815e2ec4c3630f86634b3d954ab0f6d0c573" Feb 17 22:45:32 crc kubenswrapper[4793]: I0217 22:45:32.431644 4793 scope.go:117] "RemoveContainer" containerID="8990ee404733d72288f2ebf1faccdc6d4fe1de67a6869cc7ca3b43d66e372349" Feb 17 22:45:32 crc kubenswrapper[4793]: E0217 22:45:32.432534 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8990ee404733d72288f2ebf1faccdc6d4fe1de67a6869cc7ca3b43d66e372349\": container with ID starting with 8990ee404733d72288f2ebf1faccdc6d4fe1de67a6869cc7ca3b43d66e372349 not found: ID does not exist" containerID="8990ee404733d72288f2ebf1faccdc6d4fe1de67a6869cc7ca3b43d66e372349" Feb 17 22:45:32 crc kubenswrapper[4793]: I0217 22:45:32.432586 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8990ee404733d72288f2ebf1faccdc6d4fe1de67a6869cc7ca3b43d66e372349"} err="failed to get container status \"8990ee404733d72288f2ebf1faccdc6d4fe1de67a6869cc7ca3b43d66e372349\": rpc error: code = NotFound desc = could not find container \"8990ee404733d72288f2ebf1faccdc6d4fe1de67a6869cc7ca3b43d66e372349\": container with ID starting with 8990ee404733d72288f2ebf1faccdc6d4fe1de67a6869cc7ca3b43d66e372349 not found: ID does not exist" Feb 17 22:45:32 crc kubenswrapper[4793]: I0217 22:45:32.432617 4793 scope.go:117] "RemoveContainer" 
containerID="4ce5e6c1d1c35c57cee8afec0fe79b5f2a13fba42983130b0bb88b691deac55a" Feb 17 22:45:32 crc kubenswrapper[4793]: E0217 22:45:32.433082 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ce5e6c1d1c35c57cee8afec0fe79b5f2a13fba42983130b0bb88b691deac55a\": container with ID starting with 4ce5e6c1d1c35c57cee8afec0fe79b5f2a13fba42983130b0bb88b691deac55a not found: ID does not exist" containerID="4ce5e6c1d1c35c57cee8afec0fe79b5f2a13fba42983130b0bb88b691deac55a" Feb 17 22:45:32 crc kubenswrapper[4793]: I0217 22:45:32.433110 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ce5e6c1d1c35c57cee8afec0fe79b5f2a13fba42983130b0bb88b691deac55a"} err="failed to get container status \"4ce5e6c1d1c35c57cee8afec0fe79b5f2a13fba42983130b0bb88b691deac55a\": rpc error: code = NotFound desc = could not find container \"4ce5e6c1d1c35c57cee8afec0fe79b5f2a13fba42983130b0bb88b691deac55a\": container with ID starting with 4ce5e6c1d1c35c57cee8afec0fe79b5f2a13fba42983130b0bb88b691deac55a not found: ID does not exist" Feb 17 22:45:32 crc kubenswrapper[4793]: I0217 22:45:32.433131 4793 scope.go:117] "RemoveContainer" containerID="a8d495d6ce9aadedc1116e4b4a80815e2ec4c3630f86634b3d954ab0f6d0c573" Feb 17 22:45:32 crc kubenswrapper[4793]: E0217 22:45:32.433394 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8d495d6ce9aadedc1116e4b4a80815e2ec4c3630f86634b3d954ab0f6d0c573\": container with ID starting with a8d495d6ce9aadedc1116e4b4a80815e2ec4c3630f86634b3d954ab0f6d0c573 not found: ID does not exist" containerID="a8d495d6ce9aadedc1116e4b4a80815e2ec4c3630f86634b3d954ab0f6d0c573" Feb 17 22:45:32 crc kubenswrapper[4793]: I0217 22:45:32.433418 4793 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a8d495d6ce9aadedc1116e4b4a80815e2ec4c3630f86634b3d954ab0f6d0c573"} err="failed to get container status \"a8d495d6ce9aadedc1116e4b4a80815e2ec4c3630f86634b3d954ab0f6d0c573\": rpc error: code = NotFound desc = could not find container \"a8d495d6ce9aadedc1116e4b4a80815e2ec4c3630f86634b3d954ab0f6d0c573\": container with ID starting with a8d495d6ce9aadedc1116e4b4a80815e2ec4c3630f86634b3d954ab0f6d0c573 not found: ID does not exist" Feb 17 22:45:33 crc kubenswrapper[4793]: I0217 22:45:33.557304 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ff908ea-c18b-42a8-a27b-1096edd90567" path="/var/lib/kubelet/pods/8ff908ea-c18b-42a8-a27b-1096edd90567/volumes" Feb 17 22:45:40 crc kubenswrapper[4793]: I0217 22:45:40.539085 4793 scope.go:117] "RemoveContainer" containerID="7effb428b37a85fa107ddb95c517f4a4da7a81b8a457012cb359afa8a628e83a" Feb 17 22:45:40 crc kubenswrapper[4793]: E0217 22:45:40.540071 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:45:51 crc kubenswrapper[4793]: I0217 22:45:51.540016 4793 scope.go:117] "RemoveContainer" containerID="7effb428b37a85fa107ddb95c517f4a4da7a81b8a457012cb359afa8a628e83a" Feb 17 22:45:51 crc kubenswrapper[4793]: E0217 22:45:51.541489 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:46:03 crc kubenswrapper[4793]: I0217 22:46:03.539605 4793 scope.go:117] "RemoveContainer" 
containerID="7effb428b37a85fa107ddb95c517f4a4da7a81b8a457012cb359afa8a628e83a" Feb 17 22:46:03 crc kubenswrapper[4793]: E0217 22:46:03.540823 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:46:17 crc kubenswrapper[4793]: I0217 22:46:17.539018 4793 scope.go:117] "RemoveContainer" containerID="7effb428b37a85fa107ddb95c517f4a4da7a81b8a457012cb359afa8a628e83a" Feb 17 22:46:17 crc kubenswrapper[4793]: E0217 22:46:17.541020 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:46:32 crc kubenswrapper[4793]: I0217 22:46:32.539624 4793 scope.go:117] "RemoveContainer" containerID="7effb428b37a85fa107ddb95c517f4a4da7a81b8a457012cb359afa8a628e83a" Feb 17 22:46:32 crc kubenswrapper[4793]: E0217 22:46:32.540863 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:46:44 crc kubenswrapper[4793]: I0217 22:46:44.538816 4793 scope.go:117] "RemoveContainer" containerID="7effb428b37a85fa107ddb95c517f4a4da7a81b8a457012cb359afa8a628e83a" Feb 17 22:46:44 crc kubenswrapper[4793]: E0217 22:46:44.539743 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:46:55 crc kubenswrapper[4793]: I0217 22:46:55.558276 4793 scope.go:117] "RemoveContainer" containerID="7effb428b37a85fa107ddb95c517f4a4da7a81b8a457012cb359afa8a628e83a" Feb 17 22:46:55 crc kubenswrapper[4793]: E0217 22:46:55.559190 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:47:08 crc kubenswrapper[4793]: I0217 22:47:08.540282 4793 scope.go:117] "RemoveContainer" containerID="7effb428b37a85fa107ddb95c517f4a4da7a81b8a457012cb359afa8a628e83a" Feb 17 22:47:08 crc kubenswrapper[4793]: E0217 22:47:08.541381 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:47:20 crc kubenswrapper[4793]: I0217 22:47:20.538862 4793 scope.go:117] "RemoveContainer" containerID="7effb428b37a85fa107ddb95c517f4a4da7a81b8a457012cb359afa8a628e83a" Feb 17 22:47:20 crc kubenswrapper[4793]: E0217 22:47:20.539974 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" 
pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:47:33 crc kubenswrapper[4793]: I0217 22:47:33.539300 4793 scope.go:117] "RemoveContainer" containerID="7effb428b37a85fa107ddb95c517f4a4da7a81b8a457012cb359afa8a628e83a" Feb 17 22:47:33 crc kubenswrapper[4793]: E0217 22:47:33.541669 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:47:45 crc kubenswrapper[4793]: I0217 22:47:45.559164 4793 scope.go:117] "RemoveContainer" containerID="7effb428b37a85fa107ddb95c517f4a4da7a81b8a457012cb359afa8a628e83a" Feb 17 22:47:45 crc kubenswrapper[4793]: E0217 22:47:45.561013 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:47:50 crc kubenswrapper[4793]: I0217 22:47:50.102565 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 22:47:50 crc kubenswrapper[4793]: I0217 22:47:50.103247 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Feb 17 22:47:57 crc kubenswrapper[4793]: I0217 22:47:57.538833 4793 scope.go:117] "RemoveContainer" containerID="7effb428b37a85fa107ddb95c517f4a4da7a81b8a457012cb359afa8a628e83a" Feb 17 22:47:57 crc kubenswrapper[4793]: I0217 22:47:57.954347 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerStarted","Data":"c6f9c711df3c4ca58e42b0fc0fc0803872a47d1700eb26631b0e7dc158268c70"} Feb 17 22:48:00 crc kubenswrapper[4793]: I0217 22:48:00.596518 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 17 22:48:01 crc kubenswrapper[4793]: I0217 22:48:01.011765 4793 generic.go:334] "Generic (PLEG): container finished" podID="02d26164-0fa4-4020-9224-b7760a490987" containerID="c6f9c711df3c4ca58e42b0fc0fc0803872a47d1700eb26631b0e7dc158268c70" exitCode=1 Feb 17 22:48:01 crc kubenswrapper[4793]: I0217 22:48:01.011853 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerDied","Data":"c6f9c711df3c4ca58e42b0fc0fc0803872a47d1700eb26631b0e7dc158268c70"} Feb 17 22:48:01 crc kubenswrapper[4793]: I0217 22:48:01.011962 4793 scope.go:117] "RemoveContainer" containerID="7effb428b37a85fa107ddb95c517f4a4da7a81b8a457012cb359afa8a628e83a" Feb 17 22:48:01 crc kubenswrapper[4793]: I0217 22:48:01.013790 4793 scope.go:117] "RemoveContainer" containerID="c6f9c711df3c4ca58e42b0fc0fc0803872a47d1700eb26631b0e7dc158268c70" Feb 17 22:48:01 crc kubenswrapper[4793]: E0217 22:48:01.014374 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" 
Feb 17 22:48:05 crc kubenswrapper[4793]: I0217 22:48:05.596005 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 22:48:05 crc kubenswrapper[4793]: I0217 22:48:05.596624 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 22:48:05 crc kubenswrapper[4793]: I0217 22:48:05.596657 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 22:48:05 crc kubenswrapper[4793]: I0217 22:48:05.597596 4793 scope.go:117] "RemoveContainer" containerID="c6f9c711df3c4ca58e42b0fc0fc0803872a47d1700eb26631b0e7dc158268c70" Feb 17 22:48:05 crc kubenswrapper[4793]: E0217 22:48:05.597997 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:48:20 crc kubenswrapper[4793]: I0217 22:48:20.102419 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 22:48:20 crc kubenswrapper[4793]: I0217 22:48:20.103358 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 22:48:20 crc kubenswrapper[4793]: I0217 22:48:20.538614 4793 scope.go:117] "RemoveContainer" 
containerID="c6f9c711df3c4ca58e42b0fc0fc0803872a47d1700eb26631b0e7dc158268c70"
Feb 17 22:48:20 crc kubenswrapper[4793]: E0217 22:48:20.538888 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:48:32 crc kubenswrapper[4793]: I0217 22:48:32.539332 4793 scope.go:117] "RemoveContainer" containerID="c6f9c711df3c4ca58e42b0fc0fc0803872a47d1700eb26631b0e7dc158268c70"
Feb 17 22:48:32 crc kubenswrapper[4793]: E0217 22:48:32.540317 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:48:44 crc kubenswrapper[4793]: I0217 22:48:44.539319 4793 scope.go:117] "RemoveContainer" containerID="c6f9c711df3c4ca58e42b0fc0fc0803872a47d1700eb26631b0e7dc158268c70"
Feb 17 22:48:44 crc kubenswrapper[4793]: E0217 22:48:44.540285 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:48:50 crc kubenswrapper[4793]: I0217 22:48:50.102345 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 22:48:50 crc kubenswrapper[4793]: I0217 22:48:50.104489 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 22:48:50 crc kubenswrapper[4793]: I0217 22:48:50.104741 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf"
Feb 17 22:48:50 crc kubenswrapper[4793]: I0217 22:48:50.106022 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"58da4d84549c51f12bb43efcca617dbf8f3f6da534d72c98982539a31c802d41"} pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 17 22:48:50 crc kubenswrapper[4793]: I0217 22:48:50.106308 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" containerID="cri-o://58da4d84549c51f12bb43efcca617dbf8f3f6da534d72c98982539a31c802d41" gracePeriod=600
Feb 17 22:48:50 crc kubenswrapper[4793]: I0217 22:48:50.596064 4793 generic.go:334] "Generic (PLEG): container finished" podID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerID="58da4d84549c51f12bb43efcca617dbf8f3f6da534d72c98982539a31c802d41" exitCode=0
Feb 17 22:48:50 crc kubenswrapper[4793]: I0217 22:48:50.596185 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerDied","Data":"58da4d84549c51f12bb43efcca617dbf8f3f6da534d72c98982539a31c802d41"}
Feb 17 22:48:50 crc kubenswrapper[4793]: I0217 22:48:50.596742 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerStarted","Data":"932a952c18bcb817140c8254f8f293b9f8150290160a4bfaff70ea8d99481a97"}
Feb 17 22:48:50 crc kubenswrapper[4793]: I0217 22:48:50.596784 4793 scope.go:117] "RemoveContainer" containerID="0c74a9423aa40029070fceeebe682dd9638e72ccd82a62797ae836d88e125688"
Feb 17 22:48:55 crc kubenswrapper[4793]: I0217 22:48:55.539924 4793 scope.go:117] "RemoveContainer" containerID="c6f9c711df3c4ca58e42b0fc0fc0803872a47d1700eb26631b0e7dc158268c70"
Feb 17 22:48:55 crc kubenswrapper[4793]: E0217 22:48:55.541146 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:49:10 crc kubenswrapper[4793]: I0217 22:49:10.539162 4793 scope.go:117] "RemoveContainer" containerID="c6f9c711df3c4ca58e42b0fc0fc0803872a47d1700eb26631b0e7dc158268c70"
Feb 17 22:49:10 crc kubenswrapper[4793]: E0217 22:49:10.540340 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:49:25 crc kubenswrapper[4793]: I0217 22:49:25.547516 4793 scope.go:117] "RemoveContainer" containerID="c6f9c711df3c4ca58e42b0fc0fc0803872a47d1700eb26631b0e7dc158268c70"
Feb 17 22:49:25 crc kubenswrapper[4793]: E0217 22:49:25.548324 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:49:40 crc kubenswrapper[4793]: I0217 22:49:40.539684 4793 scope.go:117] "RemoveContainer" containerID="c6f9c711df3c4ca58e42b0fc0fc0803872a47d1700eb26631b0e7dc158268c70"
Feb 17 22:49:40 crc kubenswrapper[4793]: E0217 22:49:40.540812 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:49:56 crc kubenswrapper[4793]: I0217 22:49:56.539420 4793 scope.go:117] "RemoveContainer" containerID="c6f9c711df3c4ca58e42b0fc0fc0803872a47d1700eb26631b0e7dc158268c70"
Feb 17 22:49:56 crc kubenswrapper[4793]: E0217 22:49:56.540903 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:50:11 crc kubenswrapper[4793]: I0217 22:50:11.539641 4793 scope.go:117] "RemoveContainer" containerID="c6f9c711df3c4ca58e42b0fc0fc0803872a47d1700eb26631b0e7dc158268c70"
Feb 17 22:50:11 crc kubenswrapper[4793]: E0217 22:50:11.540612 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:50:22 crc kubenswrapper[4793]: I0217 22:50:22.516181 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5b5mw"]
Feb 17 22:50:22 crc kubenswrapper[4793]: E0217 22:50:22.517289 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ff908ea-c18b-42a8-a27b-1096edd90567" containerName="extract-content"
Feb 17 22:50:22 crc kubenswrapper[4793]: I0217 22:50:22.517307 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ff908ea-c18b-42a8-a27b-1096edd90567" containerName="extract-content"
Feb 17 22:50:22 crc kubenswrapper[4793]: E0217 22:50:22.517387 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ff908ea-c18b-42a8-a27b-1096edd90567" containerName="registry-server"
Feb 17 22:50:22 crc kubenswrapper[4793]: I0217 22:50:22.517396 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ff908ea-c18b-42a8-a27b-1096edd90567" containerName="registry-server"
Feb 17 22:50:22 crc kubenswrapper[4793]: E0217 22:50:22.517415 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ff908ea-c18b-42a8-a27b-1096edd90567" containerName="extract-utilities"
Feb 17 22:50:22 crc kubenswrapper[4793]: I0217 22:50:22.517423 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ff908ea-c18b-42a8-a27b-1096edd90567" containerName="extract-utilities"
Feb 17 22:50:22 crc kubenswrapper[4793]: I0217 22:50:22.517728 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ff908ea-c18b-42a8-a27b-1096edd90567" containerName="registry-server"
Feb 17 22:50:22 crc kubenswrapper[4793]: I0217 22:50:22.519627 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5b5mw"
Feb 17 22:50:22 crc kubenswrapper[4793]: I0217 22:50:22.536406 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5b5mw"]
Feb 17 22:50:22 crc kubenswrapper[4793]: I0217 22:50:22.539328 4793 scope.go:117] "RemoveContainer" containerID="c6f9c711df3c4ca58e42b0fc0fc0803872a47d1700eb26631b0e7dc158268c70"
Feb 17 22:50:22 crc kubenswrapper[4793]: E0217 22:50:22.539546 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:50:22 crc kubenswrapper[4793]: I0217 22:50:22.690683 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d6eab2b-77b5-4385-90d7-f389d3de3b04-utilities\") pod \"redhat-marketplace-5b5mw\" (UID: \"4d6eab2b-77b5-4385-90d7-f389d3de3b04\") " pod="openshift-marketplace/redhat-marketplace-5b5mw"
Feb 17 22:50:22 crc kubenswrapper[4793]: I0217 22:50:22.690858 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d6eab2b-77b5-4385-90d7-f389d3de3b04-catalog-content\") pod \"redhat-marketplace-5b5mw\" (UID: \"4d6eab2b-77b5-4385-90d7-f389d3de3b04\") " pod="openshift-marketplace/redhat-marketplace-5b5mw"
Feb 17 22:50:22 crc kubenswrapper[4793]: I0217 22:50:22.691041 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncrzw\" (UniqueName: \"kubernetes.io/projected/4d6eab2b-77b5-4385-90d7-f389d3de3b04-kube-api-access-ncrzw\") pod \"redhat-marketplace-5b5mw\" (UID: \"4d6eab2b-77b5-4385-90d7-f389d3de3b04\") " pod="openshift-marketplace/redhat-marketplace-5b5mw"
Feb 17 22:50:22 crc kubenswrapper[4793]: I0217 22:50:22.792586 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrzw\" (UniqueName: \"kubernetes.io/projected/4d6eab2b-77b5-4385-90d7-f389d3de3b04-kube-api-access-ncrzw\") pod \"redhat-marketplace-5b5mw\" (UID: \"4d6eab2b-77b5-4385-90d7-f389d3de3b04\") " pod="openshift-marketplace/redhat-marketplace-5b5mw"
Feb 17 22:50:22 crc kubenswrapper[4793]: I0217 22:50:22.793009 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d6eab2b-77b5-4385-90d7-f389d3de3b04-utilities\") pod \"redhat-marketplace-5b5mw\" (UID: \"4d6eab2b-77b5-4385-90d7-f389d3de3b04\") " pod="openshift-marketplace/redhat-marketplace-5b5mw"
Feb 17 22:50:22 crc kubenswrapper[4793]: I0217 22:50:22.793405 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d6eab2b-77b5-4385-90d7-f389d3de3b04-catalog-content\") pod \"redhat-marketplace-5b5mw\" (UID: \"4d6eab2b-77b5-4385-90d7-f389d3de3b04\") " pod="openshift-marketplace/redhat-marketplace-5b5mw"
Feb 17 22:50:22 crc kubenswrapper[4793]: I0217 22:50:22.793610 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d6eab2b-77b5-4385-90d7-f389d3de3b04-utilities\") pod \"redhat-marketplace-5b5mw\" (UID: \"4d6eab2b-77b5-4385-90d7-f389d3de3b04\") " pod="openshift-marketplace/redhat-marketplace-5b5mw"
Feb 17 22:50:22 crc kubenswrapper[4793]: I0217 22:50:22.793816 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d6eab2b-77b5-4385-90d7-f389d3de3b04-catalog-content\") pod \"redhat-marketplace-5b5mw\" (UID: \"4d6eab2b-77b5-4385-90d7-f389d3de3b04\") " pod="openshift-marketplace/redhat-marketplace-5b5mw"
Feb 17 22:50:22 crc kubenswrapper[4793]: I0217 22:50:22.828129 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncrzw\" (UniqueName: \"kubernetes.io/projected/4d6eab2b-77b5-4385-90d7-f389d3de3b04-kube-api-access-ncrzw\") pod \"redhat-marketplace-5b5mw\" (UID: \"4d6eab2b-77b5-4385-90d7-f389d3de3b04\") " pod="openshift-marketplace/redhat-marketplace-5b5mw"
Feb 17 22:50:22 crc kubenswrapper[4793]: I0217 22:50:22.855924 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5b5mw"
Feb 17 22:50:23 crc kubenswrapper[4793]: I0217 22:50:23.338089 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5b5mw"]
Feb 17 22:50:23 crc kubenswrapper[4793]: I0217 22:50:23.704881 4793 generic.go:334] "Generic (PLEG): container finished" podID="4d6eab2b-77b5-4385-90d7-f389d3de3b04" containerID="a4c4d659af7d4e6ddd4ded079608154bc9069362dce951fd306f1e4aac66e908" exitCode=0
Feb 17 22:50:23 crc kubenswrapper[4793]: I0217 22:50:23.705430 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5b5mw" event={"ID":"4d6eab2b-77b5-4385-90d7-f389d3de3b04","Type":"ContainerDied","Data":"a4c4d659af7d4e6ddd4ded079608154bc9069362dce951fd306f1e4aac66e908"}
Feb 17 22:50:23 crc kubenswrapper[4793]: I0217 22:50:23.705470 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5b5mw" event={"ID":"4d6eab2b-77b5-4385-90d7-f389d3de3b04","Type":"ContainerStarted","Data":"01269cd18fcc799393fcf6fdc21de7a14ebe1e674835b150f506aebdb9baaade"}
Feb 17 22:50:23 crc kubenswrapper[4793]: I0217 22:50:23.707921 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 17 22:50:25 crc kubenswrapper[4793]: I0217 22:50:25.730631 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5b5mw" event={"ID":"4d6eab2b-77b5-4385-90d7-f389d3de3b04","Type":"ContainerStarted","Data":"2350c1314f625f757057481fb0e7b0b56fd1013883a9bb9ffcc27b237261cdcf"}
Feb 17 22:50:26 crc kubenswrapper[4793]: I0217 22:50:26.742005 4793 generic.go:334] "Generic (PLEG): container finished" podID="4d6eab2b-77b5-4385-90d7-f389d3de3b04" containerID="2350c1314f625f757057481fb0e7b0b56fd1013883a9bb9ffcc27b237261cdcf" exitCode=0
Feb 17 22:50:26 crc kubenswrapper[4793]: I0217 22:50:26.742049 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5b5mw" event={"ID":"4d6eab2b-77b5-4385-90d7-f389d3de3b04","Type":"ContainerDied","Data":"2350c1314f625f757057481fb0e7b0b56fd1013883a9bb9ffcc27b237261cdcf"}
Feb 17 22:50:27 crc kubenswrapper[4793]: I0217 22:50:27.757441 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5b5mw" event={"ID":"4d6eab2b-77b5-4385-90d7-f389d3de3b04","Type":"ContainerStarted","Data":"9ac7ba0509552599fe72237978014064d884ac36e471fb97c6e4f5e9abb1fdb0"}
Feb 17 22:50:27 crc kubenswrapper[4793]: I0217 22:50:27.781981 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5b5mw" podStartSLOduration=2.317468382 podStartE2EDuration="5.781957486s" podCreationTimestamp="2026-02-17 22:50:22 +0000 UTC" firstStartedPulling="2026-02-17 22:50:23.707557287 +0000 UTC m=+9698.999255608" lastFinishedPulling="2026-02-17 22:50:27.172046391 +0000 UTC m=+9702.463744712" observedRunningTime="2026-02-17 22:50:27.776001559 +0000 UTC m=+9703.067699910" watchObservedRunningTime="2026-02-17 22:50:27.781957486 +0000 UTC m=+9703.073655807"
Feb 17 22:50:32 crc kubenswrapper[4793]: I0217 22:50:32.858290 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5b5mw"
Feb 17 22:50:32 crc kubenswrapper[4793]: I0217 22:50:32.858955 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5b5mw"
Feb 17 22:50:32 crc kubenswrapper[4793]: I0217 22:50:32.938196 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5b5mw"
Feb 17 22:50:33 crc kubenswrapper[4793]: I0217 22:50:33.912757 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5b5mw"
Feb 17 22:50:33 crc kubenswrapper[4793]: I0217 22:50:33.974044 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5b5mw"]
Feb 17 22:50:35 crc kubenswrapper[4793]: I0217 22:50:35.547882 4793 scope.go:117] "RemoveContainer" containerID="c6f9c711df3c4ca58e42b0fc0fc0803872a47d1700eb26631b0e7dc158268c70"
Feb 17 22:50:35 crc kubenswrapper[4793]: E0217 22:50:35.548614 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:50:35 crc kubenswrapper[4793]: I0217 22:50:35.857037 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5b5mw" podUID="4d6eab2b-77b5-4385-90d7-f389d3de3b04" containerName="registry-server" containerID="cri-o://9ac7ba0509552599fe72237978014064d884ac36e471fb97c6e4f5e9abb1fdb0" gracePeriod=2
Feb 17 22:50:36 crc kubenswrapper[4793]: I0217 22:50:36.314792 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5b5mw"
Feb 17 22:50:36 crc kubenswrapper[4793]: I0217 22:50:36.400287 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d6eab2b-77b5-4385-90d7-f389d3de3b04-utilities\") pod \"4d6eab2b-77b5-4385-90d7-f389d3de3b04\" (UID: \"4d6eab2b-77b5-4385-90d7-f389d3de3b04\") "
Feb 17 22:50:36 crc kubenswrapper[4793]: I0217 22:50:36.400394 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d6eab2b-77b5-4385-90d7-f389d3de3b04-catalog-content\") pod \"4d6eab2b-77b5-4385-90d7-f389d3de3b04\" (UID: \"4d6eab2b-77b5-4385-90d7-f389d3de3b04\") "
Feb 17 22:50:36 crc kubenswrapper[4793]: I0217 22:50:36.400503 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncrzw\" (UniqueName: \"kubernetes.io/projected/4d6eab2b-77b5-4385-90d7-f389d3de3b04-kube-api-access-ncrzw\") pod \"4d6eab2b-77b5-4385-90d7-f389d3de3b04\" (UID: \"4d6eab2b-77b5-4385-90d7-f389d3de3b04\") "
Feb 17 22:50:36 crc kubenswrapper[4793]: I0217 22:50:36.401063 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d6eab2b-77b5-4385-90d7-f389d3de3b04-utilities" (OuterVolumeSpecName: "utilities") pod "4d6eab2b-77b5-4385-90d7-f389d3de3b04" (UID: "4d6eab2b-77b5-4385-90d7-f389d3de3b04"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 22:50:36 crc kubenswrapper[4793]: I0217 22:50:36.401370 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d6eab2b-77b5-4385-90d7-f389d3de3b04-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 22:50:36 crc kubenswrapper[4793]: I0217 22:50:36.407112 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d6eab2b-77b5-4385-90d7-f389d3de3b04-kube-api-access-ncrzw" (OuterVolumeSpecName: "kube-api-access-ncrzw") pod "4d6eab2b-77b5-4385-90d7-f389d3de3b04" (UID: "4d6eab2b-77b5-4385-90d7-f389d3de3b04"). InnerVolumeSpecName "kube-api-access-ncrzw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 22:50:36 crc kubenswrapper[4793]: I0217 22:50:36.426990 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d6eab2b-77b5-4385-90d7-f389d3de3b04-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4d6eab2b-77b5-4385-90d7-f389d3de3b04" (UID: "4d6eab2b-77b5-4385-90d7-f389d3de3b04"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 22:50:36 crc kubenswrapper[4793]: I0217 22:50:36.503533 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d6eab2b-77b5-4385-90d7-f389d3de3b04-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 22:50:36 crc kubenswrapper[4793]: I0217 22:50:36.503568 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncrzw\" (UniqueName: \"kubernetes.io/projected/4d6eab2b-77b5-4385-90d7-f389d3de3b04-kube-api-access-ncrzw\") on node \"crc\" DevicePath \"\""
Feb 17 22:50:36 crc kubenswrapper[4793]: I0217 22:50:36.870941 4793 generic.go:334] "Generic (PLEG): container finished" podID="4d6eab2b-77b5-4385-90d7-f389d3de3b04" containerID="9ac7ba0509552599fe72237978014064d884ac36e471fb97c6e4f5e9abb1fdb0" exitCode=0
Feb 17 22:50:36 crc kubenswrapper[4793]: I0217 22:50:36.870984 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5b5mw" event={"ID":"4d6eab2b-77b5-4385-90d7-f389d3de3b04","Type":"ContainerDied","Data":"9ac7ba0509552599fe72237978014064d884ac36e471fb97c6e4f5e9abb1fdb0"}
Feb 17 22:50:36 crc kubenswrapper[4793]: I0217 22:50:36.871037 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5b5mw"
Feb 17 22:50:36 crc kubenswrapper[4793]: I0217 22:50:36.871082 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5b5mw" event={"ID":"4d6eab2b-77b5-4385-90d7-f389d3de3b04","Type":"ContainerDied","Data":"01269cd18fcc799393fcf6fdc21de7a14ebe1e674835b150f506aebdb9baaade"}
Feb 17 22:50:36 crc kubenswrapper[4793]: I0217 22:50:36.871111 4793 scope.go:117] "RemoveContainer" containerID="9ac7ba0509552599fe72237978014064d884ac36e471fb97c6e4f5e9abb1fdb0"
Feb 17 22:50:36 crc kubenswrapper[4793]: I0217 22:50:36.907951 4793 scope.go:117] "RemoveContainer" containerID="2350c1314f625f757057481fb0e7b0b56fd1013883a9bb9ffcc27b237261cdcf"
Feb 17 22:50:36 crc kubenswrapper[4793]: I0217 22:50:36.930429 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5b5mw"]
Feb 17 22:50:36 crc kubenswrapper[4793]: I0217 22:50:36.940962 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5b5mw"]
Feb 17 22:50:36 crc kubenswrapper[4793]: I0217 22:50:36.948553 4793 scope.go:117] "RemoveContainer" containerID="a4c4d659af7d4e6ddd4ded079608154bc9069362dce951fd306f1e4aac66e908"
Feb 17 22:50:37 crc kubenswrapper[4793]: I0217 22:50:37.025582 4793 scope.go:117] "RemoveContainer" containerID="9ac7ba0509552599fe72237978014064d884ac36e471fb97c6e4f5e9abb1fdb0"
Feb 17 22:50:37 crc kubenswrapper[4793]: E0217 22:50:37.026133 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ac7ba0509552599fe72237978014064d884ac36e471fb97c6e4f5e9abb1fdb0\": container with ID starting with 9ac7ba0509552599fe72237978014064d884ac36e471fb97c6e4f5e9abb1fdb0 not found: ID does not exist" containerID="9ac7ba0509552599fe72237978014064d884ac36e471fb97c6e4f5e9abb1fdb0"
Feb 17 22:50:37 crc kubenswrapper[4793]: I0217 22:50:37.026203 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ac7ba0509552599fe72237978014064d884ac36e471fb97c6e4f5e9abb1fdb0"} err="failed to get container status \"9ac7ba0509552599fe72237978014064d884ac36e471fb97c6e4f5e9abb1fdb0\": rpc error: code = NotFound desc = could not find container \"9ac7ba0509552599fe72237978014064d884ac36e471fb97c6e4f5e9abb1fdb0\": container with ID starting with 9ac7ba0509552599fe72237978014064d884ac36e471fb97c6e4f5e9abb1fdb0 not found: ID does not exist"
Feb 17 22:50:37 crc kubenswrapper[4793]: I0217 22:50:37.026243 4793 scope.go:117] "RemoveContainer" containerID="2350c1314f625f757057481fb0e7b0b56fd1013883a9bb9ffcc27b237261cdcf"
Feb 17 22:50:37 crc kubenswrapper[4793]: E0217 22:50:37.026849 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2350c1314f625f757057481fb0e7b0b56fd1013883a9bb9ffcc27b237261cdcf\": container with ID starting with 2350c1314f625f757057481fb0e7b0b56fd1013883a9bb9ffcc27b237261cdcf not found: ID does not exist" containerID="2350c1314f625f757057481fb0e7b0b56fd1013883a9bb9ffcc27b237261cdcf"
Feb 17 22:50:37 crc kubenswrapper[4793]: I0217 22:50:37.026896 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2350c1314f625f757057481fb0e7b0b56fd1013883a9bb9ffcc27b237261cdcf"} err="failed to get container status \"2350c1314f625f757057481fb0e7b0b56fd1013883a9bb9ffcc27b237261cdcf\": rpc error: code = NotFound desc = could not find container \"2350c1314f625f757057481fb0e7b0b56fd1013883a9bb9ffcc27b237261cdcf\": container with ID starting with 2350c1314f625f757057481fb0e7b0b56fd1013883a9bb9ffcc27b237261cdcf not found: ID does not exist"
Feb 17 22:50:37 crc kubenswrapper[4793]: I0217 22:50:37.026928 4793 scope.go:117] "RemoveContainer" containerID="a4c4d659af7d4e6ddd4ded079608154bc9069362dce951fd306f1e4aac66e908"
Feb 17 22:50:37 crc kubenswrapper[4793]: E0217 22:50:37.027267 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4c4d659af7d4e6ddd4ded079608154bc9069362dce951fd306f1e4aac66e908\": container with ID starting with a4c4d659af7d4e6ddd4ded079608154bc9069362dce951fd306f1e4aac66e908 not found: ID does not exist" containerID="a4c4d659af7d4e6ddd4ded079608154bc9069362dce951fd306f1e4aac66e908"
Feb 17 22:50:37 crc kubenswrapper[4793]: I0217 22:50:37.027380 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4c4d659af7d4e6ddd4ded079608154bc9069362dce951fd306f1e4aac66e908"} err="failed to get container status \"a4c4d659af7d4e6ddd4ded079608154bc9069362dce951fd306f1e4aac66e908\": rpc error: code = NotFound desc = could not find container \"a4c4d659af7d4e6ddd4ded079608154bc9069362dce951fd306f1e4aac66e908\": container with ID starting with a4c4d659af7d4e6ddd4ded079608154bc9069362dce951fd306f1e4aac66e908 not found: ID does not exist"
Feb 17 22:50:37 crc kubenswrapper[4793]: I0217 22:50:37.562449 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d6eab2b-77b5-4385-90d7-f389d3de3b04" path="/var/lib/kubelet/pods/4d6eab2b-77b5-4385-90d7-f389d3de3b04/volumes"
Feb 17 22:50:46 crc kubenswrapper[4793]: I0217 22:50:46.538887 4793 scope.go:117] "RemoveContainer" containerID="c6f9c711df3c4ca58e42b0fc0fc0803872a47d1700eb26631b0e7dc158268c70"
Feb 17 22:50:46 crc kubenswrapper[4793]: E0217 22:50:46.539982 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:50:50 crc kubenswrapper[4793]: I0217 22:50:50.102471 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 22:50:50 crc kubenswrapper[4793]: I0217 22:50:50.103226 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 22:50:59 crc kubenswrapper[4793]: I0217 22:50:59.540369 4793 scope.go:117] "RemoveContainer" containerID="c6f9c711df3c4ca58e42b0fc0fc0803872a47d1700eb26631b0e7dc158268c70"
Feb 17 22:50:59 crc kubenswrapper[4793]: E0217 22:50:59.543463 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:51:12 crc kubenswrapper[4793]: I0217 22:51:12.348795 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xspmx"]
Feb 17 22:51:12 crc kubenswrapper[4793]: E0217 22:51:12.350269 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d6eab2b-77b5-4385-90d7-f389d3de3b04" containerName="registry-server"
Feb 17 22:51:12 crc kubenswrapper[4793]: I0217 22:51:12.350300 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d6eab2b-77b5-4385-90d7-f389d3de3b04" containerName="registry-server"
Feb 17 22:51:12 crc kubenswrapper[4793]: E0217 22:51:12.350368 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d6eab2b-77b5-4385-90d7-f389d3de3b04" containerName="extract-content"
Feb 17 22:51:12 crc kubenswrapper[4793]: I0217 22:51:12.350389 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d6eab2b-77b5-4385-90d7-f389d3de3b04" containerName="extract-content"
Feb 17 22:51:12 crc kubenswrapper[4793]: E0217 22:51:12.350455 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d6eab2b-77b5-4385-90d7-f389d3de3b04" containerName="extract-utilities"
Feb 17 22:51:12 crc kubenswrapper[4793]: I0217 22:51:12.350477 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d6eab2b-77b5-4385-90d7-f389d3de3b04" containerName="extract-utilities"
Feb 17 22:51:12 crc kubenswrapper[4793]: I0217 22:51:12.351098 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d6eab2b-77b5-4385-90d7-f389d3de3b04" containerName="registry-server"
Feb 17 22:51:12 crc kubenswrapper[4793]: I0217 22:51:12.354223 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xspmx"
Feb 17 22:51:12 crc kubenswrapper[4793]: I0217 22:51:12.382515 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xspmx"]
Feb 17 22:51:12 crc kubenswrapper[4793]: I0217 22:51:12.463508 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rrg7\" (UniqueName: \"kubernetes.io/projected/841a5c4c-1259-458e-81d9-dcfad8716734-kube-api-access-6rrg7\") pod \"community-operators-xspmx\" (UID: \"841a5c4c-1259-458e-81d9-dcfad8716734\") " pod="openshift-marketplace/community-operators-xspmx"
Feb 17 22:51:12 crc kubenswrapper[4793]: I0217 22:51:12.463665 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/841a5c4c-1259-458e-81d9-dcfad8716734-utilities\") pod \"community-operators-xspmx\" (UID: \"841a5c4c-1259-458e-81d9-dcfad8716734\") " pod="openshift-marketplace/community-operators-xspmx"
Feb 17 22:51:12 crc kubenswrapper[4793]: I0217 22:51:12.463818 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/841a5c4c-1259-458e-81d9-dcfad8716734-catalog-content\") pod \"community-operators-xspmx\" (UID: \"841a5c4c-1259-458e-81d9-dcfad8716734\") " pod="openshift-marketplace/community-operators-xspmx"
Feb 17 22:51:12 crc kubenswrapper[4793]: I0217 22:51:12.565312 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/841a5c4c-1259-458e-81d9-dcfad8716734-catalog-content\") pod \"community-operators-xspmx\" (UID: \"841a5c4c-1259-458e-81d9-dcfad8716734\") " pod="openshift-marketplace/community-operators-xspmx"
Feb 17 22:51:12 crc kubenswrapper[4793]: I0217 22:51:12.565461 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rrg7\" (UniqueName: \"kubernetes.io/projected/841a5c4c-1259-458e-81d9-dcfad8716734-kube-api-access-6rrg7\") pod \"community-operators-xspmx\" (UID: \"841a5c4c-1259-458e-81d9-dcfad8716734\") " pod="openshift-marketplace/community-operators-xspmx"
Feb 17 22:51:12 crc kubenswrapper[4793]: I0217 22:51:12.565483 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/841a5c4c-1259-458e-81d9-dcfad8716734-utilities\") pod \"community-operators-xspmx\" (UID: \"841a5c4c-1259-458e-81d9-dcfad8716734\") " pod="openshift-marketplace/community-operators-xspmx"
Feb 17 22:51:12 crc kubenswrapper[4793]: I0217 22:51:12.565970 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/841a5c4c-1259-458e-81d9-dcfad8716734-utilities\") pod \"community-operators-xspmx\" (UID: \"841a5c4c-1259-458e-81d9-dcfad8716734\") " pod="openshift-marketplace/community-operators-xspmx"
Feb 17 22:51:12 crc kubenswrapper[4793]: I0217 22:51:12.566091 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/841a5c4c-1259-458e-81d9-dcfad8716734-catalog-content\") pod \"community-operators-xspmx\" (UID: \"841a5c4c-1259-458e-81d9-dcfad8716734\") " pod="openshift-marketplace/community-operators-xspmx"
Feb 17 22:51:12 crc kubenswrapper[4793]: I0217 22:51:12.604062 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rrg7\" (UniqueName: \"kubernetes.io/projected/841a5c4c-1259-458e-81d9-dcfad8716734-kube-api-access-6rrg7\") pod \"community-operators-xspmx\" (UID: \"841a5c4c-1259-458e-81d9-dcfad8716734\") " pod="openshift-marketplace/community-operators-xspmx"
Feb 17 22:51:12 crc kubenswrapper[4793]: I0217 22:51:12.685772 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xspmx"
Feb 17 22:51:13 crc kubenswrapper[4793]: I0217 22:51:13.293175 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xspmx"]
Feb 17 22:51:14 crc kubenswrapper[4793]: I0217 22:51:14.318388 4793 generic.go:334] "Generic (PLEG): container finished" podID="841a5c4c-1259-458e-81d9-dcfad8716734" containerID="07056db5303bcf719cbdcf4155eb59a7a9c46da7b576012b9082e728254431a5" exitCode=0
Feb 17 22:51:14 crc kubenswrapper[4793]: I0217 22:51:14.318458 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xspmx" event={"ID":"841a5c4c-1259-458e-81d9-dcfad8716734","Type":"ContainerDied","Data":"07056db5303bcf719cbdcf4155eb59a7a9c46da7b576012b9082e728254431a5"}
Feb 17 22:51:14 crc kubenswrapper[4793]: I0217 22:51:14.318821 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xspmx" event={"ID":"841a5c4c-1259-458e-81d9-dcfad8716734","Type":"ContainerStarted","Data":"2353b895baa739e8fb50b3c710e7b31ab417097b106523e96cbf2f53e8eea4b0"}
Feb 17 22:51:14 crc kubenswrapper[4793]: I0217 22:51:14.539366 4793 scope.go:117] "RemoveContainer" containerID="c6f9c711df3c4ca58e42b0fc0fc0803872a47d1700eb26631b0e7dc158268c70"
Feb 17 22:51:14 crc kubenswrapper[4793]: E0217 22:51:14.539624 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:51:16 crc kubenswrapper[4793]: I0217 22:51:16.350516 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xspmx" event={"ID":"841a5c4c-1259-458e-81d9-dcfad8716734","Type":"ContainerStarted","Data":"12b39a573ba04a843f83dedad2590dae9e87e6e9f927ee63737c7aca6691a693"}
Feb 17 22:51:17 crc kubenswrapper[4793]: I0217 22:51:17.367379 4793 generic.go:334] "Generic (PLEG): container finished" podID="841a5c4c-1259-458e-81d9-dcfad8716734" containerID="12b39a573ba04a843f83dedad2590dae9e87e6e9f927ee63737c7aca6691a693" exitCode=0
Feb 17 22:51:17 crc kubenswrapper[4793]: I0217 22:51:17.367484 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xspmx" event={"ID":"841a5c4c-1259-458e-81d9-dcfad8716734","Type":"ContainerDied","Data":"12b39a573ba04a843f83dedad2590dae9e87e6e9f927ee63737c7aca6691a693"}
Feb 17 22:51:18 crc kubenswrapper[4793]: I0217 22:51:18.384884 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xspmx" event={"ID":"841a5c4c-1259-458e-81d9-dcfad8716734","Type":"ContainerStarted","Data":"f6201bf5b26485b19d3c7b949acc0fe9c2b9955f0200de0e6f6a9bc84cb24161"}
Feb 17 22:51:18 crc kubenswrapper[4793]: I0217 22:51:18.428300 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xspmx" podStartSLOduration=2.967418138 podStartE2EDuration="6.428275253s" podCreationTimestamp="2026-02-17 22:51:12 +0000 UTC" firstStartedPulling="2026-02-17 22:51:14.321602945 +0000 UTC m=+9749.613301256" lastFinishedPulling="2026-02-17 22:51:17.78246005 +0000 UTC m=+9753.074158371" observedRunningTime="2026-02-17 22:51:18.412604925 +0000 UTC m=+9753.704303276" watchObservedRunningTime="2026-02-17 22:51:18.428275253 +0000 UTC m=+9753.719973574"
Feb 17 22:51:20 crc kubenswrapper[4793]: I0217 22:51:20.101816 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 22:51:20 crc kubenswrapper[4793]: I0217 22:51:20.102260 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 22:51:22 crc kubenswrapper[4793]: I0217 22:51:22.686771 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xspmx"
Feb 17 22:51:22 crc kubenswrapper[4793]: I0217 22:51:22.688662 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xspmx"
Feb 17 22:51:23 crc kubenswrapper[4793]: I0217 22:51:23.560605 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xspmx"
Feb 17 22:51:24 crc kubenswrapper[4793]: I0217 22:51:24.517630 4793 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xspmx" Feb 17 22:51:24 crc kubenswrapper[4793]: I0217 22:51:24.577893 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xspmx"] Feb 17 22:51:26 crc kubenswrapper[4793]: I0217 22:51:26.487435 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xspmx" podUID="841a5c4c-1259-458e-81d9-dcfad8716734" containerName="registry-server" containerID="cri-o://f6201bf5b26485b19d3c7b949acc0fe9c2b9955f0200de0e6f6a9bc84cb24161" gracePeriod=2 Feb 17 22:51:26 crc kubenswrapper[4793]: I0217 22:51:26.539235 4793 scope.go:117] "RemoveContainer" containerID="c6f9c711df3c4ca58e42b0fc0fc0803872a47d1700eb26631b0e7dc158268c70" Feb 17 22:51:26 crc kubenswrapper[4793]: E0217 22:51:26.539804 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:51:27 crc kubenswrapper[4793]: I0217 22:51:27.040603 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xspmx" Feb 17 22:51:27 crc kubenswrapper[4793]: I0217 22:51:27.107830 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/841a5c4c-1259-458e-81d9-dcfad8716734-catalog-content\") pod \"841a5c4c-1259-458e-81d9-dcfad8716734\" (UID: \"841a5c4c-1259-458e-81d9-dcfad8716734\") " Feb 17 22:51:27 crc kubenswrapper[4793]: I0217 22:51:27.108220 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rrg7\" (UniqueName: \"kubernetes.io/projected/841a5c4c-1259-458e-81d9-dcfad8716734-kube-api-access-6rrg7\") pod \"841a5c4c-1259-458e-81d9-dcfad8716734\" (UID: \"841a5c4c-1259-458e-81d9-dcfad8716734\") " Feb 17 22:51:27 crc kubenswrapper[4793]: I0217 22:51:27.108340 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/841a5c4c-1259-458e-81d9-dcfad8716734-utilities\") pod \"841a5c4c-1259-458e-81d9-dcfad8716734\" (UID: \"841a5c4c-1259-458e-81d9-dcfad8716734\") " Feb 17 22:51:27 crc kubenswrapper[4793]: I0217 22:51:27.110040 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/841a5c4c-1259-458e-81d9-dcfad8716734-utilities" (OuterVolumeSpecName: "utilities") pod "841a5c4c-1259-458e-81d9-dcfad8716734" (UID: "841a5c4c-1259-458e-81d9-dcfad8716734"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 22:51:27 crc kubenswrapper[4793]: I0217 22:51:27.120967 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/841a5c4c-1259-458e-81d9-dcfad8716734-kube-api-access-6rrg7" (OuterVolumeSpecName: "kube-api-access-6rrg7") pod "841a5c4c-1259-458e-81d9-dcfad8716734" (UID: "841a5c4c-1259-458e-81d9-dcfad8716734"). InnerVolumeSpecName "kube-api-access-6rrg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 22:51:27 crc kubenswrapper[4793]: I0217 22:51:27.168116 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/841a5c4c-1259-458e-81d9-dcfad8716734-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "841a5c4c-1259-458e-81d9-dcfad8716734" (UID: "841a5c4c-1259-458e-81d9-dcfad8716734"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 22:51:27 crc kubenswrapper[4793]: I0217 22:51:27.211277 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/841a5c4c-1259-458e-81d9-dcfad8716734-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 22:51:27 crc kubenswrapper[4793]: I0217 22:51:27.211318 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rrg7\" (UniqueName: \"kubernetes.io/projected/841a5c4c-1259-458e-81d9-dcfad8716734-kube-api-access-6rrg7\") on node \"crc\" DevicePath \"\"" Feb 17 22:51:27 crc kubenswrapper[4793]: I0217 22:51:27.211332 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/841a5c4c-1259-458e-81d9-dcfad8716734-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 22:51:27 crc kubenswrapper[4793]: I0217 22:51:27.507741 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xspmx" event={"ID":"841a5c4c-1259-458e-81d9-dcfad8716734","Type":"ContainerDied","Data":"f6201bf5b26485b19d3c7b949acc0fe9c2b9955f0200de0e6f6a9bc84cb24161"} Feb 17 22:51:27 crc kubenswrapper[4793]: I0217 22:51:27.507798 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xspmx" Feb 17 22:51:27 crc kubenswrapper[4793]: I0217 22:51:27.507820 4793 scope.go:117] "RemoveContainer" containerID="f6201bf5b26485b19d3c7b949acc0fe9c2b9955f0200de0e6f6a9bc84cb24161" Feb 17 22:51:27 crc kubenswrapper[4793]: I0217 22:51:27.507679 4793 generic.go:334] "Generic (PLEG): container finished" podID="841a5c4c-1259-458e-81d9-dcfad8716734" containerID="f6201bf5b26485b19d3c7b949acc0fe9c2b9955f0200de0e6f6a9bc84cb24161" exitCode=0 Feb 17 22:51:27 crc kubenswrapper[4793]: I0217 22:51:27.507893 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xspmx" event={"ID":"841a5c4c-1259-458e-81d9-dcfad8716734","Type":"ContainerDied","Data":"2353b895baa739e8fb50b3c710e7b31ab417097b106523e96cbf2f53e8eea4b0"} Feb 17 22:51:27 crc kubenswrapper[4793]: I0217 22:51:27.531661 4793 scope.go:117] "RemoveContainer" containerID="12b39a573ba04a843f83dedad2590dae9e87e6e9f927ee63737c7aca6691a693" Feb 17 22:51:27 crc kubenswrapper[4793]: I0217 22:51:27.571922 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xspmx"] Feb 17 22:51:27 crc kubenswrapper[4793]: I0217 22:51:27.581033 4793 scope.go:117] "RemoveContainer" containerID="07056db5303bcf719cbdcf4155eb59a7a9c46da7b576012b9082e728254431a5" Feb 17 22:51:27 crc kubenswrapper[4793]: I0217 22:51:27.582814 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xspmx"] Feb 17 22:51:27 crc kubenswrapper[4793]: I0217 22:51:27.622585 4793 scope.go:117] "RemoveContainer" containerID="f6201bf5b26485b19d3c7b949acc0fe9c2b9955f0200de0e6f6a9bc84cb24161" Feb 17 22:51:27 crc kubenswrapper[4793]: E0217 22:51:27.623208 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6201bf5b26485b19d3c7b949acc0fe9c2b9955f0200de0e6f6a9bc84cb24161\": container with ID 
starting with f6201bf5b26485b19d3c7b949acc0fe9c2b9955f0200de0e6f6a9bc84cb24161 not found: ID does not exist" containerID="f6201bf5b26485b19d3c7b949acc0fe9c2b9955f0200de0e6f6a9bc84cb24161" Feb 17 22:51:27 crc kubenswrapper[4793]: I0217 22:51:27.623376 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6201bf5b26485b19d3c7b949acc0fe9c2b9955f0200de0e6f6a9bc84cb24161"} err="failed to get container status \"f6201bf5b26485b19d3c7b949acc0fe9c2b9955f0200de0e6f6a9bc84cb24161\": rpc error: code = NotFound desc = could not find container \"f6201bf5b26485b19d3c7b949acc0fe9c2b9955f0200de0e6f6a9bc84cb24161\": container with ID starting with f6201bf5b26485b19d3c7b949acc0fe9c2b9955f0200de0e6f6a9bc84cb24161 not found: ID does not exist" Feb 17 22:51:27 crc kubenswrapper[4793]: I0217 22:51:27.623404 4793 scope.go:117] "RemoveContainer" containerID="12b39a573ba04a843f83dedad2590dae9e87e6e9f927ee63737c7aca6691a693" Feb 17 22:51:27 crc kubenswrapper[4793]: E0217 22:51:27.623678 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12b39a573ba04a843f83dedad2590dae9e87e6e9f927ee63737c7aca6691a693\": container with ID starting with 12b39a573ba04a843f83dedad2590dae9e87e6e9f927ee63737c7aca6691a693 not found: ID does not exist" containerID="12b39a573ba04a843f83dedad2590dae9e87e6e9f927ee63737c7aca6691a693" Feb 17 22:51:27 crc kubenswrapper[4793]: I0217 22:51:27.623724 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12b39a573ba04a843f83dedad2590dae9e87e6e9f927ee63737c7aca6691a693"} err="failed to get container status \"12b39a573ba04a843f83dedad2590dae9e87e6e9f927ee63737c7aca6691a693\": rpc error: code = NotFound desc = could not find container \"12b39a573ba04a843f83dedad2590dae9e87e6e9f927ee63737c7aca6691a693\": container with ID starting with 12b39a573ba04a843f83dedad2590dae9e87e6e9f927ee63737c7aca6691a693 not found: 
ID does not exist" Feb 17 22:51:27 crc kubenswrapper[4793]: I0217 22:51:27.623740 4793 scope.go:117] "RemoveContainer" containerID="07056db5303bcf719cbdcf4155eb59a7a9c46da7b576012b9082e728254431a5" Feb 17 22:51:27 crc kubenswrapper[4793]: E0217 22:51:27.623946 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07056db5303bcf719cbdcf4155eb59a7a9c46da7b576012b9082e728254431a5\": container with ID starting with 07056db5303bcf719cbdcf4155eb59a7a9c46da7b576012b9082e728254431a5 not found: ID does not exist" containerID="07056db5303bcf719cbdcf4155eb59a7a9c46da7b576012b9082e728254431a5" Feb 17 22:51:27 crc kubenswrapper[4793]: I0217 22:51:27.624021 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07056db5303bcf719cbdcf4155eb59a7a9c46da7b576012b9082e728254431a5"} err="failed to get container status \"07056db5303bcf719cbdcf4155eb59a7a9c46da7b576012b9082e728254431a5\": rpc error: code = NotFound desc = could not find container \"07056db5303bcf719cbdcf4155eb59a7a9c46da7b576012b9082e728254431a5\": container with ID starting with 07056db5303bcf719cbdcf4155eb59a7a9c46da7b576012b9082e728254431a5 not found: ID does not exist" Feb 17 22:51:29 crc kubenswrapper[4793]: I0217 22:51:29.558641 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="841a5c4c-1259-458e-81d9-dcfad8716734" path="/var/lib/kubelet/pods/841a5c4c-1259-458e-81d9-dcfad8716734/volumes" Feb 17 22:51:41 crc kubenswrapper[4793]: I0217 22:51:41.538631 4793 scope.go:117] "RemoveContainer" containerID="c6f9c711df3c4ca58e42b0fc0fc0803872a47d1700eb26631b0e7dc158268c70" Feb 17 22:51:41 crc kubenswrapper[4793]: E0217 22:51:41.539509 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier 
pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:51:50 crc kubenswrapper[4793]: I0217 22:51:50.101863 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 22:51:50 crc kubenswrapper[4793]: I0217 22:51:50.102673 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 22:51:50 crc kubenswrapper[4793]: I0217 22:51:50.102780 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" Feb 17 22:51:50 crc kubenswrapper[4793]: I0217 22:51:50.103900 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"932a952c18bcb817140c8254f8f293b9f8150290160a4bfaff70ea8d99481a97"} pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 22:51:50 crc kubenswrapper[4793]: I0217 22:51:50.104007 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" containerID="cri-o://932a952c18bcb817140c8254f8f293b9f8150290160a4bfaff70ea8d99481a97" gracePeriod=600 Feb 17 22:51:50 crc kubenswrapper[4793]: E0217 22:51:50.237860 
4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:51:50 crc kubenswrapper[4793]: I0217 22:51:50.790736 4793 generic.go:334] "Generic (PLEG): container finished" podID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerID="932a952c18bcb817140c8254f8f293b9f8150290160a4bfaff70ea8d99481a97" exitCode=0 Feb 17 22:51:50 crc kubenswrapper[4793]: I0217 22:51:50.790888 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerDied","Data":"932a952c18bcb817140c8254f8f293b9f8150290160a4bfaff70ea8d99481a97"} Feb 17 22:51:50 crc kubenswrapper[4793]: I0217 22:51:50.791562 4793 scope.go:117] "RemoveContainer" containerID="58da4d84549c51f12bb43efcca617dbf8f3f6da534d72c98982539a31c802d41" Feb 17 22:51:50 crc kubenswrapper[4793]: I0217 22:51:50.792382 4793 scope.go:117] "RemoveContainer" containerID="932a952c18bcb817140c8254f8f293b9f8150290160a4bfaff70ea8d99481a97" Feb 17 22:51:50 crc kubenswrapper[4793]: E0217 22:51:50.793031 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:51:55 crc kubenswrapper[4793]: I0217 22:51:55.551268 4793 scope.go:117] "RemoveContainer" 
containerID="c6f9c711df3c4ca58e42b0fc0fc0803872a47d1700eb26631b0e7dc158268c70" Feb 17 22:51:55 crc kubenswrapper[4793]: E0217 22:51:55.552532 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:52:05 crc kubenswrapper[4793]: I0217 22:52:05.548448 4793 scope.go:117] "RemoveContainer" containerID="932a952c18bcb817140c8254f8f293b9f8150290160a4bfaff70ea8d99481a97" Feb 17 22:52:05 crc kubenswrapper[4793]: E0217 22:52:05.550191 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:52:10 crc kubenswrapper[4793]: I0217 22:52:10.539065 4793 scope.go:117] "RemoveContainer" containerID="c6f9c711df3c4ca58e42b0fc0fc0803872a47d1700eb26631b0e7dc158268c70" Feb 17 22:52:10 crc kubenswrapper[4793]: E0217 22:52:10.540305 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:52:19 crc kubenswrapper[4793]: I0217 22:52:19.539426 4793 scope.go:117] "RemoveContainer" containerID="932a952c18bcb817140c8254f8f293b9f8150290160a4bfaff70ea8d99481a97" Feb 17 22:52:19 crc kubenswrapper[4793]: E0217 22:52:19.540480 
4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:52:22 crc kubenswrapper[4793]: I0217 22:52:22.540739 4793 scope.go:117] "RemoveContainer" containerID="c6f9c711df3c4ca58e42b0fc0fc0803872a47d1700eb26631b0e7dc158268c70" Feb 17 22:52:22 crc kubenswrapper[4793]: E0217 22:52:22.542434 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:52:30 crc kubenswrapper[4793]: I0217 22:52:30.539000 4793 scope.go:117] "RemoveContainer" containerID="932a952c18bcb817140c8254f8f293b9f8150290160a4bfaff70ea8d99481a97" Feb 17 22:52:30 crc kubenswrapper[4793]: E0217 22:52:30.539910 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:52:33 crc kubenswrapper[4793]: I0217 22:52:33.543903 4793 scope.go:117] "RemoveContainer" containerID="c6f9c711df3c4ca58e42b0fc0fc0803872a47d1700eb26631b0e7dc158268c70" Feb 17 22:52:33 crc kubenswrapper[4793]: E0217 22:52:33.545181 4793 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:52:43 crc kubenswrapper[4793]: I0217 22:52:43.540120 4793 scope.go:117] "RemoveContainer" containerID="932a952c18bcb817140c8254f8f293b9f8150290160a4bfaff70ea8d99481a97" Feb 17 22:52:43 crc kubenswrapper[4793]: E0217 22:52:43.542168 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:52:48 crc kubenswrapper[4793]: I0217 22:52:48.539919 4793 scope.go:117] "RemoveContainer" containerID="c6f9c711df3c4ca58e42b0fc0fc0803872a47d1700eb26631b0e7dc158268c70" Feb 17 22:52:48 crc kubenswrapper[4793]: E0217 22:52:48.541184 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:52:56 crc kubenswrapper[4793]: I0217 22:52:56.538796 4793 scope.go:117] "RemoveContainer" containerID="932a952c18bcb817140c8254f8f293b9f8150290160a4bfaff70ea8d99481a97" Feb 17 22:52:56 crc kubenswrapper[4793]: E0217 22:52:56.539452 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:53:01 crc kubenswrapper[4793]: I0217 22:53:01.538833 4793 scope.go:117] "RemoveContainer" containerID="c6f9c711df3c4ca58e42b0fc0fc0803872a47d1700eb26631b0e7dc158268c70" Feb 17 22:53:02 crc kubenswrapper[4793]: I0217 22:53:02.765546 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerStarted","Data":"d2b70282cad44b593e28b08747aaf3b7f0ff84ec3a05e82640f11b18e6345733"} Feb 17 22:53:05 crc kubenswrapper[4793]: I0217 22:53:05.596055 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 17 22:53:05 crc kubenswrapper[4793]: I0217 22:53:05.596575 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 22:53:05 crc kubenswrapper[4793]: E0217 22:53:05.598322 4793 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d2b70282cad44b593e28b08747aaf3b7f0ff84ec3a05e82640f11b18e6345733 is running failed: container process not found" containerID="d2b70282cad44b593e28b08747aaf3b7f0ff84ec3a05e82640f11b18e6345733" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 17 22:53:05 crc kubenswrapper[4793]: E0217 22:53:05.599170 4793 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d2b70282cad44b593e28b08747aaf3b7f0ff84ec3a05e82640f11b18e6345733 is running failed: container process not found" containerID="d2b70282cad44b593e28b08747aaf3b7f0ff84ec3a05e82640f11b18e6345733" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] 
Feb 17 22:53:05 crc kubenswrapper[4793]: E0217 22:53:05.599839 4793 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d2b70282cad44b593e28b08747aaf3b7f0ff84ec3a05e82640f11b18e6345733 is running failed: container process not found" containerID="d2b70282cad44b593e28b08747aaf3b7f0ff84ec3a05e82640f11b18e6345733" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 17 22:53:05 crc kubenswrapper[4793]: E0217 22:53:05.599920 4793 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d2b70282cad44b593e28b08747aaf3b7f0ff84ec3a05e82640f11b18e6345733 is running failed: container process not found" probeType="Startup" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" containerName="watcher-applier" Feb 17 22:53:05 crc kubenswrapper[4793]: I0217 22:53:05.812178 4793 generic.go:334] "Generic (PLEG): container finished" podID="02d26164-0fa4-4020-9224-b7760a490987" containerID="d2b70282cad44b593e28b08747aaf3b7f0ff84ec3a05e82640f11b18e6345733" exitCode=1 Feb 17 22:53:05 crc kubenswrapper[4793]: I0217 22:53:05.812245 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerDied","Data":"d2b70282cad44b593e28b08747aaf3b7f0ff84ec3a05e82640f11b18e6345733"} Feb 17 22:53:05 crc kubenswrapper[4793]: I0217 22:53:05.812343 4793 scope.go:117] "RemoveContainer" containerID="c6f9c711df3c4ca58e42b0fc0fc0803872a47d1700eb26631b0e7dc158268c70" Feb 17 22:53:05 crc kubenswrapper[4793]: I0217 22:53:05.813006 4793 scope.go:117] "RemoveContainer" containerID="d2b70282cad44b593e28b08747aaf3b7f0ff84ec3a05e82640f11b18e6345733" Feb 17 22:53:05 crc kubenswrapper[4793]: E0217 22:53:05.813288 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:53:10 crc kubenswrapper[4793]: I0217 22:53:10.538851 4793 scope.go:117] "RemoveContainer" containerID="932a952c18bcb817140c8254f8f293b9f8150290160a4bfaff70ea8d99481a97" Feb 17 22:53:10 crc kubenswrapper[4793]: E0217 22:53:10.539625 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:53:15 crc kubenswrapper[4793]: I0217 22:53:15.595774 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 22:53:15 crc kubenswrapper[4793]: I0217 22:53:15.597601 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 17 22:53:15 crc kubenswrapper[4793]: I0217 22:53:15.598994 4793 scope.go:117] "RemoveContainer" containerID="d2b70282cad44b593e28b08747aaf3b7f0ff84ec3a05e82640f11b18e6345733" Feb 17 22:53:15 crc kubenswrapper[4793]: E0217 22:53:15.599340 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:53:21 crc kubenswrapper[4793]: I0217 22:53:21.540185 4793 scope.go:117] "RemoveContainer" 
containerID="932a952c18bcb817140c8254f8f293b9f8150290160a4bfaff70ea8d99481a97" Feb 17 22:53:21 crc kubenswrapper[4793]: E0217 22:53:21.541804 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:53:28 crc kubenswrapper[4793]: I0217 22:53:28.539513 4793 scope.go:117] "RemoveContainer" containerID="d2b70282cad44b593e28b08747aaf3b7f0ff84ec3a05e82640f11b18e6345733" Feb 17 22:53:28 crc kubenswrapper[4793]: E0217 22:53:28.540658 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:53:33 crc kubenswrapper[4793]: I0217 22:53:33.539395 4793 scope.go:117] "RemoveContainer" containerID="932a952c18bcb817140c8254f8f293b9f8150290160a4bfaff70ea8d99481a97" Feb 17 22:53:33 crc kubenswrapper[4793]: E0217 22:53:33.540262 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:53:40 crc kubenswrapper[4793]: I0217 22:53:40.538992 4793 scope.go:117] "RemoveContainer" 
containerID="d2b70282cad44b593e28b08747aaf3b7f0ff84ec3a05e82640f11b18e6345733" Feb 17 22:53:40 crc kubenswrapper[4793]: E0217 22:53:40.540153 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:53:48 crc kubenswrapper[4793]: I0217 22:53:48.540650 4793 scope.go:117] "RemoveContainer" containerID="932a952c18bcb817140c8254f8f293b9f8150290160a4bfaff70ea8d99481a97" Feb 17 22:53:48 crc kubenswrapper[4793]: E0217 22:53:48.542125 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:53:51 crc kubenswrapper[4793]: I0217 22:53:51.539043 4793 scope.go:117] "RemoveContainer" containerID="d2b70282cad44b593e28b08747aaf3b7f0ff84ec3a05e82640f11b18e6345733" Feb 17 22:53:51 crc kubenswrapper[4793]: E0217 22:53:51.539850 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:54:00 crc kubenswrapper[4793]: I0217 22:54:00.539756 4793 scope.go:117] "RemoveContainer" containerID="932a952c18bcb817140c8254f8f293b9f8150290160a4bfaff70ea8d99481a97" Feb 17 22:54:00 crc kubenswrapper[4793]: E0217 22:54:00.540737 
4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:54:02 crc kubenswrapper[4793]: I0217 22:54:02.539096 4793 scope.go:117] "RemoveContainer" containerID="d2b70282cad44b593e28b08747aaf3b7f0ff84ec3a05e82640f11b18e6345733" Feb 17 22:54:02 crc kubenswrapper[4793]: E0217 22:54:02.539700 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:54:12 crc kubenswrapper[4793]: I0217 22:54:12.539101 4793 scope.go:117] "RemoveContainer" containerID="932a952c18bcb817140c8254f8f293b9f8150290160a4bfaff70ea8d99481a97" Feb 17 22:54:12 crc kubenswrapper[4793]: E0217 22:54:12.540112 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:54:16 crc kubenswrapper[4793]: I0217 22:54:16.538888 4793 scope.go:117] "RemoveContainer" containerID="d2b70282cad44b593e28b08747aaf3b7f0ff84ec3a05e82640f11b18e6345733" Feb 17 22:54:16 crc kubenswrapper[4793]: E0217 22:54:16.539888 4793 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:54:23 crc kubenswrapper[4793]: I0217 22:54:23.539510 4793 scope.go:117] "RemoveContainer" containerID="932a952c18bcb817140c8254f8f293b9f8150290160a4bfaff70ea8d99481a97" Feb 17 22:54:23 crc kubenswrapper[4793]: E0217 22:54:23.540243 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:54:27 crc kubenswrapper[4793]: I0217 22:54:27.539525 4793 scope.go:117] "RemoveContainer" containerID="d2b70282cad44b593e28b08747aaf3b7f0ff84ec3a05e82640f11b18e6345733" Feb 17 22:54:27 crc kubenswrapper[4793]: E0217 22:54:27.540604 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:54:38 crc kubenswrapper[4793]: I0217 22:54:38.539300 4793 scope.go:117] "RemoveContainer" containerID="d2b70282cad44b593e28b08747aaf3b7f0ff84ec3a05e82640f11b18e6345733" Feb 17 22:54:38 crc kubenswrapper[4793]: I0217 22:54:38.539749 4793 scope.go:117] "RemoveContainer" containerID="932a952c18bcb817140c8254f8f293b9f8150290160a4bfaff70ea8d99481a97" Feb 17 22:54:38 crc kubenswrapper[4793]: E0217 22:54:38.539908 4793 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:54:38 crc kubenswrapper[4793]: E0217 22:54:38.539967 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:54:50 crc kubenswrapper[4793]: I0217 22:54:50.539260 4793 scope.go:117] "RemoveContainer" containerID="d2b70282cad44b593e28b08747aaf3b7f0ff84ec3a05e82640f11b18e6345733" Feb 17 22:54:50 crc kubenswrapper[4793]: E0217 22:54:50.540249 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:54:52 crc kubenswrapper[4793]: I0217 22:54:52.538675 4793 scope.go:117] "RemoveContainer" containerID="932a952c18bcb817140c8254f8f293b9f8150290160a4bfaff70ea8d99481a97" Feb 17 22:54:52 crc kubenswrapper[4793]: E0217 22:54:52.539540 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:55:01 crc kubenswrapper[4793]: I0217 22:55:01.539420 4793 scope.go:117] "RemoveContainer" containerID="d2b70282cad44b593e28b08747aaf3b7f0ff84ec3a05e82640f11b18e6345733" Feb 17 22:55:01 crc kubenswrapper[4793]: E0217 22:55:01.540841 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:55:04 crc kubenswrapper[4793]: I0217 22:55:04.538504 4793 scope.go:117] "RemoveContainer" containerID="932a952c18bcb817140c8254f8f293b9f8150290160a4bfaff70ea8d99481a97" Feb 17 22:55:04 crc kubenswrapper[4793]: E0217 22:55:04.539368 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:55:12 crc kubenswrapper[4793]: I0217 22:55:12.539634 4793 scope.go:117] "RemoveContainer" containerID="d2b70282cad44b593e28b08747aaf3b7f0ff84ec3a05e82640f11b18e6345733" Feb 17 22:55:12 crc kubenswrapper[4793]: E0217 22:55:12.541015 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:55:17 crc 
kubenswrapper[4793]: I0217 22:55:17.107110 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-czg5t"] Feb 17 22:55:17 crc kubenswrapper[4793]: E0217 22:55:17.108071 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="841a5c4c-1259-458e-81d9-dcfad8716734" containerName="extract-utilities" Feb 17 22:55:17 crc kubenswrapper[4793]: I0217 22:55:17.108093 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="841a5c4c-1259-458e-81d9-dcfad8716734" containerName="extract-utilities" Feb 17 22:55:17 crc kubenswrapper[4793]: E0217 22:55:17.108123 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="841a5c4c-1259-458e-81d9-dcfad8716734" containerName="registry-server" Feb 17 22:55:17 crc kubenswrapper[4793]: I0217 22:55:17.108131 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="841a5c4c-1259-458e-81d9-dcfad8716734" containerName="registry-server" Feb 17 22:55:17 crc kubenswrapper[4793]: E0217 22:55:17.108157 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="841a5c4c-1259-458e-81d9-dcfad8716734" containerName="extract-content" Feb 17 22:55:17 crc kubenswrapper[4793]: I0217 22:55:17.108165 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="841a5c4c-1259-458e-81d9-dcfad8716734" containerName="extract-content" Feb 17 22:55:17 crc kubenswrapper[4793]: I0217 22:55:17.108434 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="841a5c4c-1259-458e-81d9-dcfad8716734" containerName="registry-server" Feb 17 22:55:17 crc kubenswrapper[4793]: I0217 22:55:17.110597 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-czg5t" Feb 17 22:55:17 crc kubenswrapper[4793]: I0217 22:55:17.128697 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-czg5t"] Feb 17 22:55:17 crc kubenswrapper[4793]: I0217 22:55:17.202093 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br7k9\" (UniqueName: \"kubernetes.io/projected/1d4451a4-4e8f-4724-82fe-ee178735940a-kube-api-access-br7k9\") pod \"redhat-operators-czg5t\" (UID: \"1d4451a4-4e8f-4724-82fe-ee178735940a\") " pod="openshift-marketplace/redhat-operators-czg5t" Feb 17 22:55:17 crc kubenswrapper[4793]: I0217 22:55:17.202398 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d4451a4-4e8f-4724-82fe-ee178735940a-catalog-content\") pod \"redhat-operators-czg5t\" (UID: \"1d4451a4-4e8f-4724-82fe-ee178735940a\") " pod="openshift-marketplace/redhat-operators-czg5t" Feb 17 22:55:17 crc kubenswrapper[4793]: I0217 22:55:17.202488 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d4451a4-4e8f-4724-82fe-ee178735940a-utilities\") pod \"redhat-operators-czg5t\" (UID: \"1d4451a4-4e8f-4724-82fe-ee178735940a\") " pod="openshift-marketplace/redhat-operators-czg5t" Feb 17 22:55:17 crc kubenswrapper[4793]: I0217 22:55:17.305077 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d4451a4-4e8f-4724-82fe-ee178735940a-utilities\") pod \"redhat-operators-czg5t\" (UID: \"1d4451a4-4e8f-4724-82fe-ee178735940a\") " pod="openshift-marketplace/redhat-operators-czg5t" Feb 17 22:55:17 crc kubenswrapper[4793]: I0217 22:55:17.305237 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-br7k9\" (UniqueName: \"kubernetes.io/projected/1d4451a4-4e8f-4724-82fe-ee178735940a-kube-api-access-br7k9\") pod \"redhat-operators-czg5t\" (UID: \"1d4451a4-4e8f-4724-82fe-ee178735940a\") " pod="openshift-marketplace/redhat-operators-czg5t" Feb 17 22:55:17 crc kubenswrapper[4793]: I0217 22:55:17.305308 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d4451a4-4e8f-4724-82fe-ee178735940a-catalog-content\") pod \"redhat-operators-czg5t\" (UID: \"1d4451a4-4e8f-4724-82fe-ee178735940a\") " pod="openshift-marketplace/redhat-operators-czg5t" Feb 17 22:55:17 crc kubenswrapper[4793]: I0217 22:55:17.305793 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d4451a4-4e8f-4724-82fe-ee178735940a-catalog-content\") pod \"redhat-operators-czg5t\" (UID: \"1d4451a4-4e8f-4724-82fe-ee178735940a\") " pod="openshift-marketplace/redhat-operators-czg5t" Feb 17 22:55:17 crc kubenswrapper[4793]: I0217 22:55:17.305812 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d4451a4-4e8f-4724-82fe-ee178735940a-utilities\") pod \"redhat-operators-czg5t\" (UID: \"1d4451a4-4e8f-4724-82fe-ee178735940a\") " pod="openshift-marketplace/redhat-operators-czg5t" Feb 17 22:55:17 crc kubenswrapper[4793]: I0217 22:55:17.333561 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-br7k9\" (UniqueName: \"kubernetes.io/projected/1d4451a4-4e8f-4724-82fe-ee178735940a-kube-api-access-br7k9\") pod \"redhat-operators-czg5t\" (UID: \"1d4451a4-4e8f-4724-82fe-ee178735940a\") " pod="openshift-marketplace/redhat-operators-czg5t" Feb 17 22:55:17 crc kubenswrapper[4793]: I0217 22:55:17.448926 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-czg5t" Feb 17 22:55:18 crc kubenswrapper[4793]: I0217 22:55:18.025981 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-czg5t"] Feb 17 22:55:18 crc kubenswrapper[4793]: I0217 22:55:18.242308 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-czg5t" event={"ID":"1d4451a4-4e8f-4724-82fe-ee178735940a","Type":"ContainerStarted","Data":"38f97bc428e498ad1df697665283460cfdf54ebd2538ed075ba1603d35a39199"} Feb 17 22:55:19 crc kubenswrapper[4793]: I0217 22:55:19.255189 4793 generic.go:334] "Generic (PLEG): container finished" podID="1d4451a4-4e8f-4724-82fe-ee178735940a" containerID="abb0d6090865f84a13192c465ba45086739a303450edab3c3b3927cedcae1bb5" exitCode=0 Feb 17 22:55:19 crc kubenswrapper[4793]: I0217 22:55:19.255294 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-czg5t" event={"ID":"1d4451a4-4e8f-4724-82fe-ee178735940a","Type":"ContainerDied","Data":"abb0d6090865f84a13192c465ba45086739a303450edab3c3b3927cedcae1bb5"} Feb 17 22:55:19 crc kubenswrapper[4793]: I0217 22:55:19.538808 4793 scope.go:117] "RemoveContainer" containerID="932a952c18bcb817140c8254f8f293b9f8150290160a4bfaff70ea8d99481a97" Feb 17 22:55:19 crc kubenswrapper[4793]: E0217 22:55:19.539155 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:55:21 crc kubenswrapper[4793]: I0217 22:55:21.287783 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-czg5t" 
event={"ID":"1d4451a4-4e8f-4724-82fe-ee178735940a","Type":"ContainerStarted","Data":"82a6680ecfd4910bf4d21f212c5d45ff939f3707a25d66fd6a2a2f1bc8757dca"} Feb 17 22:55:24 crc kubenswrapper[4793]: I0217 22:55:24.321126 4793 generic.go:334] "Generic (PLEG): container finished" podID="1d4451a4-4e8f-4724-82fe-ee178735940a" containerID="82a6680ecfd4910bf4d21f212c5d45ff939f3707a25d66fd6a2a2f1bc8757dca" exitCode=0 Feb 17 22:55:24 crc kubenswrapper[4793]: I0217 22:55:24.321241 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-czg5t" event={"ID":"1d4451a4-4e8f-4724-82fe-ee178735940a","Type":"ContainerDied","Data":"82a6680ecfd4910bf4d21f212c5d45ff939f3707a25d66fd6a2a2f1bc8757dca"} Feb 17 22:55:24 crc kubenswrapper[4793]: I0217 22:55:24.324669 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 22:55:24 crc kubenswrapper[4793]: I0217 22:55:24.539614 4793 scope.go:117] "RemoveContainer" containerID="d2b70282cad44b593e28b08747aaf3b7f0ff84ec3a05e82640f11b18e6345733" Feb 17 22:55:24 crc kubenswrapper[4793]: E0217 22:55:24.540301 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:55:26 crc kubenswrapper[4793]: I0217 22:55:26.348939 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-czg5t" event={"ID":"1d4451a4-4e8f-4724-82fe-ee178735940a","Type":"ContainerStarted","Data":"e0012a82ff226a81a85c0a9138b08ec1961af6176b2434933852f67ca496a390"} Feb 17 22:55:26 crc kubenswrapper[4793]: I0217 22:55:26.379181 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-czg5t" 
podStartSLOduration=3.909351715 podStartE2EDuration="9.379136541s" podCreationTimestamp="2026-02-17 22:55:17 +0000 UTC" firstStartedPulling="2026-02-17 22:55:19.257867823 +0000 UTC m=+9994.549566134" lastFinishedPulling="2026-02-17 22:55:24.727652639 +0000 UTC m=+10000.019350960" observedRunningTime="2026-02-17 22:55:26.376825854 +0000 UTC m=+10001.668524165" watchObservedRunningTime="2026-02-17 22:55:26.379136541 +0000 UTC m=+10001.670834852" Feb 17 22:55:27 crc kubenswrapper[4793]: I0217 22:55:27.449368 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-czg5t" Feb 17 22:55:27 crc kubenswrapper[4793]: I0217 22:55:27.451141 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-czg5t" Feb 17 22:55:28 crc kubenswrapper[4793]: I0217 22:55:28.534764 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-czg5t" podUID="1d4451a4-4e8f-4724-82fe-ee178735940a" containerName="registry-server" probeResult="failure" output=< Feb 17 22:55:28 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Feb 17 22:55:28 crc kubenswrapper[4793]: > Feb 17 22:55:34 crc kubenswrapper[4793]: I0217 22:55:34.539319 4793 scope.go:117] "RemoveContainer" containerID="932a952c18bcb817140c8254f8f293b9f8150290160a4bfaff70ea8d99481a97" Feb 17 22:55:34 crc kubenswrapper[4793]: E0217 22:55:34.540275 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:55:38 crc kubenswrapper[4793]: I0217 22:55:38.506326 4793 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-czg5t" podUID="1d4451a4-4e8f-4724-82fe-ee178735940a" containerName="registry-server" probeResult="failure" output=< Feb 17 22:55:38 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Feb 17 22:55:38 crc kubenswrapper[4793]: > Feb 17 22:55:38 crc kubenswrapper[4793]: I0217 22:55:38.539344 4793 scope.go:117] "RemoveContainer" containerID="d2b70282cad44b593e28b08747aaf3b7f0ff84ec3a05e82640f11b18e6345733" Feb 17 22:55:38 crc kubenswrapper[4793]: E0217 22:55:38.539861 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:55:48 crc kubenswrapper[4793]: I0217 22:55:48.538921 4793 scope.go:117] "RemoveContainer" containerID="932a952c18bcb817140c8254f8f293b9f8150290160a4bfaff70ea8d99481a97" Feb 17 22:55:48 crc kubenswrapper[4793]: E0217 22:55:48.539742 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 22:55:48 crc kubenswrapper[4793]: I0217 22:55:48.557015 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-czg5t" podUID="1d4451a4-4e8f-4724-82fe-ee178735940a" containerName="registry-server" probeResult="failure" output=< Feb 17 22:55:48 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Feb 17 22:55:48 crc 
kubenswrapper[4793]: > Feb 17 22:55:51 crc kubenswrapper[4793]: I0217 22:55:51.540933 4793 scope.go:117] "RemoveContainer" containerID="d2b70282cad44b593e28b08747aaf3b7f0ff84ec3a05e82640f11b18e6345733" Feb 17 22:55:51 crc kubenswrapper[4793]: E0217 22:55:51.542027 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:55:57 crc kubenswrapper[4793]: I0217 22:55:57.535830 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-czg5t" Feb 17 22:55:57 crc kubenswrapper[4793]: I0217 22:55:57.600810 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-czg5t" Feb 17 22:55:57 crc kubenswrapper[4793]: I0217 22:55:57.781029 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-czg5t"] Feb 17 22:55:59 crc kubenswrapper[4793]: I0217 22:55:59.141656 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-czg5t" podUID="1d4451a4-4e8f-4724-82fe-ee178735940a" containerName="registry-server" containerID="cri-o://e0012a82ff226a81a85c0a9138b08ec1961af6176b2434933852f67ca496a390" gracePeriod=2 Feb 17 22:55:59 crc kubenswrapper[4793]: I0217 22:55:59.642312 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-czg5t" Feb 17 22:55:59 crc kubenswrapper[4793]: I0217 22:55:59.762061 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d4451a4-4e8f-4724-82fe-ee178735940a-catalog-content\") pod \"1d4451a4-4e8f-4724-82fe-ee178735940a\" (UID: \"1d4451a4-4e8f-4724-82fe-ee178735940a\") " Feb 17 22:55:59 crc kubenswrapper[4793]: I0217 22:55:59.762223 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-br7k9\" (UniqueName: \"kubernetes.io/projected/1d4451a4-4e8f-4724-82fe-ee178735940a-kube-api-access-br7k9\") pod \"1d4451a4-4e8f-4724-82fe-ee178735940a\" (UID: \"1d4451a4-4e8f-4724-82fe-ee178735940a\") " Feb 17 22:55:59 crc kubenswrapper[4793]: I0217 22:55:59.763204 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d4451a4-4e8f-4724-82fe-ee178735940a-utilities\") pod \"1d4451a4-4e8f-4724-82fe-ee178735940a\" (UID: \"1d4451a4-4e8f-4724-82fe-ee178735940a\") " Feb 17 22:55:59 crc kubenswrapper[4793]: I0217 22:55:59.763975 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d4451a4-4e8f-4724-82fe-ee178735940a-utilities" (OuterVolumeSpecName: "utilities") pod "1d4451a4-4e8f-4724-82fe-ee178735940a" (UID: "1d4451a4-4e8f-4724-82fe-ee178735940a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 22:55:59 crc kubenswrapper[4793]: I0217 22:55:59.775833 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d4451a4-4e8f-4724-82fe-ee178735940a-kube-api-access-br7k9" (OuterVolumeSpecName: "kube-api-access-br7k9") pod "1d4451a4-4e8f-4724-82fe-ee178735940a" (UID: "1d4451a4-4e8f-4724-82fe-ee178735940a"). InnerVolumeSpecName "kube-api-access-br7k9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 22:55:59 crc kubenswrapper[4793]: I0217 22:55:59.866785 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-br7k9\" (UniqueName: \"kubernetes.io/projected/1d4451a4-4e8f-4724-82fe-ee178735940a-kube-api-access-br7k9\") on node \"crc\" DevicePath \"\"" Feb 17 22:55:59 crc kubenswrapper[4793]: I0217 22:55:59.866848 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d4451a4-4e8f-4724-82fe-ee178735940a-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 22:55:59 crc kubenswrapper[4793]: I0217 22:55:59.935906 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d4451a4-4e8f-4724-82fe-ee178735940a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d4451a4-4e8f-4724-82fe-ee178735940a" (UID: "1d4451a4-4e8f-4724-82fe-ee178735940a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 22:55:59 crc kubenswrapper[4793]: I0217 22:55:59.970395 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d4451a4-4e8f-4724-82fe-ee178735940a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 22:56:00 crc kubenswrapper[4793]: I0217 22:56:00.158083 4793 generic.go:334] "Generic (PLEG): container finished" podID="1d4451a4-4e8f-4724-82fe-ee178735940a" containerID="e0012a82ff226a81a85c0a9138b08ec1961af6176b2434933852f67ca496a390" exitCode=0 Feb 17 22:56:00 crc kubenswrapper[4793]: I0217 22:56:00.158155 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-czg5t" event={"ID":"1d4451a4-4e8f-4724-82fe-ee178735940a","Type":"ContainerDied","Data":"e0012a82ff226a81a85c0a9138b08ec1961af6176b2434933852f67ca496a390"} Feb 17 22:56:00 crc kubenswrapper[4793]: I0217 22:56:00.158192 4793 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-czg5t"
Feb 17 22:56:00 crc kubenswrapper[4793]: I0217 22:56:00.158229 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-czg5t" event={"ID":"1d4451a4-4e8f-4724-82fe-ee178735940a","Type":"ContainerDied","Data":"38f97bc428e498ad1df697665283460cfdf54ebd2538ed075ba1603d35a39199"}
Feb 17 22:56:00 crc kubenswrapper[4793]: I0217 22:56:00.158263 4793 scope.go:117] "RemoveContainer" containerID="e0012a82ff226a81a85c0a9138b08ec1961af6176b2434933852f67ca496a390"
Feb 17 22:56:00 crc kubenswrapper[4793]: I0217 22:56:00.202896 4793 scope.go:117] "RemoveContainer" containerID="82a6680ecfd4910bf4d21f212c5d45ff939f3707a25d66fd6a2a2f1bc8757dca"
Feb 17 22:56:00 crc kubenswrapper[4793]: I0217 22:56:00.219740 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-czg5t"]
Feb 17 22:56:00 crc kubenswrapper[4793]: I0217 22:56:00.233092 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-czg5t"]
Feb 17 22:56:00 crc kubenswrapper[4793]: I0217 22:56:00.243842 4793 scope.go:117] "RemoveContainer" containerID="abb0d6090865f84a13192c465ba45086739a303450edab3c3b3927cedcae1bb5"
Feb 17 22:56:00 crc kubenswrapper[4793]: I0217 22:56:00.295863 4793 scope.go:117] "RemoveContainer" containerID="e0012a82ff226a81a85c0a9138b08ec1961af6176b2434933852f67ca496a390"
Feb 17 22:56:00 crc kubenswrapper[4793]: E0217 22:56:00.296465 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0012a82ff226a81a85c0a9138b08ec1961af6176b2434933852f67ca496a390\": container with ID starting with e0012a82ff226a81a85c0a9138b08ec1961af6176b2434933852f67ca496a390 not found: ID does not exist" containerID="e0012a82ff226a81a85c0a9138b08ec1961af6176b2434933852f67ca496a390"
Feb 17 22:56:00 crc kubenswrapper[4793]: I0217 22:56:00.296529 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0012a82ff226a81a85c0a9138b08ec1961af6176b2434933852f67ca496a390"} err="failed to get container status \"e0012a82ff226a81a85c0a9138b08ec1961af6176b2434933852f67ca496a390\": rpc error: code = NotFound desc = could not find container \"e0012a82ff226a81a85c0a9138b08ec1961af6176b2434933852f67ca496a390\": container with ID starting with e0012a82ff226a81a85c0a9138b08ec1961af6176b2434933852f67ca496a390 not found: ID does not exist"
Feb 17 22:56:00 crc kubenswrapper[4793]: I0217 22:56:00.296579 4793 scope.go:117] "RemoveContainer" containerID="82a6680ecfd4910bf4d21f212c5d45ff939f3707a25d66fd6a2a2f1bc8757dca"
Feb 17 22:56:00 crc kubenswrapper[4793]: E0217 22:56:00.297084 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82a6680ecfd4910bf4d21f212c5d45ff939f3707a25d66fd6a2a2f1bc8757dca\": container with ID starting with 82a6680ecfd4910bf4d21f212c5d45ff939f3707a25d66fd6a2a2f1bc8757dca not found: ID does not exist" containerID="82a6680ecfd4910bf4d21f212c5d45ff939f3707a25d66fd6a2a2f1bc8757dca"
Feb 17 22:56:00 crc kubenswrapper[4793]: I0217 22:56:00.297137 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82a6680ecfd4910bf4d21f212c5d45ff939f3707a25d66fd6a2a2f1bc8757dca"} err="failed to get container status \"82a6680ecfd4910bf4d21f212c5d45ff939f3707a25d66fd6a2a2f1bc8757dca\": rpc error: code = NotFound desc = could not find container \"82a6680ecfd4910bf4d21f212c5d45ff939f3707a25d66fd6a2a2f1bc8757dca\": container with ID starting with 82a6680ecfd4910bf4d21f212c5d45ff939f3707a25d66fd6a2a2f1bc8757dca not found: ID does not exist"
Feb 17 22:56:00 crc kubenswrapper[4793]: I0217 22:56:00.297172 4793 scope.go:117] "RemoveContainer" containerID="abb0d6090865f84a13192c465ba45086739a303450edab3c3b3927cedcae1bb5"
Feb 17 22:56:00 crc kubenswrapper[4793]: E0217 22:56:00.297626 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abb0d6090865f84a13192c465ba45086739a303450edab3c3b3927cedcae1bb5\": container with ID starting with abb0d6090865f84a13192c465ba45086739a303450edab3c3b3927cedcae1bb5 not found: ID does not exist" containerID="abb0d6090865f84a13192c465ba45086739a303450edab3c3b3927cedcae1bb5"
Feb 17 22:56:00 crc kubenswrapper[4793]: I0217 22:56:00.297672 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abb0d6090865f84a13192c465ba45086739a303450edab3c3b3927cedcae1bb5"} err="failed to get container status \"abb0d6090865f84a13192c465ba45086739a303450edab3c3b3927cedcae1bb5\": rpc error: code = NotFound desc = could not find container \"abb0d6090865f84a13192c465ba45086739a303450edab3c3b3927cedcae1bb5\": container with ID starting with abb0d6090865f84a13192c465ba45086739a303450edab3c3b3927cedcae1bb5 not found: ID does not exist"
Feb 17 22:56:00 crc kubenswrapper[4793]: I0217 22:56:00.538449 4793 scope.go:117] "RemoveContainer" containerID="932a952c18bcb817140c8254f8f293b9f8150290160a4bfaff70ea8d99481a97"
Feb 17 22:56:00 crc kubenswrapper[4793]: E0217 22:56:00.538978 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 22:56:01 crc kubenswrapper[4793]: I0217 22:56:01.552240 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d4451a4-4e8f-4724-82fe-ee178735940a" path="/var/lib/kubelet/pods/1d4451a4-4e8f-4724-82fe-ee178735940a/volumes"
Feb 17 22:56:02 crc kubenswrapper[4793]: I0217 22:56:02.007829 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4wkzx"]
Feb 17 22:56:02 crc kubenswrapper[4793]: E0217 22:56:02.008578 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d4451a4-4e8f-4724-82fe-ee178735940a" containerName="extract-utilities"
Feb 17 22:56:02 crc kubenswrapper[4793]: I0217 22:56:02.008616 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d4451a4-4e8f-4724-82fe-ee178735940a" containerName="extract-utilities"
Feb 17 22:56:02 crc kubenswrapper[4793]: E0217 22:56:02.008669 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d4451a4-4e8f-4724-82fe-ee178735940a" containerName="extract-content"
Feb 17 22:56:02 crc kubenswrapper[4793]: I0217 22:56:02.008683 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d4451a4-4e8f-4724-82fe-ee178735940a" containerName="extract-content"
Feb 17 22:56:02 crc kubenswrapper[4793]: E0217 22:56:02.008736 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d4451a4-4e8f-4724-82fe-ee178735940a" containerName="registry-server"
Feb 17 22:56:02 crc kubenswrapper[4793]: I0217 22:56:02.008750 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d4451a4-4e8f-4724-82fe-ee178735940a" containerName="registry-server"
Feb 17 22:56:02 crc kubenswrapper[4793]: I0217 22:56:02.009180 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d4451a4-4e8f-4724-82fe-ee178735940a" containerName="registry-server"
Feb 17 22:56:02 crc kubenswrapper[4793]: I0217 22:56:02.011865 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4wkzx"
Feb 17 22:56:02 crc kubenswrapper[4793]: I0217 22:56:02.026844 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4wkzx"]
Feb 17 22:56:02 crc kubenswrapper[4793]: I0217 22:56:02.123279 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3a34f62-52e0-4f1c-8dbf-336520a8b012-utilities\") pod \"certified-operators-4wkzx\" (UID: \"f3a34f62-52e0-4f1c-8dbf-336520a8b012\") " pod="openshift-marketplace/certified-operators-4wkzx"
Feb 17 22:56:02 crc kubenswrapper[4793]: I0217 22:56:02.123429 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sdpd\" (UniqueName: \"kubernetes.io/projected/f3a34f62-52e0-4f1c-8dbf-336520a8b012-kube-api-access-4sdpd\") pod \"certified-operators-4wkzx\" (UID: \"f3a34f62-52e0-4f1c-8dbf-336520a8b012\") " pod="openshift-marketplace/certified-operators-4wkzx"
Feb 17 22:56:02 crc kubenswrapper[4793]: I0217 22:56:02.123475 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3a34f62-52e0-4f1c-8dbf-336520a8b012-catalog-content\") pod \"certified-operators-4wkzx\" (UID: \"f3a34f62-52e0-4f1c-8dbf-336520a8b012\") " pod="openshift-marketplace/certified-operators-4wkzx"
Feb 17 22:56:02 crc kubenswrapper[4793]: I0217 22:56:02.226091 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3a34f62-52e0-4f1c-8dbf-336520a8b012-utilities\") pod \"certified-operators-4wkzx\" (UID: \"f3a34f62-52e0-4f1c-8dbf-336520a8b012\") " pod="openshift-marketplace/certified-operators-4wkzx"
Feb 17 22:56:02 crc kubenswrapper[4793]: I0217 22:56:02.226668 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3a34f62-52e0-4f1c-8dbf-336520a8b012-utilities\") pod \"certified-operators-4wkzx\" (UID: \"f3a34f62-52e0-4f1c-8dbf-336520a8b012\") " pod="openshift-marketplace/certified-operators-4wkzx"
Feb 17 22:56:02 crc kubenswrapper[4793]: I0217 22:56:02.227262 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sdpd\" (UniqueName: \"kubernetes.io/projected/f3a34f62-52e0-4f1c-8dbf-336520a8b012-kube-api-access-4sdpd\") pod \"certified-operators-4wkzx\" (UID: \"f3a34f62-52e0-4f1c-8dbf-336520a8b012\") " pod="openshift-marketplace/certified-operators-4wkzx"
Feb 17 22:56:02 crc kubenswrapper[4793]: I0217 22:56:02.227375 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3a34f62-52e0-4f1c-8dbf-336520a8b012-catalog-content\") pod \"certified-operators-4wkzx\" (UID: \"f3a34f62-52e0-4f1c-8dbf-336520a8b012\") " pod="openshift-marketplace/certified-operators-4wkzx"
Feb 17 22:56:02 crc kubenswrapper[4793]: I0217 22:56:02.227836 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3a34f62-52e0-4f1c-8dbf-336520a8b012-catalog-content\") pod \"certified-operators-4wkzx\" (UID: \"f3a34f62-52e0-4f1c-8dbf-336520a8b012\") " pod="openshift-marketplace/certified-operators-4wkzx"
Feb 17 22:56:02 crc kubenswrapper[4793]: I0217 22:56:02.248939 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sdpd\" (UniqueName: \"kubernetes.io/projected/f3a34f62-52e0-4f1c-8dbf-336520a8b012-kube-api-access-4sdpd\") pod \"certified-operators-4wkzx\" (UID: \"f3a34f62-52e0-4f1c-8dbf-336520a8b012\") " pod="openshift-marketplace/certified-operators-4wkzx"
Feb 17 22:56:02 crc kubenswrapper[4793]: I0217 22:56:02.346822 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4wkzx"
Feb 17 22:56:02 crc kubenswrapper[4793]: I0217 22:56:02.871251 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4wkzx"]
Feb 17 22:56:03 crc kubenswrapper[4793]: I0217 22:56:03.199109 4793 generic.go:334] "Generic (PLEG): container finished" podID="f3a34f62-52e0-4f1c-8dbf-336520a8b012" containerID="99ef575c5161bd18b4f92f11dc3602b8720f99ebbd71fb6916b48d9fc84af62c" exitCode=0
Feb 17 22:56:03 crc kubenswrapper[4793]: I0217 22:56:03.199444 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wkzx" event={"ID":"f3a34f62-52e0-4f1c-8dbf-336520a8b012","Type":"ContainerDied","Data":"99ef575c5161bd18b4f92f11dc3602b8720f99ebbd71fb6916b48d9fc84af62c"}
Feb 17 22:56:03 crc kubenswrapper[4793]: I0217 22:56:03.199475 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wkzx" event={"ID":"f3a34f62-52e0-4f1c-8dbf-336520a8b012","Type":"ContainerStarted","Data":"66553002de72ff0dc8995eb543c585dba6bc2393895ab319de26ceea9bce7b2c"}
Feb 17 22:56:05 crc kubenswrapper[4793]: I0217 22:56:05.237930 4793 generic.go:334] "Generic (PLEG): container finished" podID="f3a34f62-52e0-4f1c-8dbf-336520a8b012" containerID="d936f179d73c5ee0a4ff6382701d62ff700664d8ecf52b0e1731c90b04eba11e" exitCode=0
Feb 17 22:56:05 crc kubenswrapper[4793]: I0217 22:56:05.238067 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wkzx" event={"ID":"f3a34f62-52e0-4f1c-8dbf-336520a8b012","Type":"ContainerDied","Data":"d936f179d73c5ee0a4ff6382701d62ff700664d8ecf52b0e1731c90b04eba11e"}
Feb 17 22:56:05 crc kubenswrapper[4793]: I0217 22:56:05.551972 4793 scope.go:117] "RemoveContainer" containerID="d2b70282cad44b593e28b08747aaf3b7f0ff84ec3a05e82640f11b18e6345733"
Feb 17 22:56:05 crc kubenswrapper[4793]: E0217 22:56:05.552545 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:56:06 crc kubenswrapper[4793]: I0217 22:56:06.254902 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wkzx" event={"ID":"f3a34f62-52e0-4f1c-8dbf-336520a8b012","Type":"ContainerStarted","Data":"7b2318d1f495762db254563657084248ac8edcb11581658d079aa6e779c0f054"}
Feb 17 22:56:06 crc kubenswrapper[4793]: I0217 22:56:06.290232 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4wkzx" podStartSLOduration=2.729850528 podStartE2EDuration="5.290211541s" podCreationTimestamp="2026-02-17 22:56:01 +0000 UTC" firstStartedPulling="2026-02-17 22:56:03.201321233 +0000 UTC m=+10038.493019544" lastFinishedPulling="2026-02-17 22:56:05.761682246 +0000 UTC m=+10041.053380557" observedRunningTime="2026-02-17 22:56:06.274624086 +0000 UTC m=+10041.566322397" watchObservedRunningTime="2026-02-17 22:56:06.290211541 +0000 UTC m=+10041.581909852"
Feb 17 22:56:12 crc kubenswrapper[4793]: I0217 22:56:12.347548 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4wkzx"
Feb 17 22:56:12 crc kubenswrapper[4793]: I0217 22:56:12.348250 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4wkzx"
Feb 17 22:56:12 crc kubenswrapper[4793]: I0217 22:56:12.424366 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4wkzx"
Feb 17 22:56:13 crc kubenswrapper[4793]: I0217 22:56:13.426515 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4wkzx"
Feb 17 22:56:13 crc kubenswrapper[4793]: I0217 22:56:13.500724 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4wkzx"]
Feb 17 22:56:13 crc kubenswrapper[4793]: I0217 22:56:13.538630 4793 scope.go:117] "RemoveContainer" containerID="932a952c18bcb817140c8254f8f293b9f8150290160a4bfaff70ea8d99481a97"
Feb 17 22:56:13 crc kubenswrapper[4793]: E0217 22:56:13.539108 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 22:56:15 crc kubenswrapper[4793]: I0217 22:56:15.374721 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4wkzx" podUID="f3a34f62-52e0-4f1c-8dbf-336520a8b012" containerName="registry-server" containerID="cri-o://7b2318d1f495762db254563657084248ac8edcb11581658d079aa6e779c0f054" gracePeriod=2
Feb 17 22:56:15 crc kubenswrapper[4793]: I0217 22:56:15.903327 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4wkzx"
Feb 17 22:56:15 crc kubenswrapper[4793]: I0217 22:56:15.983474 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4sdpd\" (UniqueName: \"kubernetes.io/projected/f3a34f62-52e0-4f1c-8dbf-336520a8b012-kube-api-access-4sdpd\") pod \"f3a34f62-52e0-4f1c-8dbf-336520a8b012\" (UID: \"f3a34f62-52e0-4f1c-8dbf-336520a8b012\") "
Feb 17 22:56:15 crc kubenswrapper[4793]: I0217 22:56:15.983574 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3a34f62-52e0-4f1c-8dbf-336520a8b012-catalog-content\") pod \"f3a34f62-52e0-4f1c-8dbf-336520a8b012\" (UID: \"f3a34f62-52e0-4f1c-8dbf-336520a8b012\") "
Feb 17 22:56:15 crc kubenswrapper[4793]: I0217 22:56:15.983661 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3a34f62-52e0-4f1c-8dbf-336520a8b012-utilities\") pod \"f3a34f62-52e0-4f1c-8dbf-336520a8b012\" (UID: \"f3a34f62-52e0-4f1c-8dbf-336520a8b012\") "
Feb 17 22:56:15 crc kubenswrapper[4793]: I0217 22:56:15.984559 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3a34f62-52e0-4f1c-8dbf-336520a8b012-utilities" (OuterVolumeSpecName: "utilities") pod "f3a34f62-52e0-4f1c-8dbf-336520a8b012" (UID: "f3a34f62-52e0-4f1c-8dbf-336520a8b012"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 22:56:15 crc kubenswrapper[4793]: I0217 22:56:15.999880 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3a34f62-52e0-4f1c-8dbf-336520a8b012-kube-api-access-4sdpd" (OuterVolumeSpecName: "kube-api-access-4sdpd") pod "f3a34f62-52e0-4f1c-8dbf-336520a8b012" (UID: "f3a34f62-52e0-4f1c-8dbf-336520a8b012"). InnerVolumeSpecName "kube-api-access-4sdpd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 22:56:16 crc kubenswrapper[4793]: I0217 22:56:16.046541 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3a34f62-52e0-4f1c-8dbf-336520a8b012-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f3a34f62-52e0-4f1c-8dbf-336520a8b012" (UID: "f3a34f62-52e0-4f1c-8dbf-336520a8b012"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 22:56:16 crc kubenswrapper[4793]: I0217 22:56:16.086573 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4sdpd\" (UniqueName: \"kubernetes.io/projected/f3a34f62-52e0-4f1c-8dbf-336520a8b012-kube-api-access-4sdpd\") on node \"crc\" DevicePath \"\""
Feb 17 22:56:16 crc kubenswrapper[4793]: I0217 22:56:16.086630 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3a34f62-52e0-4f1c-8dbf-336520a8b012-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 22:56:16 crc kubenswrapper[4793]: I0217 22:56:16.086651 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3a34f62-52e0-4f1c-8dbf-336520a8b012-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 22:56:16 crc kubenswrapper[4793]: I0217 22:56:16.391039 4793 generic.go:334] "Generic (PLEG): container finished" podID="f3a34f62-52e0-4f1c-8dbf-336520a8b012" containerID="7b2318d1f495762db254563657084248ac8edcb11581658d079aa6e779c0f054" exitCode=0
Feb 17 22:56:16 crc kubenswrapper[4793]: I0217 22:56:16.391102 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wkzx" event={"ID":"f3a34f62-52e0-4f1c-8dbf-336520a8b012","Type":"ContainerDied","Data":"7b2318d1f495762db254563657084248ac8edcb11581658d079aa6e779c0f054"}
Feb 17 22:56:16 crc kubenswrapper[4793]: I0217 22:56:16.391144 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wkzx" event={"ID":"f3a34f62-52e0-4f1c-8dbf-336520a8b012","Type":"ContainerDied","Data":"66553002de72ff0dc8995eb543c585dba6bc2393895ab319de26ceea9bce7b2c"}
Feb 17 22:56:16 crc kubenswrapper[4793]: I0217 22:56:16.391141 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4wkzx"
Feb 17 22:56:16 crc kubenswrapper[4793]: I0217 22:56:16.391199 4793 scope.go:117] "RemoveContainer" containerID="7b2318d1f495762db254563657084248ac8edcb11581658d079aa6e779c0f054"
Feb 17 22:56:16 crc kubenswrapper[4793]: I0217 22:56:16.437472 4793 scope.go:117] "RemoveContainer" containerID="d936f179d73c5ee0a4ff6382701d62ff700664d8ecf52b0e1731c90b04eba11e"
Feb 17 22:56:16 crc kubenswrapper[4793]: I0217 22:56:16.453910 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4wkzx"]
Feb 17 22:56:16 crc kubenswrapper[4793]: I0217 22:56:16.467383 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4wkzx"]
Feb 17 22:56:16 crc kubenswrapper[4793]: I0217 22:56:16.481596 4793 scope.go:117] "RemoveContainer" containerID="99ef575c5161bd18b4f92f11dc3602b8720f99ebbd71fb6916b48d9fc84af62c"
Feb 17 22:56:16 crc kubenswrapper[4793]: I0217 22:56:16.540331 4793 scope.go:117] "RemoveContainer" containerID="d2b70282cad44b593e28b08747aaf3b7f0ff84ec3a05e82640f11b18e6345733"
Feb 17 22:56:16 crc kubenswrapper[4793]: E0217 22:56:16.540644 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:56:16 crc kubenswrapper[4793]: I0217 22:56:16.565403 4793 scope.go:117] "RemoveContainer" containerID="7b2318d1f495762db254563657084248ac8edcb11581658d079aa6e779c0f054"
Feb 17 22:56:16 crc kubenswrapper[4793]: E0217 22:56:16.565845 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b2318d1f495762db254563657084248ac8edcb11581658d079aa6e779c0f054\": container with ID starting with 7b2318d1f495762db254563657084248ac8edcb11581658d079aa6e779c0f054 not found: ID does not exist" containerID="7b2318d1f495762db254563657084248ac8edcb11581658d079aa6e779c0f054"
Feb 17 22:56:16 crc kubenswrapper[4793]: I0217 22:56:16.565884 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b2318d1f495762db254563657084248ac8edcb11581658d079aa6e779c0f054"} err="failed to get container status \"7b2318d1f495762db254563657084248ac8edcb11581658d079aa6e779c0f054\": rpc error: code = NotFound desc = could not find container \"7b2318d1f495762db254563657084248ac8edcb11581658d079aa6e779c0f054\": container with ID starting with 7b2318d1f495762db254563657084248ac8edcb11581658d079aa6e779c0f054 not found: ID does not exist"
Feb 17 22:56:16 crc kubenswrapper[4793]: I0217 22:56:16.565911 4793 scope.go:117] "RemoveContainer" containerID="d936f179d73c5ee0a4ff6382701d62ff700664d8ecf52b0e1731c90b04eba11e"
Feb 17 22:56:16 crc kubenswrapper[4793]: E0217 22:56:16.566332 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d936f179d73c5ee0a4ff6382701d62ff700664d8ecf52b0e1731c90b04eba11e\": container with ID starting with d936f179d73c5ee0a4ff6382701d62ff700664d8ecf52b0e1731c90b04eba11e not found: ID does not exist" containerID="d936f179d73c5ee0a4ff6382701d62ff700664d8ecf52b0e1731c90b04eba11e"
Feb 17 22:56:16 crc kubenswrapper[4793]: I0217 22:56:16.566417 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d936f179d73c5ee0a4ff6382701d62ff700664d8ecf52b0e1731c90b04eba11e"} err="failed to get container status \"d936f179d73c5ee0a4ff6382701d62ff700664d8ecf52b0e1731c90b04eba11e\": rpc error: code = NotFound desc = could not find container \"d936f179d73c5ee0a4ff6382701d62ff700664d8ecf52b0e1731c90b04eba11e\": container with ID starting with d936f179d73c5ee0a4ff6382701d62ff700664d8ecf52b0e1731c90b04eba11e not found: ID does not exist"
Feb 17 22:56:16 crc kubenswrapper[4793]: I0217 22:56:16.566466 4793 scope.go:117] "RemoveContainer" containerID="99ef575c5161bd18b4f92f11dc3602b8720f99ebbd71fb6916b48d9fc84af62c"
Feb 17 22:56:16 crc kubenswrapper[4793]: E0217 22:56:16.566964 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99ef575c5161bd18b4f92f11dc3602b8720f99ebbd71fb6916b48d9fc84af62c\": container with ID starting with 99ef575c5161bd18b4f92f11dc3602b8720f99ebbd71fb6916b48d9fc84af62c not found: ID does not exist" containerID="99ef575c5161bd18b4f92f11dc3602b8720f99ebbd71fb6916b48d9fc84af62c"
Feb 17 22:56:16 crc kubenswrapper[4793]: I0217 22:56:16.566995 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99ef575c5161bd18b4f92f11dc3602b8720f99ebbd71fb6916b48d9fc84af62c"} err="failed to get container status \"99ef575c5161bd18b4f92f11dc3602b8720f99ebbd71fb6916b48d9fc84af62c\": rpc error: code = NotFound desc = could not find container \"99ef575c5161bd18b4f92f11dc3602b8720f99ebbd71fb6916b48d9fc84af62c\": container with ID starting with 99ef575c5161bd18b4f92f11dc3602b8720f99ebbd71fb6916b48d9fc84af62c not found: ID does not exist"
Feb 17 22:56:17 crc kubenswrapper[4793]: I0217 22:56:17.557123 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3a34f62-52e0-4f1c-8dbf-336520a8b012" path="/var/lib/kubelet/pods/f3a34f62-52e0-4f1c-8dbf-336520a8b012/volumes"
Feb 17 22:56:26 crc kubenswrapper[4793]: I0217 22:56:26.539033 4793 scope.go:117] "RemoveContainer" containerID="932a952c18bcb817140c8254f8f293b9f8150290160a4bfaff70ea8d99481a97"
Feb 17 22:56:26 crc kubenswrapper[4793]: E0217 22:56:26.540065 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 22:56:28 crc kubenswrapper[4793]: I0217 22:56:28.539798 4793 scope.go:117] "RemoveContainer" containerID="d2b70282cad44b593e28b08747aaf3b7f0ff84ec3a05e82640f11b18e6345733"
Feb 17 22:56:28 crc kubenswrapper[4793]: E0217 22:56:28.540834 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:56:40 crc kubenswrapper[4793]: I0217 22:56:40.540018 4793 scope.go:117] "RemoveContainer" containerID="932a952c18bcb817140c8254f8f293b9f8150290160a4bfaff70ea8d99481a97"
Feb 17 22:56:40 crc kubenswrapper[4793]: E0217 22:56:40.541169 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 22:56:43 crc kubenswrapper[4793]: I0217 22:56:43.539364 4793 scope.go:117] "RemoveContainer" containerID="d2b70282cad44b593e28b08747aaf3b7f0ff84ec3a05e82640f11b18e6345733"
Feb 17 22:56:43 crc kubenswrapper[4793]: E0217 22:56:43.540544 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:56:52 crc kubenswrapper[4793]: I0217 22:56:52.539038 4793 scope.go:117] "RemoveContainer" containerID="932a952c18bcb817140c8254f8f293b9f8150290160a4bfaff70ea8d99481a97"
Feb 17 22:56:53 crc kubenswrapper[4793]: I0217 22:56:53.988971 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerStarted","Data":"5b0533bbdb20048b64388bd01e5e1fd24c502c1fe2827f5ee4bfdcf9baa7e555"}
Feb 17 22:56:54 crc kubenswrapper[4793]: I0217 22:56:54.538761 4793 scope.go:117] "RemoveContainer" containerID="d2b70282cad44b593e28b08747aaf3b7f0ff84ec3a05e82640f11b18e6345733"
Feb 17 22:56:54 crc kubenswrapper[4793]: E0217 22:56:54.539263 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:57:09 crc kubenswrapper[4793]: I0217 22:57:09.539829 4793 scope.go:117] "RemoveContainer" containerID="d2b70282cad44b593e28b08747aaf3b7f0ff84ec3a05e82640f11b18e6345733"
Feb 17 22:57:09 crc kubenswrapper[4793]: E0217 22:57:09.540608 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:57:24 crc kubenswrapper[4793]: I0217 22:57:24.538592 4793 scope.go:117] "RemoveContainer" containerID="d2b70282cad44b593e28b08747aaf3b7f0ff84ec3a05e82640f11b18e6345733"
Feb 17 22:57:24 crc kubenswrapper[4793]: E0217 22:57:24.539295 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:57:38 crc kubenswrapper[4793]: I0217 22:57:38.540423 4793 scope.go:117] "RemoveContainer" containerID="d2b70282cad44b593e28b08747aaf3b7f0ff84ec3a05e82640f11b18e6345733"
Feb 17 22:57:38 crc kubenswrapper[4793]: E0217 22:57:38.541441 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:57:51 crc kubenswrapper[4793]: I0217 22:57:51.538542 4793 scope.go:117] "RemoveContainer" containerID="d2b70282cad44b593e28b08747aaf3b7f0ff84ec3a05e82640f11b18e6345733"
Feb 17 22:57:51 crc kubenswrapper[4793]: E0217 22:57:51.539402 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:58:02 crc kubenswrapper[4793]: I0217 22:58:02.539568 4793 scope.go:117] "RemoveContainer" containerID="d2b70282cad44b593e28b08747aaf3b7f0ff84ec3a05e82640f11b18e6345733"
Feb 17 22:58:02 crc kubenswrapper[4793]: E0217 22:58:02.540939 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:58:16 crc kubenswrapper[4793]: I0217 22:58:16.539503 4793 scope.go:117] "RemoveContainer" containerID="d2b70282cad44b593e28b08747aaf3b7f0ff84ec3a05e82640f11b18e6345733"
Feb 17 22:58:16 crc kubenswrapper[4793]: I0217 22:58:16.992875 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerStarted","Data":"3c499b10787d9f95e3a30116b75ff55ab6cc73536d83768121848b34428c5871"}
Feb 17 22:58:20 crc kubenswrapper[4793]: I0217 22:58:20.028485 4793 generic.go:334] "Generic (PLEG): container finished" podID="02d26164-0fa4-4020-9224-b7760a490987" containerID="3c499b10787d9f95e3a30116b75ff55ab6cc73536d83768121848b34428c5871" exitCode=1
Feb 17 22:58:20 crc kubenswrapper[4793]: I0217 22:58:20.028574 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerDied","Data":"3c499b10787d9f95e3a30116b75ff55ab6cc73536d83768121848b34428c5871"}
Feb 17 22:58:20 crc kubenswrapper[4793]: I0217 22:58:20.029377 4793 scope.go:117] "RemoveContainer" containerID="d2b70282cad44b593e28b08747aaf3b7f0ff84ec3a05e82640f11b18e6345733"
Feb 17 22:58:20 crc kubenswrapper[4793]: I0217 22:58:20.030466 4793 scope.go:117] "RemoveContainer" containerID="3c499b10787d9f95e3a30116b75ff55ab6cc73536d83768121848b34428c5871"
Feb 17 22:58:20 crc kubenswrapper[4793]: E0217 22:58:20.031049 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:58:20 crc kubenswrapper[4793]: I0217 22:58:20.596707 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0"
Feb 17 22:58:21 crc kubenswrapper[4793]: I0217 22:58:21.042532 4793 scope.go:117] "RemoveContainer" containerID="3c499b10787d9f95e3a30116b75ff55ab6cc73536d83768121848b34428c5871"
Feb 17 22:58:21 crc kubenswrapper[4793]: E0217 22:58:21.042882 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:58:25 crc kubenswrapper[4793]: I0217 22:58:25.596097 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-applier-0"
Feb 17 22:58:25 crc kubenswrapper[4793]: I0217 22:58:25.596637 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0"
Feb 17 22:58:25 crc kubenswrapper[4793]: I0217 22:58:25.596651 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0"
Feb 17 22:58:25 crc kubenswrapper[4793]: I0217 22:58:25.597561 4793 scope.go:117] "RemoveContainer" containerID="3c499b10787d9f95e3a30116b75ff55ab6cc73536d83768121848b34428c5871"
Feb 17 22:58:25 crc kubenswrapper[4793]: E0217 22:58:25.597889 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:58:39 crc kubenswrapper[4793]: I0217 22:58:39.538795 4793 scope.go:117] "RemoveContainer" containerID="3c499b10787d9f95e3a30116b75ff55ab6cc73536d83768121848b34428c5871"
Feb 17 22:58:39 crc kubenswrapper[4793]: E0217 22:58:39.539864 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:58:54 crc kubenswrapper[4793]: I0217 22:58:54.539467 4793 scope.go:117] "RemoveContainer" containerID="3c499b10787d9f95e3a30116b75ff55ab6cc73536d83768121848b34428c5871"
Feb 17 22:58:54 crc kubenswrapper[4793]: E0217 22:58:54.540238 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:59:06 crc kubenswrapper[4793]: I0217 22:59:06.538958 4793 scope.go:117] "RemoveContainer" containerID="3c499b10787d9f95e3a30116b75ff55ab6cc73536d83768121848b34428c5871"
Feb 17 22:59:06 crc kubenswrapper[4793]: E0217 22:59:06.539926 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:59:20 crc kubenswrapper[4793]: I0217 22:59:20.102049 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 22:59:20 crc kubenswrapper[4793]: I0217 22:59:20.102830 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 22:59:21 crc kubenswrapper[4793]: I0217 22:59:21.539124 4793 scope.go:117] "RemoveContainer" containerID="3c499b10787d9f95e3a30116b75ff55ab6cc73536d83768121848b34428c5871"
Feb 17 22:59:21 crc kubenswrapper[4793]: E0217 22:59:21.539707 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 22:59:34 crc kubenswrapper[4793]: I0217 22:59:34.539364 4793 scope.go:117] "RemoveContainer" containerID="3c499b10787d9f95e3a30116b75ff55ab6cc73536d83768121848b34428c5871"
Feb 17 22:59:34 crc kubenswrapper[4793]: E0217 22:59:34.540113 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0"
podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:59:46 crc kubenswrapper[4793]: I0217 22:59:46.539242 4793 scope.go:117] "RemoveContainer" containerID="3c499b10787d9f95e3a30116b75ff55ab6cc73536d83768121848b34428c5871" Feb 17 22:59:46 crc kubenswrapper[4793]: E0217 22:59:46.540269 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 22:59:50 crc kubenswrapper[4793]: I0217 22:59:50.102435 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 22:59:50 crc kubenswrapper[4793]: I0217 22:59:50.103068 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 22:59:59 crc kubenswrapper[4793]: I0217 22:59:59.540014 4793 scope.go:117] "RemoveContainer" containerID="3c499b10787d9f95e3a30116b75ff55ab6cc73536d83768121848b34428c5871" Feb 17 22:59:59 crc kubenswrapper[4793]: E0217 22:59:59.540810 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 23:00:00 crc 
kubenswrapper[4793]: I0217 23:00:00.199428 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522820-qwcdb"] Feb 17 23:00:00 crc kubenswrapper[4793]: E0217 23:00:00.200360 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3a34f62-52e0-4f1c-8dbf-336520a8b012" containerName="extract-content" Feb 17 23:00:00 crc kubenswrapper[4793]: I0217 23:00:00.200381 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3a34f62-52e0-4f1c-8dbf-336520a8b012" containerName="extract-content" Feb 17 23:00:00 crc kubenswrapper[4793]: E0217 23:00:00.200396 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3a34f62-52e0-4f1c-8dbf-336520a8b012" containerName="registry-server" Feb 17 23:00:00 crc kubenswrapper[4793]: I0217 23:00:00.200405 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3a34f62-52e0-4f1c-8dbf-336520a8b012" containerName="registry-server" Feb 17 23:00:00 crc kubenswrapper[4793]: E0217 23:00:00.200422 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3a34f62-52e0-4f1c-8dbf-336520a8b012" containerName="extract-utilities" Feb 17 23:00:00 crc kubenswrapper[4793]: I0217 23:00:00.200430 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3a34f62-52e0-4f1c-8dbf-336520a8b012" containerName="extract-utilities" Feb 17 23:00:00 crc kubenswrapper[4793]: I0217 23:00:00.200726 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3a34f62-52e0-4f1c-8dbf-336520a8b012" containerName="registry-server" Feb 17 23:00:00 crc kubenswrapper[4793]: I0217 23:00:00.201619 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522820-qwcdb" Feb 17 23:00:00 crc kubenswrapper[4793]: I0217 23:00:00.204747 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 23:00:00 crc kubenswrapper[4793]: I0217 23:00:00.205409 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 23:00:00 crc kubenswrapper[4793]: I0217 23:00:00.216505 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522820-qwcdb"] Feb 17 23:00:00 crc kubenswrapper[4793]: I0217 23:00:00.327905 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/60f716c2-217a-4819-bbcd-566bdaa70424-secret-volume\") pod \"collect-profiles-29522820-qwcdb\" (UID: \"60f716c2-217a-4819-bbcd-566bdaa70424\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522820-qwcdb" Feb 17 23:00:00 crc kubenswrapper[4793]: I0217 23:00:00.328007 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60f716c2-217a-4819-bbcd-566bdaa70424-config-volume\") pod \"collect-profiles-29522820-qwcdb\" (UID: \"60f716c2-217a-4819-bbcd-566bdaa70424\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522820-qwcdb" Feb 17 23:00:00 crc kubenswrapper[4793]: I0217 23:00:00.328079 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrnzn\" (UniqueName: \"kubernetes.io/projected/60f716c2-217a-4819-bbcd-566bdaa70424-kube-api-access-zrnzn\") pod \"collect-profiles-29522820-qwcdb\" (UID: \"60f716c2-217a-4819-bbcd-566bdaa70424\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29522820-qwcdb" Feb 17 23:00:00 crc kubenswrapper[4793]: I0217 23:00:00.430632 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/60f716c2-217a-4819-bbcd-566bdaa70424-secret-volume\") pod \"collect-profiles-29522820-qwcdb\" (UID: \"60f716c2-217a-4819-bbcd-566bdaa70424\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522820-qwcdb" Feb 17 23:00:00 crc kubenswrapper[4793]: I0217 23:00:00.430773 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60f716c2-217a-4819-bbcd-566bdaa70424-config-volume\") pod \"collect-profiles-29522820-qwcdb\" (UID: \"60f716c2-217a-4819-bbcd-566bdaa70424\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522820-qwcdb" Feb 17 23:00:00 crc kubenswrapper[4793]: I0217 23:00:00.430864 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrnzn\" (UniqueName: \"kubernetes.io/projected/60f716c2-217a-4819-bbcd-566bdaa70424-kube-api-access-zrnzn\") pod \"collect-profiles-29522820-qwcdb\" (UID: \"60f716c2-217a-4819-bbcd-566bdaa70424\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522820-qwcdb" Feb 17 23:00:00 crc kubenswrapper[4793]: I0217 23:00:00.433626 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60f716c2-217a-4819-bbcd-566bdaa70424-config-volume\") pod \"collect-profiles-29522820-qwcdb\" (UID: \"60f716c2-217a-4819-bbcd-566bdaa70424\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522820-qwcdb" Feb 17 23:00:00 crc kubenswrapper[4793]: I0217 23:00:00.438342 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/60f716c2-217a-4819-bbcd-566bdaa70424-secret-volume\") pod \"collect-profiles-29522820-qwcdb\" (UID: \"60f716c2-217a-4819-bbcd-566bdaa70424\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522820-qwcdb" Feb 17 23:00:00 crc kubenswrapper[4793]: I0217 23:00:00.449052 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrnzn\" (UniqueName: \"kubernetes.io/projected/60f716c2-217a-4819-bbcd-566bdaa70424-kube-api-access-zrnzn\") pod \"collect-profiles-29522820-qwcdb\" (UID: \"60f716c2-217a-4819-bbcd-566bdaa70424\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522820-qwcdb" Feb 17 23:00:00 crc kubenswrapper[4793]: I0217 23:00:00.533754 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522820-qwcdb" Feb 17 23:00:01 crc kubenswrapper[4793]: I0217 23:00:01.002899 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522820-qwcdb"] Feb 17 23:00:01 crc kubenswrapper[4793]: I0217 23:00:01.193291 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522820-qwcdb" event={"ID":"60f716c2-217a-4819-bbcd-566bdaa70424","Type":"ContainerStarted","Data":"e4858bfbd49e1c1ea8c573e1062a76e297ce7aaffd9ea8b5291d8e43d1b609d4"} Feb 17 23:00:01 crc kubenswrapper[4793]: I0217 23:00:01.193796 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522820-qwcdb" event={"ID":"60f716c2-217a-4819-bbcd-566bdaa70424","Type":"ContainerStarted","Data":"04d392e0c263f12b4819c4fb892fae629208ea83cd6c34fdc4cdd88f0074e011"} Feb 17 23:00:01 crc kubenswrapper[4793]: I0217 23:00:01.212143 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29522820-qwcdb" 
podStartSLOduration=1.212125906 podStartE2EDuration="1.212125906s" podCreationTimestamp="2026-02-17 23:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 23:00:01.207097022 +0000 UTC m=+10276.498795323" watchObservedRunningTime="2026-02-17 23:00:01.212125906 +0000 UTC m=+10276.503824217" Feb 17 23:00:02 crc kubenswrapper[4793]: I0217 23:00:02.208600 4793 generic.go:334] "Generic (PLEG): container finished" podID="60f716c2-217a-4819-bbcd-566bdaa70424" containerID="e4858bfbd49e1c1ea8c573e1062a76e297ce7aaffd9ea8b5291d8e43d1b609d4" exitCode=0 Feb 17 23:00:02 crc kubenswrapper[4793]: I0217 23:00:02.208674 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522820-qwcdb" event={"ID":"60f716c2-217a-4819-bbcd-566bdaa70424","Type":"ContainerDied","Data":"e4858bfbd49e1c1ea8c573e1062a76e297ce7aaffd9ea8b5291d8e43d1b609d4"} Feb 17 23:00:03 crc kubenswrapper[4793]: I0217 23:00:03.617013 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522820-qwcdb" Feb 17 23:00:03 crc kubenswrapper[4793]: I0217 23:00:03.698636 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/60f716c2-217a-4819-bbcd-566bdaa70424-secret-volume\") pod \"60f716c2-217a-4819-bbcd-566bdaa70424\" (UID: \"60f716c2-217a-4819-bbcd-566bdaa70424\") " Feb 17 23:00:03 crc kubenswrapper[4793]: I0217 23:00:03.698900 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60f716c2-217a-4819-bbcd-566bdaa70424-config-volume\") pod \"60f716c2-217a-4819-bbcd-566bdaa70424\" (UID: \"60f716c2-217a-4819-bbcd-566bdaa70424\") " Feb 17 23:00:03 crc kubenswrapper[4793]: I0217 23:00:03.698984 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrnzn\" (UniqueName: \"kubernetes.io/projected/60f716c2-217a-4819-bbcd-566bdaa70424-kube-api-access-zrnzn\") pod \"60f716c2-217a-4819-bbcd-566bdaa70424\" (UID: \"60f716c2-217a-4819-bbcd-566bdaa70424\") " Feb 17 23:00:03 crc kubenswrapper[4793]: I0217 23:00:03.700049 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60f716c2-217a-4819-bbcd-566bdaa70424-config-volume" (OuterVolumeSpecName: "config-volume") pod "60f716c2-217a-4819-bbcd-566bdaa70424" (UID: "60f716c2-217a-4819-bbcd-566bdaa70424"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 23:00:03 crc kubenswrapper[4793]: I0217 23:00:03.705532 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60f716c2-217a-4819-bbcd-566bdaa70424-kube-api-access-zrnzn" (OuterVolumeSpecName: "kube-api-access-zrnzn") pod "60f716c2-217a-4819-bbcd-566bdaa70424" (UID: "60f716c2-217a-4819-bbcd-566bdaa70424"). 
InnerVolumeSpecName "kube-api-access-zrnzn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 23:00:03 crc kubenswrapper[4793]: I0217 23:00:03.718231 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60f716c2-217a-4819-bbcd-566bdaa70424-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "60f716c2-217a-4819-bbcd-566bdaa70424" (UID: "60f716c2-217a-4819-bbcd-566bdaa70424"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 23:00:03 crc kubenswrapper[4793]: I0217 23:00:03.801015 4793 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60f716c2-217a-4819-bbcd-566bdaa70424-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 23:00:03 crc kubenswrapper[4793]: I0217 23:00:03.801321 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrnzn\" (UniqueName: \"kubernetes.io/projected/60f716c2-217a-4819-bbcd-566bdaa70424-kube-api-access-zrnzn\") on node \"crc\" DevicePath \"\"" Feb 17 23:00:03 crc kubenswrapper[4793]: I0217 23:00:03.801350 4793 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/60f716c2-217a-4819-bbcd-566bdaa70424-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 23:00:04 crc kubenswrapper[4793]: I0217 23:00:04.234280 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522820-qwcdb" event={"ID":"60f716c2-217a-4819-bbcd-566bdaa70424","Type":"ContainerDied","Data":"04d392e0c263f12b4819c4fb892fae629208ea83cd6c34fdc4cdd88f0074e011"} Feb 17 23:00:04 crc kubenswrapper[4793]: I0217 23:00:04.234333 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04d392e0c263f12b4819c4fb892fae629208ea83cd6c34fdc4cdd88f0074e011" Feb 17 23:00:04 crc kubenswrapper[4793]: I0217 23:00:04.234344 4793 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522820-qwcdb" Feb 17 23:00:04 crc kubenswrapper[4793]: I0217 23:00:04.284719 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522775-zjf6q"] Feb 17 23:00:04 crc kubenswrapper[4793]: I0217 23:00:04.294262 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522775-zjf6q"] Feb 17 23:00:05 crc kubenswrapper[4793]: I0217 23:00:05.556446 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b9062b7-c0c8-4634-bed3-b79a5e3fcfdc" path="/var/lib/kubelet/pods/9b9062b7-c0c8-4634-bed3-b79a5e3fcfdc/volumes" Feb 17 23:00:10 crc kubenswrapper[4793]: I0217 23:00:10.539780 4793 scope.go:117] "RemoveContainer" containerID="3c499b10787d9f95e3a30116b75ff55ab6cc73536d83768121848b34428c5871" Feb 17 23:00:10 crc kubenswrapper[4793]: E0217 23:00:10.540763 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 23:00:20 crc kubenswrapper[4793]: I0217 23:00:20.101516 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 23:00:20 crc kubenswrapper[4793]: I0217 23:00:20.102190 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 23:00:20 crc kubenswrapper[4793]: I0217 23:00:20.102251 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" Feb 17 23:00:20 crc kubenswrapper[4793]: I0217 23:00:20.103398 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5b0533bbdb20048b64388bd01e5e1fd24c502c1fe2827f5ee4bfdcf9baa7e555"} pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 23:00:20 crc kubenswrapper[4793]: I0217 23:00:20.103507 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" containerID="cri-o://5b0533bbdb20048b64388bd01e5e1fd24c502c1fe2827f5ee4bfdcf9baa7e555" gracePeriod=600 Feb 17 23:00:20 crc kubenswrapper[4793]: I0217 23:00:20.470744 4793 generic.go:334] "Generic (PLEG): container finished" podID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerID="5b0533bbdb20048b64388bd01e5e1fd24c502c1fe2827f5ee4bfdcf9baa7e555" exitCode=0 Feb 17 23:00:20 crc kubenswrapper[4793]: I0217 23:00:20.471016 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerDied","Data":"5b0533bbdb20048b64388bd01e5e1fd24c502c1fe2827f5ee4bfdcf9baa7e555"} Feb 17 23:00:20 crc kubenswrapper[4793]: I0217 23:00:20.471121 4793 scope.go:117] "RemoveContainer" containerID="932a952c18bcb817140c8254f8f293b9f8150290160a4bfaff70ea8d99481a97" Feb 17 23:00:21 crc kubenswrapper[4793]: I0217 23:00:21.503196 4793 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerStarted","Data":"5ea010f6c87e97e2c1ae3eea129cd7fb8ee6f4c832f3d99af7d3bcf29b729eea"} Feb 17 23:00:21 crc kubenswrapper[4793]: I0217 23:00:21.538759 4793 scope.go:117] "RemoveContainer" containerID="3c499b10787d9f95e3a30116b75ff55ab6cc73536d83768121848b34428c5871" Feb 17 23:00:21 crc kubenswrapper[4793]: E0217 23:00:21.539288 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 23:00:28 crc kubenswrapper[4793]: I0217 23:00:28.133839 4793 scope.go:117] "RemoveContainer" containerID="54a26d4eae1e40742759e55999dab60891a3e4e6ffaf05ae02915eef645cf154" Feb 17 23:00:36 crc kubenswrapper[4793]: I0217 23:00:36.539495 4793 scope.go:117] "RemoveContainer" containerID="3c499b10787d9f95e3a30116b75ff55ab6cc73536d83768121848b34428c5871" Feb 17 23:00:36 crc kubenswrapper[4793]: E0217 23:00:36.542390 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 23:00:51 crc kubenswrapper[4793]: I0217 23:00:51.538794 4793 scope.go:117] "RemoveContainer" containerID="3c499b10787d9f95e3a30116b75ff55ab6cc73536d83768121848b34428c5871" Feb 17 23:00:51 crc kubenswrapper[4793]: E0217 23:00:51.539518 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 23:00:56 crc kubenswrapper[4793]: I0217 23:00:56.939570 4793 generic.go:334] "Generic (PLEG): container finished" podID="75a21cff-8e4b-4844-8717-b4f483fa282b" containerID="1f1738d6ab1b7ab24ef565bdaa13cd8e56a856baeca65fa3e237b4fe1045021a" exitCode=1 Feb 17 23:00:56 crc kubenswrapper[4793]: I0217 23:00:56.939716 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"75a21cff-8e4b-4844-8717-b4f483fa282b","Type":"ContainerDied","Data":"1f1738d6ab1b7ab24ef565bdaa13cd8e56a856baeca65fa3e237b4fe1045021a"} Feb 17 23:00:58 crc kubenswrapper[4793]: I0217 23:00:58.435218 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 17 23:00:58 crc kubenswrapper[4793]: I0217 23:00:58.569645 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/75a21cff-8e4b-4844-8717-b4f483fa282b-config-data\") pod \"75a21cff-8e4b-4844-8717-b4f483fa282b\" (UID: \"75a21cff-8e4b-4844-8717-b4f483fa282b\") " Feb 17 23:00:58 crc kubenswrapper[4793]: I0217 23:00:58.569700 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/75a21cff-8e4b-4844-8717-b4f483fa282b-ssh-key\") pod \"75a21cff-8e4b-4844-8717-b4f483fa282b\" (UID: \"75a21cff-8e4b-4844-8717-b4f483fa282b\") " Feb 17 23:00:58 crc kubenswrapper[4793]: I0217 23:00:58.569888 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"75a21cff-8e4b-4844-8717-b4f483fa282b\" (UID: \"75a21cff-8e4b-4844-8717-b4f483fa282b\") " Feb 17 
23:00:58 crc kubenswrapper[4793]: I0217 23:00:58.569928 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/75a21cff-8e4b-4844-8717-b4f483fa282b-openstack-config\") pod \"75a21cff-8e4b-4844-8717-b4f483fa282b\" (UID: \"75a21cff-8e4b-4844-8717-b4f483fa282b\") " Feb 17 23:00:58 crc kubenswrapper[4793]: I0217 23:00:58.570642 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/75a21cff-8e4b-4844-8717-b4f483fa282b-test-operator-ephemeral-workdir\") pod \"75a21cff-8e4b-4844-8717-b4f483fa282b\" (UID: \"75a21cff-8e4b-4844-8717-b4f483fa282b\") " Feb 17 23:00:58 crc kubenswrapper[4793]: I0217 23:00:58.570682 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/75a21cff-8e4b-4844-8717-b4f483fa282b-ca-certs\") pod \"75a21cff-8e4b-4844-8717-b4f483fa282b\" (UID: \"75a21cff-8e4b-4844-8717-b4f483fa282b\") " Feb 17 23:00:58 crc kubenswrapper[4793]: I0217 23:00:58.570755 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/75a21cff-8e4b-4844-8717-b4f483fa282b-test-operator-ephemeral-temporary\") pod \"75a21cff-8e4b-4844-8717-b4f483fa282b\" (UID: \"75a21cff-8e4b-4844-8717-b4f483fa282b\") " Feb 17 23:00:58 crc kubenswrapper[4793]: I0217 23:00:58.570900 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/75a21cff-8e4b-4844-8717-b4f483fa282b-openstack-config-secret\") pod \"75a21cff-8e4b-4844-8717-b4f483fa282b\" (UID: \"75a21cff-8e4b-4844-8717-b4f483fa282b\") " Feb 17 23:00:58 crc kubenswrapper[4793]: I0217 23:00:58.570935 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-bf4md\" (UniqueName: \"kubernetes.io/projected/75a21cff-8e4b-4844-8717-b4f483fa282b-kube-api-access-bf4md\") pod \"75a21cff-8e4b-4844-8717-b4f483fa282b\" (UID: \"75a21cff-8e4b-4844-8717-b4f483fa282b\") "
Feb 17 23:00:58 crc kubenswrapper[4793]: I0217 23:00:58.576130 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75a21cff-8e4b-4844-8717-b4f483fa282b-config-data" (OuterVolumeSpecName: "config-data") pod "75a21cff-8e4b-4844-8717-b4f483fa282b" (UID: "75a21cff-8e4b-4844-8717-b4f483fa282b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 23:00:58 crc kubenswrapper[4793]: I0217 23:00:58.576645 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75a21cff-8e4b-4844-8717-b4f483fa282b-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "75a21cff-8e4b-4844-8717-b4f483fa282b" (UID: "75a21cff-8e4b-4844-8717-b4f483fa282b"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 23:00:58 crc kubenswrapper[4793]: I0217 23:00:58.578122 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75a21cff-8e4b-4844-8717-b4f483fa282b-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "75a21cff-8e4b-4844-8717-b4f483fa282b" (UID: "75a21cff-8e4b-4844-8717-b4f483fa282b"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 23:00:58 crc kubenswrapper[4793]: I0217 23:00:58.583655 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "test-operator-logs") pod "75a21cff-8e4b-4844-8717-b4f483fa282b" (UID: "75a21cff-8e4b-4844-8717-b4f483fa282b"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 17 23:00:58 crc kubenswrapper[4793]: I0217 23:00:58.594528 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75a21cff-8e4b-4844-8717-b4f483fa282b-kube-api-access-bf4md" (OuterVolumeSpecName: "kube-api-access-bf4md") pod "75a21cff-8e4b-4844-8717-b4f483fa282b" (UID: "75a21cff-8e4b-4844-8717-b4f483fa282b"). InnerVolumeSpecName "kube-api-access-bf4md". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 23:00:58 crc kubenswrapper[4793]: I0217 23:00:58.602193 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75a21cff-8e4b-4844-8717-b4f483fa282b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "75a21cff-8e4b-4844-8717-b4f483fa282b" (UID: "75a21cff-8e4b-4844-8717-b4f483fa282b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 23:00:58 crc kubenswrapper[4793]: I0217 23:00:58.610884 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75a21cff-8e4b-4844-8717-b4f483fa282b-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "75a21cff-8e4b-4844-8717-b4f483fa282b" (UID: "75a21cff-8e4b-4844-8717-b4f483fa282b"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 23:00:58 crc kubenswrapper[4793]: I0217 23:00:58.633158 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75a21cff-8e4b-4844-8717-b4f483fa282b-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "75a21cff-8e4b-4844-8717-b4f483fa282b" (UID: "75a21cff-8e4b-4844-8717-b4f483fa282b"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 23:00:58 crc kubenswrapper[4793]: I0217 23:00:58.647216 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75a21cff-8e4b-4844-8717-b4f483fa282b-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "75a21cff-8e4b-4844-8717-b4f483fa282b" (UID: "75a21cff-8e4b-4844-8717-b4f483fa282b"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 23:00:58 crc kubenswrapper[4793]: I0217 23:00:58.679081 4793 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/75a21cff-8e4b-4844-8717-b4f483fa282b-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Feb 17 23:00:58 crc kubenswrapper[4793]: I0217 23:00:58.679112 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf4md\" (UniqueName: \"kubernetes.io/projected/75a21cff-8e4b-4844-8717-b4f483fa282b-kube-api-access-bf4md\") on node \"crc\" DevicePath \"\""
Feb 17 23:00:58 crc kubenswrapper[4793]: I0217 23:00:58.679139 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/75a21cff-8e4b-4844-8717-b4f483fa282b-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 23:00:58 crc kubenswrapper[4793]: I0217 23:00:58.679244 4793 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/75a21cff-8e4b-4844-8717-b4f483fa282b-ssh-key\") on node \"crc\" DevicePath \"\""
Feb 17 23:00:58 crc kubenswrapper[4793]: I0217 23:00:58.679274 4793 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" "
Feb 17 23:00:58 crc kubenswrapper[4793]: I0217 23:00:58.679283 4793 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/75a21cff-8e4b-4844-8717-b4f483fa282b-openstack-config\") on node \"crc\" DevicePath \"\""
Feb 17 23:00:58 crc kubenswrapper[4793]: I0217 23:00:58.679294 4793 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/75a21cff-8e4b-4844-8717-b4f483fa282b-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\""
Feb 17 23:00:58 crc kubenswrapper[4793]: I0217 23:00:58.679304 4793 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/75a21cff-8e4b-4844-8717-b4f483fa282b-ca-certs\") on node \"crc\" DevicePath \"\""
Feb 17 23:00:58 crc kubenswrapper[4793]: I0217 23:00:58.679312 4793 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/75a21cff-8e4b-4844-8717-b4f483fa282b-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\""
Feb 17 23:00:58 crc kubenswrapper[4793]: I0217 23:00:58.698646 4793 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc"
Feb 17 23:00:58 crc kubenswrapper[4793]: I0217 23:00:58.794671 4793 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\""
Feb 17 23:00:58 crc kubenswrapper[4793]: I0217 23:00:58.965681 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"75a21cff-8e4b-4844-8717-b4f483fa282b","Type":"ContainerDied","Data":"24b2059af5c998ded8fb1a7e4fb8d862361f433ff118dbff602035f53ce60a68"}
Feb 17 23:00:58 crc kubenswrapper[4793]: I0217 23:00:58.966096 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24b2059af5c998ded8fb1a7e4fb8d862361f433ff118dbff602035f53ce60a68"
Feb 17 23:00:58 crc kubenswrapper[4793]: I0217 23:00:58.965795 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Feb 17 23:01:00 crc kubenswrapper[4793]: I0217 23:01:00.167731 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29522821-g8kbk"]
Feb 17 23:01:00 crc kubenswrapper[4793]: E0217 23:01:00.168565 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60f716c2-217a-4819-bbcd-566bdaa70424" containerName="collect-profiles"
Feb 17 23:01:00 crc kubenswrapper[4793]: I0217 23:01:00.168582 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="60f716c2-217a-4819-bbcd-566bdaa70424" containerName="collect-profiles"
Feb 17 23:01:00 crc kubenswrapper[4793]: E0217 23:01:00.168626 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75a21cff-8e4b-4844-8717-b4f483fa282b" containerName="tempest-tests-tempest-tests-runner"
Feb 17 23:01:00 crc kubenswrapper[4793]: I0217 23:01:00.168645 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="75a21cff-8e4b-4844-8717-b4f483fa282b" containerName="tempest-tests-tempest-tests-runner"
Feb 17 23:01:00 crc kubenswrapper[4793]: I0217 23:01:00.168912 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="75a21cff-8e4b-4844-8717-b4f483fa282b" containerName="tempest-tests-tempest-tests-runner"
Feb 17 23:01:00 crc kubenswrapper[4793]: I0217 23:01:00.168926 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="60f716c2-217a-4819-bbcd-566bdaa70424" containerName="collect-profiles"
Feb 17 23:01:00 crc kubenswrapper[4793]: I0217 23:01:00.169800 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29522821-g8kbk"
Feb 17 23:01:00 crc kubenswrapper[4793]: I0217 23:01:00.178870 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29522821-g8kbk"]
Feb 17 23:01:00 crc kubenswrapper[4793]: I0217 23:01:00.228712 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/29467665-4541-4f76-a2bd-c60067bfca4e-fernet-keys\") pod \"keystone-cron-29522821-g8kbk\" (UID: \"29467665-4541-4f76-a2bd-c60067bfca4e\") " pod="openstack/keystone-cron-29522821-g8kbk"
Feb 17 23:01:00 crc kubenswrapper[4793]: I0217 23:01:00.228807 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29467665-4541-4f76-a2bd-c60067bfca4e-config-data\") pod \"keystone-cron-29522821-g8kbk\" (UID: \"29467665-4541-4f76-a2bd-c60067bfca4e\") " pod="openstack/keystone-cron-29522821-g8kbk"
Feb 17 23:01:00 crc kubenswrapper[4793]: I0217 23:01:00.228874 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8zhv\" (UniqueName: \"kubernetes.io/projected/29467665-4541-4f76-a2bd-c60067bfca4e-kube-api-access-p8zhv\") pod \"keystone-cron-29522821-g8kbk\" (UID: \"29467665-4541-4f76-a2bd-c60067bfca4e\") " pod="openstack/keystone-cron-29522821-g8kbk"
Feb 17 23:01:00 crc kubenswrapper[4793]: I0217 23:01:00.228892 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29467665-4541-4f76-a2bd-c60067bfca4e-combined-ca-bundle\") pod \"keystone-cron-29522821-g8kbk\" (UID: \"29467665-4541-4f76-a2bd-c60067bfca4e\") " pod="openstack/keystone-cron-29522821-g8kbk"
Feb 17 23:01:00 crc kubenswrapper[4793]: I0217 23:01:00.330738 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/29467665-4541-4f76-a2bd-c60067bfca4e-fernet-keys\") pod \"keystone-cron-29522821-g8kbk\" (UID: \"29467665-4541-4f76-a2bd-c60067bfca4e\") " pod="openstack/keystone-cron-29522821-g8kbk"
Feb 17 23:01:00 crc kubenswrapper[4793]: I0217 23:01:00.330838 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29467665-4541-4f76-a2bd-c60067bfca4e-config-data\") pod \"keystone-cron-29522821-g8kbk\" (UID: \"29467665-4541-4f76-a2bd-c60067bfca4e\") " pod="openstack/keystone-cron-29522821-g8kbk"
Feb 17 23:01:00 crc kubenswrapper[4793]: I0217 23:01:00.330906 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8zhv\" (UniqueName: \"kubernetes.io/projected/29467665-4541-4f76-a2bd-c60067bfca4e-kube-api-access-p8zhv\") pod \"keystone-cron-29522821-g8kbk\" (UID: \"29467665-4541-4f76-a2bd-c60067bfca4e\") " pod="openstack/keystone-cron-29522821-g8kbk"
Feb 17 23:01:00 crc kubenswrapper[4793]: I0217 23:01:00.330928 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29467665-4541-4f76-a2bd-c60067bfca4e-combined-ca-bundle\") pod \"keystone-cron-29522821-g8kbk\" (UID: \"29467665-4541-4f76-a2bd-c60067bfca4e\") " pod="openstack/keystone-cron-29522821-g8kbk"
Feb 17 23:01:00 crc kubenswrapper[4793]: I0217 23:01:00.337212 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29467665-4541-4f76-a2bd-c60067bfca4e-combined-ca-bundle\") pod \"keystone-cron-29522821-g8kbk\" (UID: \"29467665-4541-4f76-a2bd-c60067bfca4e\") " pod="openstack/keystone-cron-29522821-g8kbk"
Feb 17 23:01:00 crc kubenswrapper[4793]: I0217 23:01:00.342074 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29467665-4541-4f76-a2bd-c60067bfca4e-config-data\") pod \"keystone-cron-29522821-g8kbk\" (UID: \"29467665-4541-4f76-a2bd-c60067bfca4e\") " pod="openstack/keystone-cron-29522821-g8kbk"
Feb 17 23:01:00 crc kubenswrapper[4793]: I0217 23:01:00.343407 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/29467665-4541-4f76-a2bd-c60067bfca4e-fernet-keys\") pod \"keystone-cron-29522821-g8kbk\" (UID: \"29467665-4541-4f76-a2bd-c60067bfca4e\") " pod="openstack/keystone-cron-29522821-g8kbk"
Feb 17 23:01:00 crc kubenswrapper[4793]: I0217 23:01:00.350840 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8zhv\" (UniqueName: \"kubernetes.io/projected/29467665-4541-4f76-a2bd-c60067bfca4e-kube-api-access-p8zhv\") pod \"keystone-cron-29522821-g8kbk\" (UID: \"29467665-4541-4f76-a2bd-c60067bfca4e\") " pod="openstack/keystone-cron-29522821-g8kbk"
Feb 17 23:01:00 crc kubenswrapper[4793]: I0217 23:01:00.485596 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29522821-g8kbk"
Feb 17 23:01:00 crc kubenswrapper[4793]: I0217 23:01:00.969466 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29522821-g8kbk"]
Feb 17 23:01:00 crc kubenswrapper[4793]: W0217 23:01:00.970187 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29467665_4541_4f76_a2bd_c60067bfca4e.slice/crio-a756005e002d57e342d42ada9d3dfdf1d999c2be94742e75039342168d5151c8 WatchSource:0}: Error finding container a756005e002d57e342d42ada9d3dfdf1d999c2be94742e75039342168d5151c8: Status 404 returned error can't find the container with id a756005e002d57e342d42ada9d3dfdf1d999c2be94742e75039342168d5151c8
Feb 17 23:01:00 crc kubenswrapper[4793]: I0217 23:01:00.993272 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29522821-g8kbk" event={"ID":"29467665-4541-4f76-a2bd-c60067bfca4e","Type":"ContainerStarted","Data":"a756005e002d57e342d42ada9d3dfdf1d999c2be94742e75039342168d5151c8"}
Feb 17 23:01:01 crc kubenswrapper[4793]: I0217 23:01:01.947488 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Feb 17 23:01:01 crc kubenswrapper[4793]: I0217 23:01:01.949633 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 17 23:01:01 crc kubenswrapper[4793]: I0217 23:01:01.954408 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-gsqr7"
Feb 17 23:01:01 crc kubenswrapper[4793]: I0217 23:01:01.971827 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Feb 17 23:01:02 crc kubenswrapper[4793]: I0217 23:01:02.006593 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29522821-g8kbk" event={"ID":"29467665-4541-4f76-a2bd-c60067bfca4e","Type":"ContainerStarted","Data":"912a1c938477a8082cf5a689343fa5f9095b50877ebd08f8c634d32b5f4b27bb"}
Feb 17 23:01:02 crc kubenswrapper[4793]: I0217 23:01:02.047034 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29522821-g8kbk" podStartSLOduration=2.046998043 podStartE2EDuration="2.046998043s" podCreationTimestamp="2026-02-17 23:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 23:01:02.028163218 +0000 UTC m=+10337.319861569" watchObservedRunningTime="2026-02-17 23:01:02.046998043 +0000 UTC m=+10337.338696394"
Feb 17 23:01:02 crc kubenswrapper[4793]: I0217 23:01:02.070483 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj7sh\" (UniqueName: \"kubernetes.io/projected/585df3e1-a4cf-4c93-b847-cfc1f6ecd207-kube-api-access-kj7sh\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"585df3e1-a4cf-4c93-b847-cfc1f6ecd207\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 17 23:01:02 crc kubenswrapper[4793]: I0217 23:01:02.070559 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"585df3e1-a4cf-4c93-b847-cfc1f6ecd207\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 17 23:01:02 crc kubenswrapper[4793]: I0217 23:01:02.172319 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj7sh\" (UniqueName: \"kubernetes.io/projected/585df3e1-a4cf-4c93-b847-cfc1f6ecd207-kube-api-access-kj7sh\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"585df3e1-a4cf-4c93-b847-cfc1f6ecd207\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 17 23:01:02 crc kubenswrapper[4793]: I0217 23:01:02.172367 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"585df3e1-a4cf-4c93-b847-cfc1f6ecd207\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 17 23:01:02 crc kubenswrapper[4793]: I0217 23:01:02.173526 4793 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"585df3e1-a4cf-4c93-b847-cfc1f6ecd207\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 17 23:01:02 crc kubenswrapper[4793]: I0217 23:01:02.208916 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kj7sh\" (UniqueName: \"kubernetes.io/projected/585df3e1-a4cf-4c93-b847-cfc1f6ecd207-kube-api-access-kj7sh\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"585df3e1-a4cf-4c93-b847-cfc1f6ecd207\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 17 23:01:02 crc kubenswrapper[4793]: I0217 23:01:02.226165 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"585df3e1-a4cf-4c93-b847-cfc1f6ecd207\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 17 23:01:02 crc kubenswrapper[4793]: I0217 23:01:02.289148 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 17 23:01:02 crc kubenswrapper[4793]: I0217 23:01:02.539062 4793 scope.go:117] "RemoveContainer" containerID="3c499b10787d9f95e3a30116b75ff55ab6cc73536d83768121848b34428c5871"
Feb 17 23:01:02 crc kubenswrapper[4793]: E0217 23:01:02.539451 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 23:01:02 crc kubenswrapper[4793]: I0217 23:01:02.773750 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Feb 17 23:01:02 crc kubenswrapper[4793]: I0217 23:01:02.780233 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 17 23:01:03 crc kubenswrapper[4793]: I0217 23:01:03.021116 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"585df3e1-a4cf-4c93-b847-cfc1f6ecd207","Type":"ContainerStarted","Data":"79e5ac012bdcf138e6303147afdd1f43405df116d2ba283cce962ef8b43be505"}
Feb 17 23:01:04 crc kubenswrapper[4793]: I0217 23:01:04.032181 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"585df3e1-a4cf-4c93-b847-cfc1f6ecd207","Type":"ContainerStarted","Data":"aa8c2237577a32e94205675cabcd05c14612e56fa62b969dc5343ec063dfd954"}
Feb 17 23:01:04 crc kubenswrapper[4793]: I0217 23:01:04.052267 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.039306294 podStartE2EDuration="3.052250903s" podCreationTimestamp="2026-02-17 23:01:01 +0000 UTC" firstStartedPulling="2026-02-17 23:01:02.780008979 +0000 UTC m=+10338.071707290" lastFinishedPulling="2026-02-17 23:01:03.792953588 +0000 UTC m=+10339.084651899" observedRunningTime="2026-02-17 23:01:04.049813223 +0000 UTC m=+10339.341511544" watchObservedRunningTime="2026-02-17 23:01:04.052250903 +0000 UTC m=+10339.343949214"
Feb 17 23:01:05 crc kubenswrapper[4793]: I0217 23:01:05.053566 4793 generic.go:334] "Generic (PLEG): container finished" podID="29467665-4541-4f76-a2bd-c60067bfca4e" containerID="912a1c938477a8082cf5a689343fa5f9095b50877ebd08f8c634d32b5f4b27bb" exitCode=0
Feb 17 23:01:05 crc kubenswrapper[4793]: I0217 23:01:05.053652 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29522821-g8kbk" event={"ID":"29467665-4541-4f76-a2bd-c60067bfca4e","Type":"ContainerDied","Data":"912a1c938477a8082cf5a689343fa5f9095b50877ebd08f8c634d32b5f4b27bb"}
Feb 17 23:01:06 crc kubenswrapper[4793]: I0217 23:01:06.487534 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29522821-g8kbk"
Feb 17 23:01:06 crc kubenswrapper[4793]: I0217 23:01:06.567151 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29467665-4541-4f76-a2bd-c60067bfca4e-combined-ca-bundle\") pod \"29467665-4541-4f76-a2bd-c60067bfca4e\" (UID: \"29467665-4541-4f76-a2bd-c60067bfca4e\") "
Feb 17 23:01:06 crc kubenswrapper[4793]: I0217 23:01:06.567265 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29467665-4541-4f76-a2bd-c60067bfca4e-config-data\") pod \"29467665-4541-4f76-a2bd-c60067bfca4e\" (UID: \"29467665-4541-4f76-a2bd-c60067bfca4e\") "
Feb 17 23:01:06 crc kubenswrapper[4793]: I0217 23:01:06.567442 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/29467665-4541-4f76-a2bd-c60067bfca4e-fernet-keys\") pod \"29467665-4541-4f76-a2bd-c60067bfca4e\" (UID: \"29467665-4541-4f76-a2bd-c60067bfca4e\") "
Feb 17 23:01:06 crc kubenswrapper[4793]: I0217 23:01:06.567464 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8zhv\" (UniqueName: \"kubernetes.io/projected/29467665-4541-4f76-a2bd-c60067bfca4e-kube-api-access-p8zhv\") pod \"29467665-4541-4f76-a2bd-c60067bfca4e\" (UID: \"29467665-4541-4f76-a2bd-c60067bfca4e\") "
Feb 17 23:01:06 crc kubenswrapper[4793]: I0217 23:01:06.583949 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29467665-4541-4f76-a2bd-c60067bfca4e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "29467665-4541-4f76-a2bd-c60067bfca4e" (UID: "29467665-4541-4f76-a2bd-c60067bfca4e"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 23:01:06 crc kubenswrapper[4793]: I0217 23:01:06.584043 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29467665-4541-4f76-a2bd-c60067bfca4e-kube-api-access-p8zhv" (OuterVolumeSpecName: "kube-api-access-p8zhv") pod "29467665-4541-4f76-a2bd-c60067bfca4e" (UID: "29467665-4541-4f76-a2bd-c60067bfca4e"). InnerVolumeSpecName "kube-api-access-p8zhv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 23:01:06 crc kubenswrapper[4793]: I0217 23:01:06.633372 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29467665-4541-4f76-a2bd-c60067bfca4e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "29467665-4541-4f76-a2bd-c60067bfca4e" (UID: "29467665-4541-4f76-a2bd-c60067bfca4e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 23:01:06 crc kubenswrapper[4793]: I0217 23:01:06.660353 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29467665-4541-4f76-a2bd-c60067bfca4e-config-data" (OuterVolumeSpecName: "config-data") pod "29467665-4541-4f76-a2bd-c60067bfca4e" (UID: "29467665-4541-4f76-a2bd-c60067bfca4e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 23:01:06 crc kubenswrapper[4793]: I0217 23:01:06.670383 4793 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/29467665-4541-4f76-a2bd-c60067bfca4e-fernet-keys\") on node \"crc\" DevicePath \"\""
Feb 17 23:01:06 crc kubenswrapper[4793]: I0217 23:01:06.670422 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8zhv\" (UniqueName: \"kubernetes.io/projected/29467665-4541-4f76-a2bd-c60067bfca4e-kube-api-access-p8zhv\") on node \"crc\" DevicePath \"\""
Feb 17 23:01:06 crc kubenswrapper[4793]: I0217 23:01:06.670436 4793 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29467665-4541-4f76-a2bd-c60067bfca4e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 23:01:06 crc kubenswrapper[4793]: I0217 23:01:06.670448 4793 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29467665-4541-4f76-a2bd-c60067bfca4e-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 23:01:07 crc kubenswrapper[4793]: I0217 23:01:07.079078 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29522821-g8kbk" event={"ID":"29467665-4541-4f76-a2bd-c60067bfca4e","Type":"ContainerDied","Data":"a756005e002d57e342d42ada9d3dfdf1d999c2be94742e75039342168d5151c8"}
Feb 17 23:01:07 crc kubenswrapper[4793]: I0217 23:01:07.079123 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a756005e002d57e342d42ada9d3dfdf1d999c2be94742e75039342168d5151c8"
Feb 17 23:01:07 crc kubenswrapper[4793]: I0217 23:01:07.079170 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29522821-g8kbk"
Feb 17 23:01:15 crc kubenswrapper[4793]: I0217 23:01:15.547204 4793 scope.go:117] "RemoveContainer" containerID="3c499b10787d9f95e3a30116b75ff55ab6cc73536d83768121848b34428c5871"
Feb 17 23:01:15 crc kubenswrapper[4793]: E0217 23:01:15.548156 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 23:01:24 crc kubenswrapper[4793]: I0217 23:01:24.102261 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ndrtz"]
Feb 17 23:01:24 crc kubenswrapper[4793]: E0217 23:01:24.103518 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29467665-4541-4f76-a2bd-c60067bfca4e" containerName="keystone-cron"
Feb 17 23:01:24 crc kubenswrapper[4793]: I0217 23:01:24.103543 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="29467665-4541-4f76-a2bd-c60067bfca4e" containerName="keystone-cron"
Feb 17 23:01:24 crc kubenswrapper[4793]: I0217 23:01:24.103886 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="29467665-4541-4f76-a2bd-c60067bfca4e" containerName="keystone-cron"
Feb 17 23:01:24 crc kubenswrapper[4793]: I0217 23:01:24.106134 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ndrtz"
Feb 17 23:01:24 crc kubenswrapper[4793]: I0217 23:01:24.118415 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ndrtz"]
Feb 17 23:01:24 crc kubenswrapper[4793]: I0217 23:01:24.187635 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5hvd\" (UniqueName: \"kubernetes.io/projected/4cd5b2e2-84ad-49df-aa67-ddb0cccad7da-kube-api-access-k5hvd\") pod \"community-operators-ndrtz\" (UID: \"4cd5b2e2-84ad-49df-aa67-ddb0cccad7da\") " pod="openshift-marketplace/community-operators-ndrtz"
Feb 17 23:01:24 crc kubenswrapper[4793]: I0217 23:01:24.187678 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cd5b2e2-84ad-49df-aa67-ddb0cccad7da-catalog-content\") pod \"community-operators-ndrtz\" (UID: \"4cd5b2e2-84ad-49df-aa67-ddb0cccad7da\") " pod="openshift-marketplace/community-operators-ndrtz"
Feb 17 23:01:24 crc kubenswrapper[4793]: I0217 23:01:24.187725 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cd5b2e2-84ad-49df-aa67-ddb0cccad7da-utilities\") pod \"community-operators-ndrtz\" (UID: \"4cd5b2e2-84ad-49df-aa67-ddb0cccad7da\") " pod="openshift-marketplace/community-operators-ndrtz"
Feb 17 23:01:24 crc kubenswrapper[4793]: I0217 23:01:24.291307 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5hvd\" (UniqueName: \"kubernetes.io/projected/4cd5b2e2-84ad-49df-aa67-ddb0cccad7da-kube-api-access-k5hvd\") pod \"community-operators-ndrtz\" (UID: \"4cd5b2e2-84ad-49df-aa67-ddb0cccad7da\") " pod="openshift-marketplace/community-operators-ndrtz"
Feb 17 23:01:24 crc kubenswrapper[4793]: I0217 23:01:24.291404 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cd5b2e2-84ad-49df-aa67-ddb0cccad7da-catalog-content\") pod \"community-operators-ndrtz\" (UID: \"4cd5b2e2-84ad-49df-aa67-ddb0cccad7da\") " pod="openshift-marketplace/community-operators-ndrtz"
Feb 17 23:01:24 crc kubenswrapper[4793]: I0217 23:01:24.291442 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cd5b2e2-84ad-49df-aa67-ddb0cccad7da-utilities\") pod \"community-operators-ndrtz\" (UID: \"4cd5b2e2-84ad-49df-aa67-ddb0cccad7da\") " pod="openshift-marketplace/community-operators-ndrtz"
Feb 17 23:01:24 crc kubenswrapper[4793]: I0217 23:01:24.292059 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cd5b2e2-84ad-49df-aa67-ddb0cccad7da-utilities\") pod \"community-operators-ndrtz\" (UID: \"4cd5b2e2-84ad-49df-aa67-ddb0cccad7da\") " pod="openshift-marketplace/community-operators-ndrtz"
Feb 17 23:01:24 crc kubenswrapper[4793]: I0217 23:01:24.292172 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cd5b2e2-84ad-49df-aa67-ddb0cccad7da-catalog-content\") pod \"community-operators-ndrtz\" (UID: \"4cd5b2e2-84ad-49df-aa67-ddb0cccad7da\") " pod="openshift-marketplace/community-operators-ndrtz"
Feb 17 23:01:24 crc kubenswrapper[4793]: I0217 23:01:24.671297 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5hvd\" (UniqueName: \"kubernetes.io/projected/4cd5b2e2-84ad-49df-aa67-ddb0cccad7da-kube-api-access-k5hvd\") pod \"community-operators-ndrtz\" (UID: \"4cd5b2e2-84ad-49df-aa67-ddb0cccad7da\") " pod="openshift-marketplace/community-operators-ndrtz"
Feb 17 23:01:24 crc kubenswrapper[4793]: I0217 23:01:24.752107 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ndrtz"
Feb 17 23:01:25 crc kubenswrapper[4793]: I0217 23:01:25.279405 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ndrtz"]
Feb 17 23:01:25 crc kubenswrapper[4793]: I0217 23:01:25.345888 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ndrtz" event={"ID":"4cd5b2e2-84ad-49df-aa67-ddb0cccad7da","Type":"ContainerStarted","Data":"8c532f1398195fa6bbe6fc89a430ab8be1b1ae59c91081afe9573fda488d3066"}
Feb 17 23:01:26 crc kubenswrapper[4793]: I0217 23:01:26.363217 4793 generic.go:334] "Generic (PLEG): container finished" podID="4cd5b2e2-84ad-49df-aa67-ddb0cccad7da" containerID="2725787bc3e7c3c355d63e35efddb93c3ac0d1639332b17210d46cc97f916a0b" exitCode=0
Feb 17 23:01:26 crc kubenswrapper[4793]: I0217 23:01:26.363395 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ndrtz" event={"ID":"4cd5b2e2-84ad-49df-aa67-ddb0cccad7da","Type":"ContainerDied","Data":"2725787bc3e7c3c355d63e35efddb93c3ac0d1639332b17210d46cc97f916a0b"}
Feb 17 23:01:27 crc kubenswrapper[4793]: I0217 23:01:27.378644 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ndrtz" event={"ID":"4cd5b2e2-84ad-49df-aa67-ddb0cccad7da","Type":"ContainerStarted","Data":"7d756301d970a4a4a5326f300490f235ba8949b1ae4e243fa4df763db5ac22ce"}
Feb 17 23:01:29 crc kubenswrapper[4793]: I0217 23:01:29.409820 4793 generic.go:334] "Generic (PLEG): container finished" podID="4cd5b2e2-84ad-49df-aa67-ddb0cccad7da" containerID="7d756301d970a4a4a5326f300490f235ba8949b1ae4e243fa4df763db5ac22ce" exitCode=0
Feb 17 23:01:29 crc kubenswrapper[4793]: I0217 23:01:29.409970 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ndrtz" event={"ID":"4cd5b2e2-84ad-49df-aa67-ddb0cccad7da","Type":"ContainerDied","Data":"7d756301d970a4a4a5326f300490f235ba8949b1ae4e243fa4df763db5ac22ce"}
Feb 17 23:01:30 crc kubenswrapper[4793]: I0217 23:01:30.423063 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ndrtz" event={"ID":"4cd5b2e2-84ad-49df-aa67-ddb0cccad7da","Type":"ContainerStarted","Data":"5f463afc79b4e6b9a51217800ae74c4430afa12d55863a2e1d98a2201ee36a69"}
Feb 17 23:01:30 crc kubenswrapper[4793]: I0217 23:01:30.448039 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ndrtz" podStartSLOduration=3.007108108 podStartE2EDuration="6.44801985s" podCreationTimestamp="2026-02-17 23:01:24 +0000 UTC" firstStartedPulling="2026-02-17 23:01:26.365727395 +0000 UTC m=+10361.657425716" lastFinishedPulling="2026-02-17 23:01:29.806639137 +0000 UTC m=+10365.098337458" observedRunningTime="2026-02-17 23:01:30.4407385 +0000 UTC m=+10365.732436831" watchObservedRunningTime="2026-02-17 23:01:30.44801985 +0000 UTC m=+10365.739718161"
Feb 17 23:01:30 crc kubenswrapper[4793]: I0217 23:01:30.539401 4793 scope.go:117] "RemoveContainer" containerID="3c499b10787d9f95e3a30116b75ff55ab6cc73536d83768121848b34428c5871"
Feb 17 23:01:30 crc kubenswrapper[4793]: E0217 23:01:30.539788 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 23:01:34 crc kubenswrapper[4793]: I0217 23:01:34.753248 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ndrtz"
Feb 17 23:01:34 crc kubenswrapper[4793]: I0217 23:01:34.753909 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ndrtz"
Feb 17 23:01:34 crc kubenswrapper[4793]: I0217 23:01:34.839380 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ndrtz"
Feb 17 23:01:35 crc kubenswrapper[4793]: I0217 23:01:35.565284 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ndrtz"
Feb 17 23:01:35 crc kubenswrapper[4793]: I0217 23:01:35.647106 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ndrtz"]
Feb 17 23:01:37 crc kubenswrapper[4793]: I0217 23:01:37.525144 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ndrtz" podUID="4cd5b2e2-84ad-49df-aa67-ddb0cccad7da" containerName="registry-server" containerID="cri-o://5f463afc79b4e6b9a51217800ae74c4430afa12d55863a2e1d98a2201ee36a69" gracePeriod=2
Feb 17 23:01:39 crc kubenswrapper[4793]: I0217 23:01:39.187666 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ndrtz" Feb 17 23:01:39 crc kubenswrapper[4793]: I0217 23:01:39.292357 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5hvd\" (UniqueName: \"kubernetes.io/projected/4cd5b2e2-84ad-49df-aa67-ddb0cccad7da-kube-api-access-k5hvd\") pod \"4cd5b2e2-84ad-49df-aa67-ddb0cccad7da\" (UID: \"4cd5b2e2-84ad-49df-aa67-ddb0cccad7da\") " Feb 17 23:01:39 crc kubenswrapper[4793]: I0217 23:01:39.292494 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cd5b2e2-84ad-49df-aa67-ddb0cccad7da-catalog-content\") pod \"4cd5b2e2-84ad-49df-aa67-ddb0cccad7da\" (UID: \"4cd5b2e2-84ad-49df-aa67-ddb0cccad7da\") " Feb 17 23:01:39 crc kubenswrapper[4793]: I0217 23:01:39.292526 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cd5b2e2-84ad-49df-aa67-ddb0cccad7da-utilities\") pod \"4cd5b2e2-84ad-49df-aa67-ddb0cccad7da\" (UID: \"4cd5b2e2-84ad-49df-aa67-ddb0cccad7da\") " Feb 17 23:01:39 crc kubenswrapper[4793]: I0217 23:01:39.294164 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cd5b2e2-84ad-49df-aa67-ddb0cccad7da-utilities" (OuterVolumeSpecName: "utilities") pod "4cd5b2e2-84ad-49df-aa67-ddb0cccad7da" (UID: "4cd5b2e2-84ad-49df-aa67-ddb0cccad7da"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 23:01:39 crc kubenswrapper[4793]: I0217 23:01:39.299718 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cd5b2e2-84ad-49df-aa67-ddb0cccad7da-kube-api-access-k5hvd" (OuterVolumeSpecName: "kube-api-access-k5hvd") pod "4cd5b2e2-84ad-49df-aa67-ddb0cccad7da" (UID: "4cd5b2e2-84ad-49df-aa67-ddb0cccad7da"). InnerVolumeSpecName "kube-api-access-k5hvd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 23:01:39 crc kubenswrapper[4793]: I0217 23:01:39.341571 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cd5b2e2-84ad-49df-aa67-ddb0cccad7da-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4cd5b2e2-84ad-49df-aa67-ddb0cccad7da" (UID: "4cd5b2e2-84ad-49df-aa67-ddb0cccad7da"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 23:01:39 crc kubenswrapper[4793]: I0217 23:01:39.394806 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cd5b2e2-84ad-49df-aa67-ddb0cccad7da-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 23:01:39 crc kubenswrapper[4793]: I0217 23:01:39.394839 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cd5b2e2-84ad-49df-aa67-ddb0cccad7da-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 23:01:39 crc kubenswrapper[4793]: I0217 23:01:39.394850 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5hvd\" (UniqueName: \"kubernetes.io/projected/4cd5b2e2-84ad-49df-aa67-ddb0cccad7da-kube-api-access-k5hvd\") on node \"crc\" DevicePath \"\"" Feb 17 23:01:39 crc kubenswrapper[4793]: I0217 23:01:39.549163 4793 generic.go:334] "Generic (PLEG): container finished" podID="4cd5b2e2-84ad-49df-aa67-ddb0cccad7da" containerID="5f463afc79b4e6b9a51217800ae74c4430afa12d55863a2e1d98a2201ee36a69" exitCode=0 Feb 17 23:01:39 crc kubenswrapper[4793]: I0217 23:01:39.549297 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ndrtz" Feb 17 23:01:39 crc kubenswrapper[4793]: I0217 23:01:39.553445 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ndrtz" event={"ID":"4cd5b2e2-84ad-49df-aa67-ddb0cccad7da","Type":"ContainerDied","Data":"5f463afc79b4e6b9a51217800ae74c4430afa12d55863a2e1d98a2201ee36a69"} Feb 17 23:01:39 crc kubenswrapper[4793]: I0217 23:01:39.553484 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ndrtz" event={"ID":"4cd5b2e2-84ad-49df-aa67-ddb0cccad7da","Type":"ContainerDied","Data":"8c532f1398195fa6bbe6fc89a430ab8be1b1ae59c91081afe9573fda488d3066"} Feb 17 23:01:39 crc kubenswrapper[4793]: I0217 23:01:39.553510 4793 scope.go:117] "RemoveContainer" containerID="5f463afc79b4e6b9a51217800ae74c4430afa12d55863a2e1d98a2201ee36a69" Feb 17 23:01:39 crc kubenswrapper[4793]: I0217 23:01:39.591505 4793 scope.go:117] "RemoveContainer" containerID="7d756301d970a4a4a5326f300490f235ba8949b1ae4e243fa4df763db5ac22ce" Feb 17 23:01:39 crc kubenswrapper[4793]: I0217 23:01:39.615144 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ndrtz"] Feb 17 23:01:39 crc kubenswrapper[4793]: I0217 23:01:39.625947 4793 scope.go:117] "RemoveContainer" containerID="2725787bc3e7c3c355d63e35efddb93c3ac0d1639332b17210d46cc97f916a0b" Feb 17 23:01:39 crc kubenswrapper[4793]: I0217 23:01:39.626663 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ndrtz"] Feb 17 23:01:39 crc kubenswrapper[4793]: I0217 23:01:39.684041 4793 scope.go:117] "RemoveContainer" containerID="5f463afc79b4e6b9a51217800ae74c4430afa12d55863a2e1d98a2201ee36a69" Feb 17 23:01:39 crc kubenswrapper[4793]: E0217 23:01:39.684520 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"5f463afc79b4e6b9a51217800ae74c4430afa12d55863a2e1d98a2201ee36a69\": container with ID starting with 5f463afc79b4e6b9a51217800ae74c4430afa12d55863a2e1d98a2201ee36a69 not found: ID does not exist" containerID="5f463afc79b4e6b9a51217800ae74c4430afa12d55863a2e1d98a2201ee36a69" Feb 17 23:01:39 crc kubenswrapper[4793]: I0217 23:01:39.684584 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f463afc79b4e6b9a51217800ae74c4430afa12d55863a2e1d98a2201ee36a69"} err="failed to get container status \"5f463afc79b4e6b9a51217800ae74c4430afa12d55863a2e1d98a2201ee36a69\": rpc error: code = NotFound desc = could not find container \"5f463afc79b4e6b9a51217800ae74c4430afa12d55863a2e1d98a2201ee36a69\": container with ID starting with 5f463afc79b4e6b9a51217800ae74c4430afa12d55863a2e1d98a2201ee36a69 not found: ID does not exist" Feb 17 23:01:39 crc kubenswrapper[4793]: I0217 23:01:39.684627 4793 scope.go:117] "RemoveContainer" containerID="7d756301d970a4a4a5326f300490f235ba8949b1ae4e243fa4df763db5ac22ce" Feb 17 23:01:39 crc kubenswrapper[4793]: E0217 23:01:39.685378 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d756301d970a4a4a5326f300490f235ba8949b1ae4e243fa4df763db5ac22ce\": container with ID starting with 7d756301d970a4a4a5326f300490f235ba8949b1ae4e243fa4df763db5ac22ce not found: ID does not exist" containerID="7d756301d970a4a4a5326f300490f235ba8949b1ae4e243fa4df763db5ac22ce" Feb 17 23:01:39 crc kubenswrapper[4793]: I0217 23:01:39.685422 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d756301d970a4a4a5326f300490f235ba8949b1ae4e243fa4df763db5ac22ce"} err="failed to get container status \"7d756301d970a4a4a5326f300490f235ba8949b1ae4e243fa4df763db5ac22ce\": rpc error: code = NotFound desc = could not find container \"7d756301d970a4a4a5326f300490f235ba8949b1ae4e243fa4df763db5ac22ce\": container with ID 
starting with 7d756301d970a4a4a5326f300490f235ba8949b1ae4e243fa4df763db5ac22ce not found: ID does not exist" Feb 17 23:01:39 crc kubenswrapper[4793]: I0217 23:01:39.685448 4793 scope.go:117] "RemoveContainer" containerID="2725787bc3e7c3c355d63e35efddb93c3ac0d1639332b17210d46cc97f916a0b" Feb 17 23:01:39 crc kubenswrapper[4793]: E0217 23:01:39.687029 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2725787bc3e7c3c355d63e35efddb93c3ac0d1639332b17210d46cc97f916a0b\": container with ID starting with 2725787bc3e7c3c355d63e35efddb93c3ac0d1639332b17210d46cc97f916a0b not found: ID does not exist" containerID="2725787bc3e7c3c355d63e35efddb93c3ac0d1639332b17210d46cc97f916a0b" Feb 17 23:01:39 crc kubenswrapper[4793]: I0217 23:01:39.687069 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2725787bc3e7c3c355d63e35efddb93c3ac0d1639332b17210d46cc97f916a0b"} err="failed to get container status \"2725787bc3e7c3c355d63e35efddb93c3ac0d1639332b17210d46cc97f916a0b\": rpc error: code = NotFound desc = could not find container \"2725787bc3e7c3c355d63e35efddb93c3ac0d1639332b17210d46cc97f916a0b\": container with ID starting with 2725787bc3e7c3c355d63e35efddb93c3ac0d1639332b17210d46cc97f916a0b not found: ID does not exist" Feb 17 23:01:41 crc kubenswrapper[4793]: I0217 23:01:41.551827 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cd5b2e2-84ad-49df-aa67-ddb0cccad7da" path="/var/lib/kubelet/pods/4cd5b2e2-84ad-49df-aa67-ddb0cccad7da/volumes" Feb 17 23:01:43 crc kubenswrapper[4793]: I0217 23:01:43.028346 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-fhhnm/must-gather-kvljh"] Feb 17 23:01:43 crc kubenswrapper[4793]: E0217 23:01:43.028956 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cd5b2e2-84ad-49df-aa67-ddb0cccad7da" containerName="registry-server" Feb 17 23:01:43 crc 
kubenswrapper[4793]: I0217 23:01:43.028967 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cd5b2e2-84ad-49df-aa67-ddb0cccad7da" containerName="registry-server" Feb 17 23:01:43 crc kubenswrapper[4793]: E0217 23:01:43.028995 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cd5b2e2-84ad-49df-aa67-ddb0cccad7da" containerName="extract-content" Feb 17 23:01:43 crc kubenswrapper[4793]: I0217 23:01:43.029001 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cd5b2e2-84ad-49df-aa67-ddb0cccad7da" containerName="extract-content" Feb 17 23:01:43 crc kubenswrapper[4793]: E0217 23:01:43.029013 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cd5b2e2-84ad-49df-aa67-ddb0cccad7da" containerName="extract-utilities" Feb 17 23:01:43 crc kubenswrapper[4793]: I0217 23:01:43.029021 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cd5b2e2-84ad-49df-aa67-ddb0cccad7da" containerName="extract-utilities" Feb 17 23:01:43 crc kubenswrapper[4793]: I0217 23:01:43.029201 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cd5b2e2-84ad-49df-aa67-ddb0cccad7da" containerName="registry-server" Feb 17 23:01:43 crc kubenswrapper[4793]: I0217 23:01:43.030226 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fhhnm/must-gather-kvljh" Feb 17 23:01:43 crc kubenswrapper[4793]: I0217 23:01:43.039487 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-fhhnm/must-gather-kvljh"] Feb 17 23:01:43 crc kubenswrapper[4793]: I0217 23:01:43.041935 4793 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-fhhnm"/"default-dockercfg-2g8bd" Feb 17 23:01:43 crc kubenswrapper[4793]: I0217 23:01:43.041943 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-fhhnm"/"kube-root-ca.crt" Feb 17 23:01:43 crc kubenswrapper[4793]: I0217 23:01:43.047276 4793 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-fhhnm"/"openshift-service-ca.crt" Feb 17 23:01:43 crc kubenswrapper[4793]: I0217 23:01:43.079011 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a50b28bc-a207-47df-9c98-e0552834dd8d-must-gather-output\") pod \"must-gather-kvljh\" (UID: \"a50b28bc-a207-47df-9c98-e0552834dd8d\") " pod="openshift-must-gather-fhhnm/must-gather-kvljh" Feb 17 23:01:43 crc kubenswrapper[4793]: I0217 23:01:43.079265 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8g5j\" (UniqueName: \"kubernetes.io/projected/a50b28bc-a207-47df-9c98-e0552834dd8d-kube-api-access-g8g5j\") pod \"must-gather-kvljh\" (UID: \"a50b28bc-a207-47df-9c98-e0552834dd8d\") " pod="openshift-must-gather-fhhnm/must-gather-kvljh" Feb 17 23:01:43 crc kubenswrapper[4793]: I0217 23:01:43.181631 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a50b28bc-a207-47df-9c98-e0552834dd8d-must-gather-output\") pod \"must-gather-kvljh\" (UID: \"a50b28bc-a207-47df-9c98-e0552834dd8d\") " 
pod="openshift-must-gather-fhhnm/must-gather-kvljh" Feb 17 23:01:43 crc kubenswrapper[4793]: I0217 23:01:43.181680 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8g5j\" (UniqueName: \"kubernetes.io/projected/a50b28bc-a207-47df-9c98-e0552834dd8d-kube-api-access-g8g5j\") pod \"must-gather-kvljh\" (UID: \"a50b28bc-a207-47df-9c98-e0552834dd8d\") " pod="openshift-must-gather-fhhnm/must-gather-kvljh" Feb 17 23:01:43 crc kubenswrapper[4793]: I0217 23:01:43.182268 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a50b28bc-a207-47df-9c98-e0552834dd8d-must-gather-output\") pod \"must-gather-kvljh\" (UID: \"a50b28bc-a207-47df-9c98-e0552834dd8d\") " pod="openshift-must-gather-fhhnm/must-gather-kvljh" Feb 17 23:01:43 crc kubenswrapper[4793]: I0217 23:01:43.202083 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8g5j\" (UniqueName: \"kubernetes.io/projected/a50b28bc-a207-47df-9c98-e0552834dd8d-kube-api-access-g8g5j\") pod \"must-gather-kvljh\" (UID: \"a50b28bc-a207-47df-9c98-e0552834dd8d\") " pod="openshift-must-gather-fhhnm/must-gather-kvljh" Feb 17 23:01:43 crc kubenswrapper[4793]: I0217 23:01:43.345130 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fhhnm/must-gather-kvljh" Feb 17 23:01:43 crc kubenswrapper[4793]: I0217 23:01:43.810687 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-fhhnm/must-gather-kvljh"] Feb 17 23:01:44 crc kubenswrapper[4793]: I0217 23:01:44.538504 4793 scope.go:117] "RemoveContainer" containerID="3c499b10787d9f95e3a30116b75ff55ab6cc73536d83768121848b34428c5871" Feb 17 23:01:44 crc kubenswrapper[4793]: E0217 23:01:44.539300 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 23:01:44 crc kubenswrapper[4793]: I0217 23:01:44.603929 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fhhnm/must-gather-kvljh" event={"ID":"a50b28bc-a207-47df-9c98-e0552834dd8d","Type":"ContainerStarted","Data":"62a2536fb6d9c7713bbadf5d97968f72e077cc8a48b8a8bab522211affa415f9"} Feb 17 23:01:51 crc kubenswrapper[4793]: I0217 23:01:51.692558 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fhhnm/must-gather-kvljh" event={"ID":"a50b28bc-a207-47df-9c98-e0552834dd8d","Type":"ContainerStarted","Data":"9436ca5910d6439b47060339b64cdc427709a079e521bb9fb3f67e14855f1db3"} Feb 17 23:01:52 crc kubenswrapper[4793]: I0217 23:01:52.706748 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fhhnm/must-gather-kvljh" event={"ID":"a50b28bc-a207-47df-9c98-e0552834dd8d","Type":"ContainerStarted","Data":"e8f18f2ba8438d993c9084b6ac8c60c83cc99cac35f9e25e1828a4276b35c6f1"} Feb 17 23:01:52 crc kubenswrapper[4793]: I0217 23:01:52.730420 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-fhhnm/must-gather-kvljh" 
podStartSLOduration=2.390348281 podStartE2EDuration="9.730396473s" podCreationTimestamp="2026-02-17 23:01:43 +0000 UTC" firstStartedPulling="2026-02-17 23:01:43.817093081 +0000 UTC m=+10379.108791392" lastFinishedPulling="2026-02-17 23:01:51.157141263 +0000 UTC m=+10386.448839584" observedRunningTime="2026-02-17 23:01:52.723153764 +0000 UTC m=+10388.014852085" watchObservedRunningTime="2026-02-17 23:01:52.730396473 +0000 UTC m=+10388.022094784" Feb 17 23:01:55 crc kubenswrapper[4793]: I0217 23:01:55.774237 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-fhhnm/crc-debug-k88rh"] Feb 17 23:01:55 crc kubenswrapper[4793]: I0217 23:01:55.777197 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fhhnm/crc-debug-k88rh" Feb 17 23:01:55 crc kubenswrapper[4793]: I0217 23:01:55.870064 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzt8x\" (UniqueName: \"kubernetes.io/projected/c7a0904b-be27-48af-860a-e37932ff3af8-kube-api-access-pzt8x\") pod \"crc-debug-k88rh\" (UID: \"c7a0904b-be27-48af-860a-e37932ff3af8\") " pod="openshift-must-gather-fhhnm/crc-debug-k88rh" Feb 17 23:01:55 crc kubenswrapper[4793]: I0217 23:01:55.870263 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c7a0904b-be27-48af-860a-e37932ff3af8-host\") pod \"crc-debug-k88rh\" (UID: \"c7a0904b-be27-48af-860a-e37932ff3af8\") " pod="openshift-must-gather-fhhnm/crc-debug-k88rh" Feb 17 23:01:55 crc kubenswrapper[4793]: I0217 23:01:55.971486 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzt8x\" (UniqueName: \"kubernetes.io/projected/c7a0904b-be27-48af-860a-e37932ff3af8-kube-api-access-pzt8x\") pod \"crc-debug-k88rh\" (UID: \"c7a0904b-be27-48af-860a-e37932ff3af8\") " pod="openshift-must-gather-fhhnm/crc-debug-k88rh" Feb 17 
23:01:55 crc kubenswrapper[4793]: I0217 23:01:55.971603 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c7a0904b-be27-48af-860a-e37932ff3af8-host\") pod \"crc-debug-k88rh\" (UID: \"c7a0904b-be27-48af-860a-e37932ff3af8\") " pod="openshift-must-gather-fhhnm/crc-debug-k88rh" Feb 17 23:01:55 crc kubenswrapper[4793]: I0217 23:01:55.971713 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c7a0904b-be27-48af-860a-e37932ff3af8-host\") pod \"crc-debug-k88rh\" (UID: \"c7a0904b-be27-48af-860a-e37932ff3af8\") " pod="openshift-must-gather-fhhnm/crc-debug-k88rh" Feb 17 23:01:55 crc kubenswrapper[4793]: I0217 23:01:55.992275 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzt8x\" (UniqueName: \"kubernetes.io/projected/c7a0904b-be27-48af-860a-e37932ff3af8-kube-api-access-pzt8x\") pod \"crc-debug-k88rh\" (UID: \"c7a0904b-be27-48af-860a-e37932ff3af8\") " pod="openshift-must-gather-fhhnm/crc-debug-k88rh" Feb 17 23:01:56 crc kubenswrapper[4793]: I0217 23:01:56.100197 4793 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fhhnm/crc-debug-k88rh" Feb 17 23:01:56 crc kubenswrapper[4793]: I0217 23:01:56.744180 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fhhnm/crc-debug-k88rh" event={"ID":"c7a0904b-be27-48af-860a-e37932ff3af8","Type":"ContainerStarted","Data":"a142004636864a91c950a3e6d46238f1a93d7882dacd6bfa17f0662950710504"} Feb 17 23:01:57 crc kubenswrapper[4793]: I0217 23:01:57.538186 4793 scope.go:117] "RemoveContainer" containerID="3c499b10787d9f95e3a30116b75ff55ab6cc73536d83768121848b34428c5871" Feb 17 23:01:57 crc kubenswrapper[4793]: E0217 23:01:57.538833 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 23:02:06 crc kubenswrapper[4793]: I0217 23:02:06.853472 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fhhnm/crc-debug-k88rh" event={"ID":"c7a0904b-be27-48af-860a-e37932ff3af8","Type":"ContainerStarted","Data":"2808dc382e8c200412f60c18db0093687452f68a79839964ba87b9a6a3e03d4c"} Feb 17 23:02:06 crc kubenswrapper[4793]: I0217 23:02:06.874352 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-fhhnm/crc-debug-k88rh" podStartSLOduration=1.947221501 podStartE2EDuration="11.874331224s" podCreationTimestamp="2026-02-17 23:01:55 +0000 UTC" firstStartedPulling="2026-02-17 23:01:56.135035199 +0000 UTC m=+10391.426733510" lastFinishedPulling="2026-02-17 23:02:06.062144922 +0000 UTC m=+10401.353843233" observedRunningTime="2026-02-17 23:02:06.864519392 +0000 UTC m=+10402.156217723" watchObservedRunningTime="2026-02-17 23:02:06.874331224 +0000 UTC m=+10402.166029545" Feb 17 23:02:09 crc kubenswrapper[4793]: I0217 
23:02:09.538877 4793 scope.go:117] "RemoveContainer" containerID="3c499b10787d9f95e3a30116b75ff55ab6cc73536d83768121848b34428c5871" Feb 17 23:02:09 crc kubenswrapper[4793]: E0217 23:02:09.539529 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 23:02:20 crc kubenswrapper[4793]: I0217 23:02:20.102481 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 23:02:20 crc kubenswrapper[4793]: I0217 23:02:20.103232 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 23:02:23 crc kubenswrapper[4793]: I0217 23:02:23.538985 4793 scope.go:117] "RemoveContainer" containerID="3c499b10787d9f95e3a30116b75ff55ab6cc73536d83768121848b34428c5871" Feb 17 23:02:23 crc kubenswrapper[4793]: E0217 23:02:23.539812 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 23:02:35 crc kubenswrapper[4793]: I0217 23:02:35.544231 4793 scope.go:117] "RemoveContainer" 
containerID="3c499b10787d9f95e3a30116b75ff55ab6cc73536d83768121848b34428c5871" Feb 17 23:02:35 crc kubenswrapper[4793]: E0217 23:02:35.544875 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 23:02:50 crc kubenswrapper[4793]: I0217 23:02:50.101472 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 23:02:50 crc kubenswrapper[4793]: I0217 23:02:50.102220 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 23:02:50 crc kubenswrapper[4793]: I0217 23:02:50.538733 4793 scope.go:117] "RemoveContainer" containerID="3c499b10787d9f95e3a30116b75ff55ab6cc73536d83768121848b34428c5871" Feb 17 23:02:50 crc kubenswrapper[4793]: E0217 23:02:50.538980 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 23:02:54 crc kubenswrapper[4793]: I0217 23:02:54.302700 4793 generic.go:334] "Generic (PLEG): container finished" podID="c7a0904b-be27-48af-860a-e37932ff3af8" 
containerID="2808dc382e8c200412f60c18db0093687452f68a79839964ba87b9a6a3e03d4c" exitCode=0
Feb 17 23:02:54 crc kubenswrapper[4793]: I0217 23:02:54.302907 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fhhnm/crc-debug-k88rh" event={"ID":"c7a0904b-be27-48af-860a-e37932ff3af8","Type":"ContainerDied","Data":"2808dc382e8c200412f60c18db0093687452f68a79839964ba87b9a6a3e03d4c"}
Feb 17 23:02:55 crc kubenswrapper[4793]: I0217 23:02:55.421591 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fhhnm/crc-debug-k88rh"
Feb 17 23:02:55 crc kubenswrapper[4793]: I0217 23:02:55.446563 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c7a0904b-be27-48af-860a-e37932ff3af8-host\") pod \"c7a0904b-be27-48af-860a-e37932ff3af8\" (UID: \"c7a0904b-be27-48af-860a-e37932ff3af8\") "
Feb 17 23:02:55 crc kubenswrapper[4793]: I0217 23:02:55.446725 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzt8x\" (UniqueName: \"kubernetes.io/projected/c7a0904b-be27-48af-860a-e37932ff3af8-kube-api-access-pzt8x\") pod \"c7a0904b-be27-48af-860a-e37932ff3af8\" (UID: \"c7a0904b-be27-48af-860a-e37932ff3af8\") "
Feb 17 23:02:55 crc kubenswrapper[4793]: I0217 23:02:55.446815 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7a0904b-be27-48af-860a-e37932ff3af8-host" (OuterVolumeSpecName: "host") pod "c7a0904b-be27-48af-860a-e37932ff3af8" (UID: "c7a0904b-be27-48af-860a-e37932ff3af8"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 23:02:55 crc kubenswrapper[4793]: I0217 23:02:55.447319 4793 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c7a0904b-be27-48af-860a-e37932ff3af8-host\") on node \"crc\" DevicePath \"\""
Feb 17 23:02:55 crc kubenswrapper[4793]: I0217 23:02:55.461723 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-fhhnm/crc-debug-k88rh"]
Feb 17 23:02:55 crc kubenswrapper[4793]: I0217 23:02:55.461891 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7a0904b-be27-48af-860a-e37932ff3af8-kube-api-access-pzt8x" (OuterVolumeSpecName: "kube-api-access-pzt8x") pod "c7a0904b-be27-48af-860a-e37932ff3af8" (UID: "c7a0904b-be27-48af-860a-e37932ff3af8"). InnerVolumeSpecName "kube-api-access-pzt8x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 23:02:55 crc kubenswrapper[4793]: I0217 23:02:55.472090 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-fhhnm/crc-debug-k88rh"]
Feb 17 23:02:55 crc kubenswrapper[4793]: I0217 23:02:55.548864 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzt8x\" (UniqueName: \"kubernetes.io/projected/c7a0904b-be27-48af-860a-e37932ff3af8-kube-api-access-pzt8x\") on node \"crc\" DevicePath \"\""
Feb 17 23:02:55 crc kubenswrapper[4793]: I0217 23:02:55.548988 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7a0904b-be27-48af-860a-e37932ff3af8" path="/var/lib/kubelet/pods/c7a0904b-be27-48af-860a-e37932ff3af8/volumes"
Feb 17 23:02:56 crc kubenswrapper[4793]: I0217 23:02:56.321956 4793 scope.go:117] "RemoveContainer" containerID="2808dc382e8c200412f60c18db0093687452f68a79839964ba87b9a6a3e03d4c"
Feb 17 23:02:56 crc kubenswrapper[4793]: I0217 23:02:56.322093 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fhhnm/crc-debug-k88rh"
Feb 17 23:02:56 crc kubenswrapper[4793]: I0217 23:02:56.694763 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-fhhnm/crc-debug-sldgs"]
Feb 17 23:02:56 crc kubenswrapper[4793]: E0217 23:02:56.695155 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7a0904b-be27-48af-860a-e37932ff3af8" containerName="container-00"
Feb 17 23:02:56 crc kubenswrapper[4793]: I0217 23:02:56.695166 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7a0904b-be27-48af-860a-e37932ff3af8" containerName="container-00"
Feb 17 23:02:56 crc kubenswrapper[4793]: I0217 23:02:56.695464 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7a0904b-be27-48af-860a-e37932ff3af8" containerName="container-00"
Feb 17 23:02:56 crc kubenswrapper[4793]: I0217 23:02:56.696138 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fhhnm/crc-debug-sldgs"
Feb 17 23:02:56 crc kubenswrapper[4793]: I0217 23:02:56.772296 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt9pr\" (UniqueName: \"kubernetes.io/projected/8d7bb677-704c-4626-81fe-e1be70869598-kube-api-access-lt9pr\") pod \"crc-debug-sldgs\" (UID: \"8d7bb677-704c-4626-81fe-e1be70869598\") " pod="openshift-must-gather-fhhnm/crc-debug-sldgs"
Feb 17 23:02:56 crc kubenswrapper[4793]: I0217 23:02:56.772743 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8d7bb677-704c-4626-81fe-e1be70869598-host\") pod \"crc-debug-sldgs\" (UID: \"8d7bb677-704c-4626-81fe-e1be70869598\") " pod="openshift-must-gather-fhhnm/crc-debug-sldgs"
Feb 17 23:02:56 crc kubenswrapper[4793]: I0217 23:02:56.875483 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8d7bb677-704c-4626-81fe-e1be70869598-host\") pod \"crc-debug-sldgs\" (UID: \"8d7bb677-704c-4626-81fe-e1be70869598\") " pod="openshift-must-gather-fhhnm/crc-debug-sldgs"
Feb 17 23:02:56 crc kubenswrapper[4793]: I0217 23:02:56.875667 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8d7bb677-704c-4626-81fe-e1be70869598-host\") pod \"crc-debug-sldgs\" (UID: \"8d7bb677-704c-4626-81fe-e1be70869598\") " pod="openshift-must-gather-fhhnm/crc-debug-sldgs"
Feb 17 23:02:56 crc kubenswrapper[4793]: I0217 23:02:56.875723 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lt9pr\" (UniqueName: \"kubernetes.io/projected/8d7bb677-704c-4626-81fe-e1be70869598-kube-api-access-lt9pr\") pod \"crc-debug-sldgs\" (UID: \"8d7bb677-704c-4626-81fe-e1be70869598\") " pod="openshift-must-gather-fhhnm/crc-debug-sldgs"
Feb 17 23:02:56 crc kubenswrapper[4793]: I0217 23:02:56.905757 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lt9pr\" (UniqueName: \"kubernetes.io/projected/8d7bb677-704c-4626-81fe-e1be70869598-kube-api-access-lt9pr\") pod \"crc-debug-sldgs\" (UID: \"8d7bb677-704c-4626-81fe-e1be70869598\") " pod="openshift-must-gather-fhhnm/crc-debug-sldgs"
Feb 17 23:02:57 crc kubenswrapper[4793]: I0217 23:02:57.023647 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fhhnm/crc-debug-sldgs"
Feb 17 23:02:57 crc kubenswrapper[4793]: I0217 23:02:57.333203 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fhhnm/crc-debug-sldgs" event={"ID":"8d7bb677-704c-4626-81fe-e1be70869598","Type":"ContainerStarted","Data":"0af078befb9201fd8e9a6c218b1f5ea85be8044610c6a17d6d9bf9da1f6a9868"}
Feb 17 23:02:57 crc kubenswrapper[4793]: I0217 23:02:57.333258 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fhhnm/crc-debug-sldgs" event={"ID":"8d7bb677-704c-4626-81fe-e1be70869598","Type":"ContainerStarted","Data":"e4b68d91284a96f67f9e1be0ed212c27800b62eaae43a9eb8a53bcaf94d861a5"}
Feb 17 23:02:57 crc kubenswrapper[4793]: I0217 23:02:57.392773 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-fhhnm/crc-debug-sldgs" podStartSLOduration=1.392759018 podStartE2EDuration="1.392759018s" podCreationTimestamp="2026-02-17 23:02:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 23:02:57.381189452 +0000 UTC m=+10452.672887763" watchObservedRunningTime="2026-02-17 23:02:57.392759018 +0000 UTC m=+10452.684457329"
Feb 17 23:02:58 crc kubenswrapper[4793]: I0217 23:02:58.346524 4793 generic.go:334] "Generic (PLEG): container finished" podID="8d7bb677-704c-4626-81fe-e1be70869598" containerID="0af078befb9201fd8e9a6c218b1f5ea85be8044610c6a17d6d9bf9da1f6a9868" exitCode=0
Feb 17 23:02:58 crc kubenswrapper[4793]: I0217 23:02:58.346844 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fhhnm/crc-debug-sldgs" event={"ID":"8d7bb677-704c-4626-81fe-e1be70869598","Type":"ContainerDied","Data":"0af078befb9201fd8e9a6c218b1f5ea85be8044610c6a17d6d9bf9da1f6a9868"}
Feb 17 23:02:59 crc kubenswrapper[4793]: I0217 23:02:59.456058 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fhhnm/crc-debug-sldgs"
Feb 17 23:02:59 crc kubenswrapper[4793]: I0217 23:02:59.523557 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lt9pr\" (UniqueName: \"kubernetes.io/projected/8d7bb677-704c-4626-81fe-e1be70869598-kube-api-access-lt9pr\") pod \"8d7bb677-704c-4626-81fe-e1be70869598\" (UID: \"8d7bb677-704c-4626-81fe-e1be70869598\") "
Feb 17 23:02:59 crc kubenswrapper[4793]: I0217 23:02:59.523733 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8d7bb677-704c-4626-81fe-e1be70869598-host\") pod \"8d7bb677-704c-4626-81fe-e1be70869598\" (UID: \"8d7bb677-704c-4626-81fe-e1be70869598\") "
Feb 17 23:02:59 crc kubenswrapper[4793]: I0217 23:02:59.523794 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d7bb677-704c-4626-81fe-e1be70869598-host" (OuterVolumeSpecName: "host") pod "8d7bb677-704c-4626-81fe-e1be70869598" (UID: "8d7bb677-704c-4626-81fe-e1be70869598"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 23:02:59 crc kubenswrapper[4793]: I0217 23:02:59.524320 4793 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8d7bb677-704c-4626-81fe-e1be70869598-host\") on node \"crc\" DevicePath \"\""
Feb 17 23:02:59 crc kubenswrapper[4793]: I0217 23:02:59.540928 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d7bb677-704c-4626-81fe-e1be70869598-kube-api-access-lt9pr" (OuterVolumeSpecName: "kube-api-access-lt9pr") pod "8d7bb677-704c-4626-81fe-e1be70869598" (UID: "8d7bb677-704c-4626-81fe-e1be70869598"). InnerVolumeSpecName "kube-api-access-lt9pr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 23:02:59 crc kubenswrapper[4793]: I0217 23:02:59.553571 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-fhhnm/crc-debug-sldgs"]
Feb 17 23:02:59 crc kubenswrapper[4793]: I0217 23:02:59.564617 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-fhhnm/crc-debug-sldgs"]
Feb 17 23:02:59 crc kubenswrapper[4793]: I0217 23:02:59.628008 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lt9pr\" (UniqueName: \"kubernetes.io/projected/8d7bb677-704c-4626-81fe-e1be70869598-kube-api-access-lt9pr\") on node \"crc\" DevicePath \"\""
Feb 17 23:03:00 crc kubenswrapper[4793]: I0217 23:03:00.362565 4793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4b68d91284a96f67f9e1be0ed212c27800b62eaae43a9eb8a53bcaf94d861a5"
Feb 17 23:03:00 crc kubenswrapper[4793]: I0217 23:03:00.362620 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fhhnm/crc-debug-sldgs"
Feb 17 23:03:00 crc kubenswrapper[4793]: I0217 23:03:00.746869 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-fhhnm/crc-debug-4hmhl"]
Feb 17 23:03:00 crc kubenswrapper[4793]: E0217 23:03:00.748193 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d7bb677-704c-4626-81fe-e1be70869598" containerName="container-00"
Feb 17 23:03:00 crc kubenswrapper[4793]: I0217 23:03:00.748269 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d7bb677-704c-4626-81fe-e1be70869598" containerName="container-00"
Feb 17 23:03:00 crc kubenswrapper[4793]: I0217 23:03:00.748527 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d7bb677-704c-4626-81fe-e1be70869598" containerName="container-00"
Feb 17 23:03:00 crc kubenswrapper[4793]: I0217 23:03:00.749612 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fhhnm/crc-debug-4hmhl"
Feb 17 23:03:00 crc kubenswrapper[4793]: I0217 23:03:00.851513 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxbnb\" (UniqueName: \"kubernetes.io/projected/47789a8e-1f13-4a62-9a92-4b7044165e9e-kube-api-access-qxbnb\") pod \"crc-debug-4hmhl\" (UID: \"47789a8e-1f13-4a62-9a92-4b7044165e9e\") " pod="openshift-must-gather-fhhnm/crc-debug-4hmhl"
Feb 17 23:03:00 crc kubenswrapper[4793]: I0217 23:03:00.852109 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/47789a8e-1f13-4a62-9a92-4b7044165e9e-host\") pod \"crc-debug-4hmhl\" (UID: \"47789a8e-1f13-4a62-9a92-4b7044165e9e\") " pod="openshift-must-gather-fhhnm/crc-debug-4hmhl"
Feb 17 23:03:00 crc kubenswrapper[4793]: I0217 23:03:00.953839 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxbnb\" (UniqueName: \"kubernetes.io/projected/47789a8e-1f13-4a62-9a92-4b7044165e9e-kube-api-access-qxbnb\") pod \"crc-debug-4hmhl\" (UID: \"47789a8e-1f13-4a62-9a92-4b7044165e9e\") " pod="openshift-must-gather-fhhnm/crc-debug-4hmhl"
Feb 17 23:03:00 crc kubenswrapper[4793]: I0217 23:03:00.953938 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/47789a8e-1f13-4a62-9a92-4b7044165e9e-host\") pod \"crc-debug-4hmhl\" (UID: \"47789a8e-1f13-4a62-9a92-4b7044165e9e\") " pod="openshift-must-gather-fhhnm/crc-debug-4hmhl"
Feb 17 23:03:00 crc kubenswrapper[4793]: I0217 23:03:00.954094 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/47789a8e-1f13-4a62-9a92-4b7044165e9e-host\") pod \"crc-debug-4hmhl\" (UID: \"47789a8e-1f13-4a62-9a92-4b7044165e9e\") " pod="openshift-must-gather-fhhnm/crc-debug-4hmhl"
Feb 17 23:03:00 crc kubenswrapper[4793]: I0217 23:03:00.980793 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxbnb\" (UniqueName: \"kubernetes.io/projected/47789a8e-1f13-4a62-9a92-4b7044165e9e-kube-api-access-qxbnb\") pod \"crc-debug-4hmhl\" (UID: \"47789a8e-1f13-4a62-9a92-4b7044165e9e\") " pod="openshift-must-gather-fhhnm/crc-debug-4hmhl"
Feb 17 23:03:01 crc kubenswrapper[4793]: I0217 23:03:01.078370 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fhhnm/crc-debug-4hmhl"
Feb 17 23:03:01 crc kubenswrapper[4793]: W0217 23:03:01.105005 4793 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47789a8e_1f13_4a62_9a92_4b7044165e9e.slice/crio-8dbb97df1f9fc6105a2175b19da2b6a5d97798b67d747b1d9369fd52cc8dd0e8 WatchSource:0}: Error finding container 8dbb97df1f9fc6105a2175b19da2b6a5d97798b67d747b1d9369fd52cc8dd0e8: Status 404 returned error can't find the container with id 8dbb97df1f9fc6105a2175b19da2b6a5d97798b67d747b1d9369fd52cc8dd0e8
Feb 17 23:03:01 crc kubenswrapper[4793]: I0217 23:03:01.379869 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fhhnm/crc-debug-4hmhl" event={"ID":"47789a8e-1f13-4a62-9a92-4b7044165e9e","Type":"ContainerStarted","Data":"8dbb97df1f9fc6105a2175b19da2b6a5d97798b67d747b1d9369fd52cc8dd0e8"}
Feb 17 23:03:01 crc kubenswrapper[4793]: I0217 23:03:01.555993 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d7bb677-704c-4626-81fe-e1be70869598" path="/var/lib/kubelet/pods/8d7bb677-704c-4626-81fe-e1be70869598/volumes"
Feb 17 23:03:02 crc kubenswrapper[4793]: I0217 23:03:02.396962 4793 generic.go:334] "Generic (PLEG): container finished" podID="47789a8e-1f13-4a62-9a92-4b7044165e9e" containerID="1681e975fc9183c7db80349744475b76b648c238bc017cdfc85dc3fd06eb35ef" exitCode=0
Feb 17 23:03:02 crc kubenswrapper[4793]: I0217 23:03:02.397025 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fhhnm/crc-debug-4hmhl" event={"ID":"47789a8e-1f13-4a62-9a92-4b7044165e9e","Type":"ContainerDied","Data":"1681e975fc9183c7db80349744475b76b648c238bc017cdfc85dc3fd06eb35ef"}
Feb 17 23:03:02 crc kubenswrapper[4793]: I0217 23:03:02.454573 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-fhhnm/crc-debug-4hmhl"]
Feb 17 23:03:02 crc kubenswrapper[4793]: I0217 23:03:02.468991 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-fhhnm/crc-debug-4hmhl"]
Feb 17 23:03:03 crc kubenswrapper[4793]: I0217 23:03:03.500019 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fhhnm/crc-debug-4hmhl"
Feb 17 23:03:03 crc kubenswrapper[4793]: I0217 23:03:03.625781 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxbnb\" (UniqueName: \"kubernetes.io/projected/47789a8e-1f13-4a62-9a92-4b7044165e9e-kube-api-access-qxbnb\") pod \"47789a8e-1f13-4a62-9a92-4b7044165e9e\" (UID: \"47789a8e-1f13-4a62-9a92-4b7044165e9e\") "
Feb 17 23:03:03 crc kubenswrapper[4793]: I0217 23:03:03.626159 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/47789a8e-1f13-4a62-9a92-4b7044165e9e-host\") pod \"47789a8e-1f13-4a62-9a92-4b7044165e9e\" (UID: \"47789a8e-1f13-4a62-9a92-4b7044165e9e\") "
Feb 17 23:03:03 crc kubenswrapper[4793]: I0217 23:03:03.626287 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47789a8e-1f13-4a62-9a92-4b7044165e9e-host" (OuterVolumeSpecName: "host") pod "47789a8e-1f13-4a62-9a92-4b7044165e9e" (UID: "47789a8e-1f13-4a62-9a92-4b7044165e9e"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 23:03:03 crc kubenswrapper[4793]: I0217 23:03:03.626815 4793 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/47789a8e-1f13-4a62-9a92-4b7044165e9e-host\") on node \"crc\" DevicePath \"\""
Feb 17 23:03:03 crc kubenswrapper[4793]: I0217 23:03:03.631448 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47789a8e-1f13-4a62-9a92-4b7044165e9e-kube-api-access-qxbnb" (OuterVolumeSpecName: "kube-api-access-qxbnb") pod "47789a8e-1f13-4a62-9a92-4b7044165e9e" (UID: "47789a8e-1f13-4a62-9a92-4b7044165e9e"). InnerVolumeSpecName "kube-api-access-qxbnb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 23:03:03 crc kubenswrapper[4793]: I0217 23:03:03.728730 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxbnb\" (UniqueName: \"kubernetes.io/projected/47789a8e-1f13-4a62-9a92-4b7044165e9e-kube-api-access-qxbnb\") on node \"crc\" DevicePath \"\""
Feb 17 23:03:04 crc kubenswrapper[4793]: I0217 23:03:04.412956 4793 scope.go:117] "RemoveContainer" containerID="1681e975fc9183c7db80349744475b76b648c238bc017cdfc85dc3fd06eb35ef"
Feb 17 23:03:04 crc kubenswrapper[4793]: I0217 23:03:04.414064 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fhhnm/crc-debug-4hmhl"
Feb 17 23:03:04 crc kubenswrapper[4793]: I0217 23:03:04.539425 4793 scope.go:117] "RemoveContainer" containerID="3c499b10787d9f95e3a30116b75ff55ab6cc73536d83768121848b34428c5871"
Feb 17 23:03:04 crc kubenswrapper[4793]: E0217 23:03:04.539662 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 23:03:05 crc kubenswrapper[4793]: I0217 23:03:05.549507 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47789a8e-1f13-4a62-9a92-4b7044165e9e" path="/var/lib/kubelet/pods/47789a8e-1f13-4a62-9a92-4b7044165e9e/volumes"
Feb 17 23:03:18 crc kubenswrapper[4793]: I0217 23:03:18.538473 4793 scope.go:117] "RemoveContainer" containerID="3c499b10787d9f95e3a30116b75ff55ab6cc73536d83768121848b34428c5871"
Feb 17 23:03:18 crc kubenswrapper[4793]: E0217 23:03:18.539296 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 23:03:20 crc kubenswrapper[4793]: I0217 23:03:20.102318 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 23:03:20 crc kubenswrapper[4793]: I0217 23:03:20.102772 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 23:03:20 crc kubenswrapper[4793]: I0217 23:03:20.102835 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf"
Feb 17 23:03:20 crc kubenswrapper[4793]: I0217 23:03:20.103919 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5ea010f6c87e97e2c1ae3eea129cd7fb8ee6f4c832f3d99af7d3bcf29b729eea"} pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 17 23:03:20 crc kubenswrapper[4793]: I0217 23:03:20.104000 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" containerID="cri-o://5ea010f6c87e97e2c1ae3eea129cd7fb8ee6f4c832f3d99af7d3bcf29b729eea" gracePeriod=600
Feb 17 23:03:20 crc kubenswrapper[4793]: E0217 23:03:20.243500 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 23:03:20 crc kubenswrapper[4793]: I0217 23:03:20.596473 4793 generic.go:334] "Generic (PLEG): container finished" podID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerID="5ea010f6c87e97e2c1ae3eea129cd7fb8ee6f4c832f3d99af7d3bcf29b729eea" exitCode=0
Feb 17 23:03:20 crc kubenswrapper[4793]: I0217 23:03:20.596579 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerDied","Data":"5ea010f6c87e97e2c1ae3eea129cd7fb8ee6f4c832f3d99af7d3bcf29b729eea"}
Feb 17 23:03:20 crc kubenswrapper[4793]: I0217 23:03:20.596922 4793 scope.go:117] "RemoveContainer" containerID="5b0533bbdb20048b64388bd01e5e1fd24c502c1fe2827f5ee4bfdcf9baa7e555"
Feb 17 23:03:20 crc kubenswrapper[4793]: I0217 23:03:20.597894 4793 scope.go:117] "RemoveContainer" containerID="5ea010f6c87e97e2c1ae3eea129cd7fb8ee6f4c832f3d99af7d3bcf29b729eea"
Feb 17 23:03:20 crc kubenswrapper[4793]: E0217 23:03:20.598335 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 23:03:29 crc kubenswrapper[4793]: I0217 23:03:29.540106 4793 scope.go:117] "RemoveContainer" containerID="3c499b10787d9f95e3a30116b75ff55ab6cc73536d83768121848b34428c5871"
Feb 17 23:03:30 crc kubenswrapper[4793]: I0217 23:03:30.726202 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerStarted","Data":"9819a56ecac46a2a633366dfd6640f93cfad701dadeca2da9df3c2e2b331a094"}
Feb 17 23:03:32 crc kubenswrapper[4793]: I0217 23:03:32.770713 4793 generic.go:334] "Generic (PLEG): container finished" podID="02d26164-0fa4-4020-9224-b7760a490987" containerID="9819a56ecac46a2a633366dfd6640f93cfad701dadeca2da9df3c2e2b331a094" exitCode=1
Feb 17 23:03:32 crc kubenswrapper[4793]: I0217 23:03:32.770849 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerDied","Data":"9819a56ecac46a2a633366dfd6640f93cfad701dadeca2da9df3c2e2b331a094"}
Feb 17 23:03:32 crc kubenswrapper[4793]: I0217 23:03:32.771027 4793 scope.go:117] "RemoveContainer" containerID="3c499b10787d9f95e3a30116b75ff55ab6cc73536d83768121848b34428c5871"
Feb 17 23:03:32 crc kubenswrapper[4793]: I0217 23:03:32.771655 4793 scope.go:117] "RemoveContainer" containerID="9819a56ecac46a2a633366dfd6640f93cfad701dadeca2da9df3c2e2b331a094"
Feb 17 23:03:32 crc kubenswrapper[4793]: E0217 23:03:32.771929 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 23:03:35 crc kubenswrapper[4793]: I0217 23:03:35.552887 4793 scope.go:117] "RemoveContainer" containerID="5ea010f6c87e97e2c1ae3eea129cd7fb8ee6f4c832f3d99af7d3bcf29b729eea"
Feb 17 23:03:35 crc kubenswrapper[4793]: E0217 23:03:35.553990 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 23:03:35 crc kubenswrapper[4793]: I0217 23:03:35.596788 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0"
Feb 17 23:03:35 crc kubenswrapper[4793]: I0217 23:03:35.596886 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0"
Feb 17 23:03:35 crc kubenswrapper[4793]: I0217 23:03:35.596912 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-applier-0"
Feb 17 23:03:35 crc kubenswrapper[4793]: I0217 23:03:35.596943 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0"
Feb 17 23:03:35 crc kubenswrapper[4793]: I0217 23:03:35.597747 4793 scope.go:117] "RemoveContainer" containerID="9819a56ecac46a2a633366dfd6640f93cfad701dadeca2da9df3c2e2b331a094"
Feb 17 23:03:35 crc kubenswrapper[4793]: E0217 23:03:35.598144 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 23:03:41 crc kubenswrapper[4793]: I0217 23:03:41.057395 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5bc6776d6b-sktrk_5cc271e6-91be-4e95-b9b8-16fc6a5dbeb6/barbican-api/0.log"
Feb 17 23:03:41 crc kubenswrapper[4793]: I0217 23:03:41.217957 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5bc6776d6b-sktrk_5cc271e6-91be-4e95-b9b8-16fc6a5dbeb6/barbican-api-log/0.log"
Feb 17 23:03:41 crc kubenswrapper[4793]: I0217 23:03:41.311174 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5994c4b6d-gdq4r_c3f63aca-a302-4b5a-9aab-df2030cb30a0/barbican-keystone-listener-log/0.log"
Feb 17 23:03:41 crc kubenswrapper[4793]: I0217 23:03:41.313137 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5994c4b6d-gdq4r_c3f63aca-a302-4b5a-9aab-df2030cb30a0/barbican-keystone-listener/0.log"
Feb 17 23:03:41 crc kubenswrapper[4793]: I0217 23:03:41.545407 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-876d8fd55-vqq67_8b6515a5-0b19-48c6-8dce-c3765bbe9087/barbican-worker/0.log"
Feb 17 23:03:41 crc kubenswrapper[4793]: I0217 23:03:41.576191 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-876d8fd55-vqq67_8b6515a5-0b19-48c6-8dce-c3765bbe9087/barbican-worker-log/0.log"
Feb 17 23:03:41 crc kubenswrapper[4793]: I0217 23:03:41.714671 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-fgjx4_1d07405f-37a4-410c-b7bf-ab35fc791d56/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 17 23:03:41 crc kubenswrapper[4793]: I0217 23:03:41.865377 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_edb6897b-0bd5-46f3-bba0-fea881577a8f/ceilometer-notification-agent/0.log"
Feb 17 23:03:41 crc kubenswrapper[4793]: I0217 23:03:41.921077 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_edb6897b-0bd5-46f3-bba0-fea881577a8f/ceilometer-central-agent/0.log"
Feb 17 23:03:42 crc kubenswrapper[4793]: I0217 23:03:42.040094 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_edb6897b-0bd5-46f3-bba0-fea881577a8f/proxy-httpd/0.log"
Feb 17 23:03:42 crc kubenswrapper[4793]: I0217 23:03:42.101580 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_edb6897b-0bd5-46f3-bba0-fea881577a8f/sg-core/0.log"
Feb 17 23:03:42 crc kubenswrapper[4793]: I0217 23:03:42.345361 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_e9ebb7ca-2c47-4f64-81f5-04d26737ce44/cinder-api-log/0.log"
Feb 17 23:03:42 crc kubenswrapper[4793]: I0217 23:03:42.702091 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_f8921b7f-c527-4806-9d5f-16f01ddad8ef/probe/0.log"
Feb 17 23:03:42 crc kubenswrapper[4793]: I0217 23:03:42.704003 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_f8921b7f-c527-4806-9d5f-16f01ddad8ef/cinder-backup/0.log"
Feb 17 23:03:42 crc kubenswrapper[4793]: I0217 23:03:42.794948 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_e9ebb7ca-2c47-4f64-81f5-04d26737ce44/cinder-api/0.log"
Feb 17 23:03:42 crc kubenswrapper[4793]: I0217 23:03:42.921877 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_db029fb2-d204-4d1b-81c8-227c3b8d4a39/cinder-scheduler/0.log"
Feb 17 23:03:43 crc kubenswrapper[4793]: I0217 23:03:43.005304 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_db029fb2-d204-4d1b-81c8-227c3b8d4a39/probe/0.log"
Feb 17 23:03:43 crc kubenswrapper[4793]: I0217 23:03:43.212032 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_d6098983-9a93-4433-9f60-80c300c88a3e/probe/0.log"
Feb 17 23:03:43 crc kubenswrapper[4793]: I0217 23:03:43.230195 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_d6098983-9a93-4433-9f60-80c300c88a3e/cinder-volume/0.log"
Feb 17 23:03:43 crc kubenswrapper[4793]: I0217 23:03:43.428282 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_8a91b201-489b-443b-b3d3-3b435bed899a/cinder-volume/0.log"
Feb 17 23:03:43 crc kubenswrapper[4793]: I0217 23:03:43.498007 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-ctf9z_3243a2e1-dbae-438c-934c-6ecb775b33b0/configure-network-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 17 23:03:43 crc kubenswrapper[4793]: I0217 23:03:43.561525 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_8a91b201-489b-443b-b3d3-3b435bed899a/probe/0.log"
Feb 17 23:03:43 crc kubenswrapper[4793]: I0217 23:03:43.701334 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-sq6dq_2b7973c3-0186-41cc-8641-69e4c0ad60b6/configure-os-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 17 23:03:43 crc kubenswrapper[4793]: I0217 23:03:43.829439 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-77b7b9cf89-q7lv9_413f0882-e8fb-47a6-939a-576f0ccc09f2/init/0.log"
Feb 17 23:03:44 crc kubenswrapper[4793]: I0217 23:03:44.122465 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-77b7b9cf89-q7lv9_413f0882-e8fb-47a6-939a-576f0ccc09f2/init/0.log"
Feb 17 23:03:44 crc kubenswrapper[4793]: I0217 23:03:44.271011 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-77b7b9cf89-q7lv9_413f0882-e8fb-47a6-939a-576f0ccc09f2/dnsmasq-dns/0.log"
Feb 17 23:03:44 crc kubenswrapper[4793]: I0217 23:03:44.342461 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-m4vkg_ef25c491-c6e8-4fc8-948b-ad2de1484956/download-cache-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 17 23:03:44 crc kubenswrapper[4793]: I0217 23:03:44.500084 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_10956ec3-bc51-4d38-82ad-71a60bcf30db/glance-log/0.log"
Feb 17 23:03:44 crc kubenswrapper[4793]: I0217 23:03:44.524127 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_10956ec3-bc51-4d38-82ad-71a60bcf30db/glance-httpd/0.log"
Feb 17 23:03:44 crc kubenswrapper[4793]: I0217 23:03:44.662565 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_8a5057fa-97b0-4d24-8002-e2a5b877ef4f/glance-httpd/0.log"
Feb 17 23:03:44 crc kubenswrapper[4793]: I0217 23:03:44.764306 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_8a5057fa-97b0-4d24-8002-e2a5b877ef4f/glance-log/0.log"
Feb 17 23:03:44 crc kubenswrapper[4793]: I0217 23:03:44.874448 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-75c8b5cf48-t8jmz_066a6b1f-85a8-4015-9c17-a9eb27320040/horizon/0.log"
Feb 17 23:03:45 crc kubenswrapper[4793]: I0217 23:03:45.048829 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-2jbck_5515ee23-b3b8-4450-a64d-62636c818464/install-certs-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 17 23:03:45 crc kubenswrapper[4793]: I0217 23:03:45.206308 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-dh7jc_6398b432-f52f-4de5-b001-8252e5cfaf08/install-os-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 17 23:03:45 crc kubenswrapper[4793]: I0217 23:03:45.682929 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-79779c64b9-54jgr_a630279e-31d3-4ab4-88f2-a06edcb58dee/keystone-api/0.log"
Feb 17 23:03:46 crc kubenswrapper[4793]: I0217 23:03:46.022077 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29522761-9s9zm_d2d6ea04-be94-4620-b1a2-d24eaf65d7a1/keystone-cron/0.log"
Feb 17 23:03:46 crc kubenswrapper[4793]: I0217 23:03:46.063570 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29522701-8dn59_0c5dd548-1c99-4d67-aec0-c2f4052aed79/keystone-cron/0.log"
Feb 17 23:03:46 crc kubenswrapper[4793]: I0217 23:03:46.277712 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-75c8b5cf48-t8jmz_066a6b1f-85a8-4015-9c17-a9eb27320040/horizon-log/0.log"
Feb 17 23:03:46 crc kubenswrapper[4793]: I0217 23:03:46.295799 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29522821-g8kbk_29467665-4541-4f76-a2bd-c60067bfca4e/keystone-cron/0.log"
Feb 17 23:03:46 crc kubenswrapper[4793]: I0217 23:03:46.366930 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_8be5ebd1-58b5-40e0-949f-1479050446e0/kube-state-metrics/0.log"
Feb 17 23:03:46 crc kubenswrapper[4793]: I0217 23:03:46.533607 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-l9shm_46d312cc-dda9-4d5e-bea8-1559405ca6b9/libvirt-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 17 23:03:46 crc kubenswrapper[4793]: I0217 23:03:46.903076 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6d8998fd7c-xvl9z_c09fdef5-1d53-4792-84d2-3bb953383525/neutron-httpd/0.log"
Feb 17 23:03:46 crc kubenswrapper[4793]: I0217 23:03:46.919868 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-2vlg7_14f3b5fc-f9bf-4997-8bfe-51df7585da2d/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 17 23:03:46 crc kubenswrapper[4793]: I0217 23:03:46.973615 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6d8998fd7c-xvl9z_c09fdef5-1d53-4792-84d2-3bb953383525/neutron-api/0.log"
Feb 17 23:03:47 crc kubenswrapper[4793]: I0217 23:03:47.540230 4793 scope.go:117] "RemoveContainer" containerID="9819a56ecac46a2a633366dfd6640f93cfad701dadeca2da9df3c2e2b331a094"
Feb 17 23:03:47 crc kubenswrapper[4793]: E0217 23:03:47.540786 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 23:03:47 crc kubenswrapper[4793]: I0217 23:03:47.956255 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_3b77f41b-340b-4eed-a38a-cc9dac77786f/nova-cell0-conductor-conductor/0.log" Feb 17 23:03:48 crc kubenswrapper[4793]: I0217 23:03:48.188645 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_d3d13ca2-941b-4abb-a49b-27fb90f34c5e/nova-cell1-conductor-conductor/0.log" Feb 17 23:03:48 crc kubenswrapper[4793]: I0217 23:03:48.401338 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_e39ddba3-31ef-4f6e-90d5-67dd54124ba0/nova-api-log/0.log" Feb 17 23:03:48 crc kubenswrapper[4793]: I0217 23:03:48.832371 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-gdrlm_31f287e9-abcf-47b4-b249-98613eabec98/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 23:03:48 crc kubenswrapper[4793]: I0217 23:03:48.910597 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_a996735f-6c26-4277-a888-1431e93e4d9f/nova-cell1-novncproxy-novncproxy/0.log" Feb 17 23:03:48 crc kubenswrapper[4793]: I0217 23:03:48.993843 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_80c301ad-155e-4276-83c4-9f17f530d792/memcached/0.log" Feb 17 23:03:49 crc kubenswrapper[4793]: I0217 23:03:49.131235 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_99e9323f-09b5-4ee0-a27d-0698f99071bc/nova-metadata-log/0.log" Feb 17 23:03:49 crc kubenswrapper[4793]: I0217 23:03:49.297837 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_e39ddba3-31ef-4f6e-90d5-67dd54124ba0/nova-api-api/0.log" Feb 17 23:03:49 
crc kubenswrapper[4793]: I0217 23:03:49.373046 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_45669327-0649-4fd2-a59f-0e31a4e3cf5a/nova-scheduler-scheduler/0.log" Feb 17 23:03:49 crc kubenswrapper[4793]: I0217 23:03:49.540807 4793 scope.go:117] "RemoveContainer" containerID="5ea010f6c87e97e2c1ae3eea129cd7fb8ee6f4c832f3d99af7d3bcf29b729eea" Feb 17 23:03:49 crc kubenswrapper[4793]: E0217 23:03:49.541067 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 23:03:49 crc kubenswrapper[4793]: I0217 23:03:49.545567 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_3d42eaf0-230c-4a64-9c52-5c4ea3ea81dd/mysql-bootstrap/0.log" Feb 17 23:03:49 crc kubenswrapper[4793]: I0217 23:03:49.698630 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_3d42eaf0-230c-4a64-9c52-5c4ea3ea81dd/mysql-bootstrap/0.log" Feb 17 23:03:49 crc kubenswrapper[4793]: I0217 23:03:49.791191 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_3d42eaf0-230c-4a64-9c52-5c4ea3ea81dd/galera/0.log" Feb 17 23:03:49 crc kubenswrapper[4793]: I0217 23:03:49.840721 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_3e7eee19-fd63-4ae8-96d9-9fcd17718b6f/mysql-bootstrap/0.log" Feb 17 23:03:49 crc kubenswrapper[4793]: I0217 23:03:49.990326 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_3e7eee19-fd63-4ae8-96d9-9fcd17718b6f/mysql-bootstrap/0.log" Feb 17 23:03:50 crc 
kubenswrapper[4793]: I0217 23:03:50.049647 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_3e7eee19-fd63-4ae8-96d9-9fcd17718b6f/galera/0.log" Feb 17 23:03:50 crc kubenswrapper[4793]: I0217 23:03:50.086711 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_3b2bcbf9-1e8c-4593-b391-745b3dbd2ca4/openstackclient/0.log" Feb 17 23:03:50 crc kubenswrapper[4793]: I0217 23:03:50.325353 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-9pnx5_330b8b28-b736-42a9-a430-40d75f6ec12d/ovn-controller/0.log" Feb 17 23:03:50 crc kubenswrapper[4793]: I0217 23:03:50.369526 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-7lrlb_befba025-5e63-4459-9554-215ad72c467a/openstack-network-exporter/0.log" Feb 17 23:03:50 crc kubenswrapper[4793]: I0217 23:03:50.634179 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-hz6qr_61bbd519-85d9-4466-abe0-e4c6664072c5/ovsdb-server-init/0.log" Feb 17 23:03:50 crc kubenswrapper[4793]: I0217 23:03:50.803293 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-hz6qr_61bbd519-85d9-4466-abe0-e4c6664072c5/ovsdb-server-init/0.log" Feb 17 23:03:50 crc kubenswrapper[4793]: I0217 23:03:50.832375 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-hz6qr_61bbd519-85d9-4466-abe0-e4c6664072c5/ovsdb-server/0.log" Feb 17 23:03:51 crc kubenswrapper[4793]: I0217 23:03:51.013559 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-r2zvf_2b16e893-10e7-4207-9ace-bf1d7bd735bf/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 23:03:51 crc kubenswrapper[4793]: I0217 23:03:51.168275 4793 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-hz6qr_61bbd519-85d9-4466-abe0-e4c6664072c5/ovs-vswitchd/0.log" Feb 17 23:03:51 crc kubenswrapper[4793]: I0217 23:03:51.183908 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_225fdbba-d13f-4102-9134-b3f6fef0a08f/ovn-northd/0.log" Feb 17 23:03:51 crc kubenswrapper[4793]: I0217 23:03:51.229011 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_225fdbba-d13f-4102-9134-b3f6fef0a08f/openstack-network-exporter/0.log" Feb 17 23:03:51 crc kubenswrapper[4793]: I0217 23:03:51.421169 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_a0c6a46f-8fbc-46f8-9eed-e31ea220c8dd/openstack-network-exporter/0.log" Feb 17 23:03:51 crc kubenswrapper[4793]: I0217 23:03:51.623298 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_3260c410-ab5c-441b-a84d-7b4480d96a17/openstack-network-exporter/0.log" Feb 17 23:03:51 crc kubenswrapper[4793]: I0217 23:03:51.699191 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_a0c6a46f-8fbc-46f8-9eed-e31ea220c8dd/ovsdbserver-nb/0.log" Feb 17 23:03:51 crc kubenswrapper[4793]: I0217 23:03:51.700553 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_3260c410-ab5c-441b-a84d-7b4480d96a17/ovsdbserver-sb/0.log" Feb 17 23:03:52 crc kubenswrapper[4793]: I0217 23:03:52.095879 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3a335ec7-14e6-40aa-8dfd-56687eed9b84/init-config-reloader/0.log" Feb 17 23:03:52 crc kubenswrapper[4793]: I0217 23:03:52.171763 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7d5f5ff8-cl56f_7a179868-08d0-4c4f-8503-ce054a68e5ee/placement-api/0.log" Feb 17 23:03:52 crc kubenswrapper[4793]: I0217 23:03:52.181237 4793 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_placement-7d5f5ff8-cl56f_7a179868-08d0-4c4f-8503-ce054a68e5ee/placement-log/0.log" Feb 17 23:03:52 crc kubenswrapper[4793]: I0217 23:03:52.317240 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_99e9323f-09b5-4ee0-a27d-0698f99071bc/nova-metadata-metadata/0.log" Feb 17 23:03:52 crc kubenswrapper[4793]: I0217 23:03:52.348552 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3a335ec7-14e6-40aa-8dfd-56687eed9b84/config-reloader/0.log" Feb 17 23:03:52 crc kubenswrapper[4793]: I0217 23:03:52.399122 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3a335ec7-14e6-40aa-8dfd-56687eed9b84/init-config-reloader/0.log" Feb 17 23:03:52 crc kubenswrapper[4793]: I0217 23:03:52.402921 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3a335ec7-14e6-40aa-8dfd-56687eed9b84/prometheus/0.log" Feb 17 23:03:52 crc kubenswrapper[4793]: I0217 23:03:52.465993 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3a335ec7-14e6-40aa-8dfd-56687eed9b84/thanos-sidecar/0.log" Feb 17 23:03:52 crc kubenswrapper[4793]: I0217 23:03:52.693656 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_080e91ed-ca1e-4b2a-948a-7f4e5ded62f6/setup-container/0.log" Feb 17 23:03:52 crc kubenswrapper[4793]: I0217 23:03:52.937119 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_080e91ed-ca1e-4b2a-948a-7f4e5ded62f6/rabbitmq/0.log" Feb 17 23:03:52 crc kubenswrapper[4793]: I0217 23:03:52.962387 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_080e91ed-ca1e-4b2a-948a-7f4e5ded62f6/setup-container/0.log" Feb 17 23:03:53 crc kubenswrapper[4793]: I0217 23:03:53.010983 4793 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-notifications-server-0_7d868632-904a-4ba2-8d3a-4e3d0d8de4b0/setup-container/0.log" Feb 17 23:03:53 crc kubenswrapper[4793]: I0217 23:03:53.180414 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_7d868632-904a-4ba2-8d3a-4e3d0d8de4b0/setup-container/0.log" Feb 17 23:03:53 crc kubenswrapper[4793]: I0217 23:03:53.207888 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_7d868632-904a-4ba2-8d3a-4e3d0d8de4b0/rabbitmq/0.log" Feb 17 23:03:53 crc kubenswrapper[4793]: I0217 23:03:53.232330 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_3bf6d755-b67b-421f-8405-350b53e03a92/setup-container/0.log" Feb 17 23:03:53 crc kubenswrapper[4793]: I0217 23:03:53.395967 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_3bf6d755-b67b-421f-8405-350b53e03a92/setup-container/0.log" Feb 17 23:03:53 crc kubenswrapper[4793]: I0217 23:03:53.427644 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_3bf6d755-b67b-421f-8405-350b53e03a92/rabbitmq/0.log" Feb 17 23:03:53 crc kubenswrapper[4793]: I0217 23:03:53.497988 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-bs2m2_ba576b4a-10f9-4e61-bb10-0b776f724706/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 23:03:53 crc kubenswrapper[4793]: I0217 23:03:53.618433 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-d5hnw_b0a9d8f0-3566-420a-b070-5a86b798dbee/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 23:03:53 crc kubenswrapper[4793]: I0217 23:03:53.702284 4793 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-njncr_1be07a95-98d1-4655-a7d8-f851afe8a947/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 23:03:53 crc kubenswrapper[4793]: I0217 23:03:53.783974 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-6n5b7_7ce29a8f-d2fd-4315-9264-ba243fba16ee/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 23:03:53 crc kubenswrapper[4793]: I0217 23:03:53.948466 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-j77qz_b087a913-e45a-4627-a95b-7c2c6edc0f23/ssh-known-hosts-edpm-deployment/0.log" Feb 17 23:03:54 crc kubenswrapper[4793]: I0217 23:03:54.086794 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6f79fc46bf-72hgv_6afa6e3e-f498-4aa6-a646-4b031664608d/proxy-server/0.log" Feb 17 23:03:54 crc kubenswrapper[4793]: I0217 23:03:54.166075 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-bvw9p_3d5841af-e328-4ea6-a184-546676cce0a7/swift-ring-rebalance/0.log" Feb 17 23:03:54 crc kubenswrapper[4793]: I0217 23:03:54.305573 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6f79fc46bf-72hgv_6afa6e3e-f498-4aa6-a646-4b031664608d/proxy-httpd/0.log" Feb 17 23:03:54 crc kubenswrapper[4793]: I0217 23:03:54.388117 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b1695ca3-290a-44c5-8771-146029a6054a/account-auditor/0.log" Feb 17 23:03:54 crc kubenswrapper[4793]: I0217 23:03:54.430620 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b1695ca3-290a-44c5-8771-146029a6054a/account-reaper/0.log" Feb 17 23:03:54 crc kubenswrapper[4793]: I0217 23:03:54.528732 4793 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_b1695ca3-290a-44c5-8771-146029a6054a/account-server/0.log" Feb 17 23:03:54 crc kubenswrapper[4793]: I0217 23:03:54.552519 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b1695ca3-290a-44c5-8771-146029a6054a/container-auditor/0.log" Feb 17 23:03:54 crc kubenswrapper[4793]: I0217 23:03:54.561018 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b1695ca3-290a-44c5-8771-146029a6054a/account-replicator/0.log" Feb 17 23:03:54 crc kubenswrapper[4793]: I0217 23:03:54.633185 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b1695ca3-290a-44c5-8771-146029a6054a/container-server/0.log" Feb 17 23:03:54 crc kubenswrapper[4793]: I0217 23:03:54.680577 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b1695ca3-290a-44c5-8771-146029a6054a/container-replicator/0.log" Feb 17 23:03:55 crc kubenswrapper[4793]: I0217 23:03:55.118787 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b1695ca3-290a-44c5-8771-146029a6054a/container-updater/0.log" Feb 17 23:03:55 crc kubenswrapper[4793]: I0217 23:03:55.133515 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b1695ca3-290a-44c5-8771-146029a6054a/object-server/0.log" Feb 17 23:03:55 crc kubenswrapper[4793]: I0217 23:03:55.181579 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b1695ca3-290a-44c5-8771-146029a6054a/object-replicator/0.log" Feb 17 23:03:55 crc kubenswrapper[4793]: I0217 23:03:55.183768 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b1695ca3-290a-44c5-8771-146029a6054a/object-auditor/0.log" Feb 17 23:03:55 crc kubenswrapper[4793]: I0217 23:03:55.184107 4793 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_b1695ca3-290a-44c5-8771-146029a6054a/object-expirer/0.log" Feb 17 23:03:55 crc kubenswrapper[4793]: I0217 23:03:55.302181 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b1695ca3-290a-44c5-8771-146029a6054a/object-updater/0.log" Feb 17 23:03:55 crc kubenswrapper[4793]: I0217 23:03:55.336197 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b1695ca3-290a-44c5-8771-146029a6054a/rsync/0.log" Feb 17 23:03:55 crc kubenswrapper[4793]: I0217 23:03:55.384763 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b1695ca3-290a-44c5-8771-146029a6054a/swift-recon-cron/0.log" Feb 17 23:03:55 crc kubenswrapper[4793]: I0217 23:03:55.451106 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-qflkd_35c9f1ac-d3d4-4bdc-a05e-c3ba2c87c27d/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 23:03:55 crc kubenswrapper[4793]: I0217 23:03:55.749506 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_585df3e1-a4cf-4c93-b847-cfc1f6ecd207/test-operator-logs-container/0.log" Feb 17 23:03:55 crc kubenswrapper[4793]: I0217 23:03:55.831237 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-7rgf5_31fb2610-de86-45b5-adb3-12f0d23f90bd/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 23:03:56 crc kubenswrapper[4793]: I0217 23:03:56.276555 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_02d26164-0fa4-4020-9224-b7760a490987/watcher-applier/28.log" Feb 17 23:03:56 crc kubenswrapper[4793]: I0217 23:03:56.576225 4793 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_tempest-tests-tempest_75a21cff-8e4b-4844-8717-b4f483fa282b/tempest-tests-tempest-tests-runner/0.log" Feb 17 23:03:56 crc kubenswrapper[4793]: I0217 23:03:56.684682 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_02d26164-0fa4-4020-9224-b7760a490987/watcher-applier/28.log" Feb 17 23:03:56 crc kubenswrapper[4793]: I0217 23:03:56.871001 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_71c55328-b65e-4471-b5a4-228ae3dbeb8d/watcher-api-log/0.log" Feb 17 23:03:58 crc kubenswrapper[4793]: I0217 23:03:58.733789 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-decision-engine-0_ebd99c2f-5001-4865-966e-ea11d0dfc392/watcher-decision-engine/0.log" Feb 17 23:03:59 crc kubenswrapper[4793]: I0217 23:03:59.542292 4793 scope.go:117] "RemoveContainer" containerID="9819a56ecac46a2a633366dfd6640f93cfad701dadeca2da9df3c2e2b331a094" Feb 17 23:03:59 crc kubenswrapper[4793]: E0217 23:03:59.542496 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 23:04:01 crc kubenswrapper[4793]: I0217 23:04:01.567208 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_71c55328-b65e-4471-b5a4-228ae3dbeb8d/watcher-api/0.log" Feb 17 23:04:03 crc kubenswrapper[4793]: I0217 23:04:03.539123 4793 scope.go:117] "RemoveContainer" containerID="5ea010f6c87e97e2c1ae3eea129cd7fb8ee6f4c832f3d99af7d3bcf29b729eea" Feb 17 23:04:03 crc kubenswrapper[4793]: E0217 23:04:03.539596 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 23:04:14 crc kubenswrapper[4793]: I0217 23:04:14.538970 4793 scope.go:117] "RemoveContainer" containerID="5ea010f6c87e97e2c1ae3eea129cd7fb8ee6f4c832f3d99af7d3bcf29b729eea" Feb 17 23:04:14 crc kubenswrapper[4793]: I0217 23:04:14.539582 4793 scope.go:117] "RemoveContainer" containerID="9819a56ecac46a2a633366dfd6640f93cfad701dadeca2da9df3c2e2b331a094" Feb 17 23:04:14 crc kubenswrapper[4793]: E0217 23:04:14.539801 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 23:04:14 crc kubenswrapper[4793]: E0217 23:04:14.539998 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 23:04:25 crc kubenswrapper[4793]: I0217 23:04:25.547157 4793 scope.go:117] "RemoveContainer" containerID="5ea010f6c87e97e2c1ae3eea129cd7fb8ee6f4c832f3d99af7d3bcf29b729eea" Feb 17 23:04:25 crc kubenswrapper[4793]: E0217 23:04:25.548232 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 23:04:28 crc kubenswrapper[4793]: I0217 23:04:28.249161 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_182d2f8e8540ed2eb09d52fdb622813ecdf564ed0068dbd852370ac6c3zcm5s_0604f563-d59c-490e-8c54-749fb46ae122/util/0.log" Feb 17 23:04:28 crc kubenswrapper[4793]: I0217 23:04:28.472489 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_182d2f8e8540ed2eb09d52fdb622813ecdf564ed0068dbd852370ac6c3zcm5s_0604f563-d59c-490e-8c54-749fb46ae122/pull/0.log" Feb 17 23:04:28 crc kubenswrapper[4793]: I0217 23:04:28.474233 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_182d2f8e8540ed2eb09d52fdb622813ecdf564ed0068dbd852370ac6c3zcm5s_0604f563-d59c-490e-8c54-749fb46ae122/util/0.log" Feb 17 23:04:28 crc kubenswrapper[4793]: I0217 23:04:28.482752 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_182d2f8e8540ed2eb09d52fdb622813ecdf564ed0068dbd852370ac6c3zcm5s_0604f563-d59c-490e-8c54-749fb46ae122/pull/0.log" Feb 17 23:04:28 crc kubenswrapper[4793]: I0217 23:04:28.715814 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_182d2f8e8540ed2eb09d52fdb622813ecdf564ed0068dbd852370ac6c3zcm5s_0604f563-d59c-490e-8c54-749fb46ae122/extract/0.log" Feb 17 23:04:28 crc kubenswrapper[4793]: I0217 23:04:28.747176 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_182d2f8e8540ed2eb09d52fdb622813ecdf564ed0068dbd852370ac6c3zcm5s_0604f563-d59c-490e-8c54-749fb46ae122/pull/0.log" Feb 17 23:04:28 crc kubenswrapper[4793]: I0217 23:04:28.752547 4793 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_182d2f8e8540ed2eb09d52fdb622813ecdf564ed0068dbd852370ac6c3zcm5s_0604f563-d59c-490e-8c54-749fb46ae122/util/0.log" Feb 17 23:04:29 crc kubenswrapper[4793]: I0217 23:04:29.207511 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-h4k6c_5805376c-5192-4fc2-a3b0-64eae2eaf7a1/manager/0.log" Feb 17 23:04:29 crc kubenswrapper[4793]: I0217 23:04:29.320285 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-stsjq_c5356f69-6595-4df6-804c-0bdc507e635a/manager/0.log" Feb 17 23:04:29 crc kubenswrapper[4793]: I0217 23:04:29.539821 4793 scope.go:117] "RemoveContainer" containerID="9819a56ecac46a2a633366dfd6640f93cfad701dadeca2da9df3c2e2b331a094" Feb 17 23:04:29 crc kubenswrapper[4793]: E0217 23:04:29.540052 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 23:04:29 crc kubenswrapper[4793]: I0217 23:04:29.602176 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-n5b9b_6dfe3dba-f71d-4de0-9e38-8d0a38c7f272/manager/0.log" Feb 17 23:04:29 crc kubenswrapper[4793]: I0217 23:04:29.629042 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-nvq46_94f226e8-038f-42f0-8ead-687af3df6d2b/manager/0.log" Feb 17 23:04:29 crc kubenswrapper[4793]: I0217 23:04:29.860531 4793 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-6spqn_a0c27d33-e835-4fe9-92e4-7846bb169b1a/manager/0.log"
Feb 17 23:04:30 crc kubenswrapper[4793]: I0217 23:04:30.139118 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-wwhj6_85645037-549b-483a-a0db-76649c2e9d0f/manager/0.log"
Feb 17 23:04:30 crc kubenswrapper[4793]: I0217 23:04:30.423817 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-wxb6w_d0cd31cb-0cbc-414c-a20b-f6c38256f347/manager/0.log"
Feb 17 23:04:30 crc kubenswrapper[4793]: I0217 23:04:30.560278 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-mnpjn_50be19a7-d8db-4c5c-8966-825c6d3310c1/manager/0.log"
Feb 17 23:04:30 crc kubenswrapper[4793]: I0217 23:04:30.745883 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-qjv29_df82a1fe-a387-45c4-bad1-d2fca982baaa/manager/0.log"
Feb 17 23:04:31 crc kubenswrapper[4793]: I0217 23:04:31.009178 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-kb62n_8eeed758-99d2-46c6-be0f-381d8eea293a/manager/0.log"
Feb 17 23:04:31 crc kubenswrapper[4793]: I0217 23:04:31.252897 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-m46dz_0b1ec5e9-161a-403b-bedd-b6c5823180db/manager/0.log"
Feb 17 23:04:31 crc kubenswrapper[4793]: I0217 23:04:31.469862 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-lnvx2_2d46ef99-7efb-4cb1-8970-10cc188a3bb2/manager/0.log"
Feb 17 23:04:31 crc kubenswrapper[4793]: I0217 23:04:31.478875 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-drgj7_419d7c26-e4a2-44ef-8d24-4dc5c6a2db7f/manager/0.log"
Feb 17 23:04:31 crc kubenswrapper[4793]: I0217 23:04:31.657091 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cpvd9d_95cbe9b0-bd61-493b-a8b2-f5d70b515ed7/manager/0.log"
Feb 17 23:04:32 crc kubenswrapper[4793]: I0217 23:04:32.083007 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5dcc78df94-w8mdz_4f6459df-98f3-4dc2-92ef-6cc4b7739fb3/operator/0.log"
Feb 17 23:04:32 crc kubenswrapper[4793]: I0217 23:04:32.453716 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-f62m6_670203de-1f33-4e1a-8b20-db77984be713/registry-server/0.log"
Feb 17 23:04:33 crc kubenswrapper[4793]: I0217 23:04:33.250768 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-k6mbs_f641768c-418b-46cd-83be-16acad24aa35/manager/0.log"
Feb 17 23:04:33 crc kubenswrapper[4793]: I0217 23:04:33.339501 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-n6jbv_c0c6fc5a-a96e-4165-a9b9-a4ae7b099216/manager/0.log"
Feb 17 23:04:33 crc kubenswrapper[4793]: I0217 23:04:33.621269 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-7gmkz_a6f00eb1-a1d7-48a6-ab40-e2dfc33769e0/operator/0.log"
Feb 17 23:04:33 crc kubenswrapper[4793]: I0217 23:04:33.790684 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-4qdc2_53c19b0b-735c-46ca-99f1-4be322dddb16/manager/0.log"
Feb 17 23:04:34 crc kubenswrapper[4793]: I0217 23:04:34.067430 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-9kgb7_130aaa99-ec87-4c6d-8eaa-e553872e6df8/manager/0.log"
Feb 17 23:04:34 crc kubenswrapper[4793]: I0217 23:04:34.073811 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-ccd97cd7c-w9x9r_db37e414-596f-4c82-9b3b-f7c08820df82/manager/0.log"
Feb 17 23:04:34 crc kubenswrapper[4793]: I0217 23:04:34.182979 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7f45b4ff68-4v2ch_00b83896-fe8e-49bc-b762-6dfb14777fd7/manager/0.log"
Feb 17 23:04:34 crc kubenswrapper[4793]: I0217 23:04:34.588189 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-jrjzp_6ae6b399-39bc-446a-af92-4d4b7fc18361/manager/0.log"
Feb 17 23:04:35 crc kubenswrapper[4793]: I0217 23:04:35.040531 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-95859678f-wk66s_10ee5096-1369-4d4f-90ed-d4160f7a2aa4/manager/0.log"
Feb 17 23:04:38 crc kubenswrapper[4793]: I0217 23:04:38.539552 4793 scope.go:117] "RemoveContainer" containerID="5ea010f6c87e97e2c1ae3eea129cd7fb8ee6f4c832f3d99af7d3bcf29b729eea"
Feb 17 23:04:38 crc kubenswrapper[4793]: E0217 23:04:38.540253 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 23:04:43 crc kubenswrapper[4793]: I0217 23:04:43.539343 4793 scope.go:117] "RemoveContainer" containerID="9819a56ecac46a2a633366dfd6640f93cfad701dadeca2da9df3c2e2b331a094"
Feb 17 23:04:43 crc kubenswrapper[4793]: E0217 23:04:43.540164 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 23:04:51 crc kubenswrapper[4793]: I0217 23:04:51.538652 4793 scope.go:117] "RemoveContainer" containerID="5ea010f6c87e97e2c1ae3eea129cd7fb8ee6f4c832f3d99af7d3bcf29b729eea"
Feb 17 23:04:51 crc kubenswrapper[4793]: E0217 23:04:51.539425 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 23:04:54 crc kubenswrapper[4793]: I0217 23:04:54.539609 4793 scope.go:117] "RemoveContainer" containerID="9819a56ecac46a2a633366dfd6640f93cfad701dadeca2da9df3c2e2b331a094"
Feb 17 23:04:54 crc kubenswrapper[4793]: E0217 23:04:54.540387 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 23:04:57 crc kubenswrapper[4793]: I0217 23:04:57.196552 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-x9z7w_6bf6970b-ae08-4f63-b5e0-e0bd6ff1ea65/control-plane-machine-set-operator/0.log"
Feb 17 23:04:57 crc kubenswrapper[4793]: I0217 23:04:57.374459 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-lzd99_881cfa9d-a35a-4088-8a39-b4ecd52a3b37/kube-rbac-proxy/0.log"
Feb 17 23:04:57 crc kubenswrapper[4793]: I0217 23:04:57.414828 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-lzd99_881cfa9d-a35a-4088-8a39-b4ecd52a3b37/machine-api-operator/0.log"
Feb 17 23:05:04 crc kubenswrapper[4793]: I0217 23:05:04.539559 4793 scope.go:117] "RemoveContainer" containerID="5ea010f6c87e97e2c1ae3eea129cd7fb8ee6f4c832f3d99af7d3bcf29b729eea"
Feb 17 23:05:04 crc kubenswrapper[4793]: E0217 23:05:04.540396 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 23:05:08 crc kubenswrapper[4793]: I0217 23:05:08.538771 4793 scope.go:117] "RemoveContainer" containerID="9819a56ecac46a2a633366dfd6640f93cfad701dadeca2da9df3c2e2b331a094"
Feb 17 23:05:08 crc kubenswrapper[4793]: E0217 23:05:08.539732 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 23:05:11 crc kubenswrapper[4793]: I0217 23:05:11.130216 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7pqcf"]
Feb 17 23:05:11 crc kubenswrapper[4793]: E0217 23:05:11.131014 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47789a8e-1f13-4a62-9a92-4b7044165e9e" containerName="container-00"
Feb 17 23:05:11 crc kubenswrapper[4793]: I0217 23:05:11.131027 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="47789a8e-1f13-4a62-9a92-4b7044165e9e" containerName="container-00"
Feb 17 23:05:11 crc kubenswrapper[4793]: I0217 23:05:11.131226 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="47789a8e-1f13-4a62-9a92-4b7044165e9e" containerName="container-00"
Feb 17 23:05:11 crc kubenswrapper[4793]: I0217 23:05:11.132593 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7pqcf"
Feb 17 23:05:11 crc kubenswrapper[4793]: I0217 23:05:11.144379 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7pqcf"]
Feb 17 23:05:11 crc kubenswrapper[4793]: I0217 23:05:11.295177 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/755537d9-5fb7-4885-8388-40fe806664cb-utilities\") pod \"redhat-marketplace-7pqcf\" (UID: \"755537d9-5fb7-4885-8388-40fe806664cb\") " pod="openshift-marketplace/redhat-marketplace-7pqcf"
Feb 17 23:05:11 crc kubenswrapper[4793]: I0217 23:05:11.295231 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/755537d9-5fb7-4885-8388-40fe806664cb-catalog-content\") pod \"redhat-marketplace-7pqcf\" (UID: \"755537d9-5fb7-4885-8388-40fe806664cb\") " pod="openshift-marketplace/redhat-marketplace-7pqcf"
Feb 17 23:05:11 crc kubenswrapper[4793]: I0217 23:05:11.295365 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn45f\" (UniqueName: \"kubernetes.io/projected/755537d9-5fb7-4885-8388-40fe806664cb-kube-api-access-sn45f\") pod \"redhat-marketplace-7pqcf\" (UID: \"755537d9-5fb7-4885-8388-40fe806664cb\") " pod="openshift-marketplace/redhat-marketplace-7pqcf"
Feb 17 23:05:11 crc kubenswrapper[4793]: I0217 23:05:11.351384 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-mdbrh_7b6c2637-dc01-4efd-ab65-36f8b0b4ff6d/cert-manager-controller/0.log"
Feb 17 23:05:11 crc kubenswrapper[4793]: I0217 23:05:11.396939 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/755537d9-5fb7-4885-8388-40fe806664cb-utilities\") pod \"redhat-marketplace-7pqcf\" (UID: \"755537d9-5fb7-4885-8388-40fe806664cb\") " pod="openshift-marketplace/redhat-marketplace-7pqcf"
Feb 17 23:05:11 crc kubenswrapper[4793]: I0217 23:05:11.396985 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/755537d9-5fb7-4885-8388-40fe806664cb-catalog-content\") pod \"redhat-marketplace-7pqcf\" (UID: \"755537d9-5fb7-4885-8388-40fe806664cb\") " pod="openshift-marketplace/redhat-marketplace-7pqcf"
Feb 17 23:05:11 crc kubenswrapper[4793]: I0217 23:05:11.397085 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sn45f\" (UniqueName: \"kubernetes.io/projected/755537d9-5fb7-4885-8388-40fe806664cb-kube-api-access-sn45f\") pod \"redhat-marketplace-7pqcf\" (UID: \"755537d9-5fb7-4885-8388-40fe806664cb\") " pod="openshift-marketplace/redhat-marketplace-7pqcf"
Feb 17 23:05:11 crc kubenswrapper[4793]: I0217 23:05:11.397509 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/755537d9-5fb7-4885-8388-40fe806664cb-utilities\") pod \"redhat-marketplace-7pqcf\" (UID: \"755537d9-5fb7-4885-8388-40fe806664cb\") " pod="openshift-marketplace/redhat-marketplace-7pqcf"
Feb 17 23:05:11 crc kubenswrapper[4793]: I0217 23:05:11.397677 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/755537d9-5fb7-4885-8388-40fe806664cb-catalog-content\") pod \"redhat-marketplace-7pqcf\" (UID: \"755537d9-5fb7-4885-8388-40fe806664cb\") " pod="openshift-marketplace/redhat-marketplace-7pqcf"
Feb 17 23:05:11 crc kubenswrapper[4793]: I0217 23:05:11.419618 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sn45f\" (UniqueName: \"kubernetes.io/projected/755537d9-5fb7-4885-8388-40fe806664cb-kube-api-access-sn45f\") pod \"redhat-marketplace-7pqcf\" (UID: \"755537d9-5fb7-4885-8388-40fe806664cb\") " pod="openshift-marketplace/redhat-marketplace-7pqcf"
Feb 17 23:05:11 crc kubenswrapper[4793]: I0217 23:05:11.458309 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7pqcf"
Feb 17 23:05:11 crc kubenswrapper[4793]: I0217 23:05:11.490006 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-wd49b_bfcb2cee-b5a8-43c6-adf3-ef142c9d7427/cert-manager-cainjector/0.log"
Feb 17 23:05:11 crc kubenswrapper[4793]: I0217 23:05:11.739309 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-6mcc4_7e87d992-edd2-4e13-a457-63ee57c8db27/cert-manager-webhook/0.log"
Feb 17 23:05:12 crc kubenswrapper[4793]: I0217 23:05:12.000581 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7pqcf"]
Feb 17 23:05:12 crc kubenswrapper[4793]: I0217 23:05:12.687237 4793 generic.go:334] "Generic (PLEG): container finished" podID="755537d9-5fb7-4885-8388-40fe806664cb" containerID="cfd60dfd55dc622fa1ed6429f00e78e2a8ecddbe86b3904a05f15111ab6846eb" exitCode=0
Feb 17 23:05:12 crc kubenswrapper[4793]: I0217 23:05:12.687304 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7pqcf" event={"ID":"755537d9-5fb7-4885-8388-40fe806664cb","Type":"ContainerDied","Data":"cfd60dfd55dc622fa1ed6429f00e78e2a8ecddbe86b3904a05f15111ab6846eb"}
Feb 17 23:05:12 crc kubenswrapper[4793]: I0217 23:05:12.687600 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7pqcf" event={"ID":"755537d9-5fb7-4885-8388-40fe806664cb","Type":"ContainerStarted","Data":"f81f5331c6ae7b32353ff5f398f81adad0a211bd78b5b2a30e337fdb03497d93"}
Feb 17 23:05:13 crc kubenswrapper[4793]: I0217 23:05:13.699653 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7pqcf" event={"ID":"755537d9-5fb7-4885-8388-40fe806664cb","Type":"ContainerStarted","Data":"a8e938581bf02fa9fd38b315a9875f923b37b5a94dbbfa6ade01f779893008b2"}
Feb 17 23:05:15 crc kubenswrapper[4793]: I0217 23:05:15.738256 4793 generic.go:334] "Generic (PLEG): container finished" podID="755537d9-5fb7-4885-8388-40fe806664cb" containerID="a8e938581bf02fa9fd38b315a9875f923b37b5a94dbbfa6ade01f779893008b2" exitCode=0
Feb 17 23:05:15 crc kubenswrapper[4793]: I0217 23:05:15.738321 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7pqcf" event={"ID":"755537d9-5fb7-4885-8388-40fe806664cb","Type":"ContainerDied","Data":"a8e938581bf02fa9fd38b315a9875f923b37b5a94dbbfa6ade01f779893008b2"}
Feb 17 23:05:16 crc kubenswrapper[4793]: I0217 23:05:16.752914 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7pqcf" event={"ID":"755537d9-5fb7-4885-8388-40fe806664cb","Type":"ContainerStarted","Data":"35404ced40dfbca9e4c39672f7c3584b943d43945e30bcbcddbd35e7d0821834"}
Feb 17 23:05:16 crc kubenswrapper[4793]: I0217 23:05:16.780748 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7pqcf" podStartSLOduration=2.325400688 podStartE2EDuration="5.780724706s" podCreationTimestamp="2026-02-17 23:05:11 +0000 UTC" firstStartedPulling="2026-02-17 23:05:12.689899061 +0000 UTC m=+10587.981597372" lastFinishedPulling="2026-02-17 23:05:16.145223079 +0000 UTC m=+10591.436921390" observedRunningTime="2026-02-17 23:05:16.77805113 +0000 UTC m=+10592.069749451" watchObservedRunningTime="2026-02-17 23:05:16.780724706 +0000 UTC m=+10592.072423037"
Feb 17 23:05:19 crc kubenswrapper[4793]: I0217 23:05:19.539477 4793 scope.go:117] "RemoveContainer" containerID="5ea010f6c87e97e2c1ae3eea129cd7fb8ee6f4c832f3d99af7d3bcf29b729eea"
Feb 17 23:05:19 crc kubenswrapper[4793]: E0217 23:05:19.541450 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 23:05:21 crc kubenswrapper[4793]: I0217 23:05:21.459587 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7pqcf"
Feb 17 23:05:21 crc kubenswrapper[4793]: I0217 23:05:21.460012 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7pqcf"
Feb 17 23:05:22 crc kubenswrapper[4793]: I0217 23:05:22.444897 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7pqcf"
Feb 17 23:05:22 crc kubenswrapper[4793]: I0217 23:05:22.534679 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7pqcf"
Feb 17 23:05:23 crc kubenswrapper[4793]: I0217 23:05:23.539544 4793 scope.go:117] "RemoveContainer" containerID="9819a56ecac46a2a633366dfd6640f93cfad701dadeca2da9df3c2e2b331a094"
Feb 17 23:05:23 crc kubenswrapper[4793]: E0217 23:05:23.541994 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 23:05:25 crc kubenswrapper[4793]: I0217 23:05:25.527475 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7pqcf"]
Feb 17 23:05:25 crc kubenswrapper[4793]: I0217 23:05:25.528394 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7pqcf" podUID="755537d9-5fb7-4885-8388-40fe806664cb" containerName="registry-server" containerID="cri-o://35404ced40dfbca9e4c39672f7c3584b943d43945e30bcbcddbd35e7d0821834" gracePeriod=2
Feb 17 23:05:25 crc kubenswrapper[4793]: I0217 23:05:25.857133 4793 generic.go:334] "Generic (PLEG): container finished" podID="755537d9-5fb7-4885-8388-40fe806664cb" containerID="35404ced40dfbca9e4c39672f7c3584b943d43945e30bcbcddbd35e7d0821834" exitCode=0
Feb 17 23:05:25 crc kubenswrapper[4793]: I0217 23:05:25.857179 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7pqcf" event={"ID":"755537d9-5fb7-4885-8388-40fe806664cb","Type":"ContainerDied","Data":"35404ced40dfbca9e4c39672f7c3584b943d43945e30bcbcddbd35e7d0821834"}
Feb 17 23:05:26 crc kubenswrapper[4793]: I0217 23:05:26.086208 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7pqcf"
Feb 17 23:05:26 crc kubenswrapper[4793]: I0217 23:05:26.180521 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/755537d9-5fb7-4885-8388-40fe806664cb-utilities\") pod \"755537d9-5fb7-4885-8388-40fe806664cb\" (UID: \"755537d9-5fb7-4885-8388-40fe806664cb\") "
Feb 17 23:05:26 crc kubenswrapper[4793]: I0217 23:05:26.180801 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sn45f\" (UniqueName: \"kubernetes.io/projected/755537d9-5fb7-4885-8388-40fe806664cb-kube-api-access-sn45f\") pod \"755537d9-5fb7-4885-8388-40fe806664cb\" (UID: \"755537d9-5fb7-4885-8388-40fe806664cb\") "
Feb 17 23:05:26 crc kubenswrapper[4793]: I0217 23:05:26.180866 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/755537d9-5fb7-4885-8388-40fe806664cb-catalog-content\") pod \"755537d9-5fb7-4885-8388-40fe806664cb\" (UID: \"755537d9-5fb7-4885-8388-40fe806664cb\") "
Feb 17 23:05:26 crc kubenswrapper[4793]: I0217 23:05:26.181254 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/755537d9-5fb7-4885-8388-40fe806664cb-utilities" (OuterVolumeSpecName: "utilities") pod "755537d9-5fb7-4885-8388-40fe806664cb" (UID: "755537d9-5fb7-4885-8388-40fe806664cb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 23:05:26 crc kubenswrapper[4793]: I0217 23:05:26.181373 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/755537d9-5fb7-4885-8388-40fe806664cb-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 23:05:26 crc kubenswrapper[4793]: I0217 23:05:26.186259 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/755537d9-5fb7-4885-8388-40fe806664cb-kube-api-access-sn45f" (OuterVolumeSpecName: "kube-api-access-sn45f") pod "755537d9-5fb7-4885-8388-40fe806664cb" (UID: "755537d9-5fb7-4885-8388-40fe806664cb"). InnerVolumeSpecName "kube-api-access-sn45f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 23:05:26 crc kubenswrapper[4793]: I0217 23:05:26.214950 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/755537d9-5fb7-4885-8388-40fe806664cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "755537d9-5fb7-4885-8388-40fe806664cb" (UID: "755537d9-5fb7-4885-8388-40fe806664cb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 23:05:26 crc kubenswrapper[4793]: I0217 23:05:26.283310 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sn45f\" (UniqueName: \"kubernetes.io/projected/755537d9-5fb7-4885-8388-40fe806664cb-kube-api-access-sn45f\") on node \"crc\" DevicePath \"\""
Feb 17 23:05:26 crc kubenswrapper[4793]: I0217 23:05:26.283350 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/755537d9-5fb7-4885-8388-40fe806664cb-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 23:05:26 crc kubenswrapper[4793]: I0217 23:05:26.732895 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-n9dsm_790088e3-c7ca-480c-883d-c3b9e2b4c8c8/nmstate-console-plugin/0.log"
Feb 17 23:05:26 crc kubenswrapper[4793]: I0217 23:05:26.868212 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7pqcf" event={"ID":"755537d9-5fb7-4885-8388-40fe806664cb","Type":"ContainerDied","Data":"f81f5331c6ae7b32353ff5f398f81adad0a211bd78b5b2a30e337fdb03497d93"}
Feb 17 23:05:26 crc kubenswrapper[4793]: I0217 23:05:26.868280 4793 scope.go:117] "RemoveContainer" containerID="35404ced40dfbca9e4c39672f7c3584b943d43945e30bcbcddbd35e7d0821834"
Feb 17 23:05:26 crc kubenswrapper[4793]: I0217 23:05:26.868276 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7pqcf"
Feb 17 23:05:26 crc kubenswrapper[4793]: I0217 23:05:26.886913 4793 scope.go:117] "RemoveContainer" containerID="a8e938581bf02fa9fd38b315a9875f923b37b5a94dbbfa6ade01f779893008b2"
Feb 17 23:05:26 crc kubenswrapper[4793]: I0217 23:05:26.909844 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7pqcf"]
Feb 17 23:05:26 crc kubenswrapper[4793]: I0217 23:05:26.910518 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-pjstr_6316cf0c-93bc-4219-9db0-0e81b81b8add/nmstate-handler/0.log"
Feb 17 23:05:26 crc kubenswrapper[4793]: I0217 23:05:26.922356 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7pqcf"]
Feb 17 23:05:26 crc kubenswrapper[4793]: I0217 23:05:26.924761 4793 scope.go:117] "RemoveContainer" containerID="cfd60dfd55dc622fa1ed6429f00e78e2a8ecddbe86b3904a05f15111ab6846eb"
Feb 17 23:05:27 crc kubenswrapper[4793]: I0217 23:05:27.011521 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-84b4b_3b8a3433-034d-44e7-bb2d-347120fd762a/kube-rbac-proxy/0.log"
Feb 17 23:05:27 crc kubenswrapper[4793]: I0217 23:05:27.066559 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-84b4b_3b8a3433-034d-44e7-bb2d-347120fd762a/nmstate-metrics/0.log"
Feb 17 23:05:27 crc kubenswrapper[4793]: I0217 23:05:27.205557 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-7v452_43cb2714-f3fe-442d-8a64-37f7c77bdb3b/nmstate-operator/0.log"
Feb 17 23:05:27 crc kubenswrapper[4793]: I0217 23:05:27.225716 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-smppl_2db5cb32-7490-4b5f-b6c8-11db2f5c7d04/nmstate-webhook/0.log"
Feb 17 23:05:27 crc kubenswrapper[4793]: I0217 23:05:27.552355 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="755537d9-5fb7-4885-8388-40fe806664cb" path="/var/lib/kubelet/pods/755537d9-5fb7-4885-8388-40fe806664cb/volumes"
Feb 17 23:05:32 crc kubenswrapper[4793]: I0217 23:05:32.538958 4793 scope.go:117] "RemoveContainer" containerID="5ea010f6c87e97e2c1ae3eea129cd7fb8ee6f4c832f3d99af7d3bcf29b729eea"
Feb 17 23:05:32 crc kubenswrapper[4793]: E0217 23:05:32.539703 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 23:05:39 crc kubenswrapper[4793]: I0217 23:05:39.345815 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-whf97"]
Feb 17 23:05:39 crc kubenswrapper[4793]: E0217 23:05:39.347055 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="755537d9-5fb7-4885-8388-40fe806664cb" containerName="extract-utilities"
Feb 17 23:05:39 crc kubenswrapper[4793]: I0217 23:05:39.347077 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="755537d9-5fb7-4885-8388-40fe806664cb" containerName="extract-utilities"
Feb 17 23:05:39 crc kubenswrapper[4793]: E0217 23:05:39.347165 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="755537d9-5fb7-4885-8388-40fe806664cb" containerName="extract-content"
Feb 17 23:05:39 crc kubenswrapper[4793]: I0217 23:05:39.347182 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="755537d9-5fb7-4885-8388-40fe806664cb" containerName="extract-content"
Feb 17 23:05:39 crc kubenswrapper[4793]: E0217 23:05:39.348494 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="755537d9-5fb7-4885-8388-40fe806664cb" containerName="registry-server"
Feb 17 23:05:39 crc kubenswrapper[4793]: I0217 23:05:39.348533 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="755537d9-5fb7-4885-8388-40fe806664cb" containerName="registry-server"
Feb 17 23:05:39 crc kubenswrapper[4793]: I0217 23:05:39.349000 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="755537d9-5fb7-4885-8388-40fe806664cb" containerName="registry-server"
Feb 17 23:05:39 crc kubenswrapper[4793]: I0217 23:05:39.351573 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-whf97"
Feb 17 23:05:39 crc kubenswrapper[4793]: I0217 23:05:39.355488 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-whf97"]
Feb 17 23:05:39 crc kubenswrapper[4793]: I0217 23:05:39.468964 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ef1293c-b679-4bf8-851a-fa5a95e40d5e-utilities\") pod \"redhat-operators-whf97\" (UID: \"5ef1293c-b679-4bf8-851a-fa5a95e40d5e\") " pod="openshift-marketplace/redhat-operators-whf97"
Feb 17 23:05:39 crc kubenswrapper[4793]: I0217 23:05:39.469137 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ef1293c-b679-4bf8-851a-fa5a95e40d5e-catalog-content\") pod \"redhat-operators-whf97\" (UID: \"5ef1293c-b679-4bf8-851a-fa5a95e40d5e\") " pod="openshift-marketplace/redhat-operators-whf97"
Feb 17 23:05:39 crc kubenswrapper[4793]: I0217 23:05:39.469195 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tksc4\" (UniqueName: \"kubernetes.io/projected/5ef1293c-b679-4bf8-851a-fa5a95e40d5e-kube-api-access-tksc4\") pod \"redhat-operators-whf97\" (UID: \"5ef1293c-b679-4bf8-851a-fa5a95e40d5e\") " pod="openshift-marketplace/redhat-operators-whf97"
Feb 17 23:05:39 crc kubenswrapper[4793]: I0217 23:05:39.542613 4793 scope.go:117] "RemoveContainer" containerID="9819a56ecac46a2a633366dfd6640f93cfad701dadeca2da9df3c2e2b331a094"
Feb 17 23:05:39 crc kubenswrapper[4793]: E0217 23:05:39.543028 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 23:05:39 crc kubenswrapper[4793]: I0217 23:05:39.570661 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ef1293c-b679-4bf8-851a-fa5a95e40d5e-catalog-content\") pod \"redhat-operators-whf97\" (UID: \"5ef1293c-b679-4bf8-851a-fa5a95e40d5e\") " pod="openshift-marketplace/redhat-operators-whf97"
Feb 17 23:05:39 crc kubenswrapper[4793]: I0217 23:05:39.570994 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tksc4\" (UniqueName: \"kubernetes.io/projected/5ef1293c-b679-4bf8-851a-fa5a95e40d5e-kube-api-access-tksc4\") pod \"redhat-operators-whf97\" (UID: \"5ef1293c-b679-4bf8-851a-fa5a95e40d5e\") " pod="openshift-marketplace/redhat-operators-whf97"
Feb 17 23:05:39 crc kubenswrapper[4793]: I0217 23:05:39.571067 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ef1293c-b679-4bf8-851a-fa5a95e40d5e-utilities\") pod \"redhat-operators-whf97\" (UID: \"5ef1293c-b679-4bf8-851a-fa5a95e40d5e\") " pod="openshift-marketplace/redhat-operators-whf97"
Feb 17 23:05:39 crc kubenswrapper[4793]: I0217 23:05:39.571203 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ef1293c-b679-4bf8-851a-fa5a95e40d5e-catalog-content\") pod \"redhat-operators-whf97\" (UID: \"5ef1293c-b679-4bf8-851a-fa5a95e40d5e\") " pod="openshift-marketplace/redhat-operators-whf97"
Feb 17 23:05:39 crc kubenswrapper[4793]: I0217 23:05:39.571442 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ef1293c-b679-4bf8-851a-fa5a95e40d5e-utilities\") pod \"redhat-operators-whf97\" (UID: \"5ef1293c-b679-4bf8-851a-fa5a95e40d5e\") " pod="openshift-marketplace/redhat-operators-whf97"
Feb 17 23:05:39 crc kubenswrapper[4793]: I0217 23:05:39.600742 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tksc4\" (UniqueName: \"kubernetes.io/projected/5ef1293c-b679-4bf8-851a-fa5a95e40d5e-kube-api-access-tksc4\") pod \"redhat-operators-whf97\" (UID: \"5ef1293c-b679-4bf8-851a-fa5a95e40d5e\") " pod="openshift-marketplace/redhat-operators-whf97"
Feb 17 23:05:39 crc kubenswrapper[4793]: I0217 23:05:39.687373 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-whf97"
Feb 17 23:05:40 crc kubenswrapper[4793]: I0217 23:05:40.152187 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-whf97"]
Feb 17 23:05:41 crc kubenswrapper[4793]: I0217 23:05:41.052484 4793 generic.go:334] "Generic (PLEG): container finished" podID="5ef1293c-b679-4bf8-851a-fa5a95e40d5e" containerID="ad5374c9f0b3fdb3fcc03d7262bff280611ff096c78848e4019fb7d86cdc650f" exitCode=0
Feb 17 23:05:41 crc kubenswrapper[4793]: I0217 23:05:41.052572 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-whf97" event={"ID":"5ef1293c-b679-4bf8-851a-fa5a95e40d5e","Type":"ContainerDied","Data":"ad5374c9f0b3fdb3fcc03d7262bff280611ff096c78848e4019fb7d86cdc650f"}
Feb 17 23:05:41 crc kubenswrapper[4793]: I0217 23:05:41.052823 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-whf97" event={"ID":"5ef1293c-b679-4bf8-851a-fa5a95e40d5e","Type":"ContainerStarted","Data":"d0e30f643360a00fe3865fda8b1ec575a759725cca13f1574bfdcdfe34ec75da"}
Feb 17 23:05:42 crc kubenswrapper[4793]: I0217 23:05:42.063164 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-whf97" event={"ID":"5ef1293c-b679-4bf8-851a-fa5a95e40d5e","Type":"ContainerStarted","Data":"3b5bf012053e6e466c0d5e737b0717ed377cbfbcb2aaa8476dce66481c8fe534"}
Feb 17 23:05:43 crc kubenswrapper[4793]: I0217 23:05:43.710464 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-stbxd_ee3dbdac-0635-42f6-909e-3ff0ca3f48f7/prometheus-operator/0.log"
Feb 17 23:05:43 crc kubenswrapper[4793]: I0217 23:05:43.851002 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5c849c4c7f-l8dpt_ae7faf0a-a2b4-431b-8a63-4b890b5f5c73/prometheus-operator-admission-webhook/0.log"
Feb 17 23:05:43 crc kubenswrapper[4793]: I0217 23:05:43.895758 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5c849c4c7f-sd2r4_1321bdf1-43e6-45ce-8d74-332b6d81c908/prometheus-operator-admission-webhook/0.log"
Feb 17 23:05:44 crc kubenswrapper[4793]: I0217 23:05:44.043133 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-zbtjj_a2358cc0-c59d-4feb-9682-b6dbfc729cd8/operator/0.log"
Feb 17 23:05:44 crc kubenswrapper[4793]: I0217 23:05:44.091555 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-jqgbz_b19f5d08-b87f-4168-b29b-b28619987367/perses-operator/0.log"
Feb 17 23:05:46 crc kubenswrapper[4793]: I0217 23:05:46.098660 4793 generic.go:334] "Generic (PLEG): container finished" podID="5ef1293c-b679-4bf8-851a-fa5a95e40d5e" containerID="3b5bf012053e6e466c0d5e737b0717ed377cbfbcb2aaa8476dce66481c8fe534" exitCode=0
Feb 17 23:05:46 crc kubenswrapper[4793]: I0217 23:05:46.098733 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-whf97" event={"ID":"5ef1293c-b679-4bf8-851a-fa5a95e40d5e","Type":"ContainerDied","Data":"3b5bf012053e6e466c0d5e737b0717ed377cbfbcb2aaa8476dce66481c8fe534"}
Feb 17 23:05:46 crc kubenswrapper[4793]: I0217 23:05:46.538332 4793 scope.go:117] "RemoveContainer" containerID="5ea010f6c87e97e2c1ae3eea129cd7fb8ee6f4c832f3d99af7d3bcf29b729eea"
Feb 17 23:05:46 crc kubenswrapper[4793]: E0217 23:05:46.538931 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 23:05:47 crc kubenswrapper[4793]: I0217 23:05:47.114579 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-whf97" event={"ID":"5ef1293c-b679-4bf8-851a-fa5a95e40d5e","Type":"ContainerStarted","Data":"7fb3433172834c00f413d683fd286ccd84f2e01e83be8586c0b535be58455935"}
Feb 17 23:05:47 crc kubenswrapper[4793]: I0217 23:05:47.150037 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-whf97" podStartSLOduration=2.684077468 podStartE2EDuration="8.150018639s" podCreationTimestamp="2026-02-17 23:05:39 +0000 UTC" firstStartedPulling="2026-02-17 23:05:41.055260066 +0000 UTC m=+10616.346958417" lastFinishedPulling="2026-02-17 23:05:46.521201267 +0000 UTC m=+10621.812899588" observedRunningTime="2026-02-17 23:05:47.138939445 +0000 UTC m=+10622.430637766" watchObservedRunningTime="2026-02-17 23:05:47.150018639 +0000 UTC m=+10622.441716960"
Feb 17 23:05:49 crc kubenswrapper[4793]: I0217 23:05:49.688367 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-whf97"
Feb 17 23:05:49 crc kubenswrapper[4793]: I0217 23:05:49.688875 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-whf97"
Feb 17 23:05:50 crc kubenswrapper[4793]: I0217 23:05:50.750246 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-whf97" podUID="5ef1293c-b679-4bf8-851a-fa5a95e40d5e" containerName="registry-server" probeResult="failure" output=<
Feb 17 23:05:50 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s
Feb 17 23:05:50 crc kubenswrapper[4793]: >
Feb 17 23:05:51 crc 
kubenswrapper[4793]: I0217 23:05:51.539418 4793 scope.go:117] "RemoveContainer" containerID="9819a56ecac46a2a633366dfd6640f93cfad701dadeca2da9df3c2e2b331a094" Feb 17 23:05:51 crc kubenswrapper[4793]: E0217 23:05:51.539822 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 23:05:59 crc kubenswrapper[4793]: I0217 23:05:59.462648 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-jxxkv_1d17845a-89d6-405a-8125-0b326ec8894b/kube-rbac-proxy/0.log" Feb 17 23:05:59 crc kubenswrapper[4793]: I0217 23:05:59.539530 4793 scope.go:117] "RemoveContainer" containerID="5ea010f6c87e97e2c1ae3eea129cd7fb8ee6f4c832f3d99af7d3bcf29b729eea" Feb 17 23:05:59 crc kubenswrapper[4793]: E0217 23:05:59.539808 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 23:05:59 crc kubenswrapper[4793]: I0217 23:05:59.610057 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-jxxkv_1d17845a-89d6-405a-8125-0b326ec8894b/controller/0.log" Feb 17 23:05:59 crc kubenswrapper[4793]: I0217 23:05:59.777744 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2qxkk_e40cbcec-a6c5-40c2-9e5b-651065d296fc/cp-frr-files/0.log" Feb 17 23:05:59 crc kubenswrapper[4793]: I0217 23:05:59.909476 4793 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2qxkk_e40cbcec-a6c5-40c2-9e5b-651065d296fc/cp-frr-files/0.log" Feb 17 23:05:59 crc kubenswrapper[4793]: I0217 23:05:59.910200 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2qxkk_e40cbcec-a6c5-40c2-9e5b-651065d296fc/cp-reloader/0.log" Feb 17 23:05:59 crc kubenswrapper[4793]: I0217 23:05:59.955266 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2qxkk_e40cbcec-a6c5-40c2-9e5b-651065d296fc/cp-metrics/0.log" Feb 17 23:06:00 crc kubenswrapper[4793]: I0217 23:06:00.017836 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2qxkk_e40cbcec-a6c5-40c2-9e5b-651065d296fc/cp-reloader/0.log" Feb 17 23:06:00 crc kubenswrapper[4793]: I0217 23:06:00.134421 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2qxkk_e40cbcec-a6c5-40c2-9e5b-651065d296fc/cp-reloader/0.log" Feb 17 23:06:00 crc kubenswrapper[4793]: I0217 23:06:00.135467 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2qxkk_e40cbcec-a6c5-40c2-9e5b-651065d296fc/cp-frr-files/0.log" Feb 17 23:06:00 crc kubenswrapper[4793]: I0217 23:06:00.136668 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2qxkk_e40cbcec-a6c5-40c2-9e5b-651065d296fc/cp-metrics/0.log" Feb 17 23:06:00 crc kubenswrapper[4793]: I0217 23:06:00.211657 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2qxkk_e40cbcec-a6c5-40c2-9e5b-651065d296fc/cp-metrics/0.log" Feb 17 23:06:00 crc kubenswrapper[4793]: I0217 23:06:00.374096 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2qxkk_e40cbcec-a6c5-40c2-9e5b-651065d296fc/cp-frr-files/0.log" Feb 17 23:06:00 crc kubenswrapper[4793]: I0217 23:06:00.397989 4793 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-2qxkk_e40cbcec-a6c5-40c2-9e5b-651065d296fc/controller/0.log" Feb 17 23:06:00 crc kubenswrapper[4793]: I0217 23:06:00.420158 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2qxkk_e40cbcec-a6c5-40c2-9e5b-651065d296fc/cp-reloader/0.log" Feb 17 23:06:00 crc kubenswrapper[4793]: I0217 23:06:00.421066 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2qxkk_e40cbcec-a6c5-40c2-9e5b-651065d296fc/cp-metrics/0.log" Feb 17 23:06:00 crc kubenswrapper[4793]: I0217 23:06:00.570754 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2qxkk_e40cbcec-a6c5-40c2-9e5b-651065d296fc/frr-metrics/0.log" Feb 17 23:06:00 crc kubenswrapper[4793]: I0217 23:06:00.591971 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2qxkk_e40cbcec-a6c5-40c2-9e5b-651065d296fc/kube-rbac-proxy/0.log" Feb 17 23:06:00 crc kubenswrapper[4793]: I0217 23:06:00.663449 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2qxkk_e40cbcec-a6c5-40c2-9e5b-651065d296fc/kube-rbac-proxy-frr/0.log" Feb 17 23:06:00 crc kubenswrapper[4793]: I0217 23:06:00.741385 4793 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-whf97" podUID="5ef1293c-b679-4bf8-851a-fa5a95e40d5e" containerName="registry-server" probeResult="failure" output=< Feb 17 23:06:00 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Feb 17 23:06:00 crc kubenswrapper[4793]: > Feb 17 23:06:00 crc kubenswrapper[4793]: I0217 23:06:00.791414 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2qxkk_e40cbcec-a6c5-40c2-9e5b-651065d296fc/reloader/0.log" Feb 17 23:06:00 crc kubenswrapper[4793]: I0217 23:06:00.927835 4793 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-hs2fp_5c5f62ed-1a04-409e-93ca-5c8631fe51f4/frr-k8s-webhook-server/0.log" Feb 17 23:06:01 crc kubenswrapper[4793]: I0217 23:06:01.087204 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5468c6fb5b-mh429_9c30e939-99b4-4e6a-88d5-1d4149921d6d/manager/0.log" Feb 17 23:06:01 crc kubenswrapper[4793]: I0217 23:06:01.287546 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-789f66767d-5rsvq_7c340308-e5a2-40db-855c-4dab55f7f028/webhook-server/0.log" Feb 17 23:06:01 crc kubenswrapper[4793]: I0217 23:06:01.458649 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-5qqnc_a9138cd5-387e-43a3-bea6-17eb931827ef/kube-rbac-proxy/0.log" Feb 17 23:06:02 crc kubenswrapper[4793]: I0217 23:06:02.085357 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-5qqnc_a9138cd5-387e-43a3-bea6-17eb931827ef/speaker/0.log" Feb 17 23:06:02 crc kubenswrapper[4793]: I0217 23:06:02.540226 4793 scope.go:117] "RemoveContainer" containerID="9819a56ecac46a2a633366dfd6640f93cfad701dadeca2da9df3c2e2b331a094" Feb 17 23:06:02 crc kubenswrapper[4793]: E0217 23:06:02.540925 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 23:06:02 crc kubenswrapper[4793]: I0217 23:06:02.563710 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2qxkk_e40cbcec-a6c5-40c2-9e5b-651065d296fc/frr/0.log" Feb 17 23:06:10 crc kubenswrapper[4793]: I0217 23:06:10.752814 4793 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-whf97" podUID="5ef1293c-b679-4bf8-851a-fa5a95e40d5e" containerName="registry-server" probeResult="failure" output=< Feb 17 23:06:10 crc kubenswrapper[4793]: timeout: failed to connect service ":50051" within 1s Feb 17 23:06:10 crc kubenswrapper[4793]: > Feb 17 23:06:13 crc kubenswrapper[4793]: I0217 23:06:13.538407 4793 scope.go:117] "RemoveContainer" containerID="9819a56ecac46a2a633366dfd6640f93cfad701dadeca2da9df3c2e2b331a094" Feb 17 23:06:13 crc kubenswrapper[4793]: I0217 23:06:13.540165 4793 scope.go:117] "RemoveContainer" containerID="5ea010f6c87e97e2c1ae3eea129cd7fb8ee6f4c832f3d99af7d3bcf29b729eea" Feb 17 23:06:13 crc kubenswrapper[4793]: E0217 23:06:13.540348 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 23:06:13 crc kubenswrapper[4793]: E0217 23:06:13.540855 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" Feb 17 23:06:18 crc kubenswrapper[4793]: I0217 23:06:18.023305 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l57fw_c2683489-1059-4357-b2c3-832f83aae83e/util/0.log" Feb 17 23:06:18 crc kubenswrapper[4793]: I0217 23:06:18.259545 4793 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l57fw_c2683489-1059-4357-b2c3-832f83aae83e/util/0.log" Feb 17 23:06:18 crc kubenswrapper[4793]: I0217 23:06:18.272779 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l57fw_c2683489-1059-4357-b2c3-832f83aae83e/pull/0.log" Feb 17 23:06:18 crc kubenswrapper[4793]: I0217 23:06:18.370003 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l57fw_c2683489-1059-4357-b2c3-832f83aae83e/pull/0.log" Feb 17 23:06:18 crc kubenswrapper[4793]: I0217 23:06:18.549178 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l57fw_c2683489-1059-4357-b2c3-832f83aae83e/extract/0.log" Feb 17 23:06:18 crc kubenswrapper[4793]: I0217 23:06:18.550417 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l57fw_c2683489-1059-4357-b2c3-832f83aae83e/pull/0.log" Feb 17 23:06:18 crc kubenswrapper[4793]: I0217 23:06:18.557456 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l57fw_c2683489-1059-4357-b2c3-832f83aae83e/util/0.log" Feb 17 23:06:18 crc kubenswrapper[4793]: I0217 23:06:18.774183 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213wfllp_68641a11-d218-43ed-8060-f78590d33051/util/0.log" Feb 17 23:06:18 crc kubenswrapper[4793]: I0217 23:06:18.995653 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213wfllp_68641a11-d218-43ed-8060-f78590d33051/pull/0.log" Feb 17 
23:06:18 crc kubenswrapper[4793]: I0217 23:06:18.995916 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213wfllp_68641a11-d218-43ed-8060-f78590d33051/pull/0.log" Feb 17 23:06:19 crc kubenswrapper[4793]: I0217 23:06:19.015425 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213wfllp_68641a11-d218-43ed-8060-f78590d33051/util/0.log" Feb 17 23:06:19 crc kubenswrapper[4793]: I0217 23:06:19.197116 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213wfllp_68641a11-d218-43ed-8060-f78590d33051/util/0.log" Feb 17 23:06:19 crc kubenswrapper[4793]: I0217 23:06:19.206555 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213wfllp_68641a11-d218-43ed-8060-f78590d33051/extract/0.log" Feb 17 23:06:19 crc kubenswrapper[4793]: I0217 23:06:19.215807 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213wfllp_68641a11-d218-43ed-8060-f78590d33051/pull/0.log" Feb 17 23:06:19 crc kubenswrapper[4793]: I0217 23:06:19.386617 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5mqmr_8062b712-7bb0-4d33-9b1d-ca342eb7971f/extract-utilities/0.log" Feb 17 23:06:19 crc kubenswrapper[4793]: I0217 23:06:19.732396 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-whf97" Feb 17 23:06:19 crc kubenswrapper[4793]: I0217 23:06:19.766926 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5mqmr_8062b712-7bb0-4d33-9b1d-ca342eb7971f/extract-utilities/0.log" Feb 17 23:06:19 crc 
kubenswrapper[4793]: I0217 23:06:19.774334 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-whf97" Feb 17 23:06:19 crc kubenswrapper[4793]: I0217 23:06:19.826163 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5mqmr_8062b712-7bb0-4d33-9b1d-ca342eb7971f/extract-content/0.log" Feb 17 23:06:19 crc kubenswrapper[4793]: I0217 23:06:19.843090 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5mqmr_8062b712-7bb0-4d33-9b1d-ca342eb7971f/extract-content/0.log" Feb 17 23:06:19 crc kubenswrapper[4793]: I0217 23:06:19.971716 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-whf97"] Feb 17 23:06:19 crc kubenswrapper[4793]: I0217 23:06:19.996623 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5mqmr_8062b712-7bb0-4d33-9b1d-ca342eb7971f/extract-utilities/0.log" Feb 17 23:06:20 crc kubenswrapper[4793]: I0217 23:06:20.000599 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5mqmr_8062b712-7bb0-4d33-9b1d-ca342eb7971f/extract-content/0.log" Feb 17 23:06:20 crc kubenswrapper[4793]: I0217 23:06:20.191019 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zcw6z_47b71db7-a3b0-4b59-97ed-5d99367b7241/extract-utilities/0.log" Feb 17 23:06:20 crc kubenswrapper[4793]: I0217 23:06:20.477258 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zcw6z_47b71db7-a3b0-4b59-97ed-5d99367b7241/extract-utilities/0.log" Feb 17 23:06:20 crc kubenswrapper[4793]: I0217 23:06:20.491935 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zcw6z_47b71db7-a3b0-4b59-97ed-5d99367b7241/extract-content/0.log" Feb 17 
23:06:20 crc kubenswrapper[4793]: I0217 23:06:20.554274 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zcw6z_47b71db7-a3b0-4b59-97ed-5d99367b7241/extract-content/0.log" Feb 17 23:06:20 crc kubenswrapper[4793]: I0217 23:06:20.707164 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5mqmr_8062b712-7bb0-4d33-9b1d-ca342eb7971f/registry-server/0.log" Feb 17 23:06:20 crc kubenswrapper[4793]: I0217 23:06:20.796953 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zcw6z_47b71db7-a3b0-4b59-97ed-5d99367b7241/extract-utilities/0.log" Feb 17 23:06:20 crc kubenswrapper[4793]: I0217 23:06:20.808412 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zcw6z_47b71db7-a3b0-4b59-97ed-5d99367b7241/extract-content/0.log" Feb 17 23:06:21 crc kubenswrapper[4793]: I0217 23:06:21.001700 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca558pk_2db28c86-e6a9-42bd-a454-848e8460fd0c/util/0.log" Feb 17 23:06:21 crc kubenswrapper[4793]: I0217 23:06:21.322563 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca558pk_2db28c86-e6a9-42bd-a454-848e8460fd0c/pull/0.log" Feb 17 23:06:21 crc kubenswrapper[4793]: I0217 23:06:21.325630 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca558pk_2db28c86-e6a9-42bd-a454-848e8460fd0c/pull/0.log" Feb 17 23:06:21 crc kubenswrapper[4793]: I0217 23:06:21.409113 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca558pk_2db28c86-e6a9-42bd-a454-848e8460fd0c/util/0.log" Feb 17 23:06:21 
crc kubenswrapper[4793]: I0217 23:06:21.485869 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-whf97" podUID="5ef1293c-b679-4bf8-851a-fa5a95e40d5e" containerName="registry-server" containerID="cri-o://7fb3433172834c00f413d683fd286ccd84f2e01e83be8586c0b535be58455935" gracePeriod=2 Feb 17 23:06:21 crc kubenswrapper[4793]: I0217 23:06:21.610459 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca558pk_2db28c86-e6a9-42bd-a454-848e8460fd0c/util/0.log" Feb 17 23:06:21 crc kubenswrapper[4793]: I0217 23:06:21.709371 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca558pk_2db28c86-e6a9-42bd-a454-848e8460fd0c/pull/0.log" Feb 17 23:06:21 crc kubenswrapper[4793]: I0217 23:06:21.789859 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca558pk_2db28c86-e6a9-42bd-a454-848e8460fd0c/extract/0.log" Feb 17 23:06:21 crc kubenswrapper[4793]: I0217 23:06:21.941965 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-dd764_9e76cf5c-f36b-4e98-8e69-f40f066ba874/marketplace-operator/0.log" Feb 17 23:06:22 crc kubenswrapper[4793]: I0217 23:06:22.085819 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-whf97" Feb 17 23:06:22 crc kubenswrapper[4793]: I0217 23:06:22.172039 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tksc4\" (UniqueName: \"kubernetes.io/projected/5ef1293c-b679-4bf8-851a-fa5a95e40d5e-kube-api-access-tksc4\") pod \"5ef1293c-b679-4bf8-851a-fa5a95e40d5e\" (UID: \"5ef1293c-b679-4bf8-851a-fa5a95e40d5e\") " Feb 17 23:06:22 crc kubenswrapper[4793]: I0217 23:06:22.172547 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ef1293c-b679-4bf8-851a-fa5a95e40d5e-utilities\") pod \"5ef1293c-b679-4bf8-851a-fa5a95e40d5e\" (UID: \"5ef1293c-b679-4bf8-851a-fa5a95e40d5e\") " Feb 17 23:06:22 crc kubenswrapper[4793]: I0217 23:06:22.172670 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ef1293c-b679-4bf8-851a-fa5a95e40d5e-catalog-content\") pod \"5ef1293c-b679-4bf8-851a-fa5a95e40d5e\" (UID: \"5ef1293c-b679-4bf8-851a-fa5a95e40d5e\") " Feb 17 23:06:22 crc kubenswrapper[4793]: I0217 23:06:22.173044 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ef1293c-b679-4bf8-851a-fa5a95e40d5e-utilities" (OuterVolumeSpecName: "utilities") pod "5ef1293c-b679-4bf8-851a-fa5a95e40d5e" (UID: "5ef1293c-b679-4bf8-851a-fa5a95e40d5e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 23:06:22 crc kubenswrapper[4793]: I0217 23:06:22.175090 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ef1293c-b679-4bf8-851a-fa5a95e40d5e-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 23:06:22 crc kubenswrapper[4793]: I0217 23:06:22.178939 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ef1293c-b679-4bf8-851a-fa5a95e40d5e-kube-api-access-tksc4" (OuterVolumeSpecName: "kube-api-access-tksc4") pod "5ef1293c-b679-4bf8-851a-fa5a95e40d5e" (UID: "5ef1293c-b679-4bf8-851a-fa5a95e40d5e"). InnerVolumeSpecName "kube-api-access-tksc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 23:06:22 crc kubenswrapper[4793]: I0217 23:06:22.217315 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zcw6z_47b71db7-a3b0-4b59-97ed-5d99367b7241/registry-server/0.log" Feb 17 23:06:22 crc kubenswrapper[4793]: I0217 23:06:22.224851 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-thhdr_f5f7099c-f255-4171-b71b-9ad5864a4230/extract-utilities/0.log" Feb 17 23:06:22 crc kubenswrapper[4793]: I0217 23:06:22.277235 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tksc4\" (UniqueName: \"kubernetes.io/projected/5ef1293c-b679-4bf8-851a-fa5a95e40d5e-kube-api-access-tksc4\") on node \"crc\" DevicePath \"\"" Feb 17 23:06:22 crc kubenswrapper[4793]: I0217 23:06:22.316726 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-thhdr_f5f7099c-f255-4171-b71b-9ad5864a4230/extract-utilities/0.log" Feb 17 23:06:22 crc kubenswrapper[4793]: I0217 23:06:22.320216 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-thhdr_f5f7099c-f255-4171-b71b-9ad5864a4230/extract-content/0.log" 
Feb 17 23:06:22 crc kubenswrapper[4793]: I0217 23:06:22.323188 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ef1293c-b679-4bf8-851a-fa5a95e40d5e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5ef1293c-b679-4bf8-851a-fa5a95e40d5e" (UID: "5ef1293c-b679-4bf8-851a-fa5a95e40d5e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 23:06:22 crc kubenswrapper[4793]: I0217 23:06:22.377917 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-thhdr_f5f7099c-f255-4171-b71b-9ad5864a4230/extract-content/0.log" Feb 17 23:06:22 crc kubenswrapper[4793]: I0217 23:06:22.380049 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ef1293c-b679-4bf8-851a-fa5a95e40d5e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 23:06:22 crc kubenswrapper[4793]: I0217 23:06:22.502964 4793 generic.go:334] "Generic (PLEG): container finished" podID="5ef1293c-b679-4bf8-851a-fa5a95e40d5e" containerID="7fb3433172834c00f413d683fd286ccd84f2e01e83be8586c0b535be58455935" exitCode=0 Feb 17 23:06:22 crc kubenswrapper[4793]: I0217 23:06:22.503001 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-whf97" event={"ID":"5ef1293c-b679-4bf8-851a-fa5a95e40d5e","Type":"ContainerDied","Data":"7fb3433172834c00f413d683fd286ccd84f2e01e83be8586c0b535be58455935"} Feb 17 23:06:22 crc kubenswrapper[4793]: I0217 23:06:22.503028 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-whf97" event={"ID":"5ef1293c-b679-4bf8-851a-fa5a95e40d5e","Type":"ContainerDied","Data":"d0e30f643360a00fe3865fda8b1ec575a759725cca13f1574bfdcdfe34ec75da"} Feb 17 23:06:22 crc kubenswrapper[4793]: I0217 23:06:22.503048 4793 scope.go:117] "RemoveContainer" 
containerID="7fb3433172834c00f413d683fd286ccd84f2e01e83be8586c0b535be58455935"
Feb 17 23:06:22 crc kubenswrapper[4793]: I0217 23:06:22.503115 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-whf97"
Feb 17 23:06:22 crc kubenswrapper[4793]: I0217 23:06:22.514256 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-thhdr_f5f7099c-f255-4171-b71b-9ad5864a4230/extract-utilities/0.log"
Feb 17 23:06:22 crc kubenswrapper[4793]: I0217 23:06:22.541246 4793 scope.go:117] "RemoveContainer" containerID="3b5bf012053e6e466c0d5e737b0717ed377cbfbcb2aaa8476dce66481c8fe534"
Feb 17 23:06:22 crc kubenswrapper[4793]: I0217 23:06:22.545040 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-whf97"]
Feb 17 23:06:22 crc kubenswrapper[4793]: I0217 23:06:22.556827 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-whf97"]
Feb 17 23:06:22 crc kubenswrapper[4793]: I0217 23:06:22.562476 4793 scope.go:117] "RemoveContainer" containerID="ad5374c9f0b3fdb3fcc03d7262bff280611ff096c78848e4019fb7d86cdc650f"
Feb 17 23:06:22 crc kubenswrapper[4793]: I0217 23:06:22.593439 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-thhdr_f5f7099c-f255-4171-b71b-9ad5864a4230/extract-content/0.log"
Feb 17 23:06:22 crc kubenswrapper[4793]: I0217 23:06:22.624577 4793 scope.go:117] "RemoveContainer" containerID="7fb3433172834c00f413d683fd286ccd84f2e01e83be8586c0b535be58455935"
Feb 17 23:06:22 crc kubenswrapper[4793]: E0217 23:06:22.625078 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fb3433172834c00f413d683fd286ccd84f2e01e83be8586c0b535be58455935\": container with ID starting with 7fb3433172834c00f413d683fd286ccd84f2e01e83be8586c0b535be58455935 not found: ID does not exist" containerID="7fb3433172834c00f413d683fd286ccd84f2e01e83be8586c0b535be58455935"
Feb 17 23:06:22 crc kubenswrapper[4793]: I0217 23:06:22.625120 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fb3433172834c00f413d683fd286ccd84f2e01e83be8586c0b535be58455935"} err="failed to get container status \"7fb3433172834c00f413d683fd286ccd84f2e01e83be8586c0b535be58455935\": rpc error: code = NotFound desc = could not find container \"7fb3433172834c00f413d683fd286ccd84f2e01e83be8586c0b535be58455935\": container with ID starting with 7fb3433172834c00f413d683fd286ccd84f2e01e83be8586c0b535be58455935 not found: ID does not exist"
Feb 17 23:06:22 crc kubenswrapper[4793]: I0217 23:06:22.625145 4793 scope.go:117] "RemoveContainer" containerID="3b5bf012053e6e466c0d5e737b0717ed377cbfbcb2aaa8476dce66481c8fe534"
Feb 17 23:06:22 crc kubenswrapper[4793]: E0217 23:06:22.625514 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b5bf012053e6e466c0d5e737b0717ed377cbfbcb2aaa8476dce66481c8fe534\": container with ID starting with 3b5bf012053e6e466c0d5e737b0717ed377cbfbcb2aaa8476dce66481c8fe534 not found: ID does not exist" containerID="3b5bf012053e6e466c0d5e737b0717ed377cbfbcb2aaa8476dce66481c8fe534"
Feb 17 23:06:22 crc kubenswrapper[4793]: I0217 23:06:22.625543 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b5bf012053e6e466c0d5e737b0717ed377cbfbcb2aaa8476dce66481c8fe534"} err="failed to get container status \"3b5bf012053e6e466c0d5e737b0717ed377cbfbcb2aaa8476dce66481c8fe534\": rpc error: code = NotFound desc = could not find container \"3b5bf012053e6e466c0d5e737b0717ed377cbfbcb2aaa8476dce66481c8fe534\": container with ID starting with 3b5bf012053e6e466c0d5e737b0717ed377cbfbcb2aaa8476dce66481c8fe534 not found: ID does not exist"
Feb 17 23:06:22 crc kubenswrapper[4793]: I0217 23:06:22.625566 4793 scope.go:117] "RemoveContainer" containerID="ad5374c9f0b3fdb3fcc03d7262bff280611ff096c78848e4019fb7d86cdc650f"
Feb 17 23:06:22 crc kubenswrapper[4793]: E0217 23:06:22.626020 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad5374c9f0b3fdb3fcc03d7262bff280611ff096c78848e4019fb7d86cdc650f\": container with ID starting with ad5374c9f0b3fdb3fcc03d7262bff280611ff096c78848e4019fb7d86cdc650f not found: ID does not exist" containerID="ad5374c9f0b3fdb3fcc03d7262bff280611ff096c78848e4019fb7d86cdc650f"
Feb 17 23:06:22 crc kubenswrapper[4793]: I0217 23:06:22.626044 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad5374c9f0b3fdb3fcc03d7262bff280611ff096c78848e4019fb7d86cdc650f"} err="failed to get container status \"ad5374c9f0b3fdb3fcc03d7262bff280611ff096c78848e4019fb7d86cdc650f\": rpc error: code = NotFound desc = could not find container \"ad5374c9f0b3fdb3fcc03d7262bff280611ff096c78848e4019fb7d86cdc650f\": container with ID starting with ad5374c9f0b3fdb3fcc03d7262bff280611ff096c78848e4019fb7d86cdc650f not found: ID does not exist"
Feb 17 23:06:22 crc kubenswrapper[4793]: I0217 23:06:22.652271 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-flnm7_c15a296a-5abb-4392-b18b-b63e0a67a8c7/extract-utilities/0.log"
Feb 17 23:06:22 crc kubenswrapper[4793]: I0217 23:06:22.949566 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-flnm7_c15a296a-5abb-4392-b18b-b63e0a67a8c7/extract-utilities/0.log"
Feb 17 23:06:22 crc kubenswrapper[4793]: I0217 23:06:22.957832 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-flnm7_c15a296a-5abb-4392-b18b-b63e0a67a8c7/extract-content/0.log"
Feb 17 23:06:22 crc kubenswrapper[4793]: I0217 23:06:22.982115 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-flnm7_c15a296a-5abb-4392-b18b-b63e0a67a8c7/extract-content/0.log"
Feb 17 23:06:22 crc kubenswrapper[4793]: I0217 23:06:22.984794 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-thhdr_f5f7099c-f255-4171-b71b-9ad5864a4230/registry-server/0.log"
Feb 17 23:06:23 crc kubenswrapper[4793]: I0217 23:06:23.104151 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-flnm7_c15a296a-5abb-4392-b18b-b63e0a67a8c7/extract-content/0.log"
Feb 17 23:06:23 crc kubenswrapper[4793]: I0217 23:06:23.168161 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-flnm7_c15a296a-5abb-4392-b18b-b63e0a67a8c7/extract-utilities/0.log"
Feb 17 23:06:23 crc kubenswrapper[4793]: I0217 23:06:23.553379 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ef1293c-b679-4bf8-851a-fa5a95e40d5e" path="/var/lib/kubelet/pods/5ef1293c-b679-4bf8-851a-fa5a95e40d5e/volumes"
Feb 17 23:06:24 crc kubenswrapper[4793]: I0217 23:06:24.218180 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-flnm7_c15a296a-5abb-4392-b18b-b63e0a67a8c7/registry-server/0.log"
Feb 17 23:06:24 crc kubenswrapper[4793]: I0217 23:06:24.540918 4793 scope.go:117] "RemoveContainer" containerID="9819a56ecac46a2a633366dfd6640f93cfad701dadeca2da9df3c2e2b331a094"
Feb 17 23:06:24 crc kubenswrapper[4793]: E0217 23:06:24.541218 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 23:06:26 crc kubenswrapper[4793]: I0217 23:06:26.539321 4793 scope.go:117] "RemoveContainer" containerID="5ea010f6c87e97e2c1ae3eea129cd7fb8ee6f4c832f3d99af7d3bcf29b729eea"
Feb 17 23:06:26 crc kubenswrapper[4793]: E0217 23:06:26.540308 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 23:06:38 crc kubenswrapper[4793]: I0217 23:06:38.539390 4793 scope.go:117] "RemoveContainer" containerID="9819a56ecac46a2a633366dfd6640f93cfad701dadeca2da9df3c2e2b331a094"
Feb 17 23:06:38 crc kubenswrapper[4793]: E0217 23:06:38.539994 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 23:06:40 crc kubenswrapper[4793]: I0217 23:06:40.054162 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-stbxd_ee3dbdac-0635-42f6-909e-3ff0ca3f48f7/prometheus-operator/0.log"
Feb 17 23:06:40 crc kubenswrapper[4793]: I0217 23:06:40.089653 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5c849c4c7f-l8dpt_ae7faf0a-a2b4-431b-8a63-4b890b5f5c73/prometheus-operator-admission-webhook/0.log"
Feb 17 23:06:40 crc kubenswrapper[4793]: I0217 23:06:40.117019 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5c849c4c7f-sd2r4_1321bdf1-43e6-45ce-8d74-332b6d81c908/prometheus-operator-admission-webhook/0.log"
Feb 17 23:06:40 crc kubenswrapper[4793]: I0217 23:06:40.242156 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-zbtjj_a2358cc0-c59d-4feb-9682-b6dbfc729cd8/operator/0.log"
Feb 17 23:06:40 crc kubenswrapper[4793]: I0217 23:06:40.267091 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-jqgbz_b19f5d08-b87f-4168-b29b-b28619987367/perses-operator/0.log"
Feb 17 23:06:41 crc kubenswrapper[4793]: I0217 23:06:41.539222 4793 scope.go:117] "RemoveContainer" containerID="5ea010f6c87e97e2c1ae3eea129cd7fb8ee6f4c832f3d99af7d3bcf29b729eea"
Feb 17 23:06:41 crc kubenswrapper[4793]: E0217 23:06:41.539896 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 23:06:52 crc kubenswrapper[4793]: I0217 23:06:52.538029 4793 scope.go:117] "RemoveContainer" containerID="9819a56ecac46a2a633366dfd6640f93cfad701dadeca2da9df3c2e2b331a094"
Feb 17 23:06:52 crc kubenswrapper[4793]: E0217 23:06:52.538818 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 23:06:54 crc kubenswrapper[4793]: I0217 23:06:54.539583 4793 scope.go:117] "RemoveContainer" containerID="5ea010f6c87e97e2c1ae3eea129cd7fb8ee6f4c832f3d99af7d3bcf29b729eea"
Feb 17 23:06:54 crc kubenswrapper[4793]: E0217 23:06:54.540113 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 23:07:03 crc kubenswrapper[4793]: I0217 23:07:03.541552 4793 scope.go:117] "RemoveContainer" containerID="9819a56ecac46a2a633366dfd6640f93cfad701dadeca2da9df3c2e2b331a094"
Feb 17 23:07:03 crc kubenswrapper[4793]: E0217 23:07:03.542477 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 23:07:09 crc kubenswrapper[4793]: I0217 23:07:09.539815 4793 scope.go:117] "RemoveContainer" containerID="5ea010f6c87e97e2c1ae3eea129cd7fb8ee6f4c832f3d99af7d3bcf29b729eea"
Feb 17 23:07:09 crc kubenswrapper[4793]: E0217 23:07:09.540784 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 23:07:17 crc kubenswrapper[4793]: I0217 23:07:17.539661 4793 scope.go:117] "RemoveContainer" containerID="9819a56ecac46a2a633366dfd6640f93cfad701dadeca2da9df3c2e2b331a094"
Feb 17 23:07:17 crc kubenswrapper[4793]: E0217 23:07:17.540601 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 23:07:23 crc kubenswrapper[4793]: I0217 23:07:23.539736 4793 scope.go:117] "RemoveContainer" containerID="5ea010f6c87e97e2c1ae3eea129cd7fb8ee6f4c832f3d99af7d3bcf29b729eea"
Feb 17 23:07:23 crc kubenswrapper[4793]: E0217 23:07:23.541999 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 23:07:30 crc kubenswrapper[4793]: I0217 23:07:30.538764 4793 scope.go:117] "RemoveContainer" containerID="9819a56ecac46a2a633366dfd6640f93cfad701dadeca2da9df3c2e2b331a094"
Feb 17 23:07:30 crc kubenswrapper[4793]: E0217 23:07:30.539815 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 23:07:34 crc kubenswrapper[4793]: I0217 23:07:34.538493 4793 scope.go:117] "RemoveContainer" containerID="5ea010f6c87e97e2c1ae3eea129cd7fb8ee6f4c832f3d99af7d3bcf29b729eea"
Feb 17 23:07:34 crc kubenswrapper[4793]: E0217 23:07:34.539263 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 23:07:42 crc kubenswrapper[4793]: I0217 23:07:42.538672 4793 scope.go:117] "RemoveContainer" containerID="9819a56ecac46a2a633366dfd6640f93cfad701dadeca2da9df3c2e2b331a094"
Feb 17 23:07:42 crc kubenswrapper[4793]: E0217 23:07:42.539559 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 23:07:46 crc kubenswrapper[4793]: I0217 23:07:46.539539 4793 scope.go:117] "RemoveContainer" containerID="5ea010f6c87e97e2c1ae3eea129cd7fb8ee6f4c832f3d99af7d3bcf29b729eea"
Feb 17 23:07:46 crc kubenswrapper[4793]: E0217 23:07:46.540424 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 23:07:54 crc kubenswrapper[4793]: I0217 23:07:54.538760 4793 scope.go:117] "RemoveContainer" containerID="9819a56ecac46a2a633366dfd6640f93cfad701dadeca2da9df3c2e2b331a094"
Feb 17 23:07:54 crc kubenswrapper[4793]: E0217 23:07:54.540219 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 23:08:00 crc kubenswrapper[4793]: I0217 23:08:00.539564 4793 scope.go:117] "RemoveContainer" containerID="5ea010f6c87e97e2c1ae3eea129cd7fb8ee6f4c832f3d99af7d3bcf29b729eea"
Feb 17 23:08:00 crc kubenswrapper[4793]: E0217 23:08:00.542898 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 23:08:08 crc kubenswrapper[4793]: I0217 23:08:08.540732 4793 scope.go:117] "RemoveContainer" containerID="9819a56ecac46a2a633366dfd6640f93cfad701dadeca2da9df3c2e2b331a094"
Feb 17 23:08:08 crc kubenswrapper[4793]: E0217 23:08:08.541899 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 23:08:14 crc kubenswrapper[4793]: I0217 23:08:14.540386 4793 scope.go:117] "RemoveContainer" containerID="5ea010f6c87e97e2c1ae3eea129cd7fb8ee6f4c832f3d99af7d3bcf29b729eea"
Feb 17 23:08:14 crc kubenswrapper[4793]: E0217 23:08:14.541352 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwtf_openshift-machine-config-operator(7a786034-a3c6-4693-965a-3bd39bce6caa)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa"
Feb 17 23:08:22 crc kubenswrapper[4793]: I0217 23:08:22.538446 4793 scope.go:117] "RemoveContainer" containerID="9819a56ecac46a2a633366dfd6640f93cfad701dadeca2da9df3c2e2b331a094"
Feb 17 23:08:22 crc kubenswrapper[4793]: E0217 23:08:22.539408 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 23:08:25 crc kubenswrapper[4793]: I0217 23:08:25.550872 4793 scope.go:117] "RemoveContainer" containerID="5ea010f6c87e97e2c1ae3eea129cd7fb8ee6f4c832f3d99af7d3bcf29b729eea"
Feb 17 23:08:26 crc kubenswrapper[4793]: I0217 23:08:26.944818 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerStarted","Data":"b39a403d2f16ad3d40f0d968ed076d2f967c5fe3da689fd3dd7aaf30f585d56f"}
Feb 17 23:08:36 crc kubenswrapper[4793]: I0217 23:08:36.539361 4793 scope.go:117] "RemoveContainer" containerID="9819a56ecac46a2a633366dfd6640f93cfad701dadeca2da9df3c2e2b331a094"
Feb 17 23:08:37 crc kubenswrapper[4793]: I0217 23:08:37.110869 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerStarted","Data":"3e84f61806af773afbf68bf10d280d13b0a640359eaac6bf8920919166c5781f"}
Feb 17 23:08:40 crc kubenswrapper[4793]: I0217 23:08:40.168274 4793 generic.go:334] "Generic (PLEG): container finished" podID="02d26164-0fa4-4020-9224-b7760a490987" containerID="3e84f61806af773afbf68bf10d280d13b0a640359eaac6bf8920919166c5781f" exitCode=1
Feb 17 23:08:40 crc kubenswrapper[4793]: I0217 23:08:40.168384 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerDied","Data":"3e84f61806af773afbf68bf10d280d13b0a640359eaac6bf8920919166c5781f"}
Feb 17 23:08:40 crc kubenswrapper[4793]: I0217 23:08:40.168575 4793 scope.go:117] "RemoveContainer" containerID="9819a56ecac46a2a633366dfd6640f93cfad701dadeca2da9df3c2e2b331a094"
Feb 17 23:08:40 crc kubenswrapper[4793]: I0217 23:08:40.169516 4793 scope.go:117] "RemoveContainer" containerID="3e84f61806af773afbf68bf10d280d13b0a640359eaac6bf8920919166c5781f"
Feb 17 23:08:40 crc kubenswrapper[4793]: E0217 23:08:40.170127 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 23:08:40 crc kubenswrapper[4793]: I0217 23:08:40.596276 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0"
Feb 17 23:08:41 crc kubenswrapper[4793]: I0217 23:08:41.184274 4793 scope.go:117] "RemoveContainer" containerID="3e84f61806af773afbf68bf10d280d13b0a640359eaac6bf8920919166c5781f"
Feb 17 23:08:41 crc kubenswrapper[4793]: E0217 23:08:41.184743 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 23:08:43 crc kubenswrapper[4793]: I0217 23:08:43.209535 4793 generic.go:334] "Generic (PLEG): container finished" podID="a50b28bc-a207-47df-9c98-e0552834dd8d" containerID="9436ca5910d6439b47060339b64cdc427709a079e521bb9fb3f67e14855f1db3" exitCode=0
Feb 17 23:08:43 crc kubenswrapper[4793]: I0217 23:08:43.209718 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fhhnm/must-gather-kvljh" event={"ID":"a50b28bc-a207-47df-9c98-e0552834dd8d","Type":"ContainerDied","Data":"9436ca5910d6439b47060339b64cdc427709a079e521bb9fb3f67e14855f1db3"}
Feb 17 23:08:43 crc kubenswrapper[4793]: I0217 23:08:43.210942 4793 scope.go:117] "RemoveContainer" containerID="9436ca5910d6439b47060339b64cdc427709a079e521bb9fb3f67e14855f1db3"
Feb 17 23:08:43 crc kubenswrapper[4793]: I0217 23:08:43.952587 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-fhhnm_must-gather-kvljh_a50b28bc-a207-47df-9c98-e0552834dd8d/gather/0.log"
Feb 17 23:08:45 crc kubenswrapper[4793]: I0217 23:08:45.596453 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-applier-0"
Feb 17 23:08:45 crc kubenswrapper[4793]: I0217 23:08:45.596943 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0"
Feb 17 23:08:45 crc kubenswrapper[4793]: I0217 23:08:45.596956 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0"
Feb 17 23:08:45 crc kubenswrapper[4793]: I0217 23:08:45.597746 4793 scope.go:117] "RemoveContainer" containerID="3e84f61806af773afbf68bf10d280d13b0a640359eaac6bf8920919166c5781f"
Feb 17 23:08:45 crc kubenswrapper[4793]: E0217 23:08:45.598020 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 23:08:52 crc kubenswrapper[4793]: I0217 23:08:52.874255 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-fhhnm/must-gather-kvljh"]
Feb 17 23:08:52 crc kubenswrapper[4793]: I0217 23:08:52.875200 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-fhhnm/must-gather-kvljh" podUID="a50b28bc-a207-47df-9c98-e0552834dd8d" containerName="copy" containerID="cri-o://e8f18f2ba8438d993c9084b6ac8c60c83cc99cac35f9e25e1828a4276b35c6f1" gracePeriod=2
Feb 17 23:08:52 crc kubenswrapper[4793]: I0217 23:08:52.887457 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-fhhnm/must-gather-kvljh"]
Feb 17 23:08:53 crc kubenswrapper[4793]: I0217 23:08:53.354760 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-fhhnm_must-gather-kvljh_a50b28bc-a207-47df-9c98-e0552834dd8d/copy/0.log"
Feb 17 23:08:53 crc kubenswrapper[4793]: I0217 23:08:53.355642 4793 generic.go:334] "Generic (PLEG): container finished" podID="a50b28bc-a207-47df-9c98-e0552834dd8d" containerID="e8f18f2ba8438d993c9084b6ac8c60c83cc99cac35f9e25e1828a4276b35c6f1" exitCode=143
Feb 17 23:08:53 crc kubenswrapper[4793]: I0217 23:08:53.455340 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-fhhnm_must-gather-kvljh_a50b28bc-a207-47df-9c98-e0552834dd8d/copy/0.log"
Feb 17 23:08:53 crc kubenswrapper[4793]: I0217 23:08:53.455708 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fhhnm/must-gather-kvljh"
Feb 17 23:08:53 crc kubenswrapper[4793]: I0217 23:08:53.536338 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a50b28bc-a207-47df-9c98-e0552834dd8d-must-gather-output\") pod \"a50b28bc-a207-47df-9c98-e0552834dd8d\" (UID: \"a50b28bc-a207-47df-9c98-e0552834dd8d\") "
Feb 17 23:08:53 crc kubenswrapper[4793]: I0217 23:08:53.536915 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8g5j\" (UniqueName: \"kubernetes.io/projected/a50b28bc-a207-47df-9c98-e0552834dd8d-kube-api-access-g8g5j\") pod \"a50b28bc-a207-47df-9c98-e0552834dd8d\" (UID: \"a50b28bc-a207-47df-9c98-e0552834dd8d\") "
Feb 17 23:08:53 crc kubenswrapper[4793]: I0217 23:08:53.547895 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a50b28bc-a207-47df-9c98-e0552834dd8d-kube-api-access-g8g5j" (OuterVolumeSpecName: "kube-api-access-g8g5j") pod "a50b28bc-a207-47df-9c98-e0552834dd8d" (UID: "a50b28bc-a207-47df-9c98-e0552834dd8d"). InnerVolumeSpecName "kube-api-access-g8g5j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 23:08:53 crc kubenswrapper[4793]: I0217 23:08:53.646001 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8g5j\" (UniqueName: \"kubernetes.io/projected/a50b28bc-a207-47df-9c98-e0552834dd8d-kube-api-access-g8g5j\") on node \"crc\" DevicePath \"\""
Feb 17 23:08:53 crc kubenswrapper[4793]: I0217 23:08:53.796125 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a50b28bc-a207-47df-9c98-e0552834dd8d-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "a50b28bc-a207-47df-9c98-e0552834dd8d" (UID: "a50b28bc-a207-47df-9c98-e0552834dd8d"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 23:08:53 crc kubenswrapper[4793]: I0217 23:08:53.849795 4793 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a50b28bc-a207-47df-9c98-e0552834dd8d-must-gather-output\") on node \"crc\" DevicePath \"\""
Feb 17 23:08:54 crc kubenswrapper[4793]: I0217 23:08:54.366402 4793 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-fhhnm_must-gather-kvljh_a50b28bc-a207-47df-9c98-e0552834dd8d/copy/0.log"
Feb 17 23:08:54 crc kubenswrapper[4793]: I0217 23:08:54.366988 4793 scope.go:117] "RemoveContainer" containerID="e8f18f2ba8438d993c9084b6ac8c60c83cc99cac35f9e25e1828a4276b35c6f1"
Feb 17 23:08:54 crc kubenswrapper[4793]: I0217 23:08:54.367055 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fhhnm/must-gather-kvljh"
Feb 17 23:08:54 crc kubenswrapper[4793]: I0217 23:08:54.384517 4793 scope.go:117] "RemoveContainer" containerID="9436ca5910d6439b47060339b64cdc427709a079e521bb9fb3f67e14855f1db3"
Feb 17 23:08:55 crc kubenswrapper[4793]: I0217 23:08:55.562177 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a50b28bc-a207-47df-9c98-e0552834dd8d" path="/var/lib/kubelet/pods/a50b28bc-a207-47df-9c98-e0552834dd8d/volumes"
Feb 17 23:08:57 crc kubenswrapper[4793]: I0217 23:08:57.539041 4793 scope.go:117] "RemoveContainer" containerID="3e84f61806af773afbf68bf10d280d13b0a640359eaac6bf8920919166c5781f"
Feb 17 23:08:57 crc kubenswrapper[4793]: E0217 23:08:57.539599 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 23:09:08 crc kubenswrapper[4793]: I0217 23:09:08.540043 4793 scope.go:117] "RemoveContainer" containerID="3e84f61806af773afbf68bf10d280d13b0a640359eaac6bf8920919166c5781f"
Feb 17 23:09:08 crc kubenswrapper[4793]: E0217 23:09:08.541099 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 23:09:23 crc kubenswrapper[4793]: I0217 23:09:23.539587 4793 scope.go:117] "RemoveContainer" containerID="3e84f61806af773afbf68bf10d280d13b0a640359eaac6bf8920919166c5781f"
Feb 17 23:09:23 crc kubenswrapper[4793]: E0217 23:09:23.540800 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 23:09:28 crc kubenswrapper[4793]: I0217 23:09:28.497344 4793 scope.go:117] "RemoveContainer" containerID="0af078befb9201fd8e9a6c218b1f5ea85be8044610c6a17d6d9bf9da1f6a9868"
Feb 17 23:09:36 crc kubenswrapper[4793]: I0217 23:09:36.539406 4793 scope.go:117] "RemoveContainer" containerID="3e84f61806af773afbf68bf10d280d13b0a640359eaac6bf8920919166c5781f"
Feb 17 23:09:36 crc kubenswrapper[4793]: E0217 23:09:36.540272 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 23:09:48 crc kubenswrapper[4793]: I0217 23:09:48.540247 4793 scope.go:117] "RemoveContainer" containerID="3e84f61806af773afbf68bf10d280d13b0a640359eaac6bf8920919166c5781f"
Feb 17 23:09:48 crc kubenswrapper[4793]: E0217 23:09:48.543228 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 23:09:48 crc kubenswrapper[4793]: I0217 23:09:48.782602 4793 trace.go:236] Trace[2070534296]: "Calculate volume metrics of prometheus-metric-storage-db for pod openstack/prometheus-metric-storage-0" (17-Feb-2026 23:09:47.684) (total time: 1094ms):
Feb 17 23:09:48 crc kubenswrapper[4793]: Trace[2070534296]: [1.09484069s] [1.09484069s] END
Feb 17 23:09:54 crc kubenswrapper[4793]: I0217 23:09:54.344728 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r2g2h"]
Feb 17 23:09:54 crc kubenswrapper[4793]: E0217 23:09:54.346130 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ef1293c-b679-4bf8-851a-fa5a95e40d5e" containerName="extract-content"
Feb 17 23:09:54 crc kubenswrapper[4793]: I0217 23:09:54.346171 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ef1293c-b679-4bf8-851a-fa5a95e40d5e" containerName="extract-content"
Feb 17 23:09:54 crc kubenswrapper[4793]: E0217 23:09:54.346220 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a50b28bc-a207-47df-9c98-e0552834dd8d" containerName="gather"
Feb 17 23:09:54 crc kubenswrapper[4793]: I0217 23:09:54.346229 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="a50b28bc-a207-47df-9c98-e0552834dd8d" containerName="gather"
Feb 17 23:09:54 crc kubenswrapper[4793]: E0217 23:09:54.346245 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ef1293c-b679-4bf8-851a-fa5a95e40d5e" containerName="registry-server"
Feb 17 23:09:54 crc kubenswrapper[4793]: I0217 23:09:54.346253 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ef1293c-b679-4bf8-851a-fa5a95e40d5e" containerName="registry-server"
Feb 17 23:09:54 crc kubenswrapper[4793]: E0217 23:09:54.346272 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ef1293c-b679-4bf8-851a-fa5a95e40d5e" containerName="extract-utilities"
Feb 17 23:09:54 crc kubenswrapper[4793]: I0217 23:09:54.346280 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ef1293c-b679-4bf8-851a-fa5a95e40d5e" containerName="extract-utilities"
Feb 17 23:09:54 crc kubenswrapper[4793]: E0217 23:09:54.346295 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a50b28bc-a207-47df-9c98-e0552834dd8d" containerName="copy"
Feb 17 23:09:54 crc kubenswrapper[4793]: I0217 23:09:54.346304 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="a50b28bc-a207-47df-9c98-e0552834dd8d" containerName="copy"
Feb 17 23:09:54 crc kubenswrapper[4793]: I0217 23:09:54.346567 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ef1293c-b679-4bf8-851a-fa5a95e40d5e" containerName="registry-server"
Feb 17 23:09:54 crc kubenswrapper[4793]: I0217 23:09:54.346591 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="a50b28bc-a207-47df-9c98-e0552834dd8d" containerName="copy"
Feb 17 23:09:54 crc kubenswrapper[4793]: I0217 23:09:54.346605 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="a50b28bc-a207-47df-9c98-e0552834dd8d" containerName="gather"
Feb 17 23:09:54 crc kubenswrapper[4793]: I0217 23:09:54.348920 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r2g2h"
Feb 17 23:09:54 crc kubenswrapper[4793]: I0217 23:09:54.383160 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r2g2h"]
Feb 17 23:09:54 crc kubenswrapper[4793]: I0217 23:09:54.510185 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ace0d2a-b04e-4489-a762-625aafa9f3a9-utilities\") pod \"certified-operators-r2g2h\" (UID: \"9ace0d2a-b04e-4489-a762-625aafa9f3a9\") " pod="openshift-marketplace/certified-operators-r2g2h"
Feb 17 23:09:54 crc kubenswrapper[4793]: I0217 23:09:54.510464 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swzw2\" (UniqueName: \"kubernetes.io/projected/9ace0d2a-b04e-4489-a762-625aafa9f3a9-kube-api-access-swzw2\") pod \"certified-operators-r2g2h\" (UID: \"9ace0d2a-b04e-4489-a762-625aafa9f3a9\") " pod="openshift-marketplace/certified-operators-r2g2h"
Feb 17 23:09:54 crc kubenswrapper[4793]: I0217 23:09:54.510565 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ace0d2a-b04e-4489-a762-625aafa9f3a9-catalog-content\") pod \"certified-operators-r2g2h\" (UID: \"9ace0d2a-b04e-4489-a762-625aafa9f3a9\") " pod="openshift-marketplace/certified-operators-r2g2h"
Feb 17 23:09:54 crc kubenswrapper[4793]: I0217 23:09:54.613563 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ace0d2a-b04e-4489-a762-625aafa9f3a9-utilities\") pod \"certified-operators-r2g2h\" (UID: \"9ace0d2a-b04e-4489-a762-625aafa9f3a9\") " pod="openshift-marketplace/certified-operators-r2g2h"
Feb 17 23:09:54 crc kubenswrapper[4793]: I0217 23:09:54.613626 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swzw2\" (UniqueName: \"kubernetes.io/projected/9ace0d2a-b04e-4489-a762-625aafa9f3a9-kube-api-access-swzw2\") pod \"certified-operators-r2g2h\" (UID: \"9ace0d2a-b04e-4489-a762-625aafa9f3a9\") " pod="openshift-marketplace/certified-operators-r2g2h"
Feb 17 23:09:54 crc kubenswrapper[4793]: I0217 23:09:54.613668 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ace0d2a-b04e-4489-a762-625aafa9f3a9-catalog-content\") pod \"certified-operators-r2g2h\" (UID: \"9ace0d2a-b04e-4489-a762-625aafa9f3a9\") " pod="openshift-marketplace/certified-operators-r2g2h"
Feb 17 23:09:54 crc kubenswrapper[4793]: I0217 23:09:54.614069 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ace0d2a-b04e-4489-a762-625aafa9f3a9-utilities\") pod \"certified-operators-r2g2h\" (UID: \"9ace0d2a-b04e-4489-a762-625aafa9f3a9\") " pod="openshift-marketplace/certified-operators-r2g2h"
Feb 17 23:09:54 crc kubenswrapper[4793]: I0217 23:09:54.614230 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ace0d2a-b04e-4489-a762-625aafa9f3a9-catalog-content\") pod \"certified-operators-r2g2h\" (UID: \"9ace0d2a-b04e-4489-a762-625aafa9f3a9\") " pod="openshift-marketplace/certified-operators-r2g2h"
Feb 17 23:09:54 crc kubenswrapper[4793]: I0217 23:09:54.639157 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swzw2\" (UniqueName: \"kubernetes.io/projected/9ace0d2a-b04e-4489-a762-625aafa9f3a9-kube-api-access-swzw2\") pod \"certified-operators-r2g2h\" (UID: \"9ace0d2a-b04e-4489-a762-625aafa9f3a9\") " pod="openshift-marketplace/certified-operators-r2g2h"
Feb 17 23:09:54 crc kubenswrapper[4793]: I0217 23:09:54.671751 4793 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/certified-operators-r2g2h" Feb 17 23:09:55 crc kubenswrapper[4793]: I0217 23:09:55.321736 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r2g2h"] Feb 17 23:09:56 crc kubenswrapper[4793]: I0217 23:09:56.100447 4793 generic.go:334] "Generic (PLEG): container finished" podID="9ace0d2a-b04e-4489-a762-625aafa9f3a9" containerID="4c1ebc78fd9044c42ea8089a2ac55bab239f6143062d8b7e924f160429300b56" exitCode=0 Feb 17 23:09:56 crc kubenswrapper[4793]: I0217 23:09:56.100545 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r2g2h" event={"ID":"9ace0d2a-b04e-4489-a762-625aafa9f3a9","Type":"ContainerDied","Data":"4c1ebc78fd9044c42ea8089a2ac55bab239f6143062d8b7e924f160429300b56"} Feb 17 23:09:56 crc kubenswrapper[4793]: I0217 23:09:56.100820 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r2g2h" event={"ID":"9ace0d2a-b04e-4489-a762-625aafa9f3a9","Type":"ContainerStarted","Data":"604307c7714d960fe1080717915982cf67c60b9dd95b0a626e3a0f01f6e8c0d3"} Feb 17 23:09:56 crc kubenswrapper[4793]: I0217 23:09:56.106132 4793 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 23:09:57 crc kubenswrapper[4793]: I0217 23:09:57.118142 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r2g2h" event={"ID":"9ace0d2a-b04e-4489-a762-625aafa9f3a9","Type":"ContainerStarted","Data":"4947f22b0b1daba562e4c7823f7aeaec0e2bcfed3c30a5fec8582b18187d8718"} Feb 17 23:09:58 crc kubenswrapper[4793]: I0217 23:09:58.129224 4793 generic.go:334] "Generic (PLEG): container finished" podID="9ace0d2a-b04e-4489-a762-625aafa9f3a9" containerID="4947f22b0b1daba562e4c7823f7aeaec0e2bcfed3c30a5fec8582b18187d8718" exitCode=0 Feb 17 23:09:58 crc kubenswrapper[4793]: I0217 23:09:58.129331 4793 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-r2g2h" event={"ID":"9ace0d2a-b04e-4489-a762-625aafa9f3a9","Type":"ContainerDied","Data":"4947f22b0b1daba562e4c7823f7aeaec0e2bcfed3c30a5fec8582b18187d8718"} Feb 17 23:09:59 crc kubenswrapper[4793]: I0217 23:09:59.141516 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r2g2h" event={"ID":"9ace0d2a-b04e-4489-a762-625aafa9f3a9","Type":"ContainerStarted","Data":"43e57801e6fb84f23e8e043ba2ea4eb6462d11dd51f24d36f906d6b0d292dc87"} Feb 17 23:09:59 crc kubenswrapper[4793]: I0217 23:09:59.170505 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r2g2h" podStartSLOduration=2.7708778069999997 podStartE2EDuration="5.170485188s" podCreationTimestamp="2026-02-17 23:09:54 +0000 UTC" firstStartedPulling="2026-02-17 23:09:56.105393273 +0000 UTC m=+10871.397091614" lastFinishedPulling="2026-02-17 23:09:58.505000694 +0000 UTC m=+10873.796698995" observedRunningTime="2026-02-17 23:09:59.165575227 +0000 UTC m=+10874.457273568" watchObservedRunningTime="2026-02-17 23:09:59.170485188 +0000 UTC m=+10874.462183509" Feb 17 23:10:00 crc kubenswrapper[4793]: I0217 23:10:00.539807 4793 scope.go:117] "RemoveContainer" containerID="3e84f61806af773afbf68bf10d280d13b0a640359eaac6bf8920919166c5781f" Feb 17 23:10:00 crc kubenswrapper[4793]: E0217 23:10:00.540573 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 23:10:04 crc kubenswrapper[4793]: I0217 23:10:04.672749 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-r2g2h" Feb 17 23:10:04 crc 
kubenswrapper[4793]: I0217 23:10:04.673272 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r2g2h" Feb 17 23:10:04 crc kubenswrapper[4793]: I0217 23:10:04.740254 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r2g2h" Feb 17 23:10:05 crc kubenswrapper[4793]: I0217 23:10:05.299187 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r2g2h" Feb 17 23:10:05 crc kubenswrapper[4793]: I0217 23:10:05.379752 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r2g2h"] Feb 17 23:10:07 crc kubenswrapper[4793]: I0217 23:10:07.236175 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-r2g2h" podUID="9ace0d2a-b04e-4489-a762-625aafa9f3a9" containerName="registry-server" containerID="cri-o://43e57801e6fb84f23e8e043ba2ea4eb6462d11dd51f24d36f906d6b0d292dc87" gracePeriod=2 Feb 17 23:10:07 crc kubenswrapper[4793]: I0217 23:10:07.701420 4793 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r2g2h" Feb 17 23:10:07 crc kubenswrapper[4793]: I0217 23:10:07.783606 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ace0d2a-b04e-4489-a762-625aafa9f3a9-utilities\") pod \"9ace0d2a-b04e-4489-a762-625aafa9f3a9\" (UID: \"9ace0d2a-b04e-4489-a762-625aafa9f3a9\") " Feb 17 23:10:07 crc kubenswrapper[4793]: I0217 23:10:07.783817 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ace0d2a-b04e-4489-a762-625aafa9f3a9-catalog-content\") pod \"9ace0d2a-b04e-4489-a762-625aafa9f3a9\" (UID: \"9ace0d2a-b04e-4489-a762-625aafa9f3a9\") " Feb 17 23:10:07 crc kubenswrapper[4793]: I0217 23:10:07.783850 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swzw2\" (UniqueName: \"kubernetes.io/projected/9ace0d2a-b04e-4489-a762-625aafa9f3a9-kube-api-access-swzw2\") pod \"9ace0d2a-b04e-4489-a762-625aafa9f3a9\" (UID: \"9ace0d2a-b04e-4489-a762-625aafa9f3a9\") " Feb 17 23:10:07 crc kubenswrapper[4793]: I0217 23:10:07.786334 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ace0d2a-b04e-4489-a762-625aafa9f3a9-utilities" (OuterVolumeSpecName: "utilities") pod "9ace0d2a-b04e-4489-a762-625aafa9f3a9" (UID: "9ace0d2a-b04e-4489-a762-625aafa9f3a9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 23:10:07 crc kubenswrapper[4793]: I0217 23:10:07.800020 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ace0d2a-b04e-4489-a762-625aafa9f3a9-kube-api-access-swzw2" (OuterVolumeSpecName: "kube-api-access-swzw2") pod "9ace0d2a-b04e-4489-a762-625aafa9f3a9" (UID: "9ace0d2a-b04e-4489-a762-625aafa9f3a9"). InnerVolumeSpecName "kube-api-access-swzw2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 23:10:07 crc kubenswrapper[4793]: I0217 23:10:07.878800 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ace0d2a-b04e-4489-a762-625aafa9f3a9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9ace0d2a-b04e-4489-a762-625aafa9f3a9" (UID: "9ace0d2a-b04e-4489-a762-625aafa9f3a9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 23:10:07 crc kubenswrapper[4793]: I0217 23:10:07.886397 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ace0d2a-b04e-4489-a762-625aafa9f3a9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 23:10:07 crc kubenswrapper[4793]: I0217 23:10:07.886429 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swzw2\" (UniqueName: \"kubernetes.io/projected/9ace0d2a-b04e-4489-a762-625aafa9f3a9-kube-api-access-swzw2\") on node \"crc\" DevicePath \"\"" Feb 17 23:10:07 crc kubenswrapper[4793]: I0217 23:10:07.886440 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ace0d2a-b04e-4489-a762-625aafa9f3a9-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 23:10:08 crc kubenswrapper[4793]: I0217 23:10:08.252320 4793 generic.go:334] "Generic (PLEG): container finished" podID="9ace0d2a-b04e-4489-a762-625aafa9f3a9" containerID="43e57801e6fb84f23e8e043ba2ea4eb6462d11dd51f24d36f906d6b0d292dc87" exitCode=0 Feb 17 23:10:08 crc kubenswrapper[4793]: I0217 23:10:08.252395 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r2g2h" event={"ID":"9ace0d2a-b04e-4489-a762-625aafa9f3a9","Type":"ContainerDied","Data":"43e57801e6fb84f23e8e043ba2ea4eb6462d11dd51f24d36f906d6b0d292dc87"} Feb 17 23:10:08 crc kubenswrapper[4793]: I0217 23:10:08.252437 4793 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r2g2h" Feb 17 23:10:08 crc kubenswrapper[4793]: I0217 23:10:08.253824 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r2g2h" event={"ID":"9ace0d2a-b04e-4489-a762-625aafa9f3a9","Type":"ContainerDied","Data":"604307c7714d960fe1080717915982cf67c60b9dd95b0a626e3a0f01f6e8c0d3"} Feb 17 23:10:08 crc kubenswrapper[4793]: I0217 23:10:08.253855 4793 scope.go:117] "RemoveContainer" containerID="43e57801e6fb84f23e8e043ba2ea4eb6462d11dd51f24d36f906d6b0d292dc87" Feb 17 23:10:08 crc kubenswrapper[4793]: I0217 23:10:08.289524 4793 scope.go:117] "RemoveContainer" containerID="4947f22b0b1daba562e4c7823f7aeaec0e2bcfed3c30a5fec8582b18187d8718" Feb 17 23:10:08 crc kubenswrapper[4793]: I0217 23:10:08.324376 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r2g2h"] Feb 17 23:10:08 crc kubenswrapper[4793]: I0217 23:10:08.340842 4793 scope.go:117] "RemoveContainer" containerID="4c1ebc78fd9044c42ea8089a2ac55bab239f6143062d8b7e924f160429300b56" Feb 17 23:10:08 crc kubenswrapper[4793]: I0217 23:10:08.346546 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-r2g2h"] Feb 17 23:10:08 crc kubenswrapper[4793]: I0217 23:10:08.393043 4793 scope.go:117] "RemoveContainer" containerID="43e57801e6fb84f23e8e043ba2ea4eb6462d11dd51f24d36f906d6b0d292dc87" Feb 17 23:10:08 crc kubenswrapper[4793]: E0217 23:10:08.393558 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43e57801e6fb84f23e8e043ba2ea4eb6462d11dd51f24d36f906d6b0d292dc87\": container with ID starting with 43e57801e6fb84f23e8e043ba2ea4eb6462d11dd51f24d36f906d6b0d292dc87 not found: ID does not exist" containerID="43e57801e6fb84f23e8e043ba2ea4eb6462d11dd51f24d36f906d6b0d292dc87" Feb 17 23:10:08 crc kubenswrapper[4793]: I0217 23:10:08.393650 
4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43e57801e6fb84f23e8e043ba2ea4eb6462d11dd51f24d36f906d6b0d292dc87"} err="failed to get container status \"43e57801e6fb84f23e8e043ba2ea4eb6462d11dd51f24d36f906d6b0d292dc87\": rpc error: code = NotFound desc = could not find container \"43e57801e6fb84f23e8e043ba2ea4eb6462d11dd51f24d36f906d6b0d292dc87\": container with ID starting with 43e57801e6fb84f23e8e043ba2ea4eb6462d11dd51f24d36f906d6b0d292dc87 not found: ID does not exist" Feb 17 23:10:08 crc kubenswrapper[4793]: I0217 23:10:08.393748 4793 scope.go:117] "RemoveContainer" containerID="4947f22b0b1daba562e4c7823f7aeaec0e2bcfed3c30a5fec8582b18187d8718" Feb 17 23:10:08 crc kubenswrapper[4793]: E0217 23:10:08.394183 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4947f22b0b1daba562e4c7823f7aeaec0e2bcfed3c30a5fec8582b18187d8718\": container with ID starting with 4947f22b0b1daba562e4c7823f7aeaec0e2bcfed3c30a5fec8582b18187d8718 not found: ID does not exist" containerID="4947f22b0b1daba562e4c7823f7aeaec0e2bcfed3c30a5fec8582b18187d8718" Feb 17 23:10:08 crc kubenswrapper[4793]: I0217 23:10:08.394238 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4947f22b0b1daba562e4c7823f7aeaec0e2bcfed3c30a5fec8582b18187d8718"} err="failed to get container status \"4947f22b0b1daba562e4c7823f7aeaec0e2bcfed3c30a5fec8582b18187d8718\": rpc error: code = NotFound desc = could not find container \"4947f22b0b1daba562e4c7823f7aeaec0e2bcfed3c30a5fec8582b18187d8718\": container with ID starting with 4947f22b0b1daba562e4c7823f7aeaec0e2bcfed3c30a5fec8582b18187d8718 not found: ID does not exist" Feb 17 23:10:08 crc kubenswrapper[4793]: I0217 23:10:08.394279 4793 scope.go:117] "RemoveContainer" containerID="4c1ebc78fd9044c42ea8089a2ac55bab239f6143062d8b7e924f160429300b56" Feb 17 23:10:08 crc kubenswrapper[4793]: E0217 
23:10:08.394564 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c1ebc78fd9044c42ea8089a2ac55bab239f6143062d8b7e924f160429300b56\": container with ID starting with 4c1ebc78fd9044c42ea8089a2ac55bab239f6143062d8b7e924f160429300b56 not found: ID does not exist" containerID="4c1ebc78fd9044c42ea8089a2ac55bab239f6143062d8b7e924f160429300b56" Feb 17 23:10:08 crc kubenswrapper[4793]: I0217 23:10:08.394615 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c1ebc78fd9044c42ea8089a2ac55bab239f6143062d8b7e924f160429300b56"} err="failed to get container status \"4c1ebc78fd9044c42ea8089a2ac55bab239f6143062d8b7e924f160429300b56\": rpc error: code = NotFound desc = could not find container \"4c1ebc78fd9044c42ea8089a2ac55bab239f6143062d8b7e924f160429300b56\": container with ID starting with 4c1ebc78fd9044c42ea8089a2ac55bab239f6143062d8b7e924f160429300b56 not found: ID does not exist" Feb 17 23:10:09 crc kubenswrapper[4793]: I0217 23:10:09.563875 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ace0d2a-b04e-4489-a762-625aafa9f3a9" path="/var/lib/kubelet/pods/9ace0d2a-b04e-4489-a762-625aafa9f3a9/volumes" Feb 17 23:10:15 crc kubenswrapper[4793]: I0217 23:10:15.548253 4793 scope.go:117] "RemoveContainer" containerID="3e84f61806af773afbf68bf10d280d13b0a640359eaac6bf8920919166c5781f" Feb 17 23:10:15 crc kubenswrapper[4793]: E0217 23:10:15.550368 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 23:10:29 crc kubenswrapper[4793]: I0217 23:10:29.539353 4793 scope.go:117] "RemoveContainer" 
containerID="3e84f61806af773afbf68bf10d280d13b0a640359eaac6bf8920919166c5781f" Feb 17 23:10:29 crc kubenswrapper[4793]: E0217 23:10:29.552376 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 23:10:44 crc kubenswrapper[4793]: I0217 23:10:44.538912 4793 scope.go:117] "RemoveContainer" containerID="3e84f61806af773afbf68bf10d280d13b0a640359eaac6bf8920919166c5781f" Feb 17 23:10:44 crc kubenswrapper[4793]: E0217 23:10:44.539744 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 23:10:50 crc kubenswrapper[4793]: I0217 23:10:50.102267 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 23:10:50 crc kubenswrapper[4793]: I0217 23:10:50.103801 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 23:10:55 crc kubenswrapper[4793]: I0217 23:10:55.547311 4793 scope.go:117] "RemoveContainer" 
containerID="3e84f61806af773afbf68bf10d280d13b0a640359eaac6bf8920919166c5781f" Feb 17 23:10:55 crc kubenswrapper[4793]: E0217 23:10:55.548133 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 23:11:07 crc kubenswrapper[4793]: I0217 23:11:07.538531 4793 scope.go:117] "RemoveContainer" containerID="3e84f61806af773afbf68bf10d280d13b0a640359eaac6bf8920919166c5781f" Feb 17 23:11:07 crc kubenswrapper[4793]: E0217 23:11:07.539436 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 23:11:18 crc kubenswrapper[4793]: I0217 23:11:18.539997 4793 scope.go:117] "RemoveContainer" containerID="3e84f61806af773afbf68bf10d280d13b0a640359eaac6bf8920919166c5781f" Feb 17 23:11:18 crc kubenswrapper[4793]: E0217 23:11:18.541101 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 23:11:20 crc kubenswrapper[4793]: I0217 23:11:20.101729 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Feb 17 23:11:20 crc kubenswrapper[4793]: I0217 23:11:20.102291 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 23:11:33 crc kubenswrapper[4793]: I0217 23:11:33.539349 4793 scope.go:117] "RemoveContainer" containerID="3e84f61806af773afbf68bf10d280d13b0a640359eaac6bf8920919166c5781f" Feb 17 23:11:33 crc kubenswrapper[4793]: E0217 23:11:33.540458 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987" Feb 17 23:11:45 crc kubenswrapper[4793]: I0217 23:11:45.234795 4793 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-z4shj"] Feb 17 23:11:45 crc kubenswrapper[4793]: E0217 23:11:45.242623 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ace0d2a-b04e-4489-a762-625aafa9f3a9" containerName="extract-utilities" Feb 17 23:11:45 crc kubenswrapper[4793]: I0217 23:11:45.242976 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ace0d2a-b04e-4489-a762-625aafa9f3a9" containerName="extract-utilities" Feb 17 23:11:45 crc kubenswrapper[4793]: E0217 23:11:45.243172 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ace0d2a-b04e-4489-a762-625aafa9f3a9" containerName="registry-server" Feb 17 23:11:45 crc kubenswrapper[4793]: I0217 23:11:45.243340 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ace0d2a-b04e-4489-a762-625aafa9f3a9" containerName="registry-server" Feb 17 
23:11:45 crc kubenswrapper[4793]: E0217 23:11:45.243560 4793 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ace0d2a-b04e-4489-a762-625aafa9f3a9" containerName="extract-content" Feb 17 23:11:45 crc kubenswrapper[4793]: I0217 23:11:45.243754 4793 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ace0d2a-b04e-4489-a762-625aafa9f3a9" containerName="extract-content" Feb 17 23:11:45 crc kubenswrapper[4793]: I0217 23:11:45.244449 4793 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ace0d2a-b04e-4489-a762-625aafa9f3a9" containerName="registry-server" Feb 17 23:11:45 crc kubenswrapper[4793]: I0217 23:11:45.248146 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z4shj" Feb 17 23:11:45 crc kubenswrapper[4793]: I0217 23:11:45.264296 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z4shj"] Feb 17 23:11:45 crc kubenswrapper[4793]: I0217 23:11:45.340083 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5c15a7f-2a4f-4188-b3bb-b47c6be2584f-catalog-content\") pod \"community-operators-z4shj\" (UID: \"f5c15a7f-2a4f-4188-b3bb-b47c6be2584f\") " pod="openshift-marketplace/community-operators-z4shj" Feb 17 23:11:45 crc kubenswrapper[4793]: I0217 23:11:45.340169 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5c15a7f-2a4f-4188-b3bb-b47c6be2584f-utilities\") pod \"community-operators-z4shj\" (UID: \"f5c15a7f-2a4f-4188-b3bb-b47c6be2584f\") " pod="openshift-marketplace/community-operators-z4shj" Feb 17 23:11:45 crc kubenswrapper[4793]: I0217 23:11:45.340469 4793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqdk5\" (UniqueName: 
\"kubernetes.io/projected/f5c15a7f-2a4f-4188-b3bb-b47c6be2584f-kube-api-access-qqdk5\") pod \"community-operators-z4shj\" (UID: \"f5c15a7f-2a4f-4188-b3bb-b47c6be2584f\") " pod="openshift-marketplace/community-operators-z4shj" Feb 17 23:11:45 crc kubenswrapper[4793]: I0217 23:11:45.445128 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqdk5\" (UniqueName: \"kubernetes.io/projected/f5c15a7f-2a4f-4188-b3bb-b47c6be2584f-kube-api-access-qqdk5\") pod \"community-operators-z4shj\" (UID: \"f5c15a7f-2a4f-4188-b3bb-b47c6be2584f\") " pod="openshift-marketplace/community-operators-z4shj" Feb 17 23:11:45 crc kubenswrapper[4793]: I0217 23:11:45.445805 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5c15a7f-2a4f-4188-b3bb-b47c6be2584f-catalog-content\") pod \"community-operators-z4shj\" (UID: \"f5c15a7f-2a4f-4188-b3bb-b47c6be2584f\") " pod="openshift-marketplace/community-operators-z4shj" Feb 17 23:11:45 crc kubenswrapper[4793]: I0217 23:11:45.446013 4793 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5c15a7f-2a4f-4188-b3bb-b47c6be2584f-utilities\") pod \"community-operators-z4shj\" (UID: \"f5c15a7f-2a4f-4188-b3bb-b47c6be2584f\") " pod="openshift-marketplace/community-operators-z4shj" Feb 17 23:11:45 crc kubenswrapper[4793]: I0217 23:11:45.446365 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5c15a7f-2a4f-4188-b3bb-b47c6be2584f-catalog-content\") pod \"community-operators-z4shj\" (UID: \"f5c15a7f-2a4f-4188-b3bb-b47c6be2584f\") " pod="openshift-marketplace/community-operators-z4shj" Feb 17 23:11:45 crc kubenswrapper[4793]: I0217 23:11:45.446573 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/f5c15a7f-2a4f-4188-b3bb-b47c6be2584f-utilities\") pod \"community-operators-z4shj\" (UID: \"f5c15a7f-2a4f-4188-b3bb-b47c6be2584f\") " pod="openshift-marketplace/community-operators-z4shj"
Feb 17 23:11:45 crc kubenswrapper[4793]: I0217 23:11:45.466415 4793 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqdk5\" (UniqueName: \"kubernetes.io/projected/f5c15a7f-2a4f-4188-b3bb-b47c6be2584f-kube-api-access-qqdk5\") pod \"community-operators-z4shj\" (UID: \"f5c15a7f-2a4f-4188-b3bb-b47c6be2584f\") " pod="openshift-marketplace/community-operators-z4shj"
Feb 17 23:11:45 crc kubenswrapper[4793]: I0217 23:11:45.553465 4793 scope.go:117] "RemoveContainer" containerID="3e84f61806af773afbf68bf10d280d13b0a640359eaac6bf8920919166c5781f"
Feb 17 23:11:45 crc kubenswrapper[4793]: E0217 23:11:45.554067 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 23:11:45 crc kubenswrapper[4793]: I0217 23:11:45.583166 4793 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z4shj"
Feb 17 23:11:46 crc kubenswrapper[4793]: I0217 23:11:46.157387 4793 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z4shj"]
Feb 17 23:11:46 crc kubenswrapper[4793]: I0217 23:11:46.444321 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z4shj" event={"ID":"f5c15a7f-2a4f-4188-b3bb-b47c6be2584f","Type":"ContainerStarted","Data":"816bced3895c4b799cad67d94f4b351809be2f733e8fdeb60eefd10a6b3a4b81"}
Feb 17 23:11:47 crc kubenswrapper[4793]: I0217 23:11:47.458746 4793 generic.go:334] "Generic (PLEG): container finished" podID="f5c15a7f-2a4f-4188-b3bb-b47c6be2584f" containerID="7df97bb31a2a712dedb121e2f95ddd2b01f6700d72a46599169e14090607e2d6" exitCode=0
Feb 17 23:11:47 crc kubenswrapper[4793]: I0217 23:11:47.458894 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z4shj" event={"ID":"f5c15a7f-2a4f-4188-b3bb-b47c6be2584f","Type":"ContainerDied","Data":"7df97bb31a2a712dedb121e2f95ddd2b01f6700d72a46599169e14090607e2d6"}
Feb 17 23:11:49 crc kubenswrapper[4793]: I0217 23:11:49.493358 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z4shj" event={"ID":"f5c15a7f-2a4f-4188-b3bb-b47c6be2584f","Type":"ContainerStarted","Data":"d02bae233f505c9741e2d6b7aed199370c6eeba63625f7d15bba96697a82cf5f"}
Feb 17 23:11:50 crc kubenswrapper[4793]: I0217 23:11:50.101916 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 23:11:50 crc kubenswrapper[4793]: I0217 23:11:50.102332 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 23:11:50 crc kubenswrapper[4793]: I0217 23:11:50.102411 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf"
Feb 17 23:11:50 crc kubenswrapper[4793]: I0217 23:11:50.103606 4793 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b39a403d2f16ad3d40f0d968ed076d2f967c5fe3da689fd3dd7aaf30f585d56f"} pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 17 23:11:50 crc kubenswrapper[4793]: I0217 23:11:50.103791 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" containerID="cri-o://b39a403d2f16ad3d40f0d968ed076d2f967c5fe3da689fd3dd7aaf30f585d56f" gracePeriod=600
Feb 17 23:11:50 crc kubenswrapper[4793]: I0217 23:11:50.511723 4793 generic.go:334] "Generic (PLEG): container finished" podID="f5c15a7f-2a4f-4188-b3bb-b47c6be2584f" containerID="d02bae233f505c9741e2d6b7aed199370c6eeba63625f7d15bba96697a82cf5f" exitCode=0
Feb 17 23:11:50 crc kubenswrapper[4793]: I0217 23:11:50.511811 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z4shj" event={"ID":"f5c15a7f-2a4f-4188-b3bb-b47c6be2584f","Type":"ContainerDied","Data":"d02bae233f505c9741e2d6b7aed199370c6eeba63625f7d15bba96697a82cf5f"}
Feb 17 23:11:50 crc kubenswrapper[4793]: I0217 23:11:50.521760 4793 generic.go:334] "Generic (PLEG): container finished" podID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerID="b39a403d2f16ad3d40f0d968ed076d2f967c5fe3da689fd3dd7aaf30f585d56f" exitCode=0
Feb 17 23:11:50 crc kubenswrapper[4793]: I0217 23:11:50.521821 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerDied","Data":"b39a403d2f16ad3d40f0d968ed076d2f967c5fe3da689fd3dd7aaf30f585d56f"}
Feb 17 23:11:50 crc kubenswrapper[4793]: I0217 23:11:50.521861 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" event={"ID":"7a786034-a3c6-4693-965a-3bd39bce6caa","Type":"ContainerStarted","Data":"9b6d253b72a5fb87d1a1780b4434d650ab4373f34f894a21653f3c92ed444602"}
Feb 17 23:11:50 crc kubenswrapper[4793]: I0217 23:11:50.521894 4793 scope.go:117] "RemoveContainer" containerID="5ea010f6c87e97e2c1ae3eea129cd7fb8ee6f4c832f3d99af7d3bcf29b729eea"
Feb 17 23:11:52 crc kubenswrapper[4793]: I0217 23:11:52.549956 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z4shj" event={"ID":"f5c15a7f-2a4f-4188-b3bb-b47c6be2584f","Type":"ContainerStarted","Data":"e999bc29fb06c09a0795289b58a91fb7d42fe4e0c346ac6e03ddbebbf6b58a7d"}
Feb 17 23:11:52 crc kubenswrapper[4793]: I0217 23:11:52.580451 4793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-z4shj" podStartSLOduration=3.344225921 podStartE2EDuration="7.580429567s" podCreationTimestamp="2026-02-17 23:11:45 +0000 UTC" firstStartedPulling="2026-02-17 23:11:47.463183177 +0000 UTC m=+10982.754881518" lastFinishedPulling="2026-02-17 23:11:51.699386823 +0000 UTC m=+10986.991085164" observedRunningTime="2026-02-17 23:11:52.572784868 +0000 UTC m=+10987.864483199" watchObservedRunningTime="2026-02-17 23:11:52.580429567 +0000 UTC m=+10987.872127888"
Feb 17 23:11:55 crc kubenswrapper[4793]: I0217 23:11:55.583858 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-z4shj"
Feb 17 23:11:55 crc kubenswrapper[4793]: I0217 23:11:55.584435 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-z4shj"
Feb 17 23:11:55 crc kubenswrapper[4793]: I0217 23:11:55.672286 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-z4shj"
Feb 17 23:11:56 crc kubenswrapper[4793]: I0217 23:11:56.539634 4793 scope.go:117] "RemoveContainer" containerID="3e84f61806af773afbf68bf10d280d13b0a640359eaac6bf8920919166c5781f"
Feb 17 23:11:56 crc kubenswrapper[4793]: E0217 23:11:56.540112 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 23:12:05 crc kubenswrapper[4793]: I0217 23:12:05.666677 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-z4shj"
Feb 17 23:12:05 crc kubenswrapper[4793]: I0217 23:12:05.741085 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z4shj"]
Feb 17 23:12:05 crc kubenswrapper[4793]: I0217 23:12:05.741590 4793 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-z4shj" podUID="f5c15a7f-2a4f-4188-b3bb-b47c6be2584f" containerName="registry-server" containerID="cri-o://e999bc29fb06c09a0795289b58a91fb7d42fe4e0c346ac6e03ddbebbf6b58a7d" gracePeriod=2
Feb 17 23:12:06 crc kubenswrapper[4793]: I0217 23:12:06.206454 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z4shj"
Feb 17 23:12:06 crc kubenswrapper[4793]: I0217 23:12:06.324201 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5c15a7f-2a4f-4188-b3bb-b47c6be2584f-catalog-content\") pod \"f5c15a7f-2a4f-4188-b3bb-b47c6be2584f\" (UID: \"f5c15a7f-2a4f-4188-b3bb-b47c6be2584f\") "
Feb 17 23:12:06 crc kubenswrapper[4793]: I0217 23:12:06.324269 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5c15a7f-2a4f-4188-b3bb-b47c6be2584f-utilities\") pod \"f5c15a7f-2a4f-4188-b3bb-b47c6be2584f\" (UID: \"f5c15a7f-2a4f-4188-b3bb-b47c6be2584f\") "
Feb 17 23:12:06 crc kubenswrapper[4793]: I0217 23:12:06.324345 4793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqdk5\" (UniqueName: \"kubernetes.io/projected/f5c15a7f-2a4f-4188-b3bb-b47c6be2584f-kube-api-access-qqdk5\") pod \"f5c15a7f-2a4f-4188-b3bb-b47c6be2584f\" (UID: \"f5c15a7f-2a4f-4188-b3bb-b47c6be2584f\") "
Feb 17 23:12:06 crc kubenswrapper[4793]: I0217 23:12:06.325237 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5c15a7f-2a4f-4188-b3bb-b47c6be2584f-utilities" (OuterVolumeSpecName: "utilities") pod "f5c15a7f-2a4f-4188-b3bb-b47c6be2584f" (UID: "f5c15a7f-2a4f-4188-b3bb-b47c6be2584f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 23:12:06 crc kubenswrapper[4793]: I0217 23:12:06.335349 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5c15a7f-2a4f-4188-b3bb-b47c6be2584f-kube-api-access-qqdk5" (OuterVolumeSpecName: "kube-api-access-qqdk5") pod "f5c15a7f-2a4f-4188-b3bb-b47c6be2584f" (UID: "f5c15a7f-2a4f-4188-b3bb-b47c6be2584f"). InnerVolumeSpecName "kube-api-access-qqdk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 23:12:06 crc kubenswrapper[4793]: I0217 23:12:06.392810 4793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5c15a7f-2a4f-4188-b3bb-b47c6be2584f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f5c15a7f-2a4f-4188-b3bb-b47c6be2584f" (UID: "f5c15a7f-2a4f-4188-b3bb-b47c6be2584f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 23:12:06 crc kubenswrapper[4793]: I0217 23:12:06.426467 4793 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5c15a7f-2a4f-4188-b3bb-b47c6be2584f-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 23:12:06 crc kubenswrapper[4793]: I0217 23:12:06.426860 4793 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5c15a7f-2a4f-4188-b3bb-b47c6be2584f-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 23:12:06 crc kubenswrapper[4793]: I0217 23:12:06.426876 4793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqdk5\" (UniqueName: \"kubernetes.io/projected/f5c15a7f-2a4f-4188-b3bb-b47c6be2584f-kube-api-access-qqdk5\") on node \"crc\" DevicePath \"\""
Feb 17 23:12:06 crc kubenswrapper[4793]: I0217 23:12:06.743981 4793 generic.go:334] "Generic (PLEG): container finished" podID="f5c15a7f-2a4f-4188-b3bb-b47c6be2584f" containerID="e999bc29fb06c09a0795289b58a91fb7d42fe4e0c346ac6e03ddbebbf6b58a7d" exitCode=0
Feb 17 23:12:06 crc kubenswrapper[4793]: I0217 23:12:06.744025 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z4shj" event={"ID":"f5c15a7f-2a4f-4188-b3bb-b47c6be2584f","Type":"ContainerDied","Data":"e999bc29fb06c09a0795289b58a91fb7d42fe4e0c346ac6e03ddbebbf6b58a7d"}
Feb 17 23:12:06 crc kubenswrapper[4793]: I0217 23:12:06.744055 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z4shj" event={"ID":"f5c15a7f-2a4f-4188-b3bb-b47c6be2584f","Type":"ContainerDied","Data":"816bced3895c4b799cad67d94f4b351809be2f733e8fdeb60eefd10a6b3a4b81"}
Feb 17 23:12:06 crc kubenswrapper[4793]: I0217 23:12:06.744081 4793 scope.go:117] "RemoveContainer" containerID="e999bc29fb06c09a0795289b58a91fb7d42fe4e0c346ac6e03ddbebbf6b58a7d"
Feb 17 23:12:06 crc kubenswrapper[4793]: I0217 23:12:06.744082 4793 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z4shj"
Feb 17 23:12:06 crc kubenswrapper[4793]: I0217 23:12:06.795934 4793 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z4shj"]
Feb 17 23:12:06 crc kubenswrapper[4793]: I0217 23:12:06.796400 4793 scope.go:117] "RemoveContainer" containerID="d02bae233f505c9741e2d6b7aed199370c6eeba63625f7d15bba96697a82cf5f"
Feb 17 23:12:06 crc kubenswrapper[4793]: I0217 23:12:06.812803 4793 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-z4shj"]
Feb 17 23:12:06 crc kubenswrapper[4793]: I0217 23:12:06.824117 4793 scope.go:117] "RemoveContainer" containerID="7df97bb31a2a712dedb121e2f95ddd2b01f6700d72a46599169e14090607e2d6"
Feb 17 23:12:06 crc kubenswrapper[4793]: I0217 23:12:06.873280 4793 scope.go:117] "RemoveContainer" containerID="e999bc29fb06c09a0795289b58a91fb7d42fe4e0c346ac6e03ddbebbf6b58a7d"
Feb 17 23:12:06 crc kubenswrapper[4793]: E0217 23:12:06.874006 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e999bc29fb06c09a0795289b58a91fb7d42fe4e0c346ac6e03ddbebbf6b58a7d\": container with ID starting with e999bc29fb06c09a0795289b58a91fb7d42fe4e0c346ac6e03ddbebbf6b58a7d not found: ID does not exist" containerID="e999bc29fb06c09a0795289b58a91fb7d42fe4e0c346ac6e03ddbebbf6b58a7d"
Feb 17 23:12:06 crc kubenswrapper[4793]: I0217 23:12:06.874139 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e999bc29fb06c09a0795289b58a91fb7d42fe4e0c346ac6e03ddbebbf6b58a7d"} err="failed to get container status \"e999bc29fb06c09a0795289b58a91fb7d42fe4e0c346ac6e03ddbebbf6b58a7d\": rpc error: code = NotFound desc = could not find container \"e999bc29fb06c09a0795289b58a91fb7d42fe4e0c346ac6e03ddbebbf6b58a7d\": container with ID starting with e999bc29fb06c09a0795289b58a91fb7d42fe4e0c346ac6e03ddbebbf6b58a7d not found: ID does not exist"
Feb 17 23:12:06 crc kubenswrapper[4793]: I0217 23:12:06.874245 4793 scope.go:117] "RemoveContainer" containerID="d02bae233f505c9741e2d6b7aed199370c6eeba63625f7d15bba96697a82cf5f"
Feb 17 23:12:06 crc kubenswrapper[4793]: E0217 23:12:06.875020 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d02bae233f505c9741e2d6b7aed199370c6eeba63625f7d15bba96697a82cf5f\": container with ID starting with d02bae233f505c9741e2d6b7aed199370c6eeba63625f7d15bba96697a82cf5f not found: ID does not exist" containerID="d02bae233f505c9741e2d6b7aed199370c6eeba63625f7d15bba96697a82cf5f"
Feb 17 23:12:06 crc kubenswrapper[4793]: I0217 23:12:06.875137 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d02bae233f505c9741e2d6b7aed199370c6eeba63625f7d15bba96697a82cf5f"} err="failed to get container status \"d02bae233f505c9741e2d6b7aed199370c6eeba63625f7d15bba96697a82cf5f\": rpc error: code = NotFound desc = could not find container \"d02bae233f505c9741e2d6b7aed199370c6eeba63625f7d15bba96697a82cf5f\": container with ID starting with d02bae233f505c9741e2d6b7aed199370c6eeba63625f7d15bba96697a82cf5f not found: ID does not exist"
Feb 17 23:12:06 crc kubenswrapper[4793]: I0217 23:12:06.875224 4793 scope.go:117] "RemoveContainer" containerID="7df97bb31a2a712dedb121e2f95ddd2b01f6700d72a46599169e14090607e2d6"
Feb 17 23:12:06 crc kubenswrapper[4793]: E0217 23:12:06.875712 4793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7df97bb31a2a712dedb121e2f95ddd2b01f6700d72a46599169e14090607e2d6\": container with ID starting with 7df97bb31a2a712dedb121e2f95ddd2b01f6700d72a46599169e14090607e2d6 not found: ID does not exist" containerID="7df97bb31a2a712dedb121e2f95ddd2b01f6700d72a46599169e14090607e2d6"
Feb 17 23:12:06 crc kubenswrapper[4793]: I0217 23:12:06.875834 4793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7df97bb31a2a712dedb121e2f95ddd2b01f6700d72a46599169e14090607e2d6"} err="failed to get container status \"7df97bb31a2a712dedb121e2f95ddd2b01f6700d72a46599169e14090607e2d6\": rpc error: code = NotFound desc = could not find container \"7df97bb31a2a712dedb121e2f95ddd2b01f6700d72a46599169e14090607e2d6\": container with ID starting with 7df97bb31a2a712dedb121e2f95ddd2b01f6700d72a46599169e14090607e2d6 not found: ID does not exist"
Feb 17 23:12:07 crc kubenswrapper[4793]: I0217 23:12:07.558627 4793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5c15a7f-2a4f-4188-b3bb-b47c6be2584f" path="/var/lib/kubelet/pods/f5c15a7f-2a4f-4188-b3bb-b47c6be2584f/volumes"
Feb 17 23:12:09 crc kubenswrapper[4793]: I0217 23:12:09.540799 4793 scope.go:117] "RemoveContainer" containerID="3e84f61806af773afbf68bf10d280d13b0a640359eaac6bf8920919166c5781f"
Feb 17 23:12:09 crc kubenswrapper[4793]: E0217 23:12:09.541884 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 23:12:23 crc kubenswrapper[4793]: I0217 23:12:23.540339 4793 scope.go:117] "RemoveContainer" containerID="3e84f61806af773afbf68bf10d280d13b0a640359eaac6bf8920919166c5781f"
Feb 17 23:12:23 crc kubenswrapper[4793]: E0217 23:12:23.541323 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 23:12:38 crc kubenswrapper[4793]: I0217 23:12:38.539402 4793 scope.go:117] "RemoveContainer" containerID="3e84f61806af773afbf68bf10d280d13b0a640359eaac6bf8920919166c5781f"
Feb 17 23:12:38 crc kubenswrapper[4793]: E0217 23:12:38.540465 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 23:12:53 crc kubenswrapper[4793]: I0217 23:12:53.539465 4793 scope.go:117] "RemoveContainer" containerID="3e84f61806af773afbf68bf10d280d13b0a640359eaac6bf8920919166c5781f"
Feb 17 23:12:53 crc kubenswrapper[4793]: E0217 23:12:53.540599 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 23:13:04 crc kubenswrapper[4793]: I0217 23:13:04.539951 4793 scope.go:117] "RemoveContainer" containerID="3e84f61806af773afbf68bf10d280d13b0a640359eaac6bf8920919166c5781f"
Feb 17 23:13:04 crc kubenswrapper[4793]: E0217 23:13:04.541331 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 23:13:19 crc kubenswrapper[4793]: I0217 23:13:19.539211 4793 scope.go:117] "RemoveContainer" containerID="3e84f61806af773afbf68bf10d280d13b0a640359eaac6bf8920919166c5781f"
Feb 17 23:13:19 crc kubenswrapper[4793]: E0217 23:13:19.540201 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 23:13:34 crc kubenswrapper[4793]: I0217 23:13:34.538868 4793 scope.go:117] "RemoveContainer" containerID="3e84f61806af773afbf68bf10d280d13b0a640359eaac6bf8920919166c5781f"
Feb 17 23:13:34 crc kubenswrapper[4793]: E0217 23:13:34.539765 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 23:13:48 crc kubenswrapper[4793]: I0217 23:13:48.538452 4793 scope.go:117] "RemoveContainer" containerID="3e84f61806af773afbf68bf10d280d13b0a640359eaac6bf8920919166c5781f"
Feb 17 23:13:48 crc kubenswrapper[4793]: I0217 23:13:48.933902 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerStarted","Data":"07a03c005fba14b1e7ce28294eac21973923e8ded1fa55cf58d391590824903a"}
Feb 17 23:13:50 crc kubenswrapper[4793]: I0217 23:13:50.102339 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 23:13:50 crc kubenswrapper[4793]: I0217 23:13:50.102656 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 23:13:50 crc kubenswrapper[4793]: I0217 23:13:50.596406 4793 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0"
Feb 17 23:13:51 crc kubenswrapper[4793]: I0217 23:13:51.965478 4793 generic.go:334] "Generic (PLEG): container finished" podID="02d26164-0fa4-4020-9224-b7760a490987" containerID="07a03c005fba14b1e7ce28294eac21973923e8ded1fa55cf58d391590824903a" exitCode=1
Feb 17 23:13:51 crc kubenswrapper[4793]: I0217 23:13:51.965568 4793 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"02d26164-0fa4-4020-9224-b7760a490987","Type":"ContainerDied","Data":"07a03c005fba14b1e7ce28294eac21973923e8ded1fa55cf58d391590824903a"}
Feb 17 23:13:51 crc kubenswrapper[4793]: I0217 23:13:51.966347 4793 scope.go:117] "RemoveContainer" containerID="3e84f61806af773afbf68bf10d280d13b0a640359eaac6bf8920919166c5781f"
Feb 17 23:13:51 crc kubenswrapper[4793]: I0217 23:13:51.967048 4793 scope.go:117] "RemoveContainer" containerID="07a03c005fba14b1e7ce28294eac21973923e8ded1fa55cf58d391590824903a"
Feb 17 23:13:51 crc kubenswrapper[4793]: E0217 23:13:51.967294 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 23:13:55 crc kubenswrapper[4793]: I0217 23:13:55.596143 4793 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-applier-0"
Feb 17 23:13:55 crc kubenswrapper[4793]: I0217 23:13:55.599059 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0"
Feb 17 23:13:55 crc kubenswrapper[4793]: I0217 23:13:55.599228 4793 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0"
Feb 17 23:13:55 crc kubenswrapper[4793]: I0217 23:13:55.600821 4793 scope.go:117] "RemoveContainer" containerID="07a03c005fba14b1e7ce28294eac21973923e8ded1fa55cf58d391590824903a"
Feb 17 23:13:55 crc kubenswrapper[4793]: E0217 23:13:55.601467 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 23:14:08 crc kubenswrapper[4793]: I0217 23:14:08.538375 4793 scope.go:117] "RemoveContainer" containerID="07a03c005fba14b1e7ce28294eac21973923e8ded1fa55cf58d391590824903a"
Feb 17 23:14:08 crc kubenswrapper[4793]: E0217 23:14:08.539140 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 23:14:20 crc kubenswrapper[4793]: I0217 23:14:20.102298 4793 patch_prober.go:28] interesting pod/machine-config-daemon-jnwtf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 23:14:20 crc kubenswrapper[4793]: I0217 23:14:20.103098 4793 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwtf" podUID="7a786034-a3c6-4693-965a-3bd39bce6caa" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 23:14:21 crc kubenswrapper[4793]: I0217 23:14:21.539293 4793 scope.go:117] "RemoveContainer" containerID="07a03c005fba14b1e7ce28294eac21973923e8ded1fa55cf58d391590824903a"
Feb 17 23:14:21 crc kubenswrapper[4793]: E0217 23:14:21.539805 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"
Feb 17 23:14:32 crc kubenswrapper[4793]: I0217 23:14:32.539377 4793 scope.go:117] "RemoveContainer" containerID="07a03c005fba14b1e7ce28294eac21973923e8ded1fa55cf58d391590824903a"
Feb 17 23:14:32 crc kubenswrapper[4793]: E0217 23:14:32.540421 4793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=watcher-applier pod=watcher-applier-0_openstack(02d26164-0fa4-4020-9224-b7760a490987)\"" pod="openstack/watcher-applier-0" podUID="02d26164-0fa4-4020-9224-b7760a490987"